When registering type inference for Python values, as in the Interval example, the supplied function must accept two parameters:

```python
@typeof_impl.register(Interval)
def typeof_index(val, c):
    return interval_type
```
The first, `val`, is sensible: it is the value being typed. The second appears to serve no purpose, and if I hack things a little so that it is always `None`, then nothing appears to go wrong - tested on CI here: https://github.com/numba/numba/pull/6021
This parameter was originally added in https://github.com/numba/numba/commit/6f08a572fa37dbf543f534919c49c8813e15c9e5, with its sole purpose appearing to be to differentiate int types depending on whether they were arguments or constants: https://github.com/numba/numba/commit/6f08a572fa37dbf543f534919c49c8813e15c9e5#diff-dbbece582dfc6d7bc7f0567ca773d3a8R95
This use of it for differentiating int types was then removed shortly afterwards: https://github.com/numba/numba/commit/461bb0daeb445dc4ff1b14b6812b6d3c05b8de4d#diff-dbbece582dfc6d7bc7f0567ca773d3a8L79-L81
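For illustration, the kind of argument-vs-constant differentiation that the commit above enabled might have looked roughly like the following. This is a self-contained sketch with hypothetical names (`Purpose`, `TypeofContext`, and the string type stand-ins are all invented here), not Numba's actual code:

```python
import enum
from dataclasses import dataclass


class Purpose(enum.Enum):
    # Hypothetical stand-in for the argument/constant distinction
    argument = 1
    constant = 2


@dataclass
class TypeofContext:
    # Hypothetical context object, playing the role of the `c` parameter
    purpose: Purpose


def typeof_int(val, c):
    # Differentiate the inferred type based on *why* the value is being typed:
    # arguments get a fixed-width machine type, constants keep their value.
    if c.purpose == Purpose.argument:
        return "int64"  # stand-in for a concrete machine integer type
    return "IntegerLiteral(%d)" % val


print(typeof_int(7, TypeofContext(Purpose.argument)))  # int64
print(typeof_int(7, TypeofContext(Purpose.constant)))  # IntegerLiteral(7)
```

Once that differentiation was removed, the context parameter carried no information that any implementation consulted.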
It feels like it would be nice to remove the `c` parameter, so that the example would become:

```python
@typeof_impl.register(Interval)
def typeof_index(val):
    return interval_type
```
rather than every `typeof` implementation carrying around an extra redundant parameter. I realise this will have a knock-on effect for anyone using it to extend Numba, but it seems a shame to leave a vestige like this in place.
How would others feel about this change?