Weird type-inference bug(?) in parallel mode?

I am using Anaconda on Windows 11 with Python 3.11.5 and Numba 0.58.1.

The following snippet causes a compilation error claiming that the indexing of Js is wrong: the return value of min seems to be inferred as float64.

import numba as nb

INT = 'int64'

@nb.njit(f'{INT}({INT})')
def f(j):
    return 12

@nb.njit#(f'{INT}({INT}, {INT})')
def imin(j,k):
    return j if j<k else k

@nb.njit(f'void({INT}[:,:])', parallel=True)
def g(Js):
    for j0 in nb.prange(20):
        J = Js[min(12, j0)]
        #J = Js[j0]
        fa = f(J[0])

Of course it does nothing, but it is a stripped-down version of actual code. The weird thing is that whatever I remove from it, it starts working:

  • removing parallel makes it compile;
  • removing min makes it compile (despite everything seeming to suggest that j0 is inferred as floating);
  • most weirdly: removing the function call f(J[0]) makes it compile.

But it does not compile for any integer type INT, nor when min is trivially rewritten without casting (the imin above).

Am I missing something? Or is this really a bug? Is there anything I could do better than casting the result of min to int?

Hi @winnyec

I do not think that this is a bug. I suspect the problem lies in the type unification of the integer literal 12 and j0. If you set parallel=True, Numba uses a uint64 for j0, if I remember correctly; with parallel=False Numba falls back to the default range and uses an int64. Now remember that Numba uses int64 as its default integer type, so the literal 12 is of type int64. Recall also that int64 and uint64 are unified into a float64, since this is the only type that covers the ranges of both int64 and uint64 without underflow or overflow.
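The same rule shows up in NumPy's type promotion, which Numba's unification mirrors here; a one-line check (my own illustration, not from the snippet above):

import numpy as np

# int64 and uint64 have no common integer supertype,
# so promotion falls back to float64.
print(np.promote_types(np.int64, np.uint64))  # float64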

Now how would this explain your observations?

  • Observation 1: without parallel, j0 is an int64. The minimum of two int64 values is an int64, which is a valid indexing type.
  • Observation 2: without min, you do not index into Js with a float64.
  • Observation 3: if you do not use J, Numba may remove the expression before checking whether it is valid (this is speculation).
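If it helps, here is a minimal sketch (my own reduction, assuming the uint64 theory is right) that shows the same unification without any parallel machinery, by passing an explicit uint64:

import numba as nb
import numpy as np

@nb.njit
def unified_min(k):
    # k arrives as uint64, while the literal 12 is an int64;
    # the two unify to float64, so min() returns a float
    return min(12, k)

print(type(unified_min(np.uint64(3))))  # <class 'float'>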

Yes, uint64 could explain it. But if I replace the line with

J = Js[min(int(12), int(j0))]

it still does not compile. Maybe int resolves to the natural kind of int in each case, so the cast changes nothing? But it is true that a NumPy-type cast does compile, like

min(12, np.dtype(INT).type(j0))

So thanks, I think I’ll use this.
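For reference, here is the stripped-down example with that workaround applied, as a sketch (INT_T is a name I am introducing just to hoist the cast out of the jitted code):

import numba as nb
import numpy as np

INT = 'int64'
INT_T = np.dtype(INT).type  # here: np.int64

@nb.njit(f'{INT}({INT})')
def f(j):
    return 12

@nb.njit(f'void({INT}[:,:])', parallel=True)
def g(Js):
    for j0 in nb.prange(20):
        # casting j0 back to INT before min keeps the result integral,
        # so it remains a valid index type
        J = Js[min(12, INT_T(j0))]
        fa = f(J[0])

g(np.zeros((20, 3), dtype=INT))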

I still find it weird that it does compile if I delete the line

fa = f(J[0])

which appears completely irrelevant otherwise. Maybe the compiler bravely optimises the code to omit the calculation of J, since it is then unused?

Thanks again.