Numba promotes dtypes differently from NumPy

When adding (or multiplying, dividing, subtracting, etc.) a Python float to a NumPy array, NumPy preserves the dtype of the array, whereas Numba promotes the result to float64. How can I modify the overload of ndarray.__add__ (and the other arithmetic operators) so that the Python float is cast to the dtype of the array and the result keeps that dtype?

Code to demonstrate the issue; I would like consistency with NumPy inside a function decorated with @njit:

import numpy as np
import numba as nb

def func(array):
    return array + 1.0

numba_func = nb.njit(func)

a_f64 = np.ones(1, dtype=np.float64)
a_f32 = np.ones(1, dtype=np.float32)

for i in (a_f64, a_f32):
    print(i.dtype)
    print(func(i).dtype)
    print(numba_func(i).dtype, end="\n\n")

Output (with numpy 2.1.3 and numba 0.61.0):

float64
float64
float64

float32
float32
float64

A simple solution is to use nb.float32(1.0).

@milton Yes, I could, but if I have hundreds of instances of this in my code, that isn’t a solution. Also, the code in question is compatible with the Python array API standard, so it also has to work with other libraries without importing Numba. Therefore, I’m looking for a way to change the overload of these functions so that the dtype is handled consistently with NumPy.

But isn’t the issue that the literal 1.0 is automatically typed as float64? Numba probably simply doesn’t want to make ad hoc assumptions about the literal’s type when it adds it to a float32 array.

Yes, that’s the issue, but I don’t want to go through all my code to find every place where I add an array and a Python scalar and then cast the scalar to the same dtype as the array. Rather, I’d like to modify the overload of + so that if one argument is an array and the other a Python scalar, the scalar is cast to the dtype of the array.

Hey @Nin17 ,
many Numba functions probably rely on the existing promotion rules, so changing them globally via operator overloading might lead to unexpected issues elsewhere in your code. As @milton suggested, an explicit conversion like typed_scalar = array.dtype.type(1.0) might be the safer option. Have you considered opening an issue on GitHub if this hasn’t been addressed yet?
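In plain NumPy that pattern looks like the following (a sketch; add_scalar is a hypothetical helper name, and whether array.dtype.type(...) is also supported inside njit may depend on the Numba version):

```python
import numpy as np

def add_scalar(array, scalar):
    # Cast the Python scalar to the array's dtype before adding,
    # so the result keeps the array's dtype regardless of the
    # promotion rules in effect.
    return array + array.dtype.type(scalar)

a_f32 = np.ones(1, dtype=np.float32)
print(add_scalar(a_f32, 1.0).dtype)  # float32
```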

Surely it’s up to @Nin17 to decide whether he wants to take that risk. Does anyone have an idea how to accomplish the requested result?

For reference, there’s active work in progress on a new type system (tracked in issue #9409) that will support NEP 50 promotion rules for Python scalars. The last(?) “NumPy 2.x support community update” was in November 2024 by @kc611.
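For comparison, under NumPy 2.x’s NEP 50 rules a Python float is a “weak” scalar that adapts to the array’s dtype, which is the behaviour the OP wants from Numba (a quick illustration):

```python
import numpy as np

a = np.ones(1, dtype=np.float32)
# NEP 50: the Python float 1.0 is a "weak" scalar, so it takes on
# the array's dtype instead of forcing a promotion to float64.
print((a + 1.0).dtype)  # float32
```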