The snippet below is throwing the following error.
It is possible to assign values back in a loop, but that won't be the most efficient approach.
Is there a way to "enable NRT"?
```
Exception: Failed in cuda mode pipeline (step: native lowering)
NRT required but not enabled
During: lowering "$48binary_subscr.10[$68build_tuple.22] = d2_array" at …
```
```python
def fill_arrays(grouped_data, cuda_array_of_arrays):
    _index = cuda.grid(1)
    if _index < len(grouped_data):  # grouped_data is a tuple of 2D CuPy arrays
        d2_array = grouped_data[_index]
        arr_rows = d2_array.shape
        cuda_array_of_arrays[_index][0:arr_rows, 0:d2_array.shape] = d2_array
```
There isn't a way to enable the Numba Runtime (NRT) on the CUDA target; it exists to support dynamic memory allocation, which CUDA kernels don't provide. I suspect the error you see is produced because the expression:

```python
cuda_array_of_arrays[_index][0:arr_rows, 0:d2_array.shape]
```

creates a slice, which requires an allocation (this error message could probably be clearer for the CUDA target).
To rewrite this in a way that is compatible with the CUDA target, you instead need to do something like:
```python
arr_rows, arr_cols = d2_array.shape  # shape is a tuple, so unpack it
for i in range(arr_rows):
    for j in range(arr_cols):
        cuda_array_of_arrays[_index][i, j] = d2_array[i, j]
```
If this turns out not to be the right workaround, could you please post an executable reproducer that I can check / experiment with?