I watched the presentation announced in "Numba Meeting June 6 2023: Presentation on Numba-MLIR" (#2 by sklam) and installed Numba-MLIR with:
conda install numba-mlir -c dppy/label/dev -c intel -c conda-forge -c numba
A simple example:
from numba_mlir import njit
import numpy as np

@njit(parallel=True)
def foo(a, b):
    return a + b

result = foo(np.array([1, 2, 3]), np.array([4, 5, 6]))
print(result)
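For reference, foo is just elementwise addition, so the compiled version can be sanity-checked against a plain-Python equivalent (this check needs no numba-mlir; foo_reference is my own helper name, not part of any API):

```python
# Plain-Python reference for what foo computes: elementwise addition.
# Useful for verifying the numba-mlir result independently of the compiler.
def foo_reference(a, b):
    return [x + y for x, y in zip(a, b)]

print(foo_reference([1, 2, 3], [4, 5, 6]))  # [5, 7, 9]
```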
runs fine. There is no documentation yet, and I don't know how to control where this script runs: on the CPU or on the GPU. What is the way to offload it to an Intel integrated GPU?
Cross-posted to How to offload to Intel integrated GPU? · Issue #131 · numba/numba-mlir · GitHub