Is it possible to cache CUDA JIT-compiled code? I tried passing cache=True to @cuda.jit, but I can't find any compiled code in the cache dir.
Thanks in advance!
Best,
Kris
Are you using a Python script or Jupyter notebook?
Thanks for your reply. In both cases, Jupyter notebook and Python script, no entries are generated in the corresponding cache directories. There are, however, entries for compiled CPU code.
Here is a simple test script that generates cached code for subtract but not for sub_cuda:
from numba import cuda
import numba as nb
import numpy as np

@nb.jit("int64[:](int64[:],int64[:])", cache=True)
def subtract(a, b):
    return a - b

@cuda.jit("void(int64[:],int64[:],int64[:])", cache=True)
def sub_cuda(a, b, c):
    start = cuda.grid(1)
    stride = cuda.gridsize(1)
    for i in range(start, a.shape[0], stride):
        c[i] = a[i] - b[i]

a = np.ones(3, dtype=np.int64)
b = np.ones(3, dtype=np.int64)
c = np.ndarray(3, dtype=np.int64)

sub_cuda[64, 128](a, b, c)
d = subtract(a, b)
print(c, d)
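For the CPU function, the cache entries Numba writes with cache=True land in the __pycache__ directory next to the source file, as an index file (.nbi) plus data files (.nbc). A minimal sketch to locate them; the helper name and the exact file-name pattern are my assumptions, not part of the Numba API:

```python
from pathlib import Path


def numba_cache_files(script_path):
    """List Numba on-disk cache artifacts for a given script (sketch).

    Assumes cache files are named like ``<stem>.<func>-<tag>.py<ver>.nbi``
    / ``.nbc`` inside the ``__pycache__`` directory next to the source.
    """
    script = Path(script_path).resolve()
    pycache = script.parent / "__pycache__"
    if not pycache.is_dir():
        return []
    # .nbi = index file, .nbc = compiled data files
    return sorted(p.name for p in pycache.glob(script.stem + ".*.nb[ic]"))
```

Running this against the test script above should list entries for subtract only, since the CUDA kernel produces no cache files.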
Unfortunately, there is no support for caching in the CUDA target at present. (Apologies for brevity; I'm on mobile.)