Any way to parallelize a cfunc-based integrand used with scipy.integrate?

Hi! I am trying to use numba extensively to speed up my scientific calculations.
One of the operations I am trying to speed up is the repeated calculation of 3D integrals in spherical coordinates using scipy.integrate.nquad.
I have already wrapped the integrand with the cfunc decorator. Now I'm wondering whether it is possible to parallelize it.
Here is an MWE of my function:
```python
import numpy as np


def integrator(resolution=30):
    from scipy import integrate, LowLevelCallable
    from numba import cfunc, jit
    from numba.types import intc, CPointer, float64

    def jit_integrand_function(integrand_function):
        jitted_function = jit(integrand_function, nopython=True)

        @cfunc(float64(intc, CPointer(float64)))
        def wrapped(n, xx):
            return jitted_function(xx[0], xx[1], xx[2])
        return LowLevelCallable(wrapped.ctypes)

    @jit_integrand_function
    def cell_density(R, phi, theta):
        return 100 * np.exp(-0.5 / R) * np.sin(theta) * (1 - abs(np.cos(phi)))

    d_a = np.radians(resolution) / 2
    phi = np.radians(np.arange(-180, 180 - 1, resolution))
    theta = np.radians(np.arange(0, 180, resolution))
    # theta = np.radians(np.arange(85, 95, resolution))

    map_density = np.zeros(shape=[len(phi), len(theta)]).T * np.nan

    for i, _t in enumerate(theta):
        for j, _p in enumerate(phi):
            map_density[i, j] = integrate.nquad(
                cell_density,
                [[0, 1], [_p - d_a, _p + d_a], [_t - d_a, _t + d_a]],
            )[0]

    return phi, theta, map_density
```
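
For reference, here is the kind of restructuring I was considering (an untested sketch; names like `integrator_parallel`, `_make_integrand`, and `_init_worker` are just placeholders I made up). The idea is to leave each `nquad` call serial and parallelize over the (theta, phi) grid cells with a process pool. I assume the compiled ctypes pointer inside the LowLevelCallable cannot be pickled, so each worker would have to rebuild it once; I also went with processes rather than threads because I am not sure whether quad releases the GIL around LowLevelCallable evaluations.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

_integrand = None  # one LowLevelCallable per worker process


def _make_integrand():
    # Rebuild the cfunc/LowLevelCallable inside each worker: I assume the
    # ctypes pointer created in the parent process cannot be pickled.
    from scipy import LowLevelCallable
    from numba import cfunc, njit
    from numba.types import intc, CPointer, float64

    @njit
    def cell_density(R, phi, theta):
        return 100.0 * np.exp(-0.5 / R) * np.sin(theta) * (1.0 - abs(np.cos(phi)))

    @cfunc(float64(intc, CPointer(float64)))
    def wrapped(n, xx):
        return cell_density(xx[0], xx[1], xx[2])

    return LowLevelCallable(wrapped.ctypes)


def _init_worker():
    # Runs once per worker process; pays the numba compilation cost there.
    global _integrand
    _integrand = _make_integrand()


def _integrate_cell(args):
    # Integrate one (theta, phi) cell with the worker-local integrand.
    from scipy import integrate
    _t, _p, d_a = args
    return integrate.nquad(
        _integrand,
        [[0, 1], [_p - d_a, _p + d_a], [_t - d_a, _t + d_a]],
    )[0]


def integrator_parallel(resolution=30, workers=4):
    d_a = np.radians(resolution) / 2
    phi = np.radians(np.arange(-180, 180 - 1, resolution))
    theta = np.radians(np.arange(0, 180, resolution))

    # Flatten the grid into independent tasks; only plain floats are pickled.
    tasks = [(_t, _p, d_a) for _t in theta for _p in phi]

    with ProcessPoolExecutor(max_workers=workers, initializer=_init_worker) as pool:
        flat = list(pool.map(_integrate_cell, tasks))

    map_density = np.array(flat).reshape(len(theta), len(phi))
    return phi, theta, map_density
```

This would have to live in an importable module (not a notebook cell) so that spawn-based start methods can pickle the worker functions. Is this a reasonable direction, or is there a better way to parallelize this with numba itself?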