Vectorize Bessel function

Hello, I have code that runs smoothly on older versions of Numba (0.56.4) and SciPy (1.7.3), but it runs into issues with the latest releases of these libraries. Specifically, I am looking for a way to combine Numba’s vectorize decorator with the Bessel functions J0 and J1 from SciPy’s special module. Has anyone accomplished this, especially with the most recent versions of Numba and SciPy? I’d appreciate any insights or code examples. Thank you.

import numpy as np
import numba
import scipy.special as sc
from numba import vectorize
# import numba_special  # This import would register Numba overloads for scipy.special

@numba.vectorize('float64(float64,float64,float64)',nopython=True)
def f(ro, rs, delta):
    return rs / ro * np.exp((-1 / delta) * (ro - rs))

@numba.vectorize('float64(float64,float64,float64)',nopython=True)
def mz(ro, rs, delta):
    return (1 - f(ro, rs, delta)**2) / (1 + f(ro, rs, delta)**2)

@numba.vectorize('float64(float64,float64,float64)',nopython=True)
def mro(ro, rs, delta):
    return (2 * f(ro, rs, delta) ) / (1 + f(ro, rs, delta)**2)

@vectorize('float64(float64)', nopython=True)
def jj0(x):
    return sc.j0(x)

@vectorize('float64(float64)', nopython=True)
def jj1(x):
    return sc.j1(x)

@vectorize('float64(float64,float64,int64,float64,float64,float64,float64,float64)', nopython=True)
def h_ro(a, b, N, ro, z, r1, r2, lz):
    # Midpoint-rule integration of the Bessel-kernel integrand over [a, b] with N points
    zeta = r2 / r1
    beta = lz / r1
    x = np.linspace(a + (b - a) / (2 * N), b - (b - a) / (2 * N), N)
    fx = jj1(x * ro) * (zeta * jj1(x * zeta) - jj1(x)) * np.exp(x * beta * z) * (1 - np.exp(-x * beta))
    area = np.sum(fx) * (b - a) / N
    return area


What error are you getting? What happens if you import and use the functions without the module prefix ‘sc.’? Like

from scipy.special import j0, j1

As always, a complete minimal program that demonstrates the problem can be helpful.
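
For example, a minimal reproducer along these lines (an untested sketch, assuming recent Numba and SciPy) would let others see the exact error message:

import numpy as np
from numba import vectorize
from scipy.special import j0  # imported without the 'sc.' prefix

# With an explicit signature the ufunc is compiled eagerly, so any
# typing error for j0 should surface right at the decorator.
@vectorize('float64(float64)', nopython=True)
def jj0(x):
    return j0(x)

print(jj0(np.linspace(-10.0, 10.0, 1000)))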

Also, this might be interesting

Yes, you are partially right: it works, but only in forceobj=True mode, not with nopython=True, so it’s 30 times slower.

import numpy as np
from numba import njit, vectorize, types
from numba.extending import overload
from scipy import special
from scipy.special import j0, j1

x = np.linspace(-10, 10, 1000)

@vectorize('float64(float64)',forceobj=True)
def jitted_j0(x):
    return j0(x)

print(jitted_j0(x))

All my code and functions work with the old versions of Python, Numba and SciPy.

I’m not a scipy.special expert and in particular am unfamiliar with j0/j1, but this link describes getting at some other scipy functions in nopython jitted code.
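
In case it helps, here is a rough sketch of that approach applied to j0: it pulls the Cython-exported j0 out of scipy.special.cython_special via ctypes and calls it from a nopython-compiled ufunc. I have not checked the exported symbol name against every SciPy version, so treat that as an assumption.

import ctypes
import numpy as np
from numba import vectorize
from numba.extending import get_cython_function_address

# Assumption: scipy.special.cython_special exports a plain (non-fused)
# double -> double symbol named "j0"; the name may differ across SciPy versions.
addr = get_cython_function_address("scipy.special.cython_special", "j0")
functype = ctypes.CFUNCTYPE(ctypes.c_double, ctypes.c_double)
j0_c = functype(addr)

@vectorize('float64(float64)', nopython=True)
def jj0(x):
    # ctypes function pointers can be called from nopython-compiled code
    return j0_c(x)

print(jj0(np.linspace(-10.0, 10.0, 1000)))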

Hey @kkingstoun,

These scipy functions are already vectorized and presumably well-optimized ufuncs.
I compared the speed of scipy.special.j0 on your example input np.linspace(-10, 10, 1000) in a few different ways.

These are the results:
258 ms <= Original scipy function
8205 ms <= vectorized function with forceobj=True
540 ms <= jitted with nopython=False
253 ms <= vectorized function with numba-scipy extension installed

There seems to be no speed-up from using numba in this case.
If you need a numba-compatible version, try installing numba-scipy; it is at least not slower than the original function.
I used conda install -c "numba/label/dev" numba-scipy to install version 0.3 together with Python 3.10.
The latest version of the numba-scipy extension might run under Python 3.9; I have not checked it.
In that case you could do:
conda install -c numba numba-scipy

I hope that helps.

import timeit
import numpy as np
import numba as nb
from scipy.special import j0

@nb.vectorize('float64(float64)', forceobj=True)
def j0_vec_obj(x):
    return j0(x)

@nb.jit(['float64(float64)', 'float64[:](float64[:])'], nopython=False)
def j0_jit_obj(x):
    return j0(x)

# This works only if the Numba extension package numba-scipy is installed!!!
@nb.vectorize('float64(float64)')
def j0_vec(x):
    return j0(x)


x = np.linspace(-10, 10, 1000)

j0_vec_obj(x)
j0_jit_obj(x)
j0_vec(x)

functions = [j0, j0_vec_obj, j0_jit_obj, j0_vec]
function_names = ["j0", "j0_vec_obj", "j0_jit_obj", "j0_vec"]

for func, name in zip(functions, function_names):
    stmt = f"{name}(x)"
    setup = f"from __main__ import {name}, x"
    execution_time = timeit.timeit(stmt, setup, number=10000) * 1000
    print(f"{name}: {execution_time:.2f} ms")
# j0: 258.78 ms
# j0_vec_obj: 8205.06 ms
# j0_jit_obj: 540.84 ms
# j0_vec: 253.60 ms