Implementing a function that uses an ARIMA model to run in a CUDA kernel

Dear All,

I am trying to implement a CUDA function that runs on GPU cores to get more computational power for time series analysis and forecasting.
For the forecasting part I decided to use an ARIMA model. I was able to implement a function (using `from statsmodels.tsa.arima_model import ARIMA`) which works fine on the CPU and produces the predicted results.
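
Roughly, my CPU code looks like the following (a simplified sketch; the CSV file name, column name, and ARIMA order are placeholders rather than my real setup):

```python
import pandas as pd
from statsmodels.tsa.arima_model import ARIMA  # the import I am using

# Load the time series from a CSV file (file and column names are placeholders)
customer_df = pd.read_csv("customer_data.csv")
series = customer_df["value"].astype(float)

# Fit an ARIMA(p, d, q) model and forecast a few steps ahead
model = ARIMA(series, order=(1, 1, 1))
fitted = model.fit(disp=0)
forecast, stderr, conf_int = fitted.forecast(steps=5)
print(forecast)
```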

Now I am trying to run the same function on the GPU using `@cuda.jit`, but I get the error below.

```
TypingError: Failed in nopython mode pipeline (step: nopython frontend)
Untyped global name 'customer_df': Cannot determine Numba type of <class 'pandas.core.frame.DataFrame'>
File "", line 16:
```

[customer_df is a pandas DataFrame that I defined to load the CSV data for processing]
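
Roughly, the GPU attempt looks like this (heavily simplified; the file name, column name, and kernel body are placeholders, but it shows how customer_df and ARIMA end up inside the kernel):

```python
import numpy as np
import pandas as pd
from numba import cuda
from statsmodels.tsa.arima_model import ARIMA

customer_df = pd.read_csv("customer_data.csv")  # placeholder file name

@cuda.jit
def forecast_kernel(out):
    # This is where it fails: customer_df is a pandas DataFrame and ARIMA is
    # a plain Python class, and Numba cannot type either of them inside a
    # CUDA kernel.
    series = customer_df["value"]           # placeholder column name
    model = ARIMA(series, order=(1, 1, 1))
    # ... fit the model and write the forecast into `out` ...

out = np.zeros(5, dtype=np.float64)
forecast_kernel[1, 1](out)  # launching triggers the TypingError shown above
```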

I tried some simple matrix calculations in Python on the GPU using Numba, and that worked fine. The issue is only with the function for time series forecasting.
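
For example, a kernel along these lines (plain NumPy arrays and elementwise arithmetic only) compiled and ran without any problem; this is not my exact code, just the same style:

```python
import numpy as np
from numba import cuda

@cuda.jit
def add_kernel(a, b, out):
    # Each thread handles one element of the output array
    i = cuda.grid(1)
    if i < out.size:
        out[i] = a[i] + b[i]

n = 1024
a = np.arange(n, dtype=np.float32)
b = 2 * np.arange(n, dtype=np.float32)
out = np.zeros(n, dtype=np.float32)

threads_per_block = 128
blocks = (n + threads_per_block - 1) // threads_per_block
add_kernel[blocks, threads_per_block](a, b, out)
print(out[:5])  # [ 0.  3.  6.  9. 12.]
```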

I also went through this resource: numba.pydata.org/numba-doc/latest/cuda/cudapysupported.html. Does it mean that I cannot use methods from `statsmodels.tsa.arima_model`, pandas DataFrames, etc. inside a function decorated with `@cuda.jit`?

I am a newbie to Numba and CUDA, and I am stuck on this issue without a clue how to proceed.

I would appreciate it if anyone could shed some light on this.

Thank you and regards