 # Overflow Error!

Hi, I’m trying to do some numerical simulation and I wanted to try out Numba to speed up my code.
I have to do several for loops and I use only NumPy, so I think that @jit should work well.

I cannot post all the code because it is extremely long, but this is one simple function that is giving me issues (and its error). C and R are two matrices that contain values of order 1.

Can anyone give a check and see if there is an error that I didn’t see?
I don’t understand how that calculation can lead to an overflow…

Error:

``````
<ipython-input-11-c859736ae262>:6: RuntimeWarning: overflow encountered in multiply
temp3 += dt * np.sum(fun2(C[t][0:t], 3) * R[t][0:t] * C[t][0:t])
<ipython-input-3-05212490067d>:6: RuntimeWarning: overflow encountered in double_scalars
temp = -mu[t] * C[t][t1]
<ipython-input-3-05212490067d>:8: RuntimeWarning: overflow encountered in multiply
temp3 += dt * np.sum(fun2(C[t][0:t], 3) * R[t][0:t] * (np.transpose(C))[t1][0:t])
<ipython-input-3-05212490067d>:13: RuntimeWarning: invalid value encountered in double_scalars
return temp + temp3 + temp4;
C:\ProgramData\Anaconda3\lib\site-packages\numpy\core\fromnumeric.py:87: RuntimeWarning: invalid value encountered in reduce
return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
``````

And here is the code:

``````
@njit
def fun1(q, d):  # fun1 = first derivative, fun2 = second derivative
    return 0.5 * d * q**(d - 1)

@njit
def evol_R(t, t1):
    temp = 0; temp3 = 0; temp4 = 0

    temp += -mu[t] * R[t][t1]
    temp3 += dt * np.sum(fun2(C[t][t1:t], 3) * R[t][t1:t] * (np.transpose(R))[t1][t1:t])
    temp4 += dt * np.sum(fun2(C[t][t1:t], 4) * R[t][t1:t] * (np.transpose(R))[t1][t1:t])

    return temp + temp3 + temp4
``````

Hi @Tede, welcome to the forum! Numerical code is always annoying as hell to debug, I feel your pain. It is tricky to tell what’s going on without having some approximate inputs to these functions.
I cannot see any obvious problems here.

So I would suggest removing the decorators (if possible) and seeing if you receive the same warnings. You may also want to check the dtype of all the arrays you are using. Maybe a short datatype has snuck in somewhere that overflows faster than you would think.
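As a quick illustration of the dtype point (a minimal sketch, not taken from the original code): a narrow dtype such as `float16` overflows at values where `float64` is nowhere near its limits.

```python
import numpy as np

# A narrow dtype overflows far sooner than float64 (illustrative only).
a16 = np.full(4, 1e4, dtype=np.float16)  # float16 max is ~65504
a64 = a16.astype(np.float64)

with np.errstate(over="ignore"):
    prod16 = a16 * a16  # 1e4 * 1e4 = 1e8 overflows float16 -> all inf
prod64 = a64 * a64      # perfectly fine in float64

print(prod16)  # all inf
print(prod64)  # all 1e8
```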

As far as I can tell `evol_R` calls `fun2`, but you have only shared `fun1`; are they similar? `fun1` could potentially diverge quickly if `d<1` and `-1<q<1`.
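For what it’s worth, a tiny sketch of that divergence (hypothetical inputs, not from the actual simulation): with `d < 1` the exponent `d - 1` is negative, so the result blows up as `q` approaches zero.

```python
# fun1(q, d) = 0.5 * d * q**(d-1): for d < 1 the exponent is negative,
# so values of q near zero produce very large results.
def fun1(q, d):
    return 0.5 * d * q**(d - 1)

print(fun1(1e-6, 0.5))  # 0.25 * (1e-6)**(-0.5) = 0.25 * 1000 = 250.0
print(fun1(0.5, 3))     # well-behaved for integer d >= 1
```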

If the code is too long to post here, do you have a repository that people could check out, or is the code private?

Best of luck!

Hi @Hannes thanks for your response.

I’ll share a simplified version of the code at the following link: Spin34 - Pastebin.com

It is a statistical physics problem that I’m trying to solve and it involves differential equations and integrals.
When I do not use the decorators it works fine and I’m sure that the results are correct.
If I use `@njit` on the functions `fun0`, `fun1`, it gets a little bit faster (as expected), but if I do that on `evol_R` (and the others) it overflows almost instantly.

The functions `fun0`, `fun1` are simple polynomial derivatives and the degree is always an integer (`d = 3, 4`), thus I don’t think that the problem is in these functions.

If you have any advice of any kind, please share them!

Thanks again!

Hi @Tede

I think I see what is going on.

When Numba compiles your functions, all the global variables referenced inside them get turned into constants, for example the matrices in your code. When your simulation later changes the values inside these matrices, the compiled functions do not see the updated values.

This should be easy to work around: just turn everything that changes over time into function arguments that you pass in when you call the function. To modify the contents, you do not need to explicitly return the arrays; you can manipulate them in place, and those changes will be reflected everywhere else, since the underlying memory buffer is shared. By passing those arrays in as arguments, you make sure that all functions use the same location in memory (i.e. the same array), without running the risk that Numba makes a copy for itself during compilation, which then gets “baked” into your code.
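A minimal sketch of the recommended pattern (the function name `evol_step` and its signature are made up for illustration, and the fallback decorator just lets the snippet run even without Numba installed):

```python
import numpy as np

try:
    from numba import njit
except ImportError:           # no-op stand-in if Numba is not available
    def njit(f):
        return f

# Instead of reading module-level globals (which Numba freezes at
# compile time), pass everything that changes as arguments and mutate
# arrays in place so all callers share the same buffer.
@njit
def evol_step(R, mu, t, t1):
    # hypothetical single term mirroring evol_R from the thread
    return -mu[t] * R[t, t1]

mu = np.ones(4)
R = np.full((4, 4), 2.0)
print(evol_step(R, mu, 1, 2))  # -1.0 * 2.0 = -2.0
mu[1] = 5.0                    # in-place change is seen on the next call
print(evol_step(R, mu, 1, 2))  # -5.0 * 2.0 = -10.0
```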

Hey @Hannes, thank you very much.
It seems that this was the issue and now I’ll be able to use Numba in this project and also in other ones.

I hope that when the code is complete it will work effortlessly.
Just one last question: as far as you know, is there any possibility that Numba introduces significant numerical errors compared to the ‘native’ code?
I’m scared that this could happen with “long” simulations, where I’ll need to do multiple integrals and thus concatenate several for loops.

Thanks again for everything

Hi Tede, good to hear that helped! Under no circumstances should Numba introduce significant numerical errors (unless there is a bug :-P).

However, Numba does not necessarily use the same algorithms as the vanilla NumPy implementations in all places. I seem to remember that `np.sum` is one of those examples (and possibly things like `trapz` for integrals). In the sum example, NumPy uses pairwise summation to limit error buildup when summing many terms, whereas the Numba implementation is closer to what you and I would write by hand, namely plain sequential addition.
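A quick demonstration of the difference (the sequential Python loop here stands in for a compiled accumulation loop; exact error magnitudes depend on the machine, but the naive sum drifts noticeably more in `float32`):

```python
import numpy as np

# np.sum uses pairwise summation, which limits round-off buildup;
# a plain left-to-right loop accumulates error much faster in float32.
x = np.full(10**6, 0.1, dtype=np.float32)

pairwise = np.sum(x)        # pairwise summation
naive = np.float32(0.0)
for v in x:                 # sequential accumulation
    naive += v

exact = 10**6 * 0.1         # 100000.0
print(abs(float(pairwise) - exact))  # small
print(abs(float(naive) - exact))     # much larger
```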

For your simulation that probably means that a single function call should never deviate much between numpy and numba, but of course divergence can build up over time. Especially if you are looking at “chaotic” systems with high Lyapunov exponents. But in those situations who is to say that either solution is more correct than the other?

What is definitely a good idea is to write some tests that ensure that the Python and Numba versions of your functions give approximately equal results (and by approximately I mean close to machine precision for a single call). If you are using the `njit` decorator, your functions have a `py_func` attribute with which you can call the uncompiled version of the function. (I don’t know off the top of my head what happens when those functions call other compiled functions; I guess they might still call the compiled versions of those.)
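Such a test could look roughly like this (a sketch, again with a no-op fallback so it runs without Numba installed):

```python
import numpy as np

try:
    from numba import njit
except ImportError:            # stand-in that mimics the py_func attribute
    def njit(f):
        f.py_func = f
        return f

@njit
def fun1(q, d):
    return 0.5 * d * q**(d - 1)

q = np.linspace(0.1, 1.0, 50)

# Compare the compiled result against the pure-Python original.
np.testing.assert_allclose(fun1(q, 3), fun1.py_func(q, 3))
print("compiled and pure-Python results agree")
```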