The Numba reference manual has a section on floating-point pitfalls that warns that Numba may produce slightly different results than the interpreter, particularly for divergent functions.
I am working with agent-based models, which exhibit emergent behaviour that can be sensitive to initial conditions. What exactly are the implications of this warning for my case? Should I be worried about using Numba?
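To make my worry concrete, here is a toy sketch of what I mean by "sensitive": the logistic map (a standard chaotic system, not my actual ABM), where a perturbation on the order of one ULP grows into a completely different trajectory after a few dozen iterations.

```python
# Toy example (logistic map, r = 4): a last-bit floating-point
# difference in the initial condition is amplified exponentially,
# so two "almost identical" runs end up uncorrelated.
x = 0.2            # baseline initial condition
y = 0.2 + 1e-15    # perturbed by roughly one unit in the last place
max_sep = 0.0
for _ in range(200):
    x = 4.0 * x * (1.0 - x)
    y = 4.0 * y * (1.0 - y)
    max_sep = max(max_sep, abs(x - y))

# After 200 steps the trajectories have separated to order one,
# even though they started out bit-for-bit nearly identical.
print(max_sep)
```

So if Numba's compiled code differs from NumPy's result even in the last bit, individual trajectories of a model like this will eventually diverge between the two implementations.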
As far as I know, a vanilla Python/NumPy implementation is also just one particular way to represent and compute with numbers, so it has its own floating-point issues. Numba is simply a slightly different implementation. In essence, both are doing the same kind of thing, and neither can be said to be more correct than the other, right?
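My understanding is that this comes down to things like the non-associativity of floating-point addition: two perfectly correct implementations that merely evaluate a sum in a different order (which an optimizing compiler such as Numba's may do, for instance under fastmath) can legitimately disagree in the last bits, and neither order is "the" right answer:

```python
# IEEE 754 double-precision addition is not associative:
# regrouping the same three terms gives two different results.
left = (0.1 + 0.2) + 0.3   # 0.6000000000000001
right = 0.1 + (0.2 + 0.3)  # 0.6

print(left == right)  # False
print(abs(left - right))  # a one-ULP-scale discrepancy
```

Both results are valid roundings of intermediate values; the discrepancy is inherent to finite-precision arithmetic, not a bug in either implementation.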