Blurhash-numba project feedback

Hi Numba community,

I have just released the first version of a blurhash implementation in Numba. You can find the project here -

It would be great if you could share some feedback (any Numba-jitsu is welcome) so that I can improve its performance even further.

Thanks & Regards,

That’s a cool project. I noticed one thing when I glanced through the code. When spelling an array type like nb.float64[:, :, :] (from here), it might miss some optimizations because:

>>> import numba as nb
>>> nb.float64[:, :, :]
array(float64, 3d, A)

which is an array type of unknown memory layout (“A” means any layout).

If you use the following instead:

>>> nb.float64[:, :, ::1]
array(float64, 3d, C)

The compiler knows that it is C-contiguous and can emit better loops.

To make it generic, I usually just let the type inference do the work to provide the most precise type info for each argument.


Thank you @sklam for this awesome feedback :slight_smile:
The whole reason I went with eager compilation instead of lazy compilation (letting the type inference do the work) is that I wanted to reduce the runtime of the first call. I still need to run some tests on a serverless framework (like GCP) to see whether eager vs. lazy compilation matters there. So I implemented it based on theory, as I am not aware of how these frameworks cache the compiled code.
Any views on my assumptions will be highly valued.
Thank you again for your feedback.


I’m not familiar with the behavior of GCP. But eager compilation only transfers the compilation cost to import time. If your script runs in a fresh process each time, the compilation overhead will be the same. You might want to explore caching the compilation result, e.g. @jit(cache=True).


Sure @sklam,

In my past experience, GCP takes a certain cold-start time to set up the environment before the API (function) is ready to use. The API also goes cold if it is not used for a certain number of minutes and gets restarted from time to time. Whether it sets up a new instance for every cold start I will have to investigate, as that would render caching ineffective. For now, I went with eager compilation to transfer the compilation cost to the cold-start time. I will investigate further and see if I can eliminate this cost altogether.

Thank you for your inputs.