What is the current state of Numba Arm support?

Hi everyone,

I’m going to be at Arm (virtually) for the next 3 months, and one of the projects that I’ll be having a go at is getting the clifford library running on some Arm processors and different bits of hardware, e.g. the Nvidia Jetson Nano. Clifford relies very heavily on Numba to make it fast, and I am presuming (I haven’t tried it out yet) that this is likely to cause a bit of a headache when trying to get things running. So I have a few general questions:

What is the current state of numba support for Arm processors?
Has anyone attempted to get numba working on the Jetson Nano/other hardware with Arm processors?
Given that the Jetson Nano has an Nvidia GPU, is it likely that numba.cuda will work?
Is there anything that Arm could do that would help Numba (or its dependencies) to run more efficiently on its hardware?
What other platforms might make a good testbed for Numba + Arm interoperability?

Hugo

Hi Hugo,

Some quick replies, expect others may respond too, but to get you started…

I’m going to be at Arm (virtually) for the next 3 months

Excellent!

and one of the projects that I’ll be having a go at is getting the clifford library running on some Arm processors and different bits of hardware, e.g. the Nvidia Jetson Nano. Clifford relies very heavily on Numba to make it fast and I am presuming (I haven’t tried it out yet) that this is likely to cause a bit of a headache when trying to get things running. So generally I have a few questions:
What is the current state of numba support for Arm processors?

Should work fine: the Numba build farm builds and tests on a Jetson TX2 and an RPi4, and I use an RPi3 locally to debug as needed. Conda packages are available here: https://anaconda.org/numba/numba/
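As a rough sketch of what installing from that channel and smoke-testing it might look like (the channel name is inferred from the anaconda.org URL above, and per-architecture package availability may vary, so check the channel page first):

```shell
# Install Numba from the "numba" channel (channel name taken from the
# anaconda.org URL above).
conda install -c numba numba

# Quick smoke test: report the machine architecture (e.g. aarch64 or
# armv7l) and the installed Numba version.
python -c "import numba, platform; print(platform.machine(), numba.__version__)"
```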

Docs on installing for ARMv7l: Installation — Numba 0.50.1 documentation

Docs on installing for AArch64: Installation — Numba 0.50.1 documentation

Has anyone attempted to get numba working on the Jetson Nano/other hardware with Arm processors?

Yes, as above. There’s also an ARM label on the issue tracker which may hint at what others have tried: Issues · numba/numba · GitHub

Given that the Jetson Nano has an Nvidia GPU, is it likely that numba.cuda will work?

Highly likely.

Is there anything that Arm could do that would help Numba (or its dependencies) to run more efficiently on its hardware?

Can’t think of anything immediately; most of Numba’s efficiency problems stem from areas higher up the stack than LLVM/hardware.

What other platforms might make a good testbed for Numba + Arm interoperability?

It’d be interesting to see how large-core-count Arm machines perform; that’s something we’ve not tested on.


stuart


Last time I tried (2 years ago), the CUDA support did work, with a few exceptions.

Note that things should be much easier to test now than back in 2018. We have conda packages now, and Numba should be able to detect the system installed CUDA libraries without setting weird environment variables. (Worst case, you might need to set $CUDA_HOME.)
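A sketch of the worst-case fallback mentioned above (the toolkit path is an example, not from the thread; use wherever the CUDA toolkit actually lives on your board):

```shell
# If Numba doesn't auto-detect the system CUDA libraries, point it at
# the toolkit explicitly (example path; adjust for your install).
export CUDA_HOME=/usr/local/cuda

# Print Numba's system report, which includes whether CUDA was found.
numba -s
```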


Fantastic, thank you @stuartarchibald and @sseibert for all the information! It’s really encouraging to hear that it’s likely to be a pretty painless process to get our library working on these boards. I’ll report back if I run into any nasty issues along the way!