RFC: Moving the CUDA target to a new package maintained by NVIDIA

Hi all,

Presently, Numba’s CUDA target is part of Numba itself, in the numba.cuda module. We are planning to move it into a separate package maintained by NVIDIA, and we would like to share the plans and ask for community feedback and suggestions.

What is happening?

Many things will remain the same:

  • No code changes should be necessary to use the separate package - existing code using numba.cuda will continue to work unmodified.
  • Using the built-in numba.cuda module will also continue to work, so existing (or new) environments that do not include the numba-cuda package will continue to work.
  • Development of the CUDA target will continue to be open source, though hosted in the NVIDIA GitHub organization rather than inside the main Numba repository.

From a user perspective, the changes will be:

  • The existing numba.cuda module will remain in Numba for a considerable length of time (possibly indefinitely), but it will cease to receive new feature development and will eventually be deprecated. Bug fixes to this “upstream” numba.cuda module will be considered on a case-by-case basis.
  • New feature development and bug fixing will proceed in the NVIDIA/numba-cuda repository / package.
  • To use the new NVIDIA package, it will need to be installed (e.g. with conda install numba-cuda or pip install numba-cuda) but no other changes would be needed.

Why are you doing this?

We (Numba maintainers at Anaconda and NVIDIA) would like to do this because we think it will speed up the development and release cycle of the CUDA target and reduce the burden on maintainers:

  • New releases of the CUDA target will be made independently of Numba’s release schedule - the release process for the CUDA target alone will be more lightweight than the Numba release process, so releases can be made more frequently.
  • PR reviews and testing will be conducted independently of the review process for Numba PRs - this will lessen the burden on Anaconda maintainers and the Anaconda buildfarm, and will speed up review and test turnaround, since the Anaconda buildfarm requires manual operation and manual feedback to PR authors.
  • In summary: this will enable the CUDA target to integrate new features more quickly and to make them generally available through releases sooner after they are merged.

How will this work?

The broad steps in the near term are:

  • Creation of the numba_cuda package:
    • A new numba_cuda package will be created, containing a copy of the CUDA target as of Numba 0.60.
    • The numba_cuda package will install a “redirector” that redirects any import of the numba.cuda module to the version contained within the numba_cuda package.
  • Users can start installing numba_cuda at this point to get the benefit of new features and bug fixes added to the numba_cuda package.
  • Numba point-release: A point release following the last major Numba release will be created, incorporating the following changes:
    • At import time, it will try to obtain numba.cuda from the numba_cuda package, so that the redirector is no longer necessary (see the sketch after this list).
    • It will report on the version of the CUDA target used (either the built-in one or the NVIDIA one) in the sysinfo tool, so that users can easily tell which is in use (and to help direct bug reports).
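
To make the interplay between the two packages concrete, here is a minimal sketch of the import-time preference described in the point-release step above (the redirector achieves a similar effect for earlier Numba versions). It is illustrative only - the module names and the actual logic in numba and numba-cuda may differ:

# Sketch only - not the actual numba / numba-cuda implementation.
# Prefer the standalone NVIDIA-maintained package if it is installed;
# otherwise fall back to the CUDA target built into Numba itself.
import importlib.util

if importlib.util.find_spec("numba_cuda") is not None:
    import numba_cuda as cuda_target       # out-of-tree package
else:
    from numba import cuda as cuda_target  # built-in numba.cuda

print("CUDA target provided by:", cuda_target.__name__)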

In addition to the repository and packages, there are other items to address as part of this migration:

  • Documentation: The CUDA target documentation sources will be moved to the numba-cuda repository, and rendered versions of these will be published / hosted by NVIDIA.
    • The CUDA Array Interface specification is also part of the Numba CUDA documentation. It will be part of numba-cuda’s documentation initially, but should eventually be migrated elsewhere and managed independently of Numba, as it is a standard independent of any particular tool or library.
  • Issues: There are ~175 open CUDA issues in Numba. The migration presents an excellent opportunity to triage and review them all, moving those that are still relevant over to numba-cuda and closing those that are resolved or no longer relevant.

What do I (the reader) need to do?

  • I maintain a package that uses numba.cuda: Nothing is required - your code should continue working as it did before, whether your users use the built-in CUDA target or install the numba-cuda version (see the check after this list for how to tell which is active).
    • Caveat: if you want to make use of new features or rely on bug fixes that go into numba-cuda, you will need to use it and have it as a dependency. This should not adversely affect your users, as numba-cuda should be fully compatible with all existing code.
  • I use numba.cuda in my application: As with package maintainers, you do not need to change your code. If you want to make use of features added to numba-cuda, you will need to add it as a dependency, again with no issues or incompatibilities expected.
  • I contribute to Numba and/or the CUDA target:
    • I’ll reach out to anyone who has an open PR relevant to CUDA about whether to migrate the PR to the numba-cuda repository in due course.
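
To confirm which CUDA target implementation an environment is actually using, the planned sysinfo reporting (numba -s) will show it; from Python, a quick check (a sketch - the exact path will depend on your installation) is to look at where the module was loaded from:

# Sketch: the module's file path indicates whether the built-in target or the
# numba-cuda package provided numba.cuda in this environment.
import numba.cuda
print(numba.cuda.__file__)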

In the short term:

  • Feedback on the plan and questions from all users will be appreciated - please do post thoughts in the thread in response to this post!
    • If you have questions, please ask them - I’ve tried to cover the essentials in this post without making it overly long, so some details may be overlooked.
  • Please try the numba-cuda package in your environment and report any issues you encounter installing or using it. The package can be installed with:
    • Conda: conda install nvidia::numba-cuda
    • Pip: pip install numba-cuda

Links / references:


Thanks for notifying us, @gmarkall!

  • To use the new NVIDIA package, it will need to be installed (e.g. with conda install numba-cuda or pip install numba-cuda) but no other changes would be needed.

So, I understand that things are moving over to the NVIDIA/numba-cuda package, but will the numba/numba package be adding the NVIDIA/numba-cuda package as a dependency? In other words, I won’t need to explicitly add the NVIDIA/numba-cuda package to stumpy’s requirements.txt/environment.yml? Currently, stumpy only depends on numpy, scipy, and numba.

Initially I think numba won’t add numba-cuda as a dependency, but we would probably do that in the future. Note that even with only numba installed, all existing code will continue to work, because the numba.cuda package is not being removed from numba; it is only overridden by numba-cuda when numba-cuda is installed.

You won’t need to change anything right away. However, if there are changes in numba-cuda in the future that you’d like to make use of in stumpy, then you would need to add numba-cuda as a dependency yourself, if numba has not added it as a dependency by that point.
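
For illustration only (assuming you do decide to depend on the new package directly), the requirements.txt entries would look something like:

numba
numba-cuda   # only needed for features / fixes that land in the new package first

and the equivalent environment.yml dependency would be nvidia::numba-cuda until a conda-forge package is available.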

numba-cuda packages are now available - please try them out in your environment and report any issues on the numba-cuda issue tracker or post back here for further discussion.

Install with conda:

conda install nvidia::numba-cuda

Install with pip:

pip install numba-cuda

A conda-forge recipe / package is still forthcoming.

CC @RC_Testers - if you use CUDA, please do try these packages out.
