Hi all,
Presently, Numba’s CUDA target is part of Numba itself, in the `numba.cuda` module. However, we’re planning to move it into a separate package maintained by NVIDIA, and we would like to share the plans and ask for community feedback and suggestions.
What is happening?
Many things will remain the same:
- No code changes should be necessary to use the separate package - existing code using `numba.cuda` will continue to work unmodified (see the short example after this list).
- Using the built-in `numba.cuda` module will also continue to work, so existing (or new) environments that do not include the `numba-cuda` package will continue to work.
- Development of the CUDA target will continue to be open source, though hosted in the NVIDIA GitHub organization rather than inside the main Numba repository.
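As a concrete illustration of the first point, a small kernel like the one below is ordinary `numba.cuda` usage, and it should run identically whether the CUDA target comes from Numba itself or from the `numba-cuda` package (the kernel and names here are purely illustrative):

```python
import numpy as np
from numba import cuda


@cuda.jit
def add_one(x):
    # Each thread increments one element of the array.
    i = cuda.grid(1)
    if i < x.size:
        x[i] += 1


arr = np.zeros(1024, dtype=np.float32)
d_arr = cuda.to_device(arr)

threads_per_block = 128
blocks = (arr.size + threads_per_block - 1) // threads_per_block
add_one[blocks, threads_per_block](d_arr)

print(d_arr.copy_to_host()[:4])  # [1. 1. 1. 1.]
```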
From a user perspective, the changes will be:
- The existing `numba.cuda` module will remain in Numba for a considerable length of time (possibly indefinitely), but it will cease to receive new feature development and will eventually be deprecated. Bug fixes to this “upstream” `numba.cuda` module will be considered on a case-by-case basis.
- New feature development and bug fixing will proceed in the NVIDIA/numba-cuda repository / package.
- To use the new NVIDIA package, it will need to be installed (e.g. with `conda install numba-cuda` or `pip install numba-cuda`), but no other changes would be needed.
Why are you doing this?
We (Numba maintainers at Anaconda and NVIDIA) would like to do this because we think it will speed up the development and release cycle of the CUDA target and reduce the burden on the maintainers:
- New releases of the CUDA target will be made independently of Numba’s release schedule - the release process for the CUDA target alone will be more lightweight than the Numba release process, so releases will be made on a frequent basis.
- PR reviews and testing will be conducted independently of the review process for Numba PRs - this will lessen the burden on Anaconda maintainers and the Anaconda buildfarm, and will speed up the review / test time, as the Anaconda buildfarm requires manual operation and feedback to PR authors.
- In summary: this will enable the CUDA target to integrate new features more quickly, and to make them generally available through releases sooner after they are merged.
How will this work?
The broad steps in the near term are:
- Creation of the `numba_cuda` package:
  - A new `numba_cuda` package will be created, containing a copy of the CUDA target as of Numba 0.60.
  - The `numba_cuda` package will install a “redirector” that redirects any import of the `numba.cuda` module to the version contained within the `numba_cuda` package (a generic sketch of this kind of import redirection is given after this list).
  - Users can start installing `numba_cuda` at this point to get the benefit of new features and bug fixes added to the `numba_cuda` package.
- Numba point release: a point release following the last major Numba release will be created, incorporating the following changes:
  - At import time, it will try to obtain `numba.cuda` from the `numba_cuda` package, so that the redirector is no longer necessary.
  - It will report the version of the CUDA target in use (either the built-in one or the NVIDIA one) in the sysinfo tool, so that users can easily tell which is in use (and to help direct bug reports).
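To make the “redirector” step a little more concrete, here is a rough, generic sketch of how an import can be rerouted in Python with a meta path finder. This is not the actual `_numba_cuda_redirector.py` (linked in the references at the end of this post), and the target package layout used here is invented purely for illustration:

```python
import importlib.abc
import importlib.util
import sys


class ModuleRedirector(importlib.abc.MetaPathFinder):
    """Reroute imports of `source` (and its submodules) to `target`."""

    def __init__(self, source, target):
        self.source = source  # the name users import, e.g. "numba.cuda"
        self.target = target  # hypothetical package holding the replacement code

    def find_spec(self, fullname, path=None, target=None):
        if fullname != self.source and not fullname.startswith(self.source + "."):
            return None
        redirected = self.target + fullname[len(self.source):]
        spec = importlib.util.find_spec(redirected)
        if spec is None or spec.origin is None:
            return None
        # Load the replacement module's file, but register it under the name
        # that was actually requested, so existing imports work unchanged.
        return importlib.util.spec_from_file_location(
            fullname,
            spec.origin,
            submodule_search_locations=spec.submodule_search_locations,
        )


# Hypothetical usage - the real package arranges for its redirector to run
# before numba.cuda is first imported:
# sys.meta_path.insert(0, ModuleRedirector("numba.cuda", "my_cuda_target"))
```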
In addition to the repository and packages, there are other items to address as part of this migration:
- Documentation: The CUDA target documentation sources will be moved to the `numba-cuda` repository, and rendered versions of these will be published / hosted by NVIDIA.
  - The CUDA Array Interface specification is also part of the Numba CUDA documentation. It will be part of `numba-cuda`’s documentation initially, but it should eventually be migrated elsewhere and managed independently of Numba, as it is a standard independent of any particular tool or library (a minimal illustration of the interface follows this list).
- Issues: There are ~175 open CUDA issues in Numba. The migration presents an excellent opportunity to triage and review them all, to move those that are still relevant over to `numba-cuda`, and to close those that are now resolved / no longer relevant.
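For readers unfamiliar with it, the CUDA Array Interface is a small protocol: objects that own GPU memory expose a `__cuda_array_interface__` attribute describing that memory, so that libraries can share device buffers without copies. A minimal illustration using a Numba device array (the printed dict is approximate and will vary by setup):

```python
import numpy as np
from numba import cuda

# Numba device arrays expose the CUDA Array Interface as a plain dict.
d_arr = cuda.to_device(np.arange(8, dtype=np.float32))
print(d_arr.__cuda_array_interface__)
# Roughly: {'shape': (8,), 'strides': None, 'typestr': '<f4',
#           'data': (<device pointer>, False), 'version': ...}
```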
What do I (the reader) need to do?
- I maintain a package that uses `numba.cuda`: Nothing is required - your code should continue working as it did before, whether your users use the built-in CUDA target or install the `numba-cuda` version.
  - Caveat: if you want to make use of new features or rely on bug fixes that go into `numba-cuda`, you will need to use it and declare it as a dependency. This should not adversely affect your users, as `numba-cuda` should be fully compatible with all existing code.
- I use `numba.cuda` in my application: As for package maintainers, you do not need to change your code. If you want to make use of features added to `numba-cuda`, you will need to add it as a dependency, again without the expectation of any issues / incompatibilities being created.
- I contribute to Numba and/or the CUDA target: I’ll reach out to anyone who has an open PR relevant to CUDA about whether to migrate the PR to the numba-cuda repository in due course.
In the short term:
- Feedback on the plan and questions from all users will be appreciated - please do post your thoughts in response to this post!
- If you have questions, please ask them - I’ve tried to cover the essentials in this post without making it overly long, so some details may be overlooked.
- Please try the `numba-cuda` package in your environment and report any issues with the package or installation troubles that arise (a quick way to confirm which implementation your environment picked up is sketched after this list). The package can be installed with:
  - Conda: `conda install nvidia::numba-cuda`
  - Pip: `pip install numba-cuda`
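When trying the package, one informal way to confirm which implementation was actually imported is to check where `numba.cuda` was loaded from; this relies on the redirection behaviour described above rather than any official API:

```python
from numba import cuda

# If the numba-cuda redirector is active, this path should point into the
# numba_cuda package; otherwise it points at Numba's built-in copy.
print(cuda.__file__)
```

Once the point release described above lands, the sysinfo tool (`numba -s`) should also report this directly.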
Links / references:
- Repository: The `numba-cuda` repository in the NVIDIA GitHub organization: https://github.com/NVIDIA/numba-cuda
  - This will be updated as issues are discovered / resolved, with the aim of first perfectly reproducing the functionality of the built-in CUDA target.
  - Once the functionality is stable, new features and bug fixes will be added here.
- Redirector: The redirector is implemented in `_numba_cuda_redirector.py`.
- Packages:
  - Conda: `conda install nvidia::numba-cuda` / Numba Cuda | Anaconda.org
  - Pip: `pip install numba-cuda` / numba-cuda · PyPI
- Documentation: https://nvidia.github.io/numba-cuda/ - note this is very similar to the existing documentation for the CUDA target.