Ubuntu 22.04 fresh install with conda/numba does not see GPU

My apologies for rehashing what may be a well-worn topic: not being able to run Numba on Ubuntu.
It is a fresh install of Ubuntu with a fresh install of conda/numba. I also have NVIDIA CUDA installed and can successfully build and run various examples. It is very likely my situation is due to my inexperience, since this is the first time I am doing this.
The output of numba -s:
Time Stamp
Report started (local time) : 2023-02-28 16:16:55.961018
UTC start time : 2023-03-01 00:16:55.961021
Running time (s) : 2.635238

Hardware Information
Machine : x86_64
CPU Name : skylake
CPU Count : 16
Number of accessible CPUs : 16
List of accessible CPUs cores : 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
CFS Restrictions (CPUs worth of runtime) : None

CPU Features : 64bit adx aes avx avx2 bmi bmi2
clflushopt cmov cx16 cx8 f16c fma
fsgsbase fxsr invpcid lzcnt mmx
movbe pclmul popcnt prfchw rdrnd
rdseed sahf sgx sse sse2 sse3
sse4.1 sse4.2 ssse3 xsave xsavec
xsaveopt xsaves

Memory Total (MB) : 64238
Memory Available (MB) : 56686

OS Information
Platform Name : Linux-5.19.0-32-generic-x86_64-with-glibc2.35
Platform Release : 5.19.0-32-generic
OS Name : Linux
OS Version : #33~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Mon Jan 30 17:03:34 UTC 2
OS Specific Version : ?
Libc Version : glibc 2.35

Python Information
Python Compiler : GCC 11.2.0
Python Implementation : CPython
Python Version : 3.9.13
Python Locale : en_CA.UTF-8

Numba Toolchain Versions
Numba Version : 0.55.1
llvmlite Version : 0.38.0

LLVM Information
LLVM Version : 11.1.0

CUDA Information
CUDA Device Initialized : False
CUDA Driver Version : ?
CUDA Runtime Version : ?
CUDA NVIDIA Bindings Available : ?
CUDA NVIDIA Bindings In Use : ?
CUDA Detect Output:
None
CUDA Libraries Test Output:
None

SVML Information
SVML State, config.USING_SVML : False
SVML Library Loaded : False
llvmlite Using SVML Patched LLVM : True
SVML Operational : False

Threading Layer Information
TBB Threading Layer Available : True
+-->TBB imported successfully.
OpenMP Threading Layer Available : True
+-->Vendor: GNU
Workqueue Threading Layer Available : True
+-->Workqueue imported successfully.

Numba Environment Variable Information
NUMBA_CUDA_USE_NVIDIA_BINDINGS : 1

Conda Information
Conda Build : 3.22.0
Conda Env : 23.1.0
Conda Platform : linux-64
Conda Python Version : 3.9.13.final.0
Conda Root Writable : True

Installed Packages
_ipyw_jlab_nb_ext_conf 0.1.0 py39h06a4308_1
_libgcc_mutex 0.1 main
_openmp_mutex 5.1 1_gnu
alabaster 0.7.12 pyhd3eb1b0_0
anaconda 2022.10 py39_0
anaconda-client 1.11.0 py39h06a4308_0
anaconda-navigator 2.4.0 py39h06a4308_0
anaconda-project 0.11.1 py39h06a4308_0
anyio 3.5.0 py39h06a4308_0
appdirs 1.4.4 pyhd3eb1b0_0
argon2-cffi 21.3.0 pyhd3eb1b0_0
argon2-cffi-bindings 21.2.0 py39h7f8727e_0
arrow 1.2.2 pyhd3eb1b0_0
astroid 2.11.7 py39h06a4308_0
astropy 5.1 py39h7deecbd_0
atomicwrites 1.4.0 py_0
attrs 21.4.0 pyhd3eb1b0_0
automat 20.2.0 py_0
autopep8 1.6.0 pyhd3eb1b0_1
babel 2.9.1 pyhd3eb1b0_0
backcall 0.2.0 pyhd3eb1b0_0
backports 1.1 pyhd3eb1b0_0
backports.functools_lru_cache 1.6.4 pyhd3eb1b0_0
backports.tempfile 1.0 pyhd3eb1b0_1
backports.weakref 1.0.post1 py_1
bcrypt 3.2.0 py39h5eee18b_1
beautifulsoup4 4.11.1 py39h06a4308_0
binaryornot 0.4.4 pyhd3eb1b0_1
bitarray 2.5.1 py39h5eee18b_0
bkcharts 0.2 py39h06a4308_1
black 22.6.0 py39h06a4308_0
blas 1.0 mkl
bleach 4.1.0 pyhd3eb1b0_0
blosc 1.21.0 h4ff587b_1
bokeh 2.4.3 py39h06a4308_0
boto3 1.24.28 py39h06a4308_0
botocore 1.27.28 py39h06a4308_0
bottleneck 1.3.5 py39h7deecbd_0
brotli 1.0.9 h5eee18b_7
brotli-bin 1.0.9 h5eee18b_7
brotlipy 0.7.0 py39h27cfd23_1003
brunsli 0.1 h2531618_0
bzip2 1.0.8 h7b6447c_0
c-ares 1.18.1 h7f8727e_0
ca-certificates 2022.07.19 h06a4308_0
certifi 2022.9.14 py39h06a4308_0
cffi 1.15.1 py39h74dc2b5_0
cfitsio 3.470 h5893167_7
chardet 4.0.0 py39h06a4308_1003
charls 2.2.0 h2531618_0
charset-normalizer 2.0.4 pyhd3eb1b0_0
click 8.0.4 py39h06a4308_0
cloudpickle 2.0.0 pyhd3eb1b0_0
clyent 1.2.2 py39h06a4308_1
colorama 0.4.5 py39h06a4308_0
colorcet 3.0.0 py39h06a4308_0
conda 23.1.0 py39h06a4308_0
conda-build 3.22.0 py39h06a4308_0
conda-content-trust 0.1.3 py39h06a4308_0
conda-env 2.6.0 1
conda-pack 0.6.0 pyhd3eb1b0_0
conda-package-handling 1.9.0 py39h5eee18b_0
conda-repo-cli 1.0.20 py39h06a4308_0
conda-token 0.4.0 pyhd3eb1b0_0
conda-verify 3.4.2 py_1
constantly 15.1.0 pyh2b92418_0
cookiecutter 1.7.3 pyhd3eb1b0_0
cryptography 37.0.1 py39h9ce1e76_0
cssselect 1.1.0 pyhd3eb1b0_0
cudatoolkit 11.3.1 h2bc3f7f_2
curl 7.84.0 h5eee18b_0
cycler 0.11.0 pyhd3eb1b0_0
cython 0.29.32 py39h6a678d5_0
cytoolz 0.11.0 py39h27cfd23_0
daal4py 2021.6.0 py39h79cecc1_1
dal 2021.6.0 hdb19cb5_916
dask 2022.7.0 py39h06a4308_0
dask-core 2022.7.0 py39h06a4308_0
dataclasses 0.8 pyh6d0b6a4_7
datashader 0.14.1 py39h06a4308_0
datashape 0.5.4 py39h06a4308_1
dbus 1.13.18 hb2f20db_0
debugpy 1.5.1 py39h295c915_0
decorator 5.1.1 pyhd3eb1b0_0
defusedxml 0.7.1 pyhd3eb1b0_0
diff-match-patch 20200713 pyhd3eb1b0_0
dill 0.3.4 pyhd3eb1b0_0
distributed 2022.7.0 py39h06a4308_0
docutils 0.18.1 py39h06a4308_3
entrypoints 0.4 py39h06a4308_0
et_xmlfile 1.1.0 py39h06a4308_0
expat 2.4.9 h6a678d5_0
fftw 3.3.9 h27cfd23_1
filelock 3.6.0 pyhd3eb1b0_0
flake8 4.0.1 pyhd3eb1b0_1
flask 1.1.2 pyhd3eb1b0_0
fontconfig 2.13.1 h6c09931_0
fonttools 4.25.0 pyhd3eb1b0_0
freetype 2.11.0 h70c0345_0
fsspec 2022.7.1 py39h06a4308_0
future 0.18.2 py39h06a4308_1
gensim 4.1.2 py39h295c915_0
giflib 5.2.1 h7b6447c_0
glib 2.69.1 h4ff587b_1
glob2 0.7 pyhd3eb1b0_0
gmp 6.2.1 h295c915_3
gmpy2 2.1.2 py39heeb90bb_0
greenlet 1.1.1 py39h295c915_0
gst-plugins-base 1.14.0 h8213a91_2
gstreamer 1.14.0 h28cd5cc_2
h5py 3.7.0 py39h737f45e_0
hdf5 1.10.6 h3ffc7dd_1
heapdict 1.0.1 pyhd3eb1b0_0
holoviews 1.15.0 py39h06a4308_0
hvplot 0.8.0 py39h06a4308_0
hyperlink 21.0.0 pyhd3eb1b0_0
icu 58.2 he6710b0_3
idna 3.3 pyhd3eb1b0_0
imagecodecs 2021.8.26 py39hf0132c2_1
imageio 2.19.3 py39h06a4308_0
imagesize 1.4.1 py39h06a4308_0
importlib-metadata 4.11.3 py39h06a4308_0
importlib_metadata 4.11.3 hd3eb1b0_0
incremental 21.3.0 pyhd3eb1b0_0
inflection 0.5.1 py39h06a4308_0
iniconfig 1.1.1 pyhd3eb1b0_0
intake 0.6.5 pyhd3eb1b0_0
intel-openmp 2021.4.0 h06a4308_3561
intervaltree 3.1.0 pyhd3eb1b0_0
ipykernel 6.15.2 py39h06a4308_0
ipython 7.31.1 py39h06a4308_1
ipython_genutils 0.2.0 pyhd3eb1b0_1
ipywidgets 7.6.5 pyhd3eb1b0_1
isort 5.9.3 pyhd3eb1b0_0
itemadapter 0.3.0 pyhd3eb1b0_0
itemloaders 1.0.4 pyhd3eb1b0_1
itsdangerous 2.0.1 pyhd3eb1b0_0
jdcal 1.4.1 pyhd3eb1b0_0
jedi 0.18.1 py39h06a4308_1
jeepney 0.7.1 pyhd3eb1b0_0
jellyfish 0.9.0 py39h7f8727e_0
jinja2 2.11.3 pyhd3eb1b0_0
jinja2-time 0.2.0 pyhd3eb1b0_3
jmespath 0.10.0 pyhd3eb1b0_0
joblib 1.1.0 pyhd3eb1b0_0
jpeg 9e h7f8727e_0
jq 1.6 h27cfd23_1000
json5 0.9.6 pyhd3eb1b0_0
jsonschema 4.16.0 py39h06a4308_0
jupyter 1.0.0 py39h06a4308_8
jupyter_client 7.3.4 py39h06a4308_0
jupyter_console 6.4.3 pyhd3eb1b0_0
jupyter_core 4.11.1 py39h06a4308_0
jupyter_server 1.18.1 py39h06a4308_0
jupyterlab 3.4.4 py39h06a4308_0
jupyterlab_pygments 0.1.2 py_0
jupyterlab_server 2.10.3 pyhd3eb1b0_1
jupyterlab_widgets 1.0.0 pyhd3eb1b0_1
jxrlib 1.1 h7b6447c_2
keyring 23.4.0 py39h06a4308_0
kiwisolver 1.4.2 py39h295c915_0
krb5 1.19.2 hac12032_0
lazy-object-proxy 1.6.0 py39h27cfd23_0
lcms2 2.12 h3be6417_0
ld_impl_linux-64 2.38 h1181459_1
lerc 3.0 h295c915_0
libaec 1.0.4 he6710b0_1
libarchive 3.6.1 hab531cd_0
libbrotlicommon 1.0.9 h5eee18b_7
libbrotlidec 1.0.9 h5eee18b_7
libbrotlienc 1.0.9 h5eee18b_7
libclang 10.0.1 default_hb85057a_2
libcurl 7.84.0 h91b91d3_0
libdeflate 1.8 h7f8727e_5
libedit 3.1.20210910 h7f8727e_0
libev 4.33 h7f8727e_1
libevent 2.1.12 h8f2d780_0
libffi 3.3 he6710b0_2
libgcc-ng 11.2.0 h1234567_1
libgfortran-ng 11.2.0 h00389a5_1
libgfortran5 11.2.0 h1234567_1
libgomp 11.2.0 h1234567_1
libidn2 2.3.2 h7f8727e_0
liblief 0.11.5 h295c915_1
libllvm10 10.0.1 hbcb73fb_5
libllvm11 11.1.0 h9e868ea_5
libnghttp2 1.46.0 hce63b2e_0
libpng 1.6.37 hbc83047_0
libpq 12.9 h16c4e8d_3
libsodium 1.0.18 h7b6447c_0
libspatialindex 1.9.3 h2531618_0
libssh2 1.10.0 h8f2d780_0
libstdcxx-ng 11.2.0 h1234567_1
libtiff 4.4.0 hecacb30_0
libunistring 0.9.10 h27cfd23_0
libuuid 1.0.3 h7f8727e_2
libwebp 1.2.2 h55f646e_0
libwebp-base 1.2.2 h7f8727e_0
libxcb 1.15 h7f8727e_0
libxkbcommon 1.0.1 hfa300c1_0
libxml2 2.9.14 h74e7548_0
libxslt 1.1.35 h4e12654_0
libzopfli 1.0.3 he6710b0_0
llvmlite 0.38.0 py39h4ff587b_0
locket 1.0.0 py39h06a4308_0
lxml 4.9.1 py39h1edc446_0
lz4 3.1.3 py39h27cfd23_0
lz4-c 1.9.3 h295c915_1
lzo 2.10 h7b6447c_2
markdown 3.3.4 py39h06a4308_0
markupsafe 2.0.1 py39h27cfd23_0
matplotlib 3.5.2 py39h06a4308_0
matplotlib-base 3.5.2 py39hf590b9c_0
matplotlib-inline 0.1.6 py39h06a4308_0
mccabe 0.7.0 pyhd3eb1b0_0
mistune 0.8.4 py39h27cfd23_1000
mkl 2021.4.0 h06a4308_640
mkl-service 2.4.0 py39h7f8727e_0
mkl_fft 1.3.1 py39hd3c417c_0
mkl_random 1.2.2 py39h51133e4_0
mock 4.0.3 pyhd3eb1b0_0
mpc 1.1.0 h10f8cd9_1
mpfr 4.0.2 hb69a4c5_1
mpi 1.0 mpich
mpich 3.3.2 external_0
mpmath 1.2.1 py39h06a4308_0
msgpack-python 1.0.3 py39hd09550d_0
multipledispatch 0.6.0 py39h06a4308_0
munkres 1.1.4 py_0
mypy_extensions 0.4.3 py39h06a4308_1
navigator-updater 0.3.0 py39h06a4308_0
nbclassic 0.3.5 pyhd3eb1b0_0
nbclient 0.5.13 py39h06a4308_0
nbconvert 6.4.4 py39h06a4308_0
nbformat 5.5.0 py39h06a4308_0
ncurses 6.3 h5eee18b_3
nest-asyncio 1.5.5 py39h06a4308_0
networkx 2.8.4 py39h06a4308_0
nltk 3.7 pyhd3eb1b0_0
nose 1.3.7 pyhd3eb1b0_1008
notebook 6.4.12 py39h06a4308_0
nspr 4.33 h295c915_0
nss 3.74 h0370c37_0
numba 0.55.1 py39h51133e4_0
numexpr 2.8.3 py39h807cd23_0
numpy 1.21.5 py39h6c91a56_3
numpy-base 1.21.5 py39ha15fc14_3
numpydoc 1.4.0 py39h06a4308_0
olefile 0.46 pyhd3eb1b0_0
oniguruma 6.9.7.1 h27cfd23_0
openjpeg 2.4.0 h3ad879b_0
openpyxl 3.0.10 py39h5eee18b_0
openssl 1.1.1q h7f8727e_0
packaging 21.3 pyhd3eb1b0_0
pandas 1.4.4 py39h6a678d5_0
pandocfilters 1.5.0 pyhd3eb1b0_0
panel 0.13.1 py39h06a4308_0
param 1.12.0 pyhd3eb1b0_0
parsel 1.6.0 py39h06a4308_0
parso 0.8.3 pyhd3eb1b0_0
partd 1.2.0 pyhd3eb1b0_1
patch 2.7.6 h7b6447c_1001
patchelf 0.13 h295c915_0
pathlib 1.0.1 pyhd3eb1b0_1
pathspec 0.9.0 py39h06a4308_0
patsy 0.5.2 py39h06a4308_1
pcre 8.45 h295c915_0
pep8 1.7.1 py39h06a4308_1
pexpect 4.8.0 pyhd3eb1b0_3
pickleshare 0.7.5 pyhd3eb1b0_1003
pillow 9.2.0 py39hace64e9_1
pip 22.2.2 py39h06a4308_0
pkginfo 1.8.2 pyhd3eb1b0_0
platformdirs 2.5.2 py39h06a4308_0
plotly 5.9.0 py39h06a4308_0
pluggy 1.0.0 py39h06a4308_1
ply 3.11 py39h06a4308_0
poyo 0.5.0 pyhd3eb1b0_0
prometheus_client 0.14.1 py39h06a4308_0
prompt-toolkit 3.0.20 pyhd3eb1b0_0
prompt_toolkit 3.0.20 hd3eb1b0_0
protego 0.1.16 py_0
psutil 5.9.0 py39h5eee18b_0
ptyprocess 0.7.0 pyhd3eb1b0_2
py 1.11.0 pyhd3eb1b0_0
py-lief 0.11.5 py39h295c915_1
pyasn1 0.4.8 pyhd3eb1b0_0
pyasn1-modules 0.2.8 py_0
pycodestyle 2.8.0 pyhd3eb1b0_0
pycosat 0.6.3 py39h27cfd23_0
pycparser 2.21 pyhd3eb1b0_0
pyct 0.4.8 py39h06a4308_1
pycurl 7.45.1 py39h8f2d780_0
pydispatcher 2.0.5 py39h06a4308_2
pydocstyle 6.1.1 pyhd3eb1b0_0
pyerfa 2.0.0 py39h27cfd23_0
pyflakes 2.4.0 pyhd3eb1b0_0
pygments 2.11.2 pyhd3eb1b0_0
pyhamcrest 2.0.2 pyhd3eb1b0_2
pyjwt 2.4.0 py39h06a4308_0
pylint 2.14.5 py39h06a4308_0
pyls-spyder 0.4.0 pyhd3eb1b0_0
pyodbc 4.0.34 py39h6a678d5_0
pyopenssl 22.0.0 pyhd3eb1b0_0
pyparsing 3.0.9 py39h06a4308_0
pyqt 5.15.7 py39h6a678d5_1
pyqt5-sip 12.11.0 py39h6a678d5_1
pyqtwebengine 5.15.7 py39h6a678d5_1
pyrsistent 0.18.0 py39heee7806_0
pysocks 1.7.1 py39h06a4308_0
pytables 3.6.1 py39h77479fe_1
pytest 7.1.2 py39h06a4308_0
python 3.9.13 haa1d7c7_1
python-dateutil 2.8.2 pyhd3eb1b0_0
python-fastjsonschema 2.16.2 py39h06a4308_0
python-libarchive-c 2.9 pyhd3eb1b0_1
python-lsp-black 1.2.1 py39h06a4308_0
python-lsp-jsonrpc 1.0.0 pyhd3eb1b0_0
python-lsp-server 1.5.0 py39h06a4308_0
python-slugify 5.0.2 pyhd3eb1b0_0
python-snappy 0.6.0 py39h2531618_3
pytz 2022.1 py39h06a4308_0
pyviz_comms 2.0.2 pyhd3eb1b0_0
pywavelets 1.3.0 py39h7f8727e_0
pyxdg 0.27 pyhd3eb1b0_0
pyyaml 6.0 py39h7f8727e_1

zope.interface 5.4.0 py39h7f8727e_0
zstd 1.5.2 ha4553b6_0

No errors reported.

Warning log
Warning (cuda): CUDA device intialisation problem. Message:Error at driver init: Call to cuInit results in CUDA_ERROR_NO_DEVICE (100)
Exception class: <class 'numba.cuda.cudadrv.error.CudaSupportError'>
Warning (no file): /sys/fs/cgroup/cpuacct/cpu.cfs_quota_us
Warning (no file): /sys/fs/cgroup/cpuacct/cpu.cfs_period_us


So neither the GPU nor the driver is identified.


And from nvidia-smi:

Tue Feb 28 16:04:57 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.78.01 Driver Version: 525.78.01 CUDA Version: 12.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce … Off | 00000000:01:00.0 On | N/A |
| 0% 43C P0 72W / 290W | 980MiB / 8192MiB | 2% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 2200 G /usr/lib/xorg/Xorg 450MiB |
| 0 N/A N/A 2661 G …ome-remote-desktop-daemon 3MiB |
| 0 N/A N/A 2702 G /usr/bin/gnome-shell 74MiB |
| 0 N/A N/A 3831 G …918332966337200593,131072 267MiB |
| 0 N/A N/A 5659 G …/usr/bin/telegram-desktop 3MiB |
| 0 N/A N/A 27825 G …veSuggestionsOnlyOnDemand 42MiB |
| 0 N/A N/A 41855 G …b/thunderbird/thunderbird 135MiB |
+-----------------------------------------------------------------------------+

A simple Python program:

from numba import cuda

# Press the green button in the gutter to run the script.
if __name__ == '__main__':
    print(cuda.gpus)

results in the error:
numba.cuda.cudadrv.error.CudaSupportError: Error at driver init: Call to cuInit results in CUDA_ERROR_NO_DEVICE (100)
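As a side note, numba.cuda.is_available() reports the same condition as a boolean instead of raising, which can be handier for a quick check (a minimal sketch):

# Minimal sketch: is_available() returns False rather than raising
# CudaSupportError when the driver cannot be initialised.
from numba import cuda

print("CUDA available:", cuda.is_available())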

Can anybody offer any suggestions on where to look for an issue or perhaps resolve this problem?

You have the environment variable NUMBA_CUDA_USE_NVIDIA_BINDINGS set but I can’t see evidence that the NVIDIA cuda-python package is installed. Can you unset that environment variable and test again please?
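A quick way to confirm both of those points from Python (a minimal sketch; the NVIDIA bindings are provided by the cuda-python package, which installs a top-level cuda module):

# Minimal sketch: report whether NUMBA_CUDA_USE_NVIDIA_BINDINGS is set and
# whether the NVIDIA cuda-python bindings are importable in this environment.
import importlib.util
import os

print("NUMBA_CUDA_USE_NVIDIA_BINDINGS =",
      os.environ.get("NUMBA_CUDA_USE_NVIDIA_BINDINGS", "<unset>"))
print("cuda-python importable:", importlib.util.find_spec("cuda") is not None)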

Thank you very much … I am in the process of another clean install. I successfully managed to mess up the system while changing NVIDIA drivers.
I will let you know how the behaviour changes once everything is installed.

Update: fresh install. I made sure NUMBA_CUDA_USE_NVIDIA_BINDINGS is unset

Python 3.9.13 (main, Aug 25 2022, 23:26:10)
[GCC 11.2.0] :: Anaconda, Inc. on linux

numba -s
Time Stamp
Report started (local time) : 2023-03-02 08:30:18.438154
UTC start time : 2023-03-02 16:30:18.438157
Running time (s) : 2.492168

Hardware Information
Machine : x86_64
CPU Name : skylake
CPU Count : 16
Number of accessible CPUs : 16
List of accessible CPUs cores : 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
CFS Restrictions (CPUs worth of runtime) : None

CPU Features : 64bit adx aes avx avx2 bmi bmi2
clflushopt cmov cx16 cx8 f16c fma
fsgsbase fxsr invpcid lzcnt mmx
movbe pclmul popcnt prfchw rdrnd
rdseed rtm sahf sgx sse sse2 sse3
sse4.1 sse4.2 ssse3 xsave xsavec
xsaveopt xsaves

Memory Total (MB) : 64238
Memory Available (MB) : 59022

OS Information
Platform Name : Linux-5.19.0-35-generic-x86_64-with-glibc2.35
Platform Release : 5.19.0-35-generic
OS Name : Linux
OS Version : #36~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Fri Feb 17 15:17:25 UTC 2
OS Specific Version : ?
Libc Version : glibc 2.35

Python Information
Python Compiler : GCC 11.2.0
Python Implementation : CPython
Python Version : 3.9.13
Python Locale : en_CA.UTF-8

Numba Toolchain Versions
Numba Version : 0.55.1
llvmlite Version : 0.38.0

LLVM Information
LLVM Version : 11.1.0

CUDA Information
CUDA Device Initialized : False
CUDA Driver Version : ?
CUDA Runtime Version : ?
CUDA NVIDIA Bindings Available : ?
CUDA NVIDIA Bindings In Use : ?
CUDA Detect Output:
None
CUDA Libraries Test Output:
None

SVML Information
SVML State, config.USING_SVML : False
SVML Library Loaded : False
llvmlite Using SVML Patched LLVM : True
SVML Operational : False

Threading Layer Information
TBB Threading Layer Available : True
+-->TBB imported successfully.
OpenMP Threading Layer Available : True
+-->Vendor: GNU
Workqueue Threading Layer Available : True
+-->Workqueue imported successfully.

Numba Environment Variable Information
NUMBA_CUDA_USE_NVIDIA_BINDINGS : 0

Conda Information
Conda Build : 3.22.0
Conda Env : 23.1.0
Conda Platform : linux-64
Conda Python Version : 3.9.13.final.0
Conda Root Writable : True

nvidia-smi
Thu Mar 2 08:36:38 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.85.05 Driver Version: 525.85.05 CUDA Version: 12.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce … Off | 00000000:01:00.0 On | N/A |
| 0% 37C P8 10W / 290W | 550MiB / 8192MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 2155 G /usr/lib/xorg/Xorg 369MiB |
| 0 N/A N/A 2426 G …ome-remote-desktop-daemon 3MiB |
| 0 N/A N/A 2463 G /usr/bin/gnome-shell 68MiB |
| 0 N/A N/A 3647 G …894695049316347738,131072 96MiB |
| 0 N/A N/A 4104 G …b/thunderbird/thunderbird 8MiB |
+-----------------------------------------------------------------------------+

Following the instructions from NVIDIA (how-to-cuda-python), I tried to install support for Python:

conda install pyculib

Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.

PackagesNotFoundError: The following packages are not available from current channels:

  • pyculib

Current channels:

So I am exactly where I was before.

Where are these instructions from? pyculib has been deprecated for many years, so there shouldn’t be any mention of it in current instructions.

Can you create and activate a new environment with:

conda create -n numba-cuda python=3.9 numba=0.56.4 numpy=1.23 cudatoolkit=11.8
conda activate numba-cuda

Then run:

python -c "from numba import cuda; cuda.cudadrv.libs.test()"

and

python -c "from numba import cuda; cuda.detect()"

and paste the output of those commands here please?
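For convenience, the two checks can also be combined in one script that reports a failure instead of aborting (a minimal sketch):

# Minimal sketch: run the library test and device detection together,
# printing any CudaSupportError instead of letting it abort the script.
from numba import cuda
from numba.cuda.cudadrv.error import CudaSupportError

try:
    cuda.cudadrv.libs.test()   # check the CUDA libraries can be located and loaded
except CudaSupportError as e:
    print("Library test failed:", e)

try:
    cuda.detect()              # list the CUDA devices Numba can see
except CudaSupportError as e:
    print("Device detection failed:", e)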

Also - how are you installing the NVIDIA drivers?

The instructions are prominently displayed on the NVIDIA CUDA installation page.
I will follow your recommendations, but first I have another question to clarify something I perhaps do not understand:
I installed CUDA, and the only way I can run C programs using CUDA is with sudo.
I develop Python using PyCharm Community, and similarly I have to run it as root for things to work correctly.

Since I run Ubuntu 22.04, I selected Software Updater → Settings → Additional Drivers.

Then I installed the CUDA environment from NVIDIA.

Can you please provide the URL?

I think the CUDA Setup and Installation section of the NVIDIA Developer Forums is the best place to ask about this. The Numba issue will likely persist at least until that is resolved.

Thank you for reply.

The URL to the NVIDIA page:
https://developer.nvidia.com/how-to-cuda-python

I did not get any traction on the NVIDIA forum. I found some responses from a moderator saying that some files have to be created at runtime under root.

I wanted to ask: in your experience, can you successfully develop in Python and run your CUDA-enabled applications using PyCharm or VS Code without needing a root account?

So this is the result I am getting
conda create -n numba-cuda python=3.9 numba=0.56.4 numpy=1.23 cudatoolkit=11.8
Collecting package metadata (current_repodata.json): done
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: failed

PackagesNotFoundError: The following packages are not available from current channels:

  • cudatoolkit=11.8

Keep in mind I am currently on a CUDA 12 installation.
Does that mean I need to downgrade my CUDA package?

I am sorry, I am new to dealing with all the toolkits and do not necessarily understand all the connections between the different parts.

The URL to the NVIDIA page

Thanks for providing the URL - I’ll try to get this looked into. The Numba documentation may be a little more helpful for getting set up: Overview — Numba 0.59.0dev0+179.ga4664180.dirty documentation

So this is the result I am getting

I see - I hadn’t realised that cudatoolkit 11.8 was not available in the Anaconda channels. If you do

conda create -n numba-cuda python=3.9 numba=0.56.4 numpy=1.23 conda-forge::cudatoolkit=11.8

then that should install the toolkit from conda-forge.

Keep in mind I am currently on a CUDA 12 installation.
Does that mean I need to downgrade my CUDA package?

There are two components, the driver and the toolkit. Your driver is version 12 (as is the toolkit you had installed, I think), and it’s OK to use an older toolkit with a newer driver. The CUDA 12 toolkit is not yet available in conda-forge, but using 11.8 is fine with a 12.0 driver.
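As an aside, the driver's supported CUDA version can be queried directly through the driver API, independently of any installed toolkit (a minimal sketch, assuming libcuda.so.1 is on the loader path):

# Minimal sketch: ask the driver which CUDA version it supports.
# cuDriverGetVersion does not require cuInit, so it works even when
# device initialisation fails.
import ctypes

libcuda = ctypes.CDLL("libcuda.so.1")
version = ctypes.c_int()
err = libcuda.cuDriverGetVersion(ctypes.byref(version))
if err == 0:
    major, minor = version.value // 1000, (version.value % 1000) // 10
    print(f"Driver supports CUDA {major}.{minor}")
else:
    print("cuDriverGetVersion failed with error", err)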

I am sorry, I am new to dealing with all the toolkits and do not necessarily understand all the connections between the different parts.

No problem, hopefully we can work through and solve the problem!

Sorry, I missed a couple of your questions:

I did not get any traction on the NVIDIA forum. I found some responses from a moderator saying that some files have to be created at runtime under root.

I found your post on the forums, I think - is it Permissions for cuda drivers on Ubuntu 22.04 - CUDA Setup and Installation - NVIDIA Developer Forums ?

Some other posts I found suggest it might be a permissions issue related to the groups you’re in, but I don’t have an exact idea of what might need to change if that’s the case. To figure out the next steps, can you provide the output of the groups command for your user please?

For example on my system, I have:

$ groups
gmarkall adm cdrom sudo dip plugdev lpadmin sambashare docker
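Another thing that can be checked without root is whether the /dev/nvidia* device nodes exist and are readable/writable by your user, since CUDA_ERROR_NO_DEVICE for non-root users is often down to a missing or inaccessible device node (a minimal sketch; the node names are the usual defaults):

# Minimal sketch: list the NVIDIA device nodes and whether the current
# user can read and write them without elevated privileges.
import glob
import os

nodes = sorted(glob.glob("/dev/nvidia*"))
if not nodes:
    print("No /dev/nvidia* device nodes found")
for node in nodes:
    print(f"{node}: read={os.access(node, os.R_OK)} write={os.access(node, os.W_OK)}")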

I wanted to ask: in your experience, can you successfully develop in Python and run your CUDA-enabled applications using PyCharm or VS Code without needing a root account?

Yes, it’s normal to use an account without root access - your situation, where root is required, is the first instance of this problem that I’ve been aware of.

Yes, this is the post I wrote. The NVIDIA installation documentation contains a statement suggesting that if things do not work, it may be a permissions issue. But my permissions on the drivers are correct.

My groups are kristof adm cdrom sudo dip video plugdev lpadmin lxd sambashare
I also added myself to sudoers, with no effect on CUDA.
I created and added myself to the video group, thinking perhaps this would help; when dealing with SPI and I2C communication on a Raspberry Pi I had to be in a specific group. But like my other attempts, it had no effect.

Thank you for the confirmation. So I was understanding it correctly; there is no secret magic there.

My installation is a vanilla installation of Ubuntu. Everything I have installed to date was done following online documentation.
I found a post about issues with CUDA and 22.04 (https://askubuntu.com/questions/1427289/cuda-install-issues-22-04-lts), but I am getting very sceptical about some of the things I read. A lot of them seem sketchy and I do not see a consistent pattern to any of the solutions.

Thank you for the instructions; they executed successfully.
Running my simple test program

from numba import cuda

if __name__ == '__main__':
    print(cuda.gpus)

in PyCharm with the numba-cuda environment activated still suffers from the same issue:
numba.cuda.cudadrv.error.CudaSupportError: Error at driver init: Call to cuInit results in CUDA_ERROR_NO_DEVICE (100)

I built an executable using PyInstaller and ran it with sudo … no problems:

(numba-cuda) kristof@kristof-i9:~/Development/ILLUMISONICS/CUDA-examples/cuda-gpus/dist/main$ sudo ./main
<Managed Device 0>

Running without sudo

(numba-cuda) kristof@kristof-i9:~/Development/ILLUMISONICS/CUDA-examples/cuda-gpus/dist/main$ ./main
Traceback (most recent call last):
File "numba/cuda/cudadrv/driver.py", line 247, in ensure_initialized
File "numba/cuda/cudadrv/driver.py", line 320, in safe_cuda_api_call
File "numba/cuda/cudadrv/driver.py", line 388, in _check_ctypes_error
numba.cuda.cudadrv.driver.CudaAPIError: [100] Call to cuInit results in CUDA_ERROR_NO_DEVICE

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "main.py", line 5, in <module>
File "numba/cuda/cudadrv/devices.py", line 43, in __str__
File "numba/cuda/cudadrv/devices.py", line 26, in __getattr__
File "numba/cuda/cudadrv/driver.py", line 417, in get_device_count
File "numba/cuda/cudadrv/driver.py", line 285, in __getattr__
File "numba/cuda/cudadrv/driver.py", line 251, in ensure_initialized
numba.cuda.cudadrv.error.CudaSupportError: Error at driver init: Call to cuInit results in CUDA_ERROR_NO_DEVICE (100)

Here is my PyCharm setup

@gmarkall I got it to work

I reinstalled Ubuntu 22.04.2 and activated the older kernel 5.15.0-46-generic,
selected nvidia-driver-515, which is good enough for CUDA 11.7,
installed conda with CUDA 11.7,
and now everything works.
Thank you for helping me along … I probably would not have arrived at the solution without you.
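For completeness, a trivial kernel launch is a quick end-to-end check that the driver, toolkit and Numba are all working together on a new setup (a minimal sketch):

# Minimal sketch: launch a trivial kernel and copy the result back to
# confirm the driver, toolkit and Numba cooperate end to end.
import numpy as np
from numba import cuda

@cuda.jit
def add_one(arr):
    i = cuda.grid(1)
    if i < arr.size:
        arr[i] += 1

data = np.zeros(16, dtype=np.float32)
d_data = cuda.to_device(data)
add_one[1, 32](d_data)          # 1 block of 32 threads covers the 16 elements
print(d_data.copy_to_host())    # expect all ones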

Many thanks for the update! Glad you got it to work… I’m not too sure how I helped, I was a bit stumped - do you know what the difference between your current and previous setups is?