I wanted to ask the community whether someone has already encountered the error message from the title, as I do not quite understand it. I receive it when I run some code within a Docker container on one of our servers, to which I allocated 112 CPUs: ValueError: The number of threads must be between 1 and 112
Does anyone know how to work around this? The number of allocated CPUs should be sufficient in my case…
Are you setting NUMBA_NUM_THREADS? If not, I wonder if this is a race condition you’re hitting that is more prone to being triggered with a large number of cores. Can you provide the output of numba -s in the environment in which you hit the issue?
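For reference, one way to try the suggestion above: NUMBA_NUM_THREADS is read once, when Numba is first imported, so it has to be set before the import. A minimal sketch, assuming the container really has 112 CPUs available (the value here is illustrative):

```python
import os

# Pin Numba's thread-pool size before the library is imported;
# setting the variable after `import numba` has no effect, because
# the pool size is fixed at import time.
os.environ["NUMBA_NUM_THREADS"] = "112"

# import numba  # must happen only after the variable is set
```

Alternatively, the same variable can be set in the Dockerfile (ENV NUMBA_NUM_THREADS=112) or on the docker run command line, which avoids ordering concerns inside the Python code entirely.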
No, I did not set NUMBA_NUM_THREADS. The output of numba -s is:
System info:
--------------------------------------------------------------------------------
__Time Stamp__
Report started (local time) : 2024-11-18 11:39:41.066498
UTC start time : 2024-11-18 11:39:41.066505
Running time (s) : 0.498412
__Hardware Information__
Machine : x86_64
CPU Name : znver2
CPU Count : 128
Number of accessible CPUs : 16
List of accessible CPUs cores : 16-127
CFS Restrictions (CPUs worth of runtime) : None
CPU Features : 64bit adx aes avx avx2 bmi bmi2
clflushopt clwb clzero cmov crc32
cx16 cx8 f16c fma fsgsbase fxsr
lzcnt mmx movbe mwaitx pclmul
popcnt prfchw rdpid rdrnd rdseed
sahf sha sse sse2 sse3 sse4.1
sse4.2 sse4a ssse3 wbnoinvd xsave
xsavec xsaveopt xsaves
Memory Total (MB) : 515796
Memory Available (MB) : 508046
__OS Information__
Platform Name : Linux-5.15.0-117-generic-x86_64-with-glibc2.35
Platform Release : 5.15.0-117-generic
OS Name : Linux
OS Version : #127-Ubuntu SMP Fri Jul 5 20:13:28 UTC 2024
OS Specific Version : ?
Libc Version : glibc 2.35
__Python Information__
Python Compiler : GCC 11.4.0
Python Implementation : CPython
Python Version : 3.10.12
Python Locale : en_US.UTF-8
__Numba Toolchain Versions__
Numba Version : 0.60.0
llvmlite Version : 0.43.0
__LLVM Information__
LLVM Version : 14.0.6
__CUDA Information__
CUDA Device Initialized : False
CUDA Driver Version : ?
CUDA Runtime Version : ?
CUDA NVIDIA Bindings Available : ?
CUDA NVIDIA Bindings In Use : ?
CUDA Minor Version Compatibility Available : ?
CUDA Minor Version Compatibility Needed : ?
CUDA Minor Version Compatibility In Use : ?
CUDA Detect Output:
None
CUDA Libraries Test Output:
None
__NumPy Information__
NumPy Version : 1.26.4
NumPy Supported SIMD features : ('MMX', 'SSE', 'SSE2', 'SSE3', 'SSSE3', 'SSE41', 'POPCNT', 'SSE42', 'AVX', 'F16C', 'FMA3', 'AVX2')
NumPy Supported SIMD dispatch : ('SSSE3', 'SSE41', 'POPCNT', 'SSE42', 'AVX', 'F16C', 'FMA3', 'AVX2', 'AVX512F', 'AVX512CD', 'AVX512_KNL', 'AVX512_KNM', 'AVX512_SKX', 'AVX512_CLX', 'AVX512_CNL', 'AVX512_ICL')
NumPy Supported SIMD baseline : ('SSE', 'SSE2', 'SSE3')
NumPy AVX512_SKX support detected : False
__SVML Information__
SVML State, config.USING_SVML : False
SVML Library Loaded : False
llvmlite Using SVML Patched LLVM : True
SVML Operational : False
__Threading Layer Information__
TBB Threading Layer Available : False
+--> Disabled due to Unknown import problem.
OpenMP Threading Layer Available : True
+-->Vendor: GNU
Workqueue Threading Layer Available : True
+-->Workqueue imported successfully.
__Numba Environment Variable Information__
None found.
__Conda Information__
Conda not available.
__Installed Packages__
Package Version
---------------------------- -----------
absl-py 2.1.0
alembic 1.13.3
astunparse 1.6.3
certifi 2024.8.30
charset-normalizer 3.4.0
colorlog 6.9.0
contourpy 1.3.0
cycler 0.12.1
dbus-python 1.2.18
filelock 3.16.1
flatbuffers 24.3.25
fonttools 4.54.1
fsspec 2024.10.0
gast 0.6.0
google-pasta 0.2.0
greenlet 3.1.1
grpcio 1.67.1
h5py 3.12.1
idna 3.10
imbalanced-learn 0.12.4
Jinja2 3.1.4
joblib 1.4.2
keras 3.5.0
keras-self-attention 0.51.0
kiwisolver 1.4.7
libclang 18.1.1
llvmlite 0.43.0
Mako 1.3.6
Markdown 3.7
markdown-it-py 3.0.0
MarkupSafe 3.0.2
matplotlib 3.9.2
mdurl 0.1.2
ml-dtypes 0.3.2
mpmath 1.3.0
namex 0.0.8
networkx 3.4.2
numba 0.60.0
numpy 1.26.4
nvidia-cublas-cu12 12.4.5.8
nvidia-cuda-cupti-cu12 12.4.127
nvidia-cuda-nvrtc-cu12 12.4.127
nvidia-cuda-runtime-cu12 12.4.127
nvidia-cudnn-cu12 9.1.0.70
nvidia-cufft-cu12 11.2.1.3
nvidia-curand-cu12 10.3.5.147
nvidia-cusolver-cu12 11.6.1.9
nvidia-cusparse-cu12 12.3.1.170
nvidia-nccl-cu12 2.21.5
nvidia-nvjitlink-cu12 12.4.127
nvidia-nvtx-cu12 12.4.127
opt_einsum 3.4.0
optree 0.13.0
optuna 4.0.0
packaging 24.1
pandas 2.2.3
pillow 11.0.0
pip 22.0.2
protobuf 4.25.5
Pygments 2.18.0
PyGObject 3.42.1
pyparsing 3.2.0
python-dateutil 2.9.0.post0
pytz 2024.2
PyYAML 6.0.2
requests 2.32.3
rich 13.9.3
scikit-base 0.8.3
scikit-learn 1.5.2
scipy 1.14.1
setuptools 59.6.0
six 1.16.0
sktime 0.34.0
SQLAlchemy 2.0.36
sympy 1.13.1
tensorboard 2.16.2
tensorboard-data-server 0.7.2
tensorflow 2.16.2
tensorflow-io-gcs-filesystem 0.37.1
termcolor 2.5.0
threadpoolctl 3.5.0
torch 2.5.1
tqdm 4.66.6
triton 3.1.0
typing_extensions 4.12.2
tzdata 2024.2
urllib3 2.2.3
Werkzeug 3.0.6
wheel 0.37.1
wrapt 1.16.0
No errors reported.
__Warning log__
Warning (cuda): CUDA driver library cannot be found or no CUDA enabled devices are present.
Exception class: <class 'numba.cuda.cudadrv.error.CudaSupportError'>
Warning: Conda not available.
Error was [Errno 2] No such file or directory: 'conda'
Warning (psutil): psutil cannot be imported. For more accuracy, consider installing it.
Warning (no file): /sys/fs/cgroup/cpuacct/cpu.cfs_quota_us
Warning (no file): /sys/fs/cgroup/cpuacct/cpu.cfs_period_us
--------------------------------------------------------------------------------
If requested, please copy and paste the information between
the dashed (----) lines, or from a given specific section as
appropriate.
=============================================================
IMPORTANT: Please ensure that you are happy with sharing the
contents of the information present, any information that you
wish to keep private you should remove before sharing.
=============================================================
Thanks for the reply - I think this is suspect, so I’ve opened an issue: Possible race condition in or around `snt_check()` · Issue #9814 · numba/numba · GitHub
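For readers landing here from the error message: the check that raises it enforces that a requested thread count lies between 1 and the pool size fixed at import time. A minimal sketch of that bounds check (not Numba's actual implementation — the linked issue is precisely about the real check possibly reading an inconsistent maximum under a race):

```python
# Simplified stand-in for the internal bounds check that produces
# "ValueError: The number of threads must be between 1 and N".
def snt_check_sketch(n, max_threads):
    # The requested count must be a value in [1, max_threads],
    # where max_threads is the pool size fixed at import time.
    if n < 1 or n > max_threads:
        raise ValueError(
            f"The number of threads must be between 1 and {max_threads}"
        )

snt_check_sketch(16, 112)       # within bounds: no error
try:
    snt_check_sketch(128, 112)  # out of bounds: raises
except ValueError as e:
    print(e)
```

If the issue turns out to be a race, the error could fire even when the requested count is within bounds, which would match the report above where the CPU allocation looks correct.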