I am trying to use multithreading in Numba using PyOMP. This doesn’t seem to come with the standard Numba installation.
```
from numba.openmp import openmp_context as openmp
```

```
---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
Input In [42], in <module>
----> 1 from numba.openmp import openmp_context as openmp

ModuleNotFoundError: No module named 'numba.openmp'
```
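A quick way to check whether the PyOMP-enabled build of Numba is actually on the path is a guarded import (a minimal sketch; `numba.openmp` only exists in the PyOMP packages, not in stock Numba):

```python
# Minimal probe: does this environment have the PyOMP build of Numba?
# numba.openmp is only present in the PyOMP build, so an ImportError here
# means the environment picked up a stock Numba instead.
try:
    from numba.openmp import openmp_context as openmp
    HAVE_PYOMP = True
except ImportError:
    HAVE_PYOMP = False

print("PyOMP available:", HAVE_PYOMP)
```

If this prints `False` inside the environment you installed PyOMP into, the install most likely fell back to a stock Numba package from another channel.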
I’m the author so please let me know if you have any problems. We are going to try to move this into mainline Numba after it is developed a bit more. In the meantime, you can use the command above.
You get the exact same error? Did you activate the environment after creating it? Sorry for the simplistic question, but I’ve had cases where that was the problem and it’s the first thing to check. Once you’re in the environment, would you please paste the result of `conda list`? Thanks! @roxaaams
I should have asked before, but what OS and architecture are your machines? We only support Linux on x64 at the moment. The following tweaked command is better because it will force Numba to come from my channel and won’t fall back to conda-forge and pick up a Numba from there that matches your OS/arch pair. I suspect that is what happened in this case. We’re working on a Windows/x64 build at the moment, so that should be coming soon.
Hi.
I tried to install on a Linux x64 system with the following command:

```
conda install drtodd13::numba cffi -c conda-forge --override-channels
```
Unfortunately, when I import `njit` from `numba` I obtain this error:
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/soft/DL/conda_envs/omp/lib/python3.7/site-packages/numba/__init__.py", line 19, in <module>
    from numba.core import config
  File "/soft/DL/conda_envs/omp/lib/python3.7/site-packages/numba/core/config.py", line 16, in <module>
    import llvmlite.binding as ll
  File "/soft/DL/conda_envs/omp/lib/python3.7/site-packages/llvmlite/binding/__init__.py", line 4, in <module>
    from .dylib import *
  File "/soft/DL/conda_envs/omp/lib/python3.7/site-packages/llvmlite/binding/dylib.py", line 3, in <module>
    from llvmlite.binding import ffi
  File "/soft/DL/conda_envs/omp/lib/python3.7/site-packages/llvmlite/binding/ffi.py", line 191, in <module>
    raise OSError("Could not load shared object file: {}".format(_lib_name))
OSError: Could not load shared object file: libllvmlite.so
```
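One way to narrow down an error like this is to check whether the shared library is actually present next to the installed llvmlite package (a sketch assuming the usual package layout; llvmlite may not be importable at all, hence the guard):

```python
# Sketch: look for llvmlite's shared object inside the installed package
# (typical conda-style layout assumed; adjust the glob for your platform,
# e.g. libllvmlite.dylib on macOS).
import importlib.util
import pathlib

spec = importlib.util.find_spec("llvmlite")
if spec is None or spec.origin is None:
    print("llvmlite is not importable in this environment")
else:
    binding_dir = pathlib.Path(spec.origin).parent / "binding"
    libs = sorted(p.name for p in binding_dir.glob("libllvmlite*"))
    print("shared objects found:", libs or "none")
```

If the file is missing entirely, the package for the wrong OS/arch was likely installed; if it is present but still fails to load, a dependency of the library is probably missing.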
Numba works well on the same system in another conda installation.
Thanks for your help; PyOMP seems very useful.
When I tried to build PyOMP from the GitHub project I got this error:

```
passmanagers.cpp:10:10: fatal error: llvm-c/Transforms/IntrinsicsOpenMP.h: No such file or directory
```
The IntrinsicsOpenMP.h header file does not exist in the llvm-project source code.
I get the same error as you when I tried to build llvmliteWithOpenmp from its own repository. I’m now trying to build the PyOMP project with all three git submodules checked out in the right locations.
Note that I made a pull request here to fix the git repo locations, since all except llvm-project had their ownership moved from @ggeorgakoudis to @Python-for-HPC.
Here are those git submodules needed. I’m not sure if that will help, but it’s where I’m at in the process of trying to build…
Hi, I’m having the same issue as the author, but I use VS Code instead of Anaconda for Python. Is it possible for me to get this openmp subpackage for Numba? (I already installed Numba with pip.)
Thank you very much. With a clean conda environment it seems to work well; I just get this warning when running the example provided on the GitHub page:

```
UserWarning: llvmlite version format not recognized!
  warnings.warn("llvmlite version format not recognized!")
```
However it does print the value of pi.
Do you have an idea of why I get this warning ?
I’ll need to fix the source and create a new version to permanently get rid of the warning. It is harmless: the llvmlite version format string that we generate is not what Numba expects, so Numba can’t parse it to verify that it is compatible.
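To illustrate the failure mode (this is a sketch, not Numba’s actual check): a strict version pattern fails to parse a version string that carries a custom suffix, so the best the check can do is warn and skip the compatibility verification.

```python
# Sketch of why a nonstandard version string triggers the warning.
# Numba's real check differs; this just shows the parse failure.
import re
import warnings

def parse_version(v):
    # Expect a plain "major.minor.patch" string; a custom suffix
    # (as in a PyOMP-built llvmlite) breaks the pattern.
    m = re.fullmatch(r"(\d+)\.(\d+)\.(\d+)", v)
    if m is None:
        warnings.warn("llvmlite version format not recognized!")
        return None
    return tuple(int(x) for x in m.groups())

print(parse_version("0.37.0"))        # parses into a comparable tuple
print(parse_version("0.37.0openmp"))  # custom suffix -> warning, no tuple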
The latest status is that, using the above command, you can get PyOMP for linux/x86, linux/powerpc, or mac/arm. We’ve put a lot of work into getting this packaged, which is quite difficult when combined with what you need for GPU offload support through OpenMP target directives. Unfortunately, because the LLVM we use internally does not support target offload on Windows, we cannot provide that there either. We’ve been focused on getting the Linux GPU offload builds working, but we do plan to go back and do a Windows build, which will support CPU only and should include Python 3.9 and 3.10 support. We’re giving a PyOMP tutorial at Supercomputing in November, so expect to see these Windows builds before then.
@qedrohenrique We are not super familiar with WSL builds. If WSL appears as Linux, then we would expect the last line to work. However, if the OS or architecture isn’t supported, the last line will fall back to the Numba in conda-forge, and the problem you see is exactly what I would expect in that case. If you run the last line, can you report what `conda list` says? It should show where the Numba package came from. What happened when you tried the first line? It should work as well, and it should give you a conda failure if we aren’t currently providing the OS/arch combination that you have. Can you verify what happens with the first line? If it seems to succeed, verify again with `conda list` and let me know what it says. Can you see which OS and architecture conda is trying to find the package for?
Thanks.
@qedrohenrique In your py-omp environment, please run `which python` and make sure it is coming from the py-omp conda environment directory. Then run `python` and `import numba`. Assuming that works, run `print(numba.__file__)` and let me know the path you get.
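The checks above can be scripted in one go. Here the stdlib `json` module stands in for `numba` so the snippet runs anywhere; with PyOMP installed you would import `numba` instead and inspect its `__file__` the same way:

```python
# Diagnose which installation Python is actually using.
# json stands in for numba here so the snippet runs without PyOMP installed;
# the technique is identical: a module's __file__ reveals where it came from.
import sys
import json

print(sys.executable)  # should live under the py-omp conda environment
print(json.__file__)   # for numba, this path shows which package was imported
```

If `sys.executable` or the module path points outside the py-omp environment, the environment was not activated (or a different Python is shadowing it on PATH).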