LLVM upstream version vs. NVVM LLVM version

Would this make it possible for extensions like Awkward Array to run in `numba.cuda.jit`-ed functions?

This change wouldn’t do anything for extensions - the CUDA target already uses NVVM, it’s just that the pipeline is currently:

Bytecode → Numba IR → LLVM IR → Optimized LLVM IR → NVVM → …

and I’m suggesting making it:

Bytecode → Numba IR → LLVM IR → NVVM → …

NVVM is itself based on LLVM, so it also runs LLVM optimization passes. The proposed change means we run the optimizations only once, instead of the current situation where we run them twice - first with LLVM 9, then with an earlier LLVM version inside NVVM - which causes some problems.

There are other facilities we should look at to support Awkward Arrays in CUDA-jitted functions - I’ve started this topic for that discussion, and I hope we can make some progress there.