Built-in Vector Types in Numba CUDA

Hello! I’m new to Numba CUDA and I’m hoping someone can help me.

Can I use the built-in vector type float3 described in the CUDA documentation? I know it is possible to use it with PyCUDA; for example, a kernel like:

addarrs_codetext = """
__global__ void add_3darrs_broadcast(float3 *out, float3 *a, float3 *b, int* SZ)
{
    const int M = SZ[0];
    const int N = SZ[1];
    const int S = SZ[2];
    const int tx = threadIdx.x;
    const int bx = blockIdx.x;
    const int BSZ = blockDim.x;
    int t;
    for (int s=0;s<S;s++)
    {
        t = s*BSZ+tx;
        if(t<N)
            dest[bx*N+t].x = b[t].x + a[bx].x;
            dest[bx*N+t].y = b[t].y + a[bx].y;
            dest[bx*N+t].z = b[t].z + a[bx].z;
        __syncthreads();
    }
}
"""

How could I do the same with Numba CUDA?
Thanks!

Unfortunately, float3 and the other vector types are not supported by the Numba CUDA target at present. Would you please raise a feature request at Issues · numba/numba · GitHub? If you have some existing PyCUDA code that you’re porting, could you please link to it there? That would provide an excellent motivating use case and help prioritize support for these types.
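
In the meantime, one workaround is to store the data in an ordinary float32 array with a trailing dimension of 3 and index the components explicitly, which the Numba CUDA target supports today. Below is a minimal, untested sketch of the broadcast add written that way; the shapes and the launch configuration are assumptions based on the kernel above, not a drop-in replacement.

import numpy as np
from numba import cuda

@cuda.jit
def add_3darrs_broadcast(out, a, b):
    # out: (M, N, 3), a: (M, 3), b: (N, 3); one block per row of a
    bx = cuda.blockIdx.x
    tx = cuda.threadIdx.x
    bsz = cuda.blockDim.x
    n = b.shape[0]
    # each block strides over the N elements of b
    for t in range(tx, n, bsz):
        for c in range(3):  # x, y, z components
            out[bx, t, c] = b[t, c] + a[bx, c]

# Example launch with assumed sizes: one block per row of a, 128 threads per block
M, N = 4, 1000
a = np.random.rand(M, 3).astype(np.float32)
b = np.random.rand(N, 3).astype(np.float32)
out = np.zeros((M, N, 3), dtype=np.float32)
add_3darrs_broadcast[M, 128](out, a, b)

Note that no __syncthreads() equivalent is needed here, since the kernel does not use shared memory.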