Any numba equivalent for casting a raw pointer to a StructRef, Dict, List etc?

Moving from here: https://github.com/numba/numba/issues/6493

I’m trying to make a knowledge base class with numba that can utilize a number of user defined types (structrefs or maybe namedtuples). This is of course challenging since numba doesn’t use dynamically typed data structures. I’m hoping to find some way to work around this though.

So my idea is to have my KnowledgeBase just hold untyped pointers to different type-specific storage objects, and then overload the IO of the knowledge base so that those pointers get cast to the appropriate types given the input type. For example, I might want to declare/assert facts to the knowledge base in an njitted context:

@njit  
def right_hand_side_of_rule(kb,...):
   ...
   kb.declare('point1',Point(1,2))
   ...

So then to make this work I would have kb.raw_pointers = Dict.empty(unicode_type, i8) and do something like:

@overload(KnowledgeBaseType.declare)
def kb_declare(...):
   #some type stuff is resolved
   typ_str = ....
   storage_object_type = ...

   def impl(...,name,x):
      storage_object = _cast_ptr_to_obj(kb.raw_pointers[typ_str],storage_object_type)
      storage_object.declare(name, x)
   return impl

I noticed that a lot of numba types have a meminfo object. Is there any way to write a _cast_ptr_to_obj(ptr,obj_type) that just takes in the meminfo.data as an integer and pops out a structref (or Dict, List, etc.) that can be used in an njitted function?

From @gmarkall's recommendation:

# Assuming a typed structref MyStruct w/ A: i8 and B: unicode_type
from numba.extending import intrinsic
from numba.core import types, cgutils
from numba import njit, i8

@intrinsic
def _struct_from_meminfo(typingctx, struct_type, meminfo):
    inst_type = struct_type.instance_type

    def codegen(context, builder, signature, args):
        _, meminfo = args

        st = cgutils.create_struct_proxy(inst_type)(context, builder)
        st.meminfo = meminfo

        return st._getvalue()

    sig = inst_type(struct_type, types.MemInfoPointer(types.voidptr))
    return sig, codegen


from numba.typed import Dict

@njit
def foo(d):
    meminfo = d[0]
    struct = _struct_from_meminfo(MyStructType,meminfo)
    print(struct.A, struct.B)
    struct.A += 1

s = MyStruct(1,"IT EXISTS")

d = Dict.empty(i8,types.MemInfoPointer(types.voidptr))
d[0] = s._meminfo

print(s.A)
foo(d)
print(s.A, s.B)
foo(d) 

This seems to work; wanted to put it here in case others have the same issue. There is some segfaulty weirdness if ._meminfo is passed directly from python and the struct is used a second time, but if you put the meminfos in a Dict first then it seems to work okay. Will update if this bugs out on me down the line.

Glad you got something working @DannyWeitekamp. I think credit goes to @gmarkall for the recommendation in the original issue, thanks @gmarkall, good suggestion! :slight_smile:

Hi, I think the segfault issues are due to the fact you’d need to nrt.incref the meminfo when you store it onto the struct in st.meminfo = meminfo.

Another approach might be to memcopy the value as the implementations for typed.(Dict|List) do. This would work for non-memory-managed types as well. I could share a tagged union implementation based on this approach.

Hey @asodeur. Thanks for the tip. That worked perfectly!

st = cgutils.create_struct_proxy(inst_type)(context, builder)
st.meminfo = meminfo
context.nrt.incref(builder, types.MemInfoPointer(types.voidptr), meminfo)

What’s the lifecycle of that incref? Is there any danger of causing a memory leak like this?

I would be very grateful if you could share your tagged union implementation. I have a few use cases that could benefit from something like that.

Just wanted to drop these here in case they were useful to others. Interestingly, if you have two structrefs that look essentially like subclasses of each other, you can downcast and recast back up without issue. Hopefully this sort of thing is supported more officially in the future, but this has helped me get around a lot of strict typing headaches in the meantime.

from numba import types
from numba.experimental.structref import _Utils, imputils
from numba.extending import intrinsic
from numba.core import cgutils

# MemInfo pointer type used in the signatures of the intrinsics below
meminfo_type = types.MemInfoPointer(types.voidptr)

@intrinsic
def _struct_from_meminfo(typingctx, struct_type, meminfo):
    inst_type = struct_type.instance_type

    def codegen(context, builder, sig, args):
        _, meminfo = args

        st = cgutils.create_struct_proxy(inst_type)(context, builder)
        st.meminfo = meminfo
        #NOTE: Fixes segfault but not sure about its lifecycle (i.e. watch out for memleaks)
        context.nrt.incref(builder, types.MemInfoPointer(types.voidptr), meminfo)

        return st._getvalue()

    sig = inst_type(struct_type, types.MemInfoPointer(types.voidptr))
    return sig, codegen


@intrinsic
def _meminfo_from_struct(typingctx, val):
    def codegen(context, builder, sig, args):
        [td] = sig.args
        [d] = args

        ctor = cgutils.create_struct_proxy(td)
        dstruct = ctor(context, builder, value=d)
        meminfo = dstruct.meminfo
        context.nrt.incref(builder, types.MemInfoPointer(types.voidptr), meminfo)
        # Returns the plain MemInfo
        return meminfo
        
    sig = meminfo_type(val,)
    return sig, codegen


@intrinsic
def _cast_structref(typingctx, cast_type_ref, inst_type):
    # inst_type = struct_type.instance_type
    cast_type = cast_type_ref.instance_type
    def codegen(context, builder, sig, args):
        # [td] = sig.args
        _,d = args

        ctor = cgutils.create_struct_proxy(inst_type)
        dstruct = ctor(context, builder, value=d)
        meminfo = dstruct.meminfo
        context.nrt.incref(builder, types.MemInfoPointer(types.voidptr), meminfo)

        st = cgutils.create_struct_proxy(cast_type)(context, builder)
        st.meminfo = meminfo

        return st._getvalue()
    sig = cast_type(cast_type_ref, inst_type)
    return sig, codegen
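
In case it helps, here is a minimal usage sketch of _cast_structref (my own toy example, not from any library; SimpleStructTypeClass, BaseType, SpecType and roundtrip are made-up names). It builds two structref instance types whose fields are strict subsets of each other, downcasts, and recasts back up:

from numba import njit, i8
from numba.core import types
from numba.experimental import structref
from numba.experimental.structref import new

@structref.register
class SimpleStructTypeClass(types.StructRef):
    pass

# Two instance types whose fields are strict subsets of each other
BaseType = SimpleStructTypeClass([("A", i8)])
SpecType = SimpleStructTypeClass([("A", i8), ("C", i8)])

@njit
def roundtrip():
    spec = new(SpecType)                     # allocate an empty SpecType instance
    spec.A = 1
    spec.C = 2
    base = _cast_structref(BaseType, spec)   # downcast to the field-subset type
    back = _cast_structref(SpecType, base)   # recast back up
    return back.A, back.C

print(roundtrip())  # expected: (1, 2)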

@DannyWeitekamp thanks for sharing. What would be the condition to “have two structrefs that look essentially like subclasses of each other”? Same fields of the same type, or the fields in one being a strict subset of the other?

Strict subsets. If one has fields (“A”, i8, “B”, u8) then a castable subclass would be (“A”, i8, “B”, u8, “C”, i8). I imagine the “A” and “B” need to come in the same order in the subclass, but I haven’t tested them out of order.

I implemented subtyping for records in this PR https://github.com/numba/numba/pull/5560. Conceptually it’s very similar (done by strict subsets of fields), but I worked at the typing level, so no intrinsic needed like in your example.

You might be interested in how it's done. Numba has a standard mechanism to allow conversions between types.

I wonder if StructRef subtyping could be handled at the typing level, without the need for casting intrinsics.

Hey @luk-f-a this looks like a really useful PR.

I think there are conceptually some differences between what we are each trying to achieve though.

Typically numba makes decisions about what ought to run internally based on what types are passed to it from python. All this is well and good if you are dealing just with numerical datatypes that are well defined from the get-go, or if you are building a purely functional program where the inputs and outputs can be well defined.

I am however building a knowledge-base data structure filled with facts (basically structs) that have fields of just about any type. The issue I have been running into (and have somewhat fixed with this) is that when the user defines a new kind of fact (a specialization of the base fact type), that fact needs to go into an nrt-allocated data structure that holds all the facts as base facts, not into a bunch of type-specialized data structures. Keeping everything specialized internally would be a bit of a headache since the knowledge base would need to redefine and reinstantiate itself to accommodate any newly defined facts. Furthermore, it would be impossible to write a function that returns multiple kinds of fact if they could only live in the knowledge base in their specialized forms.

TLDR: The issue I’m dealing with has to do with how you store things of varying types, not how functions are specialized in response to things of varying types being passed in.

This being said you’re probably right about using numba’s typing/casting infrastructure to simplify things. I’m just not sure where to start. Maybe looking at your PR will help.

Edit: I have looked through this master, which was updated prior to @luk-f-a’s PR.

@DannyWeitekamp where did you land with this? It looks like you were trying to put different structref types into a typed list or dict, which is what I’m trying to do now.

Did you ever try it with dissimilar types (not a strict subset) or with types that have different member functions?

It looks like @luk-f-a’s PR got merged to main; did that change your approach at all?

Hey @nelson2005, sorry for the late reply. Currently travelling, so I might have a more detailed answer in a couple weeks or so, but here is a start.

So the repo you linked is stale. The same stuff is all moved here and is much more worked out (turns out ‘numbert’ was a terrible name for a framework because people immediately think it is related somehow to the BERT language model):

**Note: the dev branch is the best place to look for now.
You should poke through utils.py; there are many useful intrinsics in there that can help you craft workarounds. Plus structref.py has some nice shortcuts for making structrefs. There are lots of examples of structref usage throughout.

In my own projects I’ve come up with a lot of tricks for keeping different types in the same data structures. There are a few key considerations:

  1. Since your typed Dict or List needs to have an established data type, you need to have a way of upcasting to a common type. You can either do this manually (the _cast_structref function I shared previously is one way to do this), or register an upcast (if a type is passed as an argument when no overload exists for it, then numba will try valid upcasts for that type). For instance, here is a snippet from one of my projects:
# from utils.py
def _obj_cast_codegen(context, builder, val, frmty, toty, incref=True):
    ctor = cgutils.create_struct_proxy(frmty)
    
    dstruct = ctor(context, builder, value=val)
    meminfo = dstruct.meminfo
    if(incref and context.enable_nrt):
        context.nrt.incref(builder, types.MemInfoPointer(types.voidptr), meminfo)

    st = cgutils.create_struct_proxy(toty)(context, builder)
    st.meminfo = meminfo
    
    return st._getvalue()

# In another file... Allow any specialization of MatchIteratorType to be upcast to GenericMatchIteratorType
@lower_cast(MatchIteratorType, GenericMatchIteratorType)
def upcast(context, builder, fromty, toty, val):
    return _obj_cast_codegen(context, builder, val, fromty, toty)

The above function makes it so that if I had a function with signature i8(GenericMatchIteratorType) (perhaps to determine the length of the iterator), then numba won't try to specialize that function for other MatchIteratorTypes (which might be specialized for various kinds of structref types that I've defined). It's often best in cases like these to explicitly provide the types to njit so that overloads for more generic types get compiled first.

  2. It is possible to produce a raw pointer for an object as a 64-bit integer (see '_raw_ptr_from_struct'; a hedged sketch of such an intrinsic follows after this list), which is useful if you want to keep a pointer to an NRT-allocated object in a numpy array, compare pointers, use pointers as dict keys, etc. This form of a pointer isn't refcounted, though, so I would recommend not using it as the only reference to an object that you are trying to keep as a member of a structref; otherwise you'll need to manually incref/decref the raw pointer, which I wouldn't recommend since you'll spend a lot of time struggling with segfaults and memory leaks. (If you went this route, in principle you would want to make a custom destructor for your structref to decref any raw pointers. This isn't currently possible to my knowledge… or at least for now I'm too lazy to try to write an intrinsic to do it.)

  3. You cannot have custom member functions quite in the same way that you do in python, since in a compiled context a method is just a syntactic alias for a hard-coded subroutine. Inside your Dict/List all of your types will be upcast to the same type, so they will all share the same statically defined methods as defined with @overload_method. There are two ways around this however:
    a) First-class functions are implemented now, so you can implement dynamic methods by having a structref attribute take a FunctionType. I've struggled to find a clean way to implement this approach, however, since you typically need to pass the function as an argument to the constructor of the structref (or reconstruct the function from its address). If I'm recalling correctly, I haven't had much luck with assigning functions that are globally defined; at the very least I doubt it will cache properly, if you care about that.
    b) You can keep an attribute that uniquely defines the type of the object, and implement your method statically with if-else statements to pick the correct implementation. Each particular implementation can downcast the type as needed.
    **The above is all especially relevant for implementing hash() and __eq__() for objects that you want to use as dictionary keys. I have an example of this here: Cognitive-Rule-Engine/dynamic_exec.py at dev · DannyWeitekamp/Cognitive-Rule-Engine · GitHub

Keep in mind that all nrt-allocated objects have a meminfo that points to their underlying data and counts references to them (when the refcount hits 0 they are freed). As long as you can keep around an upcast version of the object, the object's meminfo, or the address of the meminfo, then you can recast these back into the original object. So this should give you lots of storage options.
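
As promised above, here is a hedged sketch of what a raw-pointer intrinsic along the lines of '_raw_ptr_from_struct' could look like (my own reconstruction, not the cre implementation). It returns the address of a structref's payload as a plain i8; as noted above, this address is not refcounted, so something else must keep the object alive:

from numba.core import cgutils, types
from numba.extending import intrinsic

@intrinsic
def _raw_ptr_from_struct(typingctx, inst_type):
    def codegen(context, builder, sig, args):
        [val] = args
        ctor = cgutils.create_struct_proxy(inst_type)
        dstruct = ctor(context, builder, value=val)
        # The meminfo's data field points at the struct's payload; cast it to an int
        data_ptr = context.nrt.meminfo_data(builder, dstruct.meminfo)
        return builder.ptrtoint(data_ptr, cgutils.intp_t)

    sig = types.int64(inst_type)
    return sig, codegen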

Wow, that’s a treasure trove of detailed information, much more than I expected :slight_smile:

I think that gets me going; numbert was helpful even though it's out of date. My case is simpler than yours, so I think I can maintain lifetime with a plain-python list and just pass around the raw pointers, downcasting as necessary. Item 3) regarding the custom member functions is also quite constructive; it helps me reason about the design space.

Enjoy your travels, and Happy New Year!

Edit: I’m not sure if you’re still actively developing on the rules engine, but there’s another project numbsql that’s doing similarly fun things using jitclasses for full-speed sqlite user-defined functions if you’re at all inclined to take a look.

Thanks for the description @DannyWeitekamp

Since the discussion above last year I've had a similar problem (how to store subtypes in the same container) which I haven't been able to solve in an elegant way. I also have the issue of having to multiple-dispatch against those subtypes. If I understood correctly, your solution addresses the first part; does it also address the second part? Or is it the case that you didn't need to specialize functions based on the subtypes?

again, thanks for sharing, it’s great to have a reference to look at.

Sorry both for the delayed reply. I’ve just returned from traveling.

@nelson2005 thanks for the pointer. I’ll take a look. I’ve been avoiding SQL/database stuff for various reasons, a big one being the need for lots of custom functionality, but it might be worth taking a deeper look into that space.

@luk-f-a so say you have something like BaseClass, and SubClass1 and SubClass2 which are subclasses of BaseClass. To keep both SubClass1 and SubClass2 in the same container (for instance a typed List) you would need to cast them both to BaseClass before adding them to the container.

Now let's say that we want to write an njitted function that loops over the elements in our container and does something with them. Our container will have type ListType(BaseClass), so when we iterate through it each item will be of type BaseClass. At this point numba's multiple-dispatch machinery cannot help us, because the decision point lives inside code that is eventually compiled down to LLVM, while numba's multiple-dispatch machinery lives at the interface between python and the numba runtime (i.e. it decides what to run based off of the python types coming in).

So as I described in 3), we have a few options, which are along the lines of how we would approach the issue in a compiled language like C++. One option is to keep an attribute in our BaseClass (like 'type_id' or something) that can help us identify the true type of an object (i.e. the type it was instantiated as), so we can run the correct implementation of our target function on it. If you have a small, finite number of types you can just use if-else statements to choose the implementation (which might recast the object back to SubClass1 or SubClass2 to utilize attributes not in BaseClass). In this case all possible implementations are compiled into the function that holds our loop.

We can also execute our target implementation dynamically. One possible way to do this would be to build a method table, i.e. a typed Dict of type_id → FunctionType(out_type(BaseClass)), fill it at the startup of your program, and pass it in as an argument with each call to your function. Alternatively you can assign the target implementation function to an attribute of BaseClass.

Here is some code showing some of these ideas in action. Forgive the abuse of CRE (my project) utilities; you can poke around the previous link to see their implementation.

from numba import njit, i8
from numba.types import FunctionType
from numba.typed import List, Dict
from cre.structref import define_structref
from cre.utils import cast_structref,_obj_cast_codegen
from numba.experimental.structref import new
from numba.core.imputils import (lower_cast)

base_members = {"type_id" : i8}
BaseClass, BaseClassType = define_structref("BaseClass", 
    base_members, define_constructor=False)

base_exec_members = {**base_members, "get_thing" : FunctionType(i8(BaseClassType))}
BaseExecutable, BaseExecutableType = define_structref("BaseExecutable", 
    base_exec_members, define_constructor=False)

SubClassA, SubClassAType = define_structref("SubClassA", 
    {**base_exec_members, 'A' : i8}, define_constructor=False)
SubClassB, SubClassBType = define_structref("SubClassB", 
    {**base_exec_members, 'B' : i8}, define_constructor=False)

# Allow automatic upcasting from SubClassA to BaseClassType
@lower_cast(SubClassAType, BaseClassType)
def upcast_A(context, builder, fromty, toty, val):
    return _obj_cast_codegen(context, builder, val, fromty, toty)

# Allow automatic upcasting from SubClassB to BaseClassType
@lower_cast(SubClassBType, BaseClassType)
def upcast_B(context, builder, fromty, toty, val):
    return _obj_cast_codegen(context, builder, val, fromty, toty)


# get_thing() implementations for A and B
@njit(i8(BaseClassType), cache=True)
def get_thing_A(st):
    return cast_structref(SubClassAType, st).A

@njit(i8(BaseClassType), cache=True)
def get_thing_B(st):
    return cast_structref(SubClassBType, st).B

ATYPE_ENUM = 0
BTYPE_ENUM = 1

# Constructor for A
@njit(cache=True)
def SubClassA_ctor(A,get_thing_func=None):
    st = new(SubClassAType)
    st.type_id = ATYPE_ENUM
    if(get_thing_func is not None):
        st.get_thing = get_thing_func
    st.A = A
    return st

# Constructor for B
@njit(cache=True)
def SubClassB_ctor(B,get_thing_func=None):
    st = new(SubClassBType)
    st.type_id = BTYPE_ENUM
    if(get_thing_func is not None):
        st.get_thing = get_thing_func
    st.B = B
    return st

# Init 10 of each type 
@njit(cache=True)
def setup(gt_A=None,gt_B=None):
    L = List.empty_list(BaseClassType)
    for i in range(10):
        # At this point we don't need to explicitly cast to BaseClassType because
        #  we used lower_cast() to register A/B -> Base
        L.append(SubClassA_ctor(i, gt_A)) 
    for i in range(10):
        L.append(SubClassB_ctor(i, gt_B))
    return L


@njit(cache=True)
def get_thing_fixed(x):
    '''Example of hard-coding all method implementations with if-else'''
    if x.type_id == ATYPE_ENUM:
        return cast_structref(SubClassAType,x).A
    elif x.type_id == BTYPE_ENUM:
        return cast_structref(SubClassBType,x).B
    else:
        return -1

# Need to fill the method table at program startup because the function addresses will change
method_table = Dict.empty(i8, FunctionType(i8(BaseClassType)))
method_table[ATYPE_ENUM] = get_thing_A
method_table[BTYPE_ENUM] = get_thing_B


@njit(cache=True)
def get_thing_dynamic_table(x, method_table):
    '''Example of using a method table for dynamic method implementations'''
    if(x.type_id in method_table):
        return method_table[x.type_id](x)
    else:
        raise KeyError()

@njit(cache=True)
def get_thing_dynamic_attribute(x):
    '''Example of using dynamic method implementations via a first-class attribute function'''
    f = cast_structref(BaseExecutableType,x).get_thing
    return f(x)

@njit(cache=True)
def sum_of_stuff_fixed(lst):
    return sum([get_thing_fixed(x) for x in lst])

@njit(cache=True)
def sum_of_stuff_dynamic_table(lst, method_table):
    return sum([get_thing_dynamic_table(x, method_table) for x in lst])

@njit(cache=True)
def sum_of_stuff_dynamic_attribute(lst):
    return sum([get_thing_dynamic_attribute(x) for x in lst])
    

container = setup(get_thing_A,get_thing_B)

print(sum_of_stuff_fixed(container))
print(sum_of_stuff_dynamic_table(container,method_table))
print(sum_of_stuff_dynamic_attribute(container))

Note: for the sake of making the sum_of_stuff_dynamic_attribute case above work, I resorted to passing the target implementations (i.e. get_thing_A/get_thing_B) to the constructors (via setup) for the subtypes. This is the most elegant solution I've found that keeps the code cache=True/AOT friendly. Ideally you would want the address of the function for the target implementation to automatically get built into the constructor for your specialized object, but I haven't figured out how to do this just yet (in principle this would entail some kind of cross-linking). In any case, if you find yourself only instantiating things on the python side then you can usually make this cleaner, for example by setting up the __init__, __new__, or __call__ in your StructRefProxy so that it fills in the implementation automatically.
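
For illustration, here is a hedged sketch of that python-side pattern (my own, reusing the SubClassA_ctor and get_thing_A names from the example above; SubClassAAuto is a made-up name): a small wrapper class whose __new__ bakes the correct implementation in, so python callers never pass it explicitly.

class SubClassAAuto:
    def __new__(cls, A):
        # Return the structref directly; __init__ is skipped because the
        # returned object is not an instance of SubClassAAuto
        return SubClassA_ctor(A, get_thing_A)

obj = SubClassAAuto(5)   # equivalent to SubClassA_ctor(5, get_thing_A)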

Another trick to keep in mind if moving around/storing first-class functions is giving you trouble is that you can get the address of a function as an integer (via numba.experimental.function_type._get_wrapper_address), pass that around as you like, and reconstruct the function with cre.utils._func_from_address.
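
A quick sketch of that address round-trip, reusing names from the example above (the argument order of cre.utils._func_from_address is my assumption; check the cre source):

from numba.experimental.function_type import _get_wrapper_address
from cre.utils import _func_from_address

ft = FunctionType(i8(BaseClassType))
addr = _get_wrapper_address(get_thing_A, i8(BaseClassType))  # a plain python int
rebuilt = _func_from_address(ft, addr)   # usable again as a first-class function in njit code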

Hope this helps. Let me know if you have any questions.


Okay, back to the fountain of knowledge… did you explore any way of allocating/initializing a memory-contiguous group of structrefs? Like a numpy ndarray of structrefs.

It might not be possible, but I was hoping to avoid the fragmentation of lots of individual structrefs and pack them into a contiguous memory chunk, as they'll be accessed sequentially and I was hoping for good cache locality.

Hey @nelson2005 I have not, but I've been hoping for the same thing. I imagine that this would require allocating both meminfos (which are numba's version of refcounted pointers) and the underlying data for the structrefs (which the meminfos point to) in alternating blocks. So this would act a little bit differently than a record array, which of course typically only deals with fixed-width types like ints and floats. In a structref you also potentially have object types, so a destructor (i.e. dtor) is required to free references to its members when its own refcount hits zero. Frankly I'm not sure what it would mean for one of those meminfos to hit a refcount of zero, since the meminfos don't really own their own memory blocks if they are allocated as a group. That piece is worth some thought. I doubt you'd be able to achieve such a thing without writing a custom C method, but if you're inclined to do so I'd look at how the following are related to one another:

NRT_MemInfo_alloc_dtor_safe() in core/runtime/nrt.c
meminfo_alloc_dtor() in core/runtime/context.py
new() in experimental/structref.py

Edit: Adding a link of potential interest

Okay, I was hoping you’d already implemented some C++-like placement-new and destructor :slight_smile:
Thanks for your thoughts, I’ll just stick with typed.List for now.

Just curious… did you test/verify that the classes need to be subsets at all?
That is, if the two classes have no members in common is the casting known to fail?

For the _cast_structref intrinsic above, it is possible to cast pretty much any structref into any other structref. However, if the byte offsets of the member types aren't guaranteed to line up, then dereferencing an attribute (like object.my_attribute) on the cast object has unpredictable behavior. For numerical types the new member might evaluate to weird values (because of byte misalignment), or segfault if the retrieval overflows the originally allocated data. For object-like types, byte misalignment will probably cause a segfault because of an invalid pointer. Usually, though, if the members are subsets of each other, like casting from [A:i8, B:i8, C:i8] to [A:i8, B:i8], but not to [B:i8, C:i8] (because B will probably get mapped to A and C to B), then this (in my experience) works robustly. Something to also keep in mind is that numba might use byte realignment to make attribute retrieval more efficient on certain hardware (this will typically happen with smaller types). I haven't had an issue with this, but it's something to keep in mind. (See Purpose of memory alignment - Stack Overflow.)
