This has sat in my backlog for a while, but the recent announcement of a new investment round in Modular.ai (the creators of Mojo) prompted me to finish off a blog post over at the Anaconda Engineering Blog talking about Numba, Mojo, and generally how I think about the goals when compiling Python:
If you have any questions, feel free to ask them here!
Thanks for the write-up, very interesting! Do you think the same or similar aspects come into play when considering the work that the LPython/LFortran team is doing?
It’s certainly an exciting time given all these initiatives (including the work done on the CPython internals). I really like that each of these compilers tries to give end users fairly easy control over low-level, performance-related functionality without overwhelming them with complexity.
Overall I’m excited to try Mojo once they release it. Putting on my cynical hat: the silly benchmarks are a little off-putting. I understand that they want to generate some hype, which is fair, but they know better, of course. Anyone confident in their performance should pit that kind of niche benchmark against languages like Fortran or Julia, instead of comparing it to Python in a way that no performance-conscious Python user would ever write code.
Hey @sseibert ,
Thank you very much for your blog post about Numba and Mojo.
I have a question regarding your wishlist for Mojo. You are disappointed about the provided features for multidimensional array operations. Are you only concerned about the missing features or also about their tensor class design compared to MLIR’s tensor dialect?
There is a short tutorial, “Introduction to Tensors in Mojo,” on Modular’s YouTube channel.
At 2:40 the presenter indexes a 3x3 matrix called `t`, and the result is a scalar, whereas indexing a NumPy array the same way would return the first row as an array.
Is there any good reason to use a different syntax than NumPy?
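For comparison, the NumPy behavior the question assumes is easy to check in a few lines. This only demonstrates the NumPy side; the Mojo behavior is as described from the video:

```python
import numpy as np

# A 3x3 array, analogous to the 3x3 matrix `t` shown in the video.
a = np.arange(9).reshape(3, 3)

# Indexing with a single subscript returns the first *row*, not a scalar.
print(a[0])        # [0 1 2]
print(a[0].shape)  # (3,)

# A scalar only comes back when every axis is indexed.
print(a[0, 0])     # 0
```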
Anyway, I am pretty excited about having a lightning fast programming language that is similar to Python.