Scientific Computing in Rust 2025 Workshop
A few weeks ago, I listened to talks from the Scientific Computing in Rust 2025 workshop. Unfortunately, this post is a little late due to a fairly busy month.
I was keen to attend the workshop to gain some insight into the current state of Rust in the science and HPC world. Whilst I am not a scientist myself, I am very interested in simulation software, a domain that I think Rust is a great fit for. I’ve been migrating a few of my personal projects and research from C++ to Rust, so I was hoping to draw on the speakers’ insight to further my efforts.
In this post, I’ll list some of my takeaways from the event and highlight a few interesting points from the talks. If you are interested in Rust for scientific computing, I’d recommend looking into the mailing list and workshop from the link above.
About
The Scientific Computing in Rust workshop ran from the 3rd to the 6th of June. The event has been running since 2023, but I only recently subscribed to the monthly newsletter and this was my first time attending. According to the introductory session there were roughly 500 registered attendees, mostly hailing from Europe or the Americas. I think this is a great turnout for a relatively new event in a somewhat emerging area.
The conference was free to attend, with noted financial support from Zulip (providing free hosting for the workshop’s Zulip chat) and the UCL Advanced Research Computing Centre (providing Zoom licenses for hosting the sessions). It’s seriously great for accessibility that the conference was free - thanks again to the organisers and supporters for spending their time putting the workshop on.
Talks are already available on the workshop’s YouTube channel. The videos were posted very promptly - often the same day. Kudos to the organisers for getting them posted so quickly!
The workshop featured a mix of talks (roughly 10-15 minutes each) with interactive discussion and tutorial sessions. There were also two invited talks from Alice Cecile of the Bevy Foundation and Nathaniel Simard, creator of the Burn ML Framework.
Unfortunately, I couldn’t engage in the interactive parts of the workshop, which was a shame because I was keen to find out more about the current landscape of linear algebra libraries, one of the discussion sessions on day one. Luckily, summaries of each discussion were given in the closing remarks, which was also helpful for anyone who couldn’t attend two discussions at once! The upshot is that my perspective on the event is limited to the talk sessions I attended.
Takeaways
Why Rust?
I’ll quickly recap some of the commonly stated benefits of Rust for scientific computing.
Unsurprisingly, Rust’s high-performance characteristics were mentioned often throughout the talks. One talk placed Rust 5% faster than C++ and 1% slower than Fortran on a set of parallel fluid dynamics benchmarks.
Interoperability was another recurring theme: a lot of scientific compute work is driven from a higher-level language like Python or Julia. One talk explicitly noted that Python is “where the scientists are” and that any successful project in this domain will need to cater for Python users. Happily, Python bindings are easy to create in Rust using the pyo3 crate.
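As a flavour of how little ceremony this involves, here is a minimal sketch of a pyo3 extension module; the crate, module, and function names are illustrative rather than taken from any talk:

```rust
use pyo3::prelude::*;

// Hypothetical function exposed to Python; pyo3 converts the Python list
// to a Vec<f64> and the return value back to a Python float.
#[pyfunction]
fn mean(values: Vec<f64>) -> f64 {
    // Note: returns NaN for an empty input; a real library would validate.
    values.iter().sum::<f64>() / values.len() as f64
}

// The module definition; a tool like maturin builds this into a wheel that
// Python users can `pip install` and then `import demo_stats`.
#[pymodule]
fn demo_stats(m: &Bound<'_, PyModule>) -> PyResult<()> {
    m.add_function(wrap_pyfunction!(mean, m)?)?;
    Ok(())
}
```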
Rust’s philosophy of ‘fearless concurrency’ also came up repeatedly, as it directly addresses the error-prone nature of concurrent programs. HPC software generally runs in a massively parallel way, either on shared memory with OpenMP or on distributed memory systems with MPI. Concurrent programming in C++ can be treacherous, but Rust’s ergonomic, safe concurrency lets implementors spend their time on the parallel algorithm rather than buried in a debugger. Rust also has great async support with battle-tested, well-tooled runtimes.
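To illustrate what those ergonomics look like in practice, here is a small data-parallel reduction using the rayon crate (my own sketch, not from a talk). Converting the sequential iterator to a parallel one is a one-word change, and any data race in the closure would be rejected at compile time:

```rust
use rayon::prelude::*;

fn main() {
    let xs: Vec<f64> = (0..1_000_000).map(|i| i as f64 * 1e-6).collect();
    // `par_iter` distributes the map-reduce over a work-stealing thread
    // pool; the sequential version would simply use `iter` instead.
    let sum_of_squares: f64 = xs.par_iter().map(|x| x * x).sum();
    println!("sum of squares = {sum_of_squares}");
}
```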
The Rust toolchain is another great boon. I have personally spent far too much time setting up cross-platform builds of optimizer libraries (or their dependencies) for consumption in production tools. This can quickly descend into a fight with multiple toolchains which, when you finally emerge, may not even yield an easy-to-consume artifact. The default experience with cargo is slick because Rust’s module semantics mean very little configuration is required, unlike with tools such as CMake.
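For contrast with a typical CMake setup, a complete cargo build configuration can be as small as this (the crate name, dependencies, and versions are illustrative):

```toml
[package]
name = "demo-solver"   # hypothetical crate
version = "0.1.0"
edition = "2021"

[dependencies]
ndarray = "0.15"       # example numerics dependency
rayon = "1"            # parallel iterators, as sketched above
```

A single `cargo build` then fetches, compiles, and links everything; there is no separate find-package or toolchain-file step.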
Having been optimistic about Rust in the scientific compute space for a while now, I didn’t find most of these points surprising; still, it was reassuring to hear them confirmed by users working in the domain.
What next?
Whilst there are some clear reasons to use Rust for scientific compute, there is some work to do to bring it closer to the mainstream.
The primary concern is the lack of a mature library ecosystem. Scientific computing as a domain is many decades old, and the more established languages (like Fortran and especially C++) have a diverse collection of mature libraries and tools in production today. Rust does not yet have such an ecosystem, and it will take time, effort, and community focus to get there. Another good example of a set of well-established scientific compute interfaces is scipy, which was mentioned as a guiding light for informing Rust scientific compute library interface design.
In the closing remarks of the workshop the organisers listed some ideas for more regular collaboration, including meetups at in-person conferences, tutorials throughout the year, and more published writing about people’s efforts in the area.
Talks
In this section I’ll give a very brief summary of the talks that I was able to attend. This mainly aims to aggregate the links to the speakers’ work and is deliberately spartan in places so as not to rehash any previous points.
Day One
The opening talk was Daniel Boros presenting stochastic-rs, a quant-finance-oriented simulation crate. There is an accompanying blog post if you are interested in this topic.
The second talk covered Eduardo Martin’s efforts to port the NAS Parallel Benchmarks, a standard fluid-dynamics-based benchmark suite for massively parallel computers, to Rust. This was the aforementioned talk that placed Rust 5% faster than C++ and 1% slower than Fortran, though it was noted that Rust with Rayon was slower than both Fortran and C++ with OpenMP. The repository and pre-print are publicly available.
Next, Jonas Pleyer presented his project cellular_raza, a framework for cellular agent-based modelling of biological systems. This was a great talk in my opinion, offering lots of practical advice about designing component-driven models in a way that is flexible, performant, and easy to consume.
William Gurecky presented ORMATEX, an exponential integrator package being worked on at Oak Ridge National Laboratory. It was noted that a C++ implementation was also being pursued, so perhaps some further comparative benchmarks between languages will emerge from this project in future.
Stefan Abi-Karam gave an overview of the benefits of writing electronic design automation (EDA) software in Rust; at present this software is predominantly written in C++. For me, it was interesting to learn a bit about how software is used to design CPUs, which I had not read much about before.
Next was a worked example of std::simd given by Andrés Quintero. It covered the paradigm in general, with some example usage of the currently-unstable standard library feature. A portable SIMD library is a powerful tool, and I wonder how the Rust standard library implementation will fare compared to the ongoing std::simd proposals for C++26.
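For the unfamiliar, here is a tiny sketch of the portable SIMD API (my own example, not the speaker’s); it requires a nightly toolchain because the feature is unstable:

```rust
#![feature(portable_simd)] // unstable as of this writing
use std::simd::f32x4;

fn main() {
    let a = f32x4::from_array([1.0, 2.0, 3.0, 4.0]);
    let b = f32x4::from_array([10.0, 20.0, 30.0, 40.0]);
    // One element-wise addition, lowered to a single vector instruction
    // on targets that support it and to scalar code elsewhere.
    let c = a + b;
    assert_eq!(c.to_array(), [11.0, 22.0, 33.0, 44.0]);
}
```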
Kyle Carrow presented ninterp, a crate for numerical interpolation in n dimensions. It is being developed at the National Renewable Energy Laboratory for road and rail simulations.
One of the most practically useful talks of the day was Carl M. Kadie’s “Nine Rules for Scientific Libraries in Rust”. I’ve mentioned before that this category of software is not always ergonomic to consume, and the talk offered some insightful tips for avoiding that trap. The rules are mostly standard advice for software engineers, but many authors of scientific compute packages are not software engineers by training. Carl has also published a Medium article containing the rules if you prefer written content. If you want to publish easy-to-consume scientific packages, I’d recommend giving these some thought, whichever medium you choose.
The last talk of the day was from Josiah Parry, who spoke about Apache Arrow, a standardised in-memory format for columnar data. One interesting use case for Arrow is as a common way to share data between components of a program that are written in different languages.
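As a sketch of what that looks like from Rust, here is a record batch built with the arrow crate; the schema and values are made up for illustration:

```rust
use std::sync::Arc;

use arrow::array::{Float64Array, Int32Array};
use arrow::datatypes::{DataType, Field, Schema};
use arrow::record_batch::RecordBatch;

fn main() -> arrow::error::Result<()> {
    // Columnar data laid out in Arrow's standardised in-memory format.
    let schema = Arc::new(Schema::new(vec![
        Field::new("sample_id", DataType::Int32, false),
        Field::new("measurement", DataType::Float64, false),
    ]));
    let batch = RecordBatch::try_new(
        schema,
        vec![
            Arc::new(Int32Array::from(vec![1, 2, 3])),
            Arc::new(Float64Array::from(vec![0.12, 0.57, 0.99])),
        ],
    )?;
    // Because the layout is standardised, a batch like this can be handed
    // to a Python (pyarrow) or C++ component without re-encoding.
    println!("{} rows", batch.num_rows());
    Ok(())
}
```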
Day Two
The first talk of day two described how to avoid breaking changes in scientific software. A common issue with compiled libraries arises when two packages share a compiled dependency that gets updated for one package but not the other. This talk outlined how supporting both APIs concurrently, combined with conversion methods and serialization, can give consumers time to transition between the interfaces.
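A minimal sketch of that pattern, with hypothetical types rather than the talk’s own examples: keep the old API alive but deprecated for a release cycle, and provide a conversion to the new one so downstream crates can migrate at their own pace.

```rust
/// Original public type, kept around (and deprecated) for one release cycle.
#[deprecated(note = "use `MeshV2` instead")]
pub struct Mesh {
    pub nodes: Vec<[f64; 3]>,
}

/// Replacement type with the revised layout.
pub struct MeshV2 {
    pub nodes: Vec<[f64; 3]>,
    pub cells: Vec<[usize; 4]>,
}

#[allow(deprecated)]
impl From<Mesh> for MeshV2 {
    fn from(old: Mesh) -> Self {
        // Old data carries over; fields the old API lacked get defaults.
        MeshV2 {
            nodes: old.nodes,
            cells: Vec::new(),
        }
    }
}
```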
Next was Richard Neale talking about cross-platform SIMD usage in Rust. I think RISC-V could be an interesting proposition for HPC accelerators in the future, and this talk mentioned using RVV intrinsics for matrix multiplication. Unfortunately, the speaker ran out of time for their RVV prototyping, but was able to link C code into the build to compensate.
Isaïe Muron presented honeycomb, a crate implementing combinatorial maps for meshing applications.
The plenary talk of day two “Juice your simulations: what science can learn from game development” was given by Alice Cecile of the Bevy Foundation. The ‘juice’ terminology from the title refers to adding feedback that makes basic actions seem delightful, compelling, and satisfying. Through the medium of a cellular automata environment simulation, Alice showed how tooling your scientific experiments as if you were a game developer can be beneficial for rapid iteration. The talk contained plenty of philosophical pearls of wisdom, notably:
- You can’t improve anything without measuring it, so measure continuously
- Adding polish early is dangerous due to the future cost of change
- Marketing your work and nurturing a community around your projects pays off
Day Three
The first talk of day three was from Martin J. Robinson, presenting Diffsol, an ODE/DAE solver crate. This was one I was looking forward to, because Diffsol uses Enzyme, a tool for automatic differentiation of LLVM IR. Diffsol also supports multiple linear algebra backends, a useful design pattern to investigate given that there is no de-facto standard Rust linear algebra library yet.
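As a sketch of the backend-abstraction idea (this is not Diffsol’s actual API): the solver is written once against a small trait, and each linear algebra library gets an implementation behind it.

```rust
// Hypothetical abstraction over linear algebra backends; Diffsol's real
// traits are considerably richer than this.
pub trait LinearAlgebra {
    type Vector;
    /// Compute y <- a * x + y.
    fn axpy(a: f64, x: &Self::Vector, y: &mut Self::Vector);
}

/// One possible backend, built on the nalgebra crate.
struct NalgebraBackend;

impl LinearAlgebra for NalgebraBackend {
    type Vector = nalgebra::DVector<f64>;
    fn axpy(a: f64, x: &Self::Vector, y: &mut Self::Vector) {
        y.axpy(a, x, 1.0);
    }
}

/// A solver step written once, generic over the backend.
fn euler_step<L: LinearAlgebra>(dt: f64, dydt: &L::Vector, y: &mut L::Vector) {
    L::axpy(dt, dydt, y);
}

fn main() {
    let dydt = nalgebra::dvector![1.0, 2.0];
    let mut y = nalgebra::dvector![0.0, 0.0];
    euler_step::<NalgebraBackend>(0.1, &dydt, &mut y);
    println!("{y}"); // [0.1, 0.2]
}
```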
Next was “Pixi: the missing companion to cargo” by Julian Hofer. Pixi is a multi-language build tool with an emphasis on reproducible builds. It supports cargo, allowing Rust developers to easily consume libraries from other ecosystems, a useful feature that may help to offset the immaturity of Rust’s scientific compute library ecosystem.
Students from the Hamburg University of Technology RoboCup team gave an overview of their use of Rust. It was cool to see them using Bevy to build tooling for monitoring and analyzing their robots. You can follow their coding efforts on GitHub.
“Rust is RAD and this is why”, presented by Jason Wohlgemuth and Audrey Carson, gave an overview of the design of ACORN, a command line tool used to create analysis-ready research activity data. Auxiliary tools like this highlight another great use case for Rust, and the speakers pointed out how the language’s rich set of crates for creating CLIs makes them quick to build.
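As an aside, this is roughly what a CLI definition looks like with the widely used clap crate; the interface below is hypothetical, not ACORN’s actual one, and requires clap’s derive feature:

```rust
use clap::Parser;
use std::path::PathBuf;

/// Toy research-data CLI, defined declaratively via clap's derive macro.
#[derive(Parser)]
#[command(name = "acorn-demo", about = "Illustrative analysis-ready data tool")]
struct Args {
    /// Input file of raw activity records.
    input: PathBuf,
    /// Emit JSON instead of CSV.
    #[arg(long)]
    json: bool,
}

fn main() {
    // Argument parsing, validation, and `--help` output all come for free.
    let args = Args::parse();
    println!("processing {:?} (json = {})", args.input, args.json);
}
```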
Conclusion
In conclusion, Scientific Computing in Rust 2025 was a thoroughly enjoyable event which has left me with renewed enthusiasm to push some personal projects in this area. I am thankful to the organisers and speakers for giving us the chance to share ideas in this exciting domain. From the closing remarks it sounded like the 2026 event is already in the works, with announcements expected around January; I’ll be looking forward to it.