(Virtual) Trip Report: C++ Online 2026

This week I virtually attended C++ Online 2026. The conference ran from the 11th to the 13th of March, with 14 workshops scheduled as far out as May.

Front Matter

I attended C++ Online 2025 last year and, in terms of format, not much has changed since then. As such, I won’t rehash a full description of every aspect of the event here - please refer to the previously linked post for that!

This time around, entrance was very competitively priced at £50 + VAT for an “indie” ticket. In the words of the conference:

This year, in an effort to make the conference even more accessible, particularly to those who are early in their career development but don’t qualify for a student discount, we have added a new ticket for individuals and indie developers at half the price of the standard corporate rate ticket (which will remain at the previous price of £99). This new ticket is available to anyone whose employer is not funding their attendance to the event - we will not be policing this and so we are relying on attendees to pick the appropriate ticket type.

Much like last year, I was self-funding my attendance at C++ Online, so I took full advantage of this offer. In fact, the reduced price was one of the key factors that convinced me to purchase a ticket. It’s a keen move by the conference in my opinion, as the online-only format is already great for accessibility and this cheaper ticket lowers the floor even further. According to the organisers, this strategy was hugely successful and they are hoping to support a similar offering next year. Employer-funded tickets remained at the same £99 price point, and discounts were still available for students and members of supporting meetups or community sponsor organisations.

One difference from previous years was the extensive program of 14 workshops available after the conference. The conference ticket was included in the purchase of a workshop ticket, so there was good bang-for-your-buck on offer. Taster sessions for each workshop were interspersed throughout the main conference program, which allowed for a “try before you buy” approach - a pragmatic move given how many workshops there were to choose from! This also entailed a third concurrent track of content running throughout the event, which made choosing what to attend live even more difficult. Apparently, the uptake on post-conference workshops tends to be higher because attendees have had time to digest the main content, which helps them decide what they would like to learn more about.

The rest of this post will cover the talks that I was able to see live. I couldn’t make a session in every timeslot unfortunately, but at C++ Online you typically get access to the talk recordings the same day that they were given so it’s easy to catch up later.

Day 1

The conference started with an opening address containing the typical front matter.

The first talk of the conference for me was “Consensus Critical - How Bitcoin Core uses C++ to Maintain Network Agreement” by Yuvicc. This talk started with a summary of some of the C++ code concepts that implement the proof-of-work system used by Bitcoin. There was some discussion of previous CVEs in Bitcoin Core, including bugs that allowed many bitcoins to be created out of thin air and a remote denial of service via duplicate inputs. An interesting tidbit was that floating-point arithmetic is not used in the core, because differences in rounding modes, compiler flags, and optimisation levels could cause behaviour to vary across the network, allowing consensus to drift.

The day one keynote was “Rediscover Software Engineering” by David Sankel, which was introduced as a talk about the impact of AI on software engineering. This talk was seriously good - it captured the more skeptical side of the AI zeitgeist in a humorous way. We were warned that the talk was deliberately prerecorded, which served to accommodate many a hyperbolic skit (with characters played by the Sankel family) satirising stereotypical AI discourse. The talk morphed into a well-researched examination of the real-world impact of AI-assisted development. Drawing on the 2025 DORA State of AI-assisted Software Development report and CircleCI’s 2026 State of Software Delivery, it seems that AI might not yield the productivity gains touted by many of its proponents. Although with the AI space moving so fast recently, who knows when we will get truly accurate statistics? There were many human lessons too, encouraging empathy for junior developers and recognising the embarrassment that more senior contributors can feel when they begin to over-rely on AI tooling to increase their output at the cost of quality. Some guiding principles for software engineering were extracted from the two-page paper “Software Engineering” by Turing Award winner C.A.R. Hoare, who passed away the week before the conference started.

The penultimate talk I attended on day one was “Monads Meet Mutexes” by Arne Berger. It described a library for functional synchronisation, designed to solve some of the problems associated with manual imperative approaches. The API felt similar to the Rust standard library’s std::sync module, and was also inspired by C++23’s monadic operations on std::optional. The speaker gave an approachable, anecdotal talk about how they implemented the library: battling lifetimes in long expressions, carrying lvalue references through multiple separate assignments, and testing concurrent code. On the last point specifically, it was cool to see a classic trick to generate an ephemeral type - using decltype of a temporary lambda expression in a template parameter (template<typename T = decltype([](){})>). This introduces a new type every time the template is instantiated, which is employed in a templatized test mutex type. These unique types allow the tests to run in parallel without accidentally locking the same mutex. I first saw this trick used in a talk about a symbolic calculus library at CppCon ‘24 to uniquely tag elements in symbolic expressions and hadn’t considered its application in test code before.

My last talk of the day was “C++ Search for Database Kernels” by Andrey Abramov, founder and CTO of SereneDB. The talk started by showing the different types of search that make up what we perceive as a single search as a consumer of a website. For example, if we are searching for hotels then we have custom filters, geospatial queries, and ranking (to name but a few). The first part of the talk covered the internal APIs that make up SereneDB, followed by explanations of domain-specific search terminology such as similarity and scoring. I’m not well-versed in this domain and the talk content was dense, detailed, and delivered by a speaker who is clearly knowledgeable, so I will certainly need to revisit this topic. Interestingly, the author suggested trying to leverage compiler auto-vectorisation rather than custom SIMD code, citing the potential for platform lock-in and the pitfalls of manually managing registers (e.g. register pressure). The code for SereneDB is available on GitHub if you’re interested in reading more about it.

Day 2

For the first talk of day two, I attended “The Clocks of C++” by Sandor Dargo. It was an overview of std::chrono and its core components: clocks, durations, and time points. I learned that you can define your own clock type (using the clock type trait as a guideline) and that C++20 introduced a few new clocks along with time zones.

The next talk I was able to attend was “Lock-free Queues in the Multiverse of Madness” by Dave Rowland. The speaker works in the audio industry where queues are commonplace, and this talk gave an in-depth overview of many variations (20 written over two weeks!) of queue implementations. Using locks for synchronisation has a couple of shortcomings: they require system calls (so locking does not happen in fixed time) and may cause priority inversion. The main portion of the talk detailed throughput benchmarks for some of these variations. There was also a good summary of memory ordering and a use case for alignas(std::hardware_destructive_interference_size) to prevent fields of a class being stored within the same cache line. The key takeaway was to use the right queue for the job to avoid performance pitfalls. For instance, if you only need to support a single consumer then you will pay extra overhead for the bookkeeping of a queue implementation that supports multiple consumers. This was another deep technical talk with a steep learning curve, so I think it’s one I’ll revisit to fully digest the content. If you are interested in this subject I’d definitely recommend giving it a watch when the talks are available on YouTube.

The keynote of the second day was “I Fixed Move Semantics” by Jason Turner - a clickbait title, in the words of the speaker. It opened with a few C++ reference type brainteasers with lots of audience participation in the conference Discord server. In the speaker’s view, the main problems with move semantics are that most people don’t understand that it is just an overload resolution mechanism, and that implicit conversions between value categories trip people up. The proposed solution was a new type, moving_ref<T>, containing only a move constructor and an operator T&&() member. This prevents implicit conversions to other value categories. The same kind of type was also outlined for forwarding references.

In the evening I caught some of the lightning talks, which covered topics such as: emulators, Bazel, some cool CMake features (CMAKE_VERIFY_INTERFACE_HEADER_SETS and cmake --build --target=codegen for codegen targets), the pitfalls of forgetting human-first documentation in the age of AI, Phil Nash’s new Catch23 testing library, and the effects of AI on software engineering.

For the final talk of the day, I listened to “C++ without libc” by Henry Wilson, which focussed on performing system calls directly from C++ rather than going through C. This gives us the benefits of C++ on top of what the speaker showed - through the medium of code review - to be an error-prone interface. One example is having syscalls return strongly typed values that allow for assertions, custom conversions, and the [[nodiscard]] attribute to generate warnings when syscall results are not checked. Another ergonomics win is replacing syscall arguments with STL types, such as std::span where the syscall would take a pointer/length pair, by exploiting how those types are passed in registers, making them compatible with the underlying syscall signature. Some manual assembly (and even manual name mangling) was required to achieve this though. This seems like a really interesting idea from the perspective of code size, so I might look more deeply into it in future. One issue the speaker had was manually coding all the syscalls, but I wonder if something like LLVM TableGen could be used to simplify this process. The source code for the liblinux++ project is hosted on the speaker’s personal site if you want to take a look.

Day 3

My first talk of day three was “C++ for High Performance Web Applications” by Uzochukwu Ochogu. The speaker believes C++ is a good choice for web applications because its systems programming capabilities are amenable to producing high-throughput services. The talk contained a state-of-the-nation survey of C++ web application frameworks and supporting libraries, backed up with a case study of a semantic search web service. Of particular note is the versatile glaze JSON library, which is useful for much more than just web services and was used extensively throughout the talk’s code samples. This was an interesting survey for me since I have largely switched to Rust for web application development in personal projects, so it was good to seed some thoughts for investigating C++ alternatives.

The next talk I attended was “Refactoring Towards Structured Concurrency” by Roi Barkan. In the speaker’s opinion, concurrency is simply the description of dependencies between components. The talk opened with a summary of concurrency methods in C++ (non-blocking APIs, fork-and-join, etc.) and some examples of continuation passing style with Boost ASIO. Such approaches are not necessarily structured concurrency, because the handle to the concurrency represents the way it is implemented (e.g. std::jthread) rather than the work itself. Structured concurrency, on the other hand, focusses on tasks as resources with lifetimes and scopes that can outlive the task. As my day job involves a lot of C#/.NET, I am certainly no stranger to this paradigm! C# has an excellent, ergonomic task-based asynchronicity model geared towards application code rather than systems programming. std::execution and coroutines are some examples of structured concurrency in C++; respectively, they allow access to object state and coroutine handles which can be manipulated externally. The key tips for migrating to structured concurrency were to keep control of your main function (rather than give all power to a framework) and to adopt an existing, mature library. The former point is addressed well in tokio (Rust’s eminent async runtime) with the #[tokio::main] attribute, which auto-generates a sensible fn main that you can hand-roll instead if you don’t like the default implementation.

The keynote of the final day was “Code Smarter” by Inbal Levi, which covered AI-assisted software engineering - certainly the topic du jour. Apparently, the AI industry is projected to reach roughly 3.3 trillion USD, which is 2-4% of the global economy. One can only imagine what the wider societal impact of that will be! The majority of the talk focussed on the speaker’s AI workflows through Cline’s VS Code extension. I have only used VS Code’s Copilot window so it was useful to see another tool in action. Next, there were explanations of the core AI terminology for developers to be aware of: the model (the “brain” that processes your requests), the context (information provided to the model, mutably through interactions or immutably through, for example, system prompts), and the framework (the preprocessing applied before the data is sent to the model). I have found system prompts to be a key factor in success when it comes to AI use in personal projects, and I know this can be extended further with “skills” but am yet to try using them. The speaker believes that the nondeterminism of AI systems is a serious limiting factor in their use for automation, and as such we still need solid language features (citing C++26 reflection) to form a stable, deterministic base for our work. Some closing recommendations for those looking to experiment with AI included: learn how to use the tools, try to improve iteratively, and mix and match parts of your setup (e.g. different models, approaches to using context, etc.) to see what happens. As someone who has only recently begun to integrate AI more deeply into my workflows, I found it an approachable and welcome introduction to the subject.

With the talks drawing to a close, there were some closing remarks from Phil Nash. Phil spoke about the successes of the event and the benefits of the online format for accessibility - which I certainly agree with. It sounds like there will be a C++ Online 2027, potentially with even more content, to look forward to.

Next up was “C++/sys” by Karsten Pedersen, which described an alternative standard library with a focus on memory safety and verification. The two main principles are: memory must be locked during the lifetime of access, and pointers must never dangle. The library generates deterministic crashes when its memory safety invariants are violated. To me this seemed spiritually similar to the Fil-C project, but targeting the standard library rather than a full compiler toolchain. The library is available on Codeberg.

For the last open content slot of the conference I listened to “From 5000ns to 200ns” by Larry Ge, which presented a FIX protocol parsing library that aims for sub-200-nanosecond parsing. SIMD scanning (inspired by simdjson) is employed to find the SOH characters which delimit each FIX message, with the best SIMD implementation selected when the library is initialised. Memory allocations were optimised by using a polymorphic allocator backed by a “bump allocator” - an emulation of stack memory on the heap that allows for very fast allocations but no individual deallocations (see the bumpalo crate for a Rust example). If you’re interested in the FIX parser code, the repository is available on GitHub.

Conclusion

Overall, it was another great year for C++ Online. The lower price of the indie ticket really leant into the strengths of the format, which I think enhanced the event. I definitely noticed more engagement in the talks than last year and some of the talks really leveraged the medium to its full potential.

Thanks, as always, to all associated with putting this accessible event on! The organisers appeared to be planning an even more ambitious event next year, which I’ll be keeping an ear out for.

This post is licensed under CC BY 4.0 by the author.