<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.8.7">Jekyll</generator><link href="http://cppalliance.org/feed.xml" rel="self" type="application/atom+xml" /><link href="http://cppalliance.org/" rel="alternate" type="text/html" /><updated>2026-04-10T21:16:11+00:00</updated><id>http://cppalliance.org/feed.xml</id><title type="html">The C++ Alliance</title><subtitle>The C++ Alliance is dedicated to helping the C++ programming language evolve. We see it developing as an ecosystem of open source libraries and as a growing community of those who contribute to those libraries.</subtitle><entry><title type="html">On Triremes, Aircraft, and Molecular Modelling Simulations</title><link href="http://cppalliance.org/peter/2026/04/10/PeterTurcan-Q1update.html" rel="alternate" type="text/html" title="On Triremes, Aircraft, and Molecular Modelling Simulations" /><published>2026-04-10T00:00:00+00:00</published><updated>2026-04-10T00:00:00+00:00</updated><id>http://cppalliance.org/peter/2026/04/10/PeterTurcan-Q1update</id><content type="html" xml:base="http://cppalliance.org/peter/2026/04/10/PeterTurcan-Q1update.html">&lt;p&gt;First some personal news. To keep my coding skills sharp(ish), I updated my simulation of Greek and Persian triremes (rowed war-galleys with big bronze rams) with better graphics (splashy bows, greying sea and skies in strong wind, fire pots to illuminate courses) and some better AI. Originally the code was in C++, but I ported it to C# to work well with XNA graphics. At the time I was unaware of both the Ogre graphics library and the Boost libraries, which might have given me the performance to extend the AI look-ahead from a paltry six seconds to perhaps ten seconds or more (look-ahead is a combinatorial explosion and performance-critical). I called the updated game Trireme Commander 2 and put it up on itch.io. So far I have sold one copy: only 999,999 to go and I will have sold a million! 
One step at a time, I guess.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;/images/posts/peterturcan/pandora-in-storm.png&quot; alt=&quot;Triremes&quot; /&gt;&lt;/p&gt;

&lt;p&gt;From discussions with new C++ Alliance staff, I added two scenarios to the User Guide: aeronautical engineering and bio-tech. I do know something about aeronautical matters, having worked on the awesome Microsoft Flight Simulator for four years (one of the best jobs I had in about 18 years at Microsoft). Modern aircraft should be considered flying computers: so many systems working on our behalf, and most of those systems (airspace, instrument landing, beacons, runways) well thought out with safety in mind. Writing real-time software for an aircraft, though, requires following strict disciplines and procedures that most of us are never aware of. If you have four airspeed sensors and two give one number, and two another, what do you do? Terrible things have happened when coders do things like simply take the average. I added the scenario to the User Guide with some examples of the procedures and errors that come up in flight software, such as range failure, underflow, and order-dependent drift.&lt;/p&gt;
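&lt;p&gt;As a purely illustrative sketch (this is generic example code, not actual avionics software and not taken from the User Guide, and the &lt;code&gt;vote_airspeed&lt;/code&gt; helper is a hypothetical name): one common answer to the four-sensor question is mid-value selection, which discards the highest and lowest readings and averages only the two middle ones, so a single faulty sensor cannot drag the result the way a plain mean would.&lt;/p&gt;

```cpp
// Illustrative sketch only: mid-value selection for four redundant
// airspeed readings. Sort the four values, then average only the two
// middle ones (a trimmed mean), so one wild sensor cannot skew the
// result the way a plain average would. vote_airspeed is a hypothetical name.
double vote_airspeed(double r0, double r1, double r2, double r3)
{
    double v[4] = { r0, r1, r2, r3 };
    // Insertion sort of four values into ascending order.
    for (int i = 1; i != 4; ++i) {
        double key = v[i];
        int j = i;
        while (j != 0) {
            if (v[j - 1] > key) { v[j] = v[j - 1]; --j; }
            else break;
        }
        v[j] = key;
    }
    // Trimmed mean of the two middle readings.
    return (v[1] + v[2]) / 2.0;
}
```

&lt;p&gt;With three sensors reading 100, 101, and 102 knots and one stuck at 500, the trimmed mean reports 101.5, while the plain average would report a badly skewed 200.75.&lt;/p&gt;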

&lt;p&gt;&lt;img src=&quot;/images/posts/peterturcan/aerospace-gear.png&quot; alt=&quot;Aircraft gear&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Whereas I know something about airplanes, I know little about bio-tech software: molecular modelling and the like. However, my son works at Carnegie Mellon as a PhD student and bio-tech assistant, and I was able to get a meaningful discussion going on phylogenetic trees: modelling evolution, species, and all those things that follow a biological tree structure. Fascinating, and I added this topic to the User Guide advanced scenarios as well. Talk about airplanes being flying computers: carbon-based lifeforms are computers too, only many times more involved and connected.&lt;/p&gt;

&lt;p&gt;Documentation needs to feel alive too: it needs updates and new material added on a regular basis, just to show it is active and evolving. Adding topics to the FAQs (User and Contributor) is something I do frequently, often looking at the discussions on the #boost Slack channels to pick up on current thinking and areas of difficulty or concern, such as compatibility issues, CMake, and conference attendance. Or, for contributors, topics such as documentation, navigation, and useful macros.&lt;/p&gt;

&lt;p&gt;Reading proposed library documentation is always interesting. I always try to get the authors to focus on, or at least mention, use cases. It is use cases that pull in developers: saying what something “does” is so much more important than saying what something “is”. Developers, understandably, are so close to the metal that it can be difficult for them to step back and focus on use over internals. There are many perspectives on the same thing.&lt;/p&gt;</content><author><name></name></author><category term="peter" /><summary type="html">First some personal news. To keep my coding skills sharp(ish), I updated my simulation of Greek and Persian triremes (rowed war-galleys with big bronze rams) with better graphics (splashy bows, greying sea and skies in strong wind, fire pots to illuminate courses) and some better AI. Originally the code was in C++, but changed to C# to work well with XNA graphics. I was unaware of the Ogre graphics library at the time, nor the Boost libraries - which might have given the performance to extend the AI look-ahead from a paltry six seconds to perhaps ten seconds or more (look-ahead is a combinatorial explosion and performance-critical). Called the updated game Trireme Commander 2, and put it up on itch.io. So far, sold one copy - only 999,999 to go and I will have sold a million! One step at a time I guess. From discussions with new Cpp Alliance staff, I added two scenarios to the User Guide - aeronautical engineering and bio-tech. I do know something about aeronautical stuff - having worked on the awesome Microsoft Flight Simulator for four years (one of the best jobs I had in about 18 years at Microsoft). Modern aircraft should be considered as flying computers - so many systems working on our behalf and most of those systems (airspace, instrument landing, beacons, runways) well thought out with safety in mind. 
Writing real-time software for an aircraft though requires following a strict discipline and procedures that most of us are never aware of. If you have four airspeed sensors and two give one number, and two another, what do you do? Terrible things have happened if coders do things like just take the average. I added the scenario to the User Guide with some examples of the procedures and errors that come up with flight software, such as range failure, underflow, and order-dependent drift. Whereas I know something about airplanes, I know little about bio-tech software - molecular modelling and stuff like that. However, my son works at Carnegie Mellon as a PhD and bio-tech assistant, and I was able to get a meaningful discussion going on phylogenetic trees - modelling evolution, species, and all those things that follow a biological tree structure. Fascinating, and I added the topic again to the User Guide advanced scenarios. Talk about airplanes being flying computers, carbon-based lifeforms are computers too only many times more involved and connected. Documentation needs to feel alive too - it needs updated and new stuff added on a regular basis just to show it is active and evolving. Adding topics to the FAQs (User and Contributor) is something I do frequently - often looking at the discussions on #boost slack channels to pick up on current thinking and areas of difficulty or concern, such as compatibility issues, CMake, and conference attendance. Or, for contributors, topics such as documentation, navigation, and useful macros. Reading proposed library documentation is always interesting. Always try to get the authors to focus on, or at least mention, use cases. It is use cases that pull in developers - saying what something “does” is so much more important than saying what something “is”. Developers, understandably, are so close to the metal that it can be difficult for them to step back and focus on use over internals. 
There are many perspectives in looking at the same thing.</summary></entry><entry><title type="html">Joining Community, Detecting Communities, Making Community.</title><link href="http://cppalliance.org/arnaud/2026/04/08/Arnaud2026Q1Update.html" rel="alternate" type="text/html" title="Joining Community, Detecting Communities, Making Community." /><published>2026-04-08T00:00:00+00:00</published><updated>2026-04-08T00:00:00+00:00</updated><id>http://cppalliance.org/arnaud/2026/04/08/Arnaud2026Q1Update</id><content type="html" xml:base="http://cppalliance.org/arnaud/2026/04/08/Arnaud2026Q1Update.html">&lt;h2 id=&quot;joining-community&quot;&gt;Joining Community&lt;/h2&gt;

&lt;p&gt;Early in Q1 2026, I joined the C++ Alliance. A very exciting moment.&lt;/p&gt;

&lt;p&gt;So I began work in early January under Joaquin’s mentorship, with the idea of landing a clear contribution to Boost.Graph by the end of Q1. 
After a few days of auditing the current state of the library against the literature, it became clear that community detection methods 
(a.k.a. graph clustering algorithms) were sorely lacking in Boost.Graph, and that implementing one would be a great start 
to revitalizing the library, filling perhaps the largest methodological gap in its current algorithmic coverage.&lt;/p&gt;

&lt;h2 id=&quot;detecting-communities&quot;&gt;Detecting Communities&lt;/h2&gt;

&lt;p&gt;The vision was (and still is) simple: i) implement 
the Louvain algorithm, ii) build upon it to extend to the more complex Leiden algorithm, iii) finally get 
started with the Stochastic Block Model.&lt;/p&gt;

&lt;p&gt;While the plan is straightforward, the Louvain literature is not, and the BGL abstractions even less so. 
But under the review and guidance of Joaquin and Jeremy Murphy (maintainer of the BGL), I was able to put together a satisfying implementation.&lt;/p&gt;

&lt;p&gt;Using the Newman-Girvan Modularity as the quality function to optimize, one can simply call:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;double Q = boost::louvain_clustering(
    g, cluster_map, weight_map, gen,
    boost::newman_and_girvan{},  // quality function (default)
    1e-7,                        // min_improvement_inner (per-pass convergence)
    0.0                          // min_improvement_outer (cross-level convergence)
);
// Q = 0.42, cluster_map = {0,0,0, 1,1,1}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;As often happens with heuristics, there is a large number of quality functions out there, and this is not 
merely for lack of consensus: in &lt;a href=&quot;https://www.cs.cornell.edu/home/kleinber/nips15.pdf&quot;&gt;a 2002 paper&lt;/a&gt;, 
computer scientist Jon Kleinberg proved that no clustering quality function 
(Modularity, Goldberg density, Surprise…) can simultaneously be:&lt;/p&gt;
&lt;ol&gt;
  &lt;li&gt;scale-invariant (doubling all edge weights should not change the clusters),&lt;/li&gt;
  &lt;li&gt;rich (all partitions should be achievable),&lt;/li&gt;
  &lt;li&gt;consistent (shortening distances inside a cluster and expanding distances between clusters should not change the clusters).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In other words, there is no way to implement a single function and hope it will exhibit all three of the basic properties we would genuinely expect.
All we can do is explore different trade-offs using different quality functions.&lt;/p&gt;

&lt;p&gt;So I left some doors open to allow injecting an arbitrary quality function. 
If this function exposes a minimal, “naive” interface, the algorithm will statically select a 
slow but generic path, iterating across all the edges of the graph to compute the quality. 
It is slow, yes, but it makes the study of quality functions easier, as one does not have to figure out 
the local mathematical decomposition of the function to get started with coding:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;struct my_quality {
    template &amp;lt;typename G, typename CMap, typename WMap&amp;gt;
    typename boost::property_traits&amp;lt;WMap&amp;gt;::value_type
    quality(const G&amp;amp; g, const CMap&amp;amp; c, const WMap&amp;amp; w) {
        // your custom partition quality function
    }
};

double Q = boost::louvain_clustering(g, cluster_map, weight_map, gen, my_quality{});
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;However, the Louvain algorithm is extremely popular because it is fast: it is able to incrementally update the 
quality computation state for each vertex it tries to “insert” into, or “remove” from, a neighboring putative community. 
This &lt;em&gt;locality&lt;/em&gt; decomposition has to be worked out mathematically for each quality function, so it is not trivial.&lt;/p&gt;
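&lt;p&gt;For the curious, here is a minimal sketch of that locality decomposition in the Newman-Girvan case, using the standard gain formula from the Louvain literature (Blondel et al., 2008). This is an illustration with a hypothetical &lt;code&gt;modularity_gain&lt;/code&gt; helper, not the Boost.Graph implementation:&lt;/p&gt;

```cpp
// Textbook locality result for Newman-Girvan modularity (Blondel et al.,
// 2008); an illustration, not the Boost.Graph code. The modularity gain
// from moving an isolated vertex i into a community C reduces to a few
// local aggregates, with no pass over the whole graph:
//   k_in    - total weight of the edges linking i to vertices already in C
//   k_i     - total weighted degree of i
//   sum_tot - total weighted degree of the vertices in C
//   m       - total edge weight of the graph
double modularity_gain(double k_in, double k_i, double sum_tot, double m)
{
    // delta Q = k_in / m  -  (sum_tot * k_i) / (2 * m * m)
    return k_in / m - (sum_tot * k_i) / (2.0 * m * m);
}
```

&lt;p&gt;Louvain’s inner loop repeatedly evaluates this gain for a vertex against each neighboring community and applies the best positive move, which is why only these local aggregates need to be maintained as vertices change community.&lt;/p&gt;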

&lt;p&gt;I defined a &lt;code&gt;GraphPartitionQualityFunctionIncrementalConcept&lt;/code&gt; that refines the &lt;code&gt;GraphPartitionQualityFunctionConcept&lt;/code&gt;: 
if the algorithm detects that the injected quality function exposes an interface for this incremental update, 
the fast path is taken. One thing I figured out is that the &lt;code&gt;GraphPartitionQualityFunctionIncrementalConcept&lt;/code&gt; is, for now, too specific 
to the Modularity family. I am currently working on a proposal to broaden its scope in future work.&lt;/p&gt;

&lt;p&gt;The current PR has been carefully tested and benchmarked for correctness and performance, and approved by 
Jeremy for merging into the develop branch.&lt;/p&gt;

&lt;p&gt;I also wrote a paper, to be submitted to the Journal of Open Source Software, publishing the current results and benchmarks: 
we are at least as fast as our competitors, and more generic. I am aware of no equivalent.&lt;/p&gt;

&lt;h2 id=&quot;making-community&quot;&gt;Making Community&lt;/h2&gt;

&lt;p&gt;Concurrently, I worked on summoning the Boost.Graph user base, and it quickly became clear that a small local workshop would 
be a tremendous start: the Louvain algorithm community is based in Louvain (Belgium), its extension was 
formulated in Leiden (Netherlands), and my PhD graph network is based in Paris (France), in what has been presented to me 
as “the Temple of the Stochastic Block Model”! Quite a sign: life finds ways to run in (tight) circles.&lt;/p&gt;

&lt;p&gt;So the goal of this &lt;a href=&quot;https://github.com/boostorg/graph/discussions/466&quot;&gt;workshop&lt;/a&gt; is to bring together a small group 
(10-15 people) of researchers, open-source implementers, and industrial users for 
a day of honest conversation on May 6th, 2026. Three questions will anchor the discussions:&lt;/p&gt;
&lt;ol&gt;
  &lt;li&gt;What types of graphs and data structures do you use in practice?&lt;/li&gt;
  &lt;li&gt;What performance, scalability, and interpretability requirements matter most to you?&lt;/li&gt;
  &lt;li&gt;What algorithms are missing today that Boost.Graph could offer?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Ray and Collier from the C++ Alliance will also be there to record the lightning talks and document the process. 
It will also be the occasion to show off the Python-based animations I put together for the &lt;a href=&quot;https://www.youtube.com/watch?v=-OVvzRFiYLU&quot;&gt;French C++ User Group 
presentation on March 24th&lt;/a&gt;. 
Those were well received and drew many compliments, as animation pairs well with the visual and 
dynamic nature of graphs and their algorithms, and I hope they will contribute 
to the repopularization of Boost.Graph.&lt;/p&gt;

&lt;p&gt;Graphliiings asseeeeemble !&lt;/p&gt;</content><author><name></name></author><category term="arnaud" /><summary type="html">Joining Community Early in Q1 2026, I joined the C++ Alliance. A very exciting moment. So I began to work early January under Joaquin’s mentorship, with the idea of having a clear contribution to Boost Graph by the end of Q1. After a few days of auditing the current state of the library versus the literature, it became clear that community detection methods (aka graph clustering algorithms) were sorely lacking for Boost.Graph, and that implementing one would be a great start to revitalizing the library and fill up maybe the largest methodological gap in its current algorithmic coverage. Detecting Communities The vision was (and still is) simple: i) begin to implement Louvain algorithm, ii) build upon it to extend to the more complex Leiden algorithm, iii) finally get started with the Stochastic Block Model. If the plan is straightforward, the Louvain literature is not, and the BGL abstractions even less. 
But under the review and guidance from Joaquin and Jeremy Murphy (maintainer of the BGL), I was able to put up a satisfying implementation: Using the Newman-Girvan Modularity as the quality function to optimize, one can simply call: double Q = boost::louvain_clustering( g, cluster_map, weight_map, gen, boost::newman_and_girvan{}, // quality function (default) 1e-7, // min_improvement_inner (per-pass convergence) 0.0 // min_improvement_outer (cross-level convergence) ); // Q = 0.42, cluster_map = {0,0,0, 1,1,1} As it happens often with heuristics, there is a large number of quality functions out there, and this is not because of a lack of consensus: in a 2002 paper, computer scientist Jon Kleinberg proved that no clustering quality function (Modularity, Goldberg density, Surprise…) can simultaneously be: scale-invariant (doubling all edges should not change the clusters), rich (all partitions should be achievable), consistent (shortening distances inside a cluster and expanding distances between clusters should lead to similar results). In other words, there is no way to implement a single function hoping it would exhibit three basic properties we would genuinely expect. All we can do is to explore different trade-offs using different quality functions. So I left some doors open to be able to inject an arbitrary quality function. If this function exposes a minimal, “naive” interface, the algorithm will statically use a slow but generic path, and iterate across all the edges of the graph to compute the quality. 
It is slow, yes, but it makes the study of qualities easier, as one does not have to figure out the local mathematical decomposition of the function to get started with coding: struct my_quality { template &amp;lt;typename G, typename CMap, typename WMap&amp;gt; typename boost::property_traits&amp;lt;WMap&amp;gt;::value_type quality(const G&amp;amp; g, const CMap&amp;amp; c, const WMap&amp;amp; w) { // your custom partition quality function } }; double Q = boost::louvain_clustering(g, cluster_map, weight_map, gen, my_quality{}); However, the Louvain algorithm is extremely popular because it is fast, as it is able to update the quality computational state for each vertex it tries to “insert” or “remove” from a neighboring putative community. This locality decomposition has to be figured out mathematically for each quality function, so it’s not trivial. I defined a GraphPartitionQualityFunctionIncrementalConcept that refines the GraphPartitionQualityFunctionConcept : if the algorithm detects that the injected quality function exposes an interface for this incremental update, the fast path is taken. One thing I figured out is that the GraphPartitionQualityFunctionIncrementalConcept is for now too specific to the Modularity family. I am currently working on a proposal to increase its scope in future work. The current PR has been carefully tested and benchmarked for correctness and performance, and validated by Jeremy to be merged on develop branch. I wrote a paper to be submitted to the Journal of Open Source Software to publish the current results and benchmarks, as we are at least as fast as our competitors, and more generic. There is no equivalent I am aware of. 
Making Community Concurrently, I worked on summoning the Boost.Graph user base, and it quickly became clear a small local workshop would be a tremendous start: the Louvain algorithm community is based in Louvain (Belgium), its extension was formulated in Leiden (Netherlands) and my PhD graphs network is based in Paris (France) in what has been presented to me as “the Temple of the Stochastic Block Model” ! Quite a sign: life finds ways to run in (tight) circles. So the goal of this workshop is to bring together a small group (10-15 people) of researchers, open-source implementers, and industrial users for a day of honest conversation on May 6th 2026. Three questions will anchor the discussions: What types of graphs and data structures do you use in practice? What performance, scalability, and interpretability requirements matter most to you? What algorithms are missing today that Boost.Graph could offer? Ray and Collier from the C++ Alliance will also be there to record the lightning talks and document the process. It would also be the occasion to show off the python-based animations I put together for the French C++ User Group presentation on March 24th. Those had a nice success and received many compliments, as it pairs well with the visual and dynamic nature of graphs and their algorithms, and I hope it will contribute to the repopularization of Boost.Graph. 
Graphliiings asseeeeemble !</summary></entry><entry><title type="html">Mr.Docs: Niebloids, Reflection, Code Removal, New XML Generator</title><link href="http://cppalliance.org/gennaro/2026/04/06/Gennaros2026Q1Update.html" rel="alternate" type="text/html" title="Mr.Docs: Niebloids, Reflection, Code Removal, New XML Generator" /><published>2026-04-06T00:00:00+00:00</published><updated>2026-04-06T00:00:00+00:00</updated><id>http://cppalliance.org/gennaro/2026/04/06/Gennaros2026Q1Update</id><content type="html" xml:base="http://cppalliance.org/gennaro/2026/04/06/Gennaros2026Q1Update.html">&lt;p&gt;This quarter, I focused on two areas of Mr.Docs: adding first-class support for
function objects, the pattern behind C++20 Niebloids and Ranges CPOs, and
overhauling how the tool turns C++ metadata into documentation output (the
reflection layer).&lt;/p&gt;

&lt;h2 id=&quot;function-objects-documenting-what-users-actually-call&quot;&gt;Function objects: documenting what users actually call&lt;/h2&gt;

&lt;p&gt;In modern C++ libraries, many “functions” are actually global objects whose type
has &lt;code&gt;operator()&lt;/code&gt; overloads. The Ranges library, for instance, defines
&lt;code&gt;std::ranges::sort&lt;/code&gt; not as a function template but as a variable of some
unspecified callable type. Users call it like a function and expect it to be
documented like one. Before this quarter, Mr.Docs didn’t know the difference: it
would document the variable and its cryptic implementation type.&lt;/p&gt;

&lt;p&gt;The new function-object support (roughly 4,600 lines across 38 files) bridges
this gap. When Mr.Docs encounters a variable whose type is a record with no
public members other than &lt;code&gt;operator()&lt;/code&gt; overloads and special member functions, it now
synthesizes free-function documentation entries named after the variable. The
underlying type is marked implementation-defined and hidden from the output.
Multi-overload function objects are naturally grouped by the existing overload
machinery. So, given:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;struct abs_fn {
    double operator()(double x) const noexcept;
};
inline constexpr abs_fn abs = {};
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Mr.Docs documents it as simply:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;double abs(double x) noexcept;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;For cases where auto-detection isn’t quite right — for example, when the type
has extra public members — library authors can use the new &lt;code&gt;@functionobject&lt;/code&gt; or
&lt;code&gt;@functor&lt;/code&gt; doc commands. There is also an &lt;code&gt;auto-function-objects&lt;/code&gt; config option
to control the behavior globally. The feature comes with a comprehensive test
fixture covering single and multi-overload function objects, templated types,
and types that live in nested &lt;code&gt;detail&lt;/code&gt; namespaces.&lt;/p&gt;

&lt;h2 id=&quot;reflection-from-boilerplate-to-a-single-generic-template&quot;&gt;Reflection: from boilerplate to a single generic template&lt;/h2&gt;

&lt;p&gt;The bigger effort — and the one that kept surprising me with its scope — was the
reflection refactoring. Mr.Docs converts its internal C++ metadata into a DOM (a
tree of lazy objects) that drives the Handlebars template engine. Before this
quarter, every type in the system required hand-written &lt;code&gt;tag_invoke()&lt;/code&gt;
overloads: one function to map the type’s fields to DOM properties, another to
convert it to a &lt;code&gt;dom::Value&lt;/code&gt;. Adding a new symbol kind meant touching half a
dozen files and following a pattern that was easy to get wrong.&lt;/p&gt;

&lt;p&gt;The goal was simple to state: replace all of that with a single generic template
that works for any type carrying a describe macro.&lt;/p&gt;

&lt;h3 id=&quot;phase-1-boostdescribe&quot;&gt;Phase 1: Boost.Describe&lt;/h3&gt;

&lt;p&gt;The first attempt used Boost.Describe. I added &lt;code&gt;BOOST_DESCRIBE_STRUCT()&lt;/code&gt;
annotations to every metadata type and wrote generic &lt;code&gt;merge()&lt;/code&gt; and
&lt;code&gt;mapReflectedType()&lt;/code&gt; templates that iterated over the described members. This
proved the concept and eliminated a great deal of boilerplate. However, we
didn’t want a public dependency on Boost.Describe, which meant the dependency
had to be hidden in .cpp files and couldn’t be used in templates living in public
headers.&lt;/p&gt;

&lt;h3 id=&quot;phase-2-custom-reflection-macros&quot;&gt;Phase 2: custom reflection macros&lt;/h3&gt;

&lt;p&gt;So I wrote our own. &lt;code&gt;MRDOCS_DESCRIBE_STRUCT()&lt;/code&gt; and &lt;code&gt;MRDOCS_DESCRIBE_CLASS()&lt;/code&gt;
provide the same compile-time member and base-class iteration as Boost.Describe,
but with no external dependency. The macros live in &lt;code&gt;Describe.hpp&lt;/code&gt; and produce
&lt;code&gt;constexpr&lt;/code&gt; descriptor lists that the rest of the system iterates with
&lt;code&gt;describe::for_each()&lt;/code&gt;.&lt;/p&gt;

&lt;h3 id=&quot;phase-3-removing-the-overloads&quot;&gt;Phase 3: removing the overloads&lt;/h3&gt;

&lt;p&gt;With the describe macros in place, I could write generic implementations of
&lt;code&gt;tag_invoke()&lt;/code&gt; for both &lt;code&gt;LazyObjectMapTag&lt;/code&gt; (DOM mapping) and &lt;code&gt;ValueFromTag&lt;/code&gt;
(value conversion), plus a generic &lt;code&gt;merge()&lt;/code&gt;. Each one replaces dozens of
per-type overloads with a single constrained template. The &lt;code&gt;mapMember()&lt;/code&gt;
function handles the dispatch: optionals are unwrapped, vectors become lazy
arrays, described enums become kebab-case strings, and compound described types
become lazy objects — all automatically.&lt;/p&gt;

&lt;p&gt;Removing the overloads was not as straightforward as I had hoped. The old
overloads were entangled with:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;The Handlebars templates&lt;/strong&gt;, which assumed specific DOM property names.
Renaming &lt;code&gt;symbol&lt;/code&gt; to &lt;code&gt;id&lt;/code&gt;, &lt;code&gt;type&lt;/code&gt; to &lt;code&gt;underlyingType&lt;/code&gt;, and &lt;code&gt;description&lt;/code&gt; to
&lt;code&gt;document&lt;/code&gt; required updating templates and golden tests in lockstep.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;The XML generator&lt;/strong&gt;, which silently skipped types that weren’t described.
Adding &lt;code&gt;MRDOCS_DESCRIBE_STRUCT()&lt;/code&gt; to &lt;code&gt;TemplateInfo&lt;/code&gt; and &lt;code&gt;MemberPointerType&lt;/code&gt;
made the XML output more complete, requiring schema updates and golden-test
regeneration.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;the-result&quot;&gt;The result&lt;/h3&gt;

&lt;p&gt;Out of the original 39 custom &lt;code&gt;tag_invoke(LazyObjectMapTag)&lt;/code&gt; overloads, only 7
remain — each with genuinely non-reflectable logic (computed properties,
polymorphic dispatch, or member decomposition). Roughly 60
&lt;code&gt;tag_invoke(ValueFromTag)&lt;/code&gt; boilerplate overloads were also removed. Adding a new
metadata type to Mr.Docs now requires nothing beyond &lt;code&gt;MRDOCS_DESCRIBE_STRUCT()&lt;/code&gt;
at the point of definition.&lt;/p&gt;

&lt;h2 id=&quot;the-xml-generator-a-full-rewrite-in-350-lines&quot;&gt;The XML Generator: a full rewrite in 350 lines&lt;/h2&gt;

&lt;p&gt;The XML generator was the first major payoff of the reflection work (although it
was initially done when we were using Boost.Describe). The old generator had its
own hand-written serialization for every metadata type, completely independent
of the DOM layer. It was a parallel set of per-type functions that had to be
kept in sync with every schema change.&lt;/p&gt;

&lt;p&gt;I replaced it with a generic implementation built entirely on the describe
macros. The core is about 350 lines: &lt;code&gt;writeMembers()&lt;/code&gt; walks &lt;code&gt;describe_bases&lt;/code&gt; and
&lt;code&gt;describe_members&lt;/code&gt;, &lt;code&gt;writeElement()&lt;/code&gt; dispatches on type traits for primitives,
optionals, vectors, and enums, and &lt;code&gt;writePolymorphic()&lt;/code&gt; handles the handful of
type hierarchies (&lt;code&gt;Type&lt;/code&gt;, &lt;code&gt;TParam&lt;/code&gt;, &lt;code&gt;TArg&lt;/code&gt;, &lt;code&gt;Block&lt;/code&gt;, &lt;code&gt;Inline&lt;/code&gt;) via
.inc-generated switches. The old generator needed a new function for every type;
the new one handles them all, and the 241 files changed in that commit were
almost entirely golden-test updates reflecting the now more complete and
substantially changed output.&lt;/p&gt;

&lt;h2 id=&quot;smaller-fixes&quot;&gt;Smaller fixes&lt;/h2&gt;

&lt;p&gt;Alongside the two main efforts, I fixed several bugs that came up during
development or were reported by users:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Markdown inline formatting (bold, italic, code) and bullet lists were not
rendering correctly in certain combinations.&lt;/li&gt;
  &lt;li&gt;&lt;code&gt;&amp;lt;pre&amp;gt;&lt;/code&gt; tags were missing around HTML code blocks.&lt;/li&gt;
  &lt;li&gt;&lt;code&gt;bottomUpTraverse()&lt;/code&gt; was silently skipping &lt;code&gt;ListBlock&lt;/code&gt; items, causing
doc-comment content to be lost.&lt;/li&gt;
  &lt;li&gt;Several CI improvements: faster PR demos, better failure detection, increased
test coverage for the XML generator.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;looking-ahead&quot;&gt;Looking ahead&lt;/h2&gt;

&lt;p&gt;The reflection infrastructure is now in good shape, and most of the mechanical
boilerplate is gone. The remaining &lt;code&gt;tag_invoke()&lt;/code&gt; overloads are genuinely custom
— they compute properties that don’t exist as C++ members, or they dispatch
polymorphically across type hierarchies. Those are worth keeping. Going forward,
I’d like to explore whether the describe macros can replace more of the manual
visitor code throughout the codebase.&lt;/p&gt;

&lt;p&gt;As always, feedback and suggestions are welcome — feel free to open an issue or
reach out on Slack.&lt;/p&gt;</content><author><name></name></author><category term="gennaro" /><summary type="html">This quarter, I focused on two areas of Mr.Docs: adding first-class support for function objects, the pattern behind C++20 Niebloids and Ranges CPOs, and overhauling how the tool turns C++ metadata into documentation output (the reflection layer). Function objects: documenting what users actually call In modern C++ libraries, many “functions” are actually global objects whose type has operator() overloads. The Ranges library, for instance, defines std::ranges::sort() not as a function template but as a variable of some unspecified callable type. Users call it like a function and expect it to be documented like one. Before this quarter, Mr.Docs didn’t know the difference: it would document the variable and its cryptic implementation type. The new function-object support (roughly 4,600 lines across 38 files) bridges this gap. When Mr.Docs encounters a variable whose type is a record with no public members but operator() overloads and special member functions, it now synthesizes free-function documentation entries named after the variable. The underlying type is marked implementation-defined and hidden from the output. Multi-overload function objects are naturally grouped by the existing overload machinery. So, given: struct abs_fn { double operator()(double x) const noexcept; }; inline constexpr abs_fn abs = {}; Mr.Docs documents it as simply: double abs(double x) noexcept; For cases where auto-detection isn’t quite right — for example, when the type has extra public members — library authors can use the new @functionobject or @functor doc commands. There is also an auto-function-objects config option to control the behavior globally. The feature comes with a comprehensive test fixture covering single and multi-overload function objects, templated types, and types that live in nested detail namespaces. 
Reflection: from boilerplate to a single generic template The bigger effort — and the one that kept surprising me with its scope — was the reflection refactoring. Mr.Docs converts its internal C++ metadata into a DOM (a tree of lazy objects) that drives the Handlebars template engine. Before this quarter, every type in the system required a hand-written tag_invoke() overload: one function to map the type’s fields to DOM properties, another to convert it to a dom::Value. Adding a new symbol kind meant touching half a dozen files and following a pattern that was easy to get wrong. The goal was simple to state: replace all of that with a single generic template that works for any type carrying a describe macro. Phase 1: Boost.Describe The first attempt used Boost.Describe. I added BOOST_DESCRIBE_STRUCT() annotations to every metadata type and wrote generic merge() and mapReflectedType() templates that iterated over the described members. This proved the concept and eliminated a great deal of boilerplate. However, we didn’t want a public dependency on Boost.Describe, which meant the dependency was hidden in .cpp files and couldn’t be used in templates living in public heades, Phase 2: custom reflection macros So I wrote our own. MRDOCS_DESCRIBE_STRUCT() and MRDOCS_DESCRIBE_CLASS() provide the same compile-time member and base-class iteration as Boost.Describe, but with no external dependency. The macros live in Describe.hpp and produce constexpr descriptor lists that the rest of the system iterates with describe::for_each(). Phase 3: removing the overloads With the describe macros in place, I could write generic implementations of tag_invoke() for both LazyObjectMapTag (DOM mapping) and ValueFromTag (value conversion), plus a generic merge(). Each one replaces dozens of per-type overloads with a single constrained template. 
The mapMember() function handles the dispatch: optionals are unwrapped, vectors become lazy arrays, described enums become kebab-case strings, and compound described types become lazy objects — all automatically. Removing the overloads was not as straightforward as I had hoped. The old overloads were entangled with: The Handlebars templates, which assumed specific DOM property names. Renaming symbol to id, type to underlyingType, and description to document required updating templates and golden tests in lockstep. The XML generator, which silently skipped types that weren’t described. Adding MRDOCS_DESCRIBE_STRUCT() to TemplateInfo and MemberPointerType made the XML output more complete, requiring schema updates and golden-test regeneration. The result Out of the original 39 custom tag_invoke(LazyObjectMapTag) overloads, only 7 remain — each with genuinely non-reflectable logic (computed properties, polymorphic dispatch, or member decomposition). Roughly 60 tag_invoke(ValueFromTag) boilerplate overloads were also removed. Adding a new metadata type to Mr.Docs now requires nothing beyond MRDOCS_DESCRIBE_STRUCT() at the point of definition. The XML Generator: a full rewrite in 350 lines The XML generator was the first major payoff of the reflection work (although it was initially done when we were using Boost.Describe). The old generator had its own hand-written serialization for every metadata type, completely independent of the DOM layer. It was a parallel set of per-type functions that had to be kept in sync with every schema change. I replaced it with a generic implementation built entirely on the describe macros. The core is about 350 lines: writeMembers() walks describe_bases and describe_members, writeElement() dispatches on type traits for primitives, optionals, vectors, and enums, and writePolymorphic() handles the handful of type hierarchies (Type, TParam, TArg, Block, Inline) via .inc-generated switches. 
The old generator needed a new function for every type; the new one handles them all, and the 241 files changed in that commit were almost entirely golden-test updates reflecting the now-more-complete and totally changed output. Smaller fixes Alongside the two main efforts, I fixed several bugs that came up during development or were reported by users: Markdown inline formatting (bold, italic, code) and bullet lists were not rendering correctly in certain combinations. &amp;lt;pre&amp;gt; tags were missing around HTML code blocks. bottomUpTraverse() was silently skipping ListBlock items, causing doc-comment content to be lost. Several CI improvements: faster PR demos, better failure detection, increased test coverage for the XML generator. Looking ahead The reflection infrastructure is now in good shape, and most of the mechanical boilerplate is gone. The remaining tag_invoke() overloads are genuinely custom — they compute properties that don’t exist as C++ members, or they dispatch polymorphically across type hierarchies. Those are worth keeping. Going forward, I’d like to explore whether the describe macros can replace more of the manual visitor code throughout the codebase. 
As always, feedback and suggestions are welcome — feel free to open an issue or reach out on Slack.</summary></entry><entry><title type="html">Speed and Safety</title><link href="http://cppalliance.org/matt/2026/04/06/Matts2026Q1Update.html" rel="alternate" type="text/html" title="Speed and Safety" /><published>2026-04-06T00:00:00+00:00</published><updated>2026-04-06T00:00:00+00:00</updated><id>http://cppalliance.org/matt/2026/04/06/Matts2026Q1Update</id><content type="html" xml:base="http://cppalliance.org/matt/2026/04/06/Matts2026Q1Update.html">&lt;p&gt;In my &lt;a href=&quot;https://cppalliance.org/matt/2026/01/15/Matts2025Q4Update.html&quot;&gt;last post&lt;/a&gt; I mentioned that the &lt;a href=&quot;https://github.com/cppalliance/int128&quot;&gt;int128&lt;/a&gt; library would be getting CUDA support in the future.
The good news is that the future is now!
Nearly all the functions in the library are available on both host and device.
Any function that has &lt;code&gt;BOOST_INT128_HOST_DEVICE&lt;/code&gt; in its signature in the &lt;a href=&quot;https://develop.int128.cpp.al/overview.html&quot;&gt;documentation&lt;/a&gt; is available for use.
&lt;a href=&quot;https://develop.int128.cpp.al/examples.html#examples_cuda&quot;&gt;An example&lt;/a&gt; of how to use the types in CUDA kernels has been added as well.
These can be as simple as:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;using test_type = boost::int128::uint128_t;

__global__ void cuda_mul(const test_type* in1, const test_type* in2, test_type* out, int num_elements)
{
    int i = blockDim.x * blockIdx.x + threadIdx.x;

    if (i &amp;lt; num_elements)
    {
        out[i] = in1[i] * in2[i];
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Other Boost libraries are or will be beneficiaries of this effort as well.
First, Boost.Charconv’s &lt;code&gt;boost::charconv::from_chars&lt;/code&gt; and &lt;code&gt;boost::charconv::to_chars&lt;/code&gt; for integers can now run on device.
This can give you up to an order of magnitude improvement in performance.
These results and benchmarks are available in the &lt;a href=&quot;https://www.boost.org/doc/libs/develop/libs/charconv/doc/html/charconv.html&quot;&gt;Boost.Charconv documentation&lt;/a&gt;.
Next, in the coming months Boost.Decimal will gain CUDA support as part of this effort.
We think users will benefit greatly from being able to perform massively parallel parsing, serialization, and calculations on decimal numbers.
Stay tuned: this will likely land in Boost 1.92.
In the meantime, enjoy the initial release of Decimal coming in Boost 1.91!&lt;/p&gt;
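&lt;p&gt;Boost.Charconv deliberately mirrors the standard &lt;code&gt;&amp;lt;charconv&amp;gt;&lt;/code&gt; interface, so the call shape is the same whether you use the standard facilities on host or &lt;code&gt;boost::charconv&lt;/code&gt; on device. Here is a minimal host-side sketch using the standard equivalents (the helper names are ours, for illustration only):&lt;/p&gt;

```cpp
#include <charconv>
#include <cstdint>
#include <string>
#include <string_view>

// Parse an unsigned 64-bit integer; returns true only when the whole
// string was consumed and no error occurred.
bool parse_u64(std::string_view s, std::uint64_t& out)
{
    auto [ptr, ec] = std::from_chars(s.data(), s.data() + s.size(), out);
    return ec == std::errc{} && ptr == s.data() + s.size();
}

// Serialize an unsigned 64-bit integer into a string.
std::string u64_to_string(std::uint64_t v)
{
    char buf[24]; // 20 digits max for 64-bit, plus headroom
    auto [ptr, ec] = std::to_chars(buf, buf + sizeof(buf), v);
    return std::string(buf, ptr);
}
```

&lt;p&gt;On device, the same pattern applies with &lt;code&gt;boost::charconv::from_chars&lt;/code&gt; and &lt;code&gt;boost::charconv::to_chars&lt;/code&gt; inside kernel code.&lt;/p&gt;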

&lt;p&gt;On the other side of the coin from the performance we’re looking to deliver in coming versions of Boost, we must not forget the importance of safety.
There exist plenty of &lt;a href=&quot;https://en.wikipedia.org/wiki/Integer_overflow#Examples&quot;&gt;examples of damage and death&lt;/a&gt; caused by arithmetic errors in computer programs.
Can we create a library that provides guaranteed safety in arithmetic while minimizing performance losses and integration friction?
How does one guarantee the behavior of their types?
In our implementation, &lt;a href=&quot;https://github.com/cppalliance/safe_numbers&quot;&gt;Boost.Safe_Numbers&lt;/a&gt;, we are investigating the use of the &lt;a href=&quot;https://why3.org&quot;&gt;Why3&lt;/a&gt; platform for deductive program verification.
By pursuing these formal methods, safety can have real meaning.
We will continue to provide additional details as part of the &lt;a href=&quot;https://develop.safe-numbers.cpp.al/verification.html&quot;&gt;formal verification page&lt;/a&gt; of our documentation.
Since the library will inevitably surface more errors (which is a good thing), we aim to fail as early as possible, and when we do, to provide the most helpful error message that we can.
For example, we have some static arithmetic errors reported in as few as three lines:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;clang-darwin.compile.c++ ../../../bin.v2/libs/safe_numbers/test/compile_fail_basic_usage_constexpr.test/clang-darwin-21/debug/arm_64/cxxstd-20-iso/threading-multi/visibility-hidden/compile_fail_basic_usage_constexpr.o
../examples/compile_fail_basic_usage_constexpr.cpp:18:22: error: constexpr variable 'z' must be initialized by a constant expression
   18 |         constexpr u8 z {x + y};
      |                      ^ ~~~~~~~
../../../boost/safe_numbers/detail/unsigned_integer_basis.hpp:397:17: note: subexpression not valid in a constant expression
  397 |                 throw std::overflow_error(&quot;Overflow detected in u8 addition&quot;);
      |                 ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../examples/compile_fail_basic_usage_constexpr.cpp:18:25: note: in call to 'operator+&amp;lt;unsigned char&amp;gt;({255}, {2})'
   18 |         constexpr u8 z {x + y};
      |                         ^~~~~
1 error generated.
&lt;/code&gt;&lt;/pre&gt;
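&lt;p&gt;The trick behind this compile-time detection is simple: a &lt;code&gt;throw&lt;/code&gt; expression is not a constant expression, so an overflow reached during constexpr evaluation becomes a hard compile error instead of a runtime exception. A minimal sketch of the pattern (illustrative only, not the library’s actual implementation):&lt;/p&gt;

```cpp
#include <limits>
#include <stdexcept>

// Illustrative checked addition in the style of a safe-integer type.
// If constant evaluation reaches the throw, the program is ill-formed,
// so overflow is diagnosed at compile time; at runtime it throws.
constexpr unsigned char checked_add(unsigned char a, unsigned char b)
{
    unsigned int wide = static_cast<unsigned int>(a) + b;
    if (wide > std::numeric_limits<unsigned char>::max())
        throw std::overflow_error("Overflow detected in u8 addition");
    return static_cast<unsigned char>(wide);
}

// OK at compile time: 100 + 27 fits in unsigned char.
static_assert(checked_add(100, 27) == 127);

// constexpr unsigned char bad = checked_add(255, 2); // would not compile
```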

&lt;p&gt;Our runtime error reporting system is built on Boost.Throw_Exception, so it can report not only the type, operation, file, and line, but also an entire stack trace when optionally linked with Boost.Stacktrace.
And not to leave our discussion of CUDA behind so quickly: the Safe_Numbers library will have CUDA support as well.
One thing that we will continue to refine is synchronizing error reporting on device, since one cannot throw an exception there.&lt;/p&gt;
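&lt;p&gt;A common pattern here - a sketch, not necessarily what the library will ship - is to record only the first error into a shared error slot with an atomic compare-exchange, then have the host inspect the slot after the kernel completes (on device the analogue would be &lt;code&gt;atomicCAS&lt;/code&gt; on a slot in device memory). The same shape in portable C++, with hypothetical names throughout:&lt;/p&gt;

```cpp
#include <atomic>

// Hypothetical error codes for the sketch.
enum error_code : int { no_error = 0, overflow_detected = 1 };

// Record an error only if no error has been recorded yet
// ("first error wins"), so concurrent writers don't clobber the
// original cause of failure.
void record_error(std::atomic<int>& slot, int err)
{
    int expected = no_error;
    slot.compare_exchange_strong(expected, err);
}
```

&lt;p&gt;After the parallel region finishes, a single check of the slot tells the caller whether, and how, any work item failed.&lt;/p&gt;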

&lt;p&gt;We are always looking for users of all the libraries discussed.
If you are a current or prospective user, feel free to reach out and let us know what you’re using it for, or any issues that you find.&lt;/p&gt;</content><author><name></name></author><category term="matt" /><summary type="html">In my last post I mentioned that int128 library would be getting CUDA support in the future. The good news is that the future is now! Nearly all the functions in the library are available on both host and device. Any function that has BOOST_INT128_HOST_DEVICE in its signature in the documentation is available for usage. An example of how to use the types in the CUDA kernels has been added as well. These can be as simple as: using test_type = boost::int128::uint128_t; __global__ void cuda_mul(const test_type* in1, const test_type* in2, test_type* out, int num_elements) { int i = blockDim.x * blockIdx.x + threadIdx.x; if (i &amp;lt; num_elements) { out[i] = in1[i] * in2[i]; } } Other Boost libraries are or will be beneficiaries of this effort as well. First, Boost.Charconv now supports boost::charconv::from_chars and boost::charconv::to_chars for integers being run on device. This can give you up to an order of magnitude improvement in performance. These results and benchmarks are available in the Boost.Charconv documentation. Next, in the coming months Boost.Decimal will gain CUDA support as part of this effort. We think users will benefit greatly from being able to perform massively parallel parsing, serialization, and calculations on decimal numbers. Stay tuned for this likely in Boost 1.92. In the meantime, enjoy the initial release of Decimal coming in Boost 1.91! On the other side of the performance that we’re looking to deliver in coming versions of Boost, we must not forget the importance of safety. There exist plenty of examples of damage and death caused by arithmetic errors in computer programs. Can we create a library that provides guaranteed safety in arithmetic while minimizing performance losses and integration friction? 
How does one guarantee the behavior of their types? In our implementation, Boost.Safe_Numbers, we are investigating the usage of the Why3 platform for deductive program verification. By pursuing these formal methods, safety can have real meaning. We will continue to provide additional details as part of the formal verification page of our documentation. Since inevitably the library will cause an increase in the number of errors (which is a good thing), we aim to fail as early as possible, and when we do provide the most helpful error message that we can. For example, we have some static arithmetic errors reported in as few as three lines: clang-darwin.compile.c++ ../../../bin.v2/libs/safe_numbers/test/compile_fail_basic_usage_constexpr.test/clang-darwin-21/debug/arm_64/cxxstd-20-iso/threading-multi/visibility-hidden/compile_fail_basic_usage_constexpr.o ../examples/compile_fail_basic_usage_constexpr.cpp:18:22: error: constexpr variable 'z' must be initialized by a constant expression 18 | constexpr u8 z {x + y}; | ^ ~~~~~~~ ../../../boost/safe_numbers/detail/unsigned_integer_basis.hpp:397:17: note: subexpression not valid in a constant expression 397 | throw std::overflow_error(&quot;Overflow detected in u8 addition&quot;); | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ../examples/compile_fail_basic_usage_constexpr.cpp:18:25: note: in call to 'operator+&amp;lt;unsigned char&amp;gt;({255}, {2})' 18 | constexpr u8 z {x + y}; | ^~~~~ 1 error generated. Our runtime error reporting system fundamentally uses Boost.Throw_Exception so it can report not only the type, operation, file and line, but also up to an entire stack trace when leveraging the optional linking with Boost.Stacktrace. Not to forget our discussion of CUDA so quickly, the Safe_Numbers library will have CUDA support. One thing that we will continue to refine is synchronizing error reporting on device as one cannot throw an exception on device. 
We are always looking for users of all the libraries discussed. If you are a current or prospective user, feel free to reach out and let us know what you’re using it for, or any issues that you find.</summary></entry><entry><title type="html">The road to C++20 modules, Capy and Redis</title><link href="http://cppalliance.org/ruben/2026/04/06/Ruben2026Q1Update.html" rel="alternate" type="text/html" title="The road to C++20 modules, Capy and Redis" /><published>2026-04-06T00:00:00+00:00</published><updated>2026-04-06T00:00:00+00:00</updated><id>http://cppalliance.org/ruben/2026/04/06/Ruben2026Q1Update</id><content type="html" xml:base="http://cppalliance.org/ruben/2026/04/06/Ruben2026Q1Update.html">&lt;h2 id=&quot;modules-in-using-stdcpp-2026&quot;&gt;Modules in using std::cpp 2026&lt;/h2&gt;

&lt;p&gt;C++20 modules have been in the standard for 6 years already, but we’re not seeing
widespread adoption. The ecosystem is still getting ready. As a quick example,
&lt;code&gt;import std&lt;/code&gt;, an absolute blessing for compile times, requires build system support,
and this is still experimental as of CMake 4.3.1.&lt;/p&gt;

&lt;p&gt;And yet, I’ve realized that writing module-native applications is really enjoyable.
The system is well thought out and allows for better encapsulation,
much as you’d find in a modern programming language.
I’ve been using my &lt;a href=&quot;https://github.com/anarthal/servertech-chat/tree/feature/cxx20-modules&quot;&gt;Servertech Chat project&lt;/a&gt;
(a webserver that uses Boost.Asio and companion libraries) to get a taste
of what modules really look like in real code.&lt;/p&gt;

&lt;p&gt;When writing this, I saw clearly that having big dependencies that can’t be consumed
via &lt;code&gt;import&lt;/code&gt; is a big problem. With the scheme I used, compile times got 66% worse
instead of improving. This is because when writing modules, you tend to have
a larger number of translation units. These are supposed to be much more lightweight,
but if you’re relying on &lt;code&gt;#include&lt;/code&gt; for third-party libraries, they’re not.&lt;/p&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;//
// File: redis_client.cppm. Contains only the interface declaration (somewhat like headers do)
//
module;

// No import boost yet - must be in the global module fragment
#include &amp;lt;boost/asio/awaitable.hpp&amp;gt;
#include &amp;lt;boost/system/result.hpp&amp;gt;

module servertech_chat:redis_client;
import std;

namespace chat {

class redis_client
{
public:
    virtual ~redis_client() {}
    virtual boost::asio::awaitable&amp;lt;boost::system::result&amp;lt;std::int64_t&amp;gt;&amp;gt; get_int_key(std::string_view key) = 0;
    // ...
};

}

//
// File: redis_client.cpp. Contains the implementation
//
module;

#include &amp;lt;boost/redis/connection.hpp&amp;gt;

module servertech_chat;
import :redis_client;
import std;

namespace {

class redis_client_impl final : public redis_client { /* ... */ };

}

&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;I analyze this in much more depth in
&lt;a href=&quot;https://youtu.be/hD9JHkt7e2Y&quot;&gt;the talk I had the pleasure of giving at using std::cpp&lt;/a&gt;
this March in Madrid. The TL;DR is that supporting &lt;code&gt;import boost&lt;/code&gt; natively
is very important for any serious usage of Boost in the modules world.&lt;/p&gt;

&lt;h2 id=&quot;import-boost-is-upon-us&quot;&gt;&lt;code&gt;import boost&lt;/code&gt; is upon us&lt;/h2&gt;

&lt;p&gt;As you may know, I prefer doing to saying, and I’ve been writing a prototype to support
&lt;code&gt;import boost&lt;/code&gt; natively while keeping today’s header code as is. This prototype has
seen substantial advancements during these months.&lt;/p&gt;

&lt;p&gt;I’ve developed a &lt;a href=&quot;https://github.com/anarthal/boost-cmake/blob/feature/cxx20-modules/modules.md&quot;&gt;systematic approach for modularization&lt;/a&gt;,
and we’ve settled on the ABI-breaking style, with compatibility headers.
I’ve added support for GCC (the remaining compiler) to the core libraries
that we already supported (Config, Mp11, Core, Assert, ThrowException, Charconv),
and I’ve added modular bindings for Variant2, Compat, Endian, System, TypeTraits,
Optional, ContainerHash, IO and Asio.
So far, these are only tested under Clang - it’s part of a discovery process.
The idea is modularizing the flagship libraries
to verify that the approach works, and to measure compile time improvements.&lt;/p&gt;

&lt;p&gt;There is still a lot to do before things become functional.
I’ve received feedback from many community members, which has been invaluable.

&lt;h2 id=&quot;redis-meets-capy&quot;&gt;Redis meets Capy&lt;/h2&gt;

&lt;p&gt;If you’re a user of Boost.Asio and coroutines, you probably know that there’s a new player
in town - Capy and Corosio. They’re a coroutines-native Asio replacement that promises
a range of benefits, from improved expressiveness to saner compile times,
without performance loss.&lt;/p&gt;

&lt;p&gt;Since I maintain Boost.MySQL and co-maintain Boost.Redis, I know the pain of writing
operations using the universal Asio model. Lifetime management is difficult to follow,
testing is complex, and things must remain header-only (and usually heavily templatized).
Coroutine code is much simpler to write and understand, and it’s what I use whenever I can.
So obviously I’m interested in this project.&lt;/p&gt;

&lt;p&gt;My long-term idea is to create v2 versions of MySQL and Redis that expose a Capy/Corosio
interface. As a proof-of-concept, I migrated Boost.Redis and some of its tests.
Some polishing is still needed, but it works!
You can read the &lt;a href=&quot;https://lists.boost.org/archives/list/boost@lists.boost.org/thread/FSX5H3MDQSLO3VZFEOUINUZPYQFCIASB/&quot;&gt;full report on the Boost mailing list&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Some sample code as an appetizer:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;
capy::task&amp;lt;void&amp;gt; run_request(connection&amp;amp; conn)
{
    // A request containing only a ping command.
    request req;
    req.push(&quot;PING&quot;, &quot;Hello world&quot;);

    // Response where the PONG response will be stored.
    response&amp;lt;std::string&amp;gt; resp;

    // Executes the request.
    auto [ec] = co_await conn.exec(req, resp);
    if (ec)
        co_return;
    std::cout &amp;lt;&amp;lt; &quot;PING value: &quot; &amp;lt;&amp;lt; std::get&amp;lt;0&amp;gt;(resp).value() &amp;lt;&amp;lt; std::endl;
}

capy::task&amp;lt;void&amp;gt; co_main()
{
    connection conn{(co_await capy::this_coro::executor).context()};
    co_await capy::when_any(
        // Sends the request
        run_request(conn),

        // Performs connection establishment, re-connection, pings...
        conn.run(config{})
    );
}

&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id=&quot;redis-pubsub-improvements&quot;&gt;Redis PubSub improvements&lt;/h2&gt;

&lt;p&gt;Working with PubSub messages in Boost.Redis has always been more involved than in other libraries.
For example, we support transparent reconnection, but (before 1.91) the user had to explicitly
re-establish subscriptions:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;request req;
req.push(&quot;SUBSCRIBE&quot;, &quot;channel&quot;);
while (conn-&amp;gt;will_reconnect()) {
    // Reconnect to the channels.
    co_await conn-&amp;gt;async_exec(req, ignore);

    // ...
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Boost 1.91 has added PubSub state restoration. A fancy name for a simple feature:
established subscriptions are recorded, and when a reconnection happens,
the subscription is re-established automatically:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;// Subscribe to the channel 'mychannel'. If a re-connection happens,
// an appropriate SUBSCRIBE command is issued to re-establish the subscription.
request req;
req.subscribe({&quot;mychannel&quot;});
co_await conn-&amp;gt;async_exec(req);
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Boost 1.91 also adds &lt;code&gt;flat_tree&lt;/code&gt;, a specialized container for Redis messages
with an emphasis on memory reuse, performance, and usability.
This container is especially appropriate when dealing with PubSub.
We’ve also added &lt;code&gt;connection::async_receive2()&lt;/code&gt;, a higher-performance
replacement for &lt;code&gt;connection::async_receive()&lt;/code&gt; that consumes messages in batches,
rather than one-by-one, eliminating re-scheduling overhead.
And &lt;code&gt;push_parser&lt;/code&gt;, a view to transform raw RESP3 nodes into user-friendly structures.&lt;/p&gt;

&lt;p&gt;With these improvements, code goes from:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;// Loop while reconnection is enabled
while (conn-&amp;gt;will_reconnect()) {

    // Reconnect to channels.
    co_await conn-&amp;gt;async_exec(req, ignore);

    // Loop reading Redis push messages.
    for (error_code ec;;) {
        // First try to read any buffered pushes.
        conn-&amp;gt;receive(ec);
        if (ec == error::sync_receive_push_failed) {
            ec = {};

            // Wait for pushes
            co_await conn-&amp;gt;async_receive(asio::redirect_error(asio::use_awaitable, ec));
        }

        if (ec)
            break;  // Connection lost, break so we can reconnect to channels.

        // Left to the user: resp contains raw RESP3 nodes, which need to be parsed manually!

        // Remove the nodes corresponding to one message
        consume_one(resp);
    }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;To:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;// Loop to read Redis push messages.
while (conn-&amp;gt;will_reconnect()) {
    // No need to reconnect, we now have PubSub state restoration
    // Wait for pushes
    auto [ec] = co_await conn-&amp;gt;async_receive2(asio::as_tuple);
    if (ec)
        break; // Cancelled

    // Consume the messages
    for (push_view elem : push_parser(resp.value()))
        std::cout &amp;lt;&amp;lt; &quot;Received message from channel &quot; &amp;lt;&amp;lt; elem.channel &amp;lt;&amp;lt; &quot;: &quot; &amp;lt;&amp;lt; elem.payload &amp;lt;&amp;lt; &quot;\n&quot;;

    // Clear all the batch
    resp.value().clear();
}
&lt;/code&gt;&lt;/pre&gt;</content><author><name></name></author><category term="ruben" /><summary type="html">Modules in using std::cpp 2026 C++20 modules have been in the standard for 6 years already, but we’re not seeing widespread adoption. The ecosystem is still getting ready. As a quick example, import std, an absolute blessing for compile times, requires build system support, and this is still experimental as of CMake 4.3.1. And yet, I’ve realized that writing module-native applications is really enjoyable. The system is well-thought and allows for better encapsulation, just as you’d write in a modern programming language. I’ve been using my Servertech Chat project (a webserver that uses Boost.Asio and companion libraries) to get a taste of what modules really look like in real code. When writing this, I saw clearly that having big dependencies that can’t be consumed via import is a big problem. With the scheme I used, compile times got 66% worse instead of improving. This is because when writing modules, you tend to have a bigger number of translation units. These are supposed to be much more lightweight, but if you’re relying on #include for third-party libraries, they’re not. For example: // // File: redis_client.cppm. Contains only the interface declaration (somehow like headers do) // module; // No import boost yet - must be in the global module fragment #include &amp;lt;boost/asio/awaitable.hpp&amp;gt; #include &amp;lt;boost/system/result.hpp&amp;gt; module servertech_chat:redis_client; import std; namespace chat { class redis_client { public: virtual ~redis_client() {} virtual boost::asio::awaitable&amp;lt;boost::system::result&amp;lt;std::int64_t&amp;gt;&amp;gt; get_int_key(std::string_view key) = 0; // ... }; } // // File: redis_client.cpp. 
Contains the implementation // module; #include &amp;lt;boost/redis/connection.hpp&amp;gt; module servertech_chat; import :redis_client; import std; namespace { class redis_client_impl final : public redis_client { /* ... */ }; } I analyze this in much more depth in the talk I’ve had the pleasure to give at using std::cpp this March in Madrid. The TL;DR is that supporting import boost natively is very important for any serious usage of Boost in the modules world. import boost is upon us As you may know, I prefer doing to saying, and I’ve been writing a prototype to support import boost natively while keeping today’s header code as is. This prototype has seen substantial advancements during these months. I’ve developed a systematic approach for modularization, and we’ve settled for the ABI-breaking style, with compatibility headers. I’ve added support for GCC (the remaining compiler) to the core libraries that we already supported (Config, Mp11, Core, Assert, ThrowException, Charconv), and I’ve added modular bindings for Variant2, Compat, Endian, System, TypeTraits, Optional, ContainerHash, IO and Asio. These are only tested under Clang yet - it’s part of a discovery process. The idea is modularizing the flagship libraries to verify that the approach works, and to measure compile time improvements. There is still a lot to do before things become functional. I’ve received helpful feedback from many community members, which has been invaluable. Redis meets Capy If you’re a user of Boost.Asio and coroutines, you probably know that there’s a new player in town - Capy and Corosio. They’re a coroutines-native Asio replacement which promise a range of benefits, from improved expressiveness to saner compile times, without performance loss. Since I maintain Boost.MySQL and co-maintain Boost.Redis, I know the pain of writing operations using the universal Asio model. 
Lifetime management is difficult to follow, testing is complex, and things must remain header-only (and usually heavily templatized). Coroutine code is much simpler to write and understand, and it’s what I use whenever I can. So obviously I’m interested in this project. My long-term idea is creating a v2 version of MySQL and Redis that exposes a Capy/Corosio interface. As a proof-of-concept, I migrated Boost.Redis and some of its tests. Still some polishing needed, but - it works! You can read the full report on the Boost mailing list. Some sample code as an appetizer: capy::task&amp;lt;void&amp;gt; run_request(connection&amp;amp; conn) { // A request containing only a ping command. request req; req.push(&quot;PING&quot;, &quot;Hello world&quot;); // Response where the PONG response will be stored. response&amp;lt;std::string&amp;gt; resp; // Executes the request. auto [ec] = co_await conn.exec(req, resp); if (ec) co_return; std::cout &amp;lt;&amp;lt; &quot;PING value: &quot; &amp;lt;&amp;lt; std::get&amp;lt;0&amp;gt;(resp).value() &amp;lt;&amp;lt; std::endl; } capy::task&amp;lt;void&amp;gt; co_main() { connection conn{(co_await capy::this_coro::executor).context()}; co_await capy::when_any( // Sends the request run_request(conn), // Performs connection establishment, re-connection, pings... conn.run(config{}) ); } Redis PubSub improvements Working with PubSub messages in Boost.Redis has always been more involved than in other libraries. For example, we support transparent reconnection, but (before 1.91), the user had to explicitly re-establish subscriptions: request req; req.push(&quot;SUBSCRIBE&quot;, &quot;channel&quot;); while (conn-&amp;gt;will_reconnect()) { // Reconnect to the channels. co_await conn-&amp;gt;async_exec(req, ignore); // ... } Boost 1.91 has added PubSub state restoration. 
A fancy name but an easy feature: established subscriptions are recorded, and when a reconnection happens, the subscription is re-established automatically: // Subscribe to the channel 'mychannel'. If a re-connection happens, // an appropriate SUBSCRIBE command is issued to re-establish the subscription. request req; req.subscribe({&quot;mychannel&quot;}); co_await conn-&amp;gt;async_exec(req); Boost 1.91 also adds flat_tree, a specialized container for Redis messages with an emphasis on memory-reuse, performance and usability. This container is especially appropriate when dealing with PubSub. We’ve also added connection::async_receive2(), a higher-performance replacement for connection::async_receive() that consumes messages in batches, rather than one-by-one, eliminating re-scheduling overhead. And push_parser, a view to transform raw RESP3 nodes into user-friendly structures. With these improvements, code goes from: // Loop while reconnection is enabled while (conn-&amp;gt;will_reconnect()) { // Reconnect to channels. co_await conn-&amp;gt;async_exec(req, ignore); // Loop reading Redis pushs messages. for (error_code ec;;) { // First try to read any buffered pushes. conn-&amp;gt;receive(ec); if (ec == error::sync_receive_push_failed) { ec = {}; // Wait for pushes co_await conn-&amp;gt;async_receive(asio::redirect_error(asio::use_awaitable, ec)); } if (ec) break; // Connection lost, break so we can reconnect to channels. // Left to the user: resp contains raw RESP3 nodes, which need to be parsed manually! // Remove the nodes corresponding to one message consume_one(resp); } } To: // Loop to read Redis push messages. 
while (conn-&amp;gt;will_reconnect()) { // No need to reconnect, we now have PubSub state restoration // Wait for pushes auto [ec] = co_await conn-&amp;gt;async_receive2(asio::as_tuple); if (ec) break; // Cancelled // Consume the messages for (push_view elem : push_parser(resp.value())) std::cout &amp;lt;&amp;lt; &quot;Received message from channel &quot; &amp;lt;&amp;lt; elem.channel &amp;lt;&amp;lt; &quot;: &quot; &amp;lt;&amp;lt; elem.payload &amp;lt;&amp;lt; &quot;\n&quot;; // Clear all the batch resp.value().clear(); }</summary></entry><entry><title type="html">Hubs, intervals and math</title><link href="http://cppalliance.org/joaquin/2026/04/02/Joaquins2026Q1Update.html" rel="alternate" type="text/html" title="Hubs, intervals and math" /><published>2026-04-02T00:00:00+00:00</published><updated>2026-04-02T00:00:00+00:00</updated><id>http://cppalliance.org/joaquin/2026/04/02/Joaquins2026Q1Update</id><content type="html" xml:base="http://cppalliance.org/joaquin/2026/04/02/Joaquins2026Q1Update.html">&lt;p&gt;During Q1 2026, I’ve been working in the following areas:&lt;/p&gt;

&lt;h3 id=&quot;boostcontainerhub&quot;&gt;&lt;code&gt;boost::container::hub&lt;/code&gt;&lt;/h3&gt;

&lt;p&gt;&lt;a href=&quot;https://github.com/joaquintides/hub&quot;&gt;&lt;code&gt;boost::container::hub&lt;/code&gt;&lt;/a&gt; is a nearly drop-in replacement for
C++26 &lt;a href=&quot;https://eel.is/c++draft/sequences#hive&quot;&gt;&lt;code&gt;std::hive&lt;/code&gt;&lt;/a&gt; sporting a simpler data structure and
providing competitive performance with respect to the de facto reference implementation
&lt;a href=&quot;https://github.com/mattreecebentley/plf_hive&quot;&gt;&lt;code&gt;plf::hive&lt;/code&gt;&lt;/a&gt;. When I first read about &lt;code&gt;std::hive&lt;/code&gt;,
I couldn’t help thinking how complex the internal design of the container is, and wondered
if something leaner could in fact be more effective. &lt;code&gt;boost::container::hub&lt;/code&gt; critically relies
on two realizations:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Identification of empty slots by way of &lt;a href=&quot;https://en.cppreference.com/w/cpp/numeric/countr_zero.html&quot;&gt;&lt;code&gt;std::countr_zero&lt;/code&gt;&lt;/a&gt;
operations on a bitmask is extremely fast.&lt;/li&gt;
  &lt;li&gt;Modern allocators are very fast, too: &lt;code&gt;boost::container::hub&lt;/code&gt; does many more allocations
than &lt;code&gt;plf::hive&lt;/code&gt;, but this doesn’t degrade its performance significantly (although it affects
cache locality).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;boost::container::hub&lt;/code&gt; is formally proposed for inclusion in Boost.Container and will be
officially reviewed April 16-26. Ion Gaztañaga serves as the review manager.&lt;/p&gt;

&lt;h3 id=&quot;using-stdcpp-2026&quot;&gt;using std::cpp 2026&lt;/h3&gt;

&lt;p&gt;I gave my talk &lt;a href=&quot;https://github.com/joaquintides/usingstdcpp2026&quot;&gt;“The Mathematical Mind of a C++ Programmer”&lt;/a&gt;
at the &lt;a href=&quot;https://eventos.uc3m.es/141471/detail/using-std-cpp-2026.html&quot;&gt;using std::cpp 2026&lt;/a&gt; conference
held in Madrid on March 16-19. I had a lot of fun preparing the presentation and
delivering the talk, and some interesting discussions arose around it.
This is a subject I’ve been wanting to talk about for decades, so I’m somewhat relieved I finally
got it over with this year. Always happy to discuss C++ and math, so if you have feedback
or want to continue the conversation, please reach out to me.&lt;/p&gt;

&lt;h3 id=&quot;boostunordered&quot;&gt;Boost.Unordered&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Wrote maintenance fixes
&lt;a href=&quot;https://github.com/boostorg/unordered/pull/328&quot;&gt;PR#328&lt;/a&gt;,
&lt;a href=&quot;https://github.com/boostorg/unordered/pull/335&quot;&gt;PR#335&lt;/a&gt;,
&lt;a href=&quot;https://github.com/boostorg/unordered/pull/336&quot;&gt;PR#336&lt;/a&gt;,
&lt;a href=&quot;https://github.com/boostorg/unordered/pull/337&quot;&gt;PR#337&lt;/a&gt;,
&lt;a href=&quot;https://github.com/boostorg/unordered/pull/339&quot;&gt;PR#339&lt;/a&gt;,
&lt;a href=&quot;https://github.com/boostorg/unordered/pull/344&quot;&gt;PR#344&lt;/a&gt;,
&lt;a href=&quot;https://github.com/boostorg/unordered/pull/345&quot;&gt;PR#345&lt;/a&gt;. Some of these fixes are related
to Node.js vulnerabilities in the Antora setup used for doc building: as the number
of Boost libraries using Antora is bound to grow, maybe we should think of an automated
way to fix these vulnerabilities for the whole project.&lt;/li&gt;
  &lt;li&gt;Reviewed and merged
&lt;a href=&quot;https://github.com/boostorg/unordered/pull/317&quot;&gt;PR#317&lt;/a&gt;,
&lt;a href=&quot;https://github.com/boostorg/unordered/pull/332&quot;&gt;PR#332&lt;/a&gt;,
&lt;a href=&quot;https://github.com/boostorg/unordered/pull/334&quot;&gt;PR#334&lt;/a&gt;,
&lt;a href=&quot;https://github.com/boostorg/unordered/pull/341&quot;&gt;PR#341&lt;/a&gt;,
&lt;a href=&quot;https://github.com/boostorg/unordered/pull/342&quot;&gt;PR#342&lt;/a&gt;. Many thanks to
Sam Darwin, Braden Ganetsky and Andrey Semashev for their contributions.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;boostbimap&quot;&gt;Boost.Bimap&lt;/h3&gt;

&lt;p&gt;Merged
&lt;a href=&quot;https://github.com/boostorg/bimap/pull/31&quot;&gt;PR#31&lt;/a&gt; (&lt;code&gt;std::initializer_list&lt;/code&gt;
constructor) and provided testing and documentation for this new
feature (&lt;a href=&quot;https://github.com/boostorg/bimap/pull/54&quot;&gt;PR#54&lt;/a&gt;). The original
PR had been silently sitting in the queue for more than four years, and it
was only when it was brought to my attention in a Reddit conversation that
I got to take a look at it. Boost.Bimap needs an active maintainer;
I guess I could become that person.&lt;/p&gt;

&lt;h3 id=&quot;boosticl&quot;&gt;Boost.ICL&lt;/h3&gt;

&lt;p&gt;&lt;a href=&quot;https://github.com/llvm/llvm-project/pull/161366&quot;&gt;Recent changes&lt;/a&gt; in libc++ v22
code for associative container lookup have resulted in the 
&lt;a href=&quot;https://github.com/boostorg/icl/issues/51&quot;&gt;breakage of Boost.ICL&lt;/a&gt;. 
My understanding is that the changes in libc++ are not
standards conformant, and there’s an &lt;a href=&quot;https://github.com/llvm/llvm-project/issues/187667&quot;&gt;ongoing discussion&lt;/a&gt;
on that; in the meantime, I wrote and proposed a &lt;a href=&quot;https://github.com/boostorg/icl/pull/54&quot;&gt;PR&lt;/a&gt;
to Boost.ICL that fixes the problem (pending acceptance).&lt;/p&gt;

&lt;h3 id=&quot;support-to-the-community&quot;&gt;Support for the community&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;I’ve been helping a bit with Mark Cooper’s very successful
&lt;a href=&quot;https://x.com/search?q=%22Boost%20Blueprint%22&amp;amp;src=typed_query&amp;amp;f=live&quot;&gt;Boost Blueprint&lt;/a&gt;
series on X.&lt;/li&gt;
  &lt;li&gt;Supporting the community as a member of the Fiscal Sponsorship Committee (FSC).&lt;/li&gt;
&lt;/ul&gt;</content><author><name></name></author><category term="joaquin" /><summary type="html">During Q1 2026, I’ve been working in the following areas: boost::container::hub boost::container::hub is a nearly drop-in replacement of C++26 std::hive sporting a simpler data structure and providing competitive performance with respect to the de facto reference implementation plf::hive. When I first read about std::hive, I couldn’t help thinking how complex the internal design of the container is, and wondered if something leaner could in fact be more effective. boost::container::hub critically relies on two realizations: Identification of empty slots by way of std::countr_zero operations on a bitmask is extremely fast. Modern allocators are very fast, too: boost::container::hub does many more allocations than plf::hive, but this doesn’t degrade its performance significantly (although it affects cache locality). boost::container::hub is formally proposed for inclusion in Boost.Container and will be officially reviewed April 16-26. Ion Gaztañaga serves as the review manager. using std::cpp 2026 I gave my talk “The Mathematical Mind of a C++ Programmer” at the using std::cpp 2026 conference taking place in Madrid during March 16-19. I had a lot of fun preparing the presentation and delivering the actual talk, and some interesting discussions were had around it. This is a subject I’ve been wanting to talk about for decades, so I’m somewhat relieved I finally got it over with this year. Always happy to discuss C++ and math, so if you have feedback or want to continue the conversation, please reach out to me. Boost.Unordered Written maintenance fixes PR#328, PR#335, PR#336, PR#337, PR#339, PR#344, PR#345. 
Some of these fixes are related to Node.js vulnerabilities in the Antora setup used for doc building: as the number of Boost libraries using Antora is bound to grow, maybe we should think of an automated way to get these vulnerabilities automatically fixed for the whole project. Reviewed and merged PR#317, PR#332, PR#334, PR#341, PR#342. Many thanks to Sam Darwin, Braden Ganetsky and Andrey Semashev for their contributions. Boost.Bimap Merged PR#31 (std::initializer_list constructor) and provided testing and documentation for this new feature (PR#54). The original PR was silently sitting on the queue for more than four years and it was only when it was brought to my attention in a Reddit conversation that I got to take a look at it. Boost.Bimap needs an active mantainer, I guess I could become this person. Boost.ICL Recent changes in libc++ v22 code for associative container lookup have resulted in the breakage of Boost.ICL. My understanding is that the changes in libc++ are not standards conformant, and there’s an ongoing discussion on that; in the meantime, I wrote and proposed a PR to Boost.ICL that fixes the problem (pending acceptance). Support to the community I’ve been helping a bit with Mark Cooper’s very successful Boost Blueprint series on X. Supporting the community as a member of the Fiscal Sponsorship Committee (FSC).</summary></entry><entry><title type="html">Systems, CI Updates Q1 2026</title><link href="http://cppalliance.org/sam/2026/03/31/SamsQ1Update.html" rel="alternate" type="text/html" title="Systems, CI Updates Q1 2026" /><published>2026-03-31T00:00:00+00:00</published><updated>2026-03-31T00:00:00+00:00</updated><id>http://cppalliance.org/sam/2026/03/31/SamsQ1Update</id><content type="html" xml:base="http://cppalliance.org/sam/2026/03/31/SamsQ1Update.html">&lt;h3 id=&quot;code-coverage-reports---designing-new-gcovr-templates&quot;&gt;Code Coverage Reports - designing new GCOVR templates&lt;/h3&gt;

&lt;p&gt;A major effort this quarter, continuing from its mention in the last newsletter, has been the development of codecov-like coverage reports that run in GitHub Actions and are hosted on GitHub Pages. Instructions: &lt;a href=&quot;https://github.com/boostorg/boost-ci/blob/master/docs/code-coverage.md&quot;&gt;Code Coverage with Github Actions and Github Pages&lt;/a&gt;. The process has highlighted a phenomenon in open-source software whereby publishing something to the whole community prompts end-users to respond with their own suggestions and fixes, and everything improves iteratively. It would not have happened otherwise. The upstream GCOVR project has taken an interest in the templates, and we are working on merging them into the main repository for all gcovr users. Boost contributors and gcovr maintainers have suggested numerous modifications to the templates. Great work by Julio Estrada on the template development.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Better full page scrolling of C++ source code files&lt;/li&gt;
  &lt;li&gt;Include ‘functions’ listings on every page&lt;/li&gt;
  &lt;li&gt;Optionally disable branch coverage&lt;/li&gt;
  &lt;li&gt;Purposely restrict coverage directories to src/ and include/&lt;/li&gt;
  &lt;li&gt;Another scrolling bug fixed&lt;/li&gt;
  &lt;li&gt;Both blue and green colored themes&lt;/li&gt;
  &lt;li&gt;Codacy linting&lt;/li&gt;
&lt;li&gt;New forward and back buttons, allowing navigation to each “miss” and to subsequent pages&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;server-hosting&quot;&gt;Server Hosting&lt;/h3&gt;

&lt;p&gt;This quarter we decommissioned the Rackspace servers, which had been in service for 10-15 years. Rene provided a nice announcement:&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://lists.boost.org/archives/list/boost@lists.boost.org/thread/XYFD42TTQRYHWTLGP6GCIZQ6NHCZLNQT/&quot;&gt;Farewell to Wowbagger - End of an Era for boost.org&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There was more to do than just delete servers: I built a new results.boost.org FTP server, replacing the preexisting FTP server used by regression.boost.org, then configured and tested it. Inventoried the old machines, including a monitoring server. Built a replacement wowbagger called wowbagger2 to host a copy of the website - original.boost.org. The monthly cost of a small GCP Compute instance seems to be around 5% of that of the Rackspace legacy cloud server. Components: Ubuntu 24.04. Apache. PHP 5 PPA. “original.boost.org” continues to host a copy of the earlier boost.org website for comparison and development purposes, which is interesting to check.&lt;/p&gt;

&lt;p&gt;Launched server instances for corosio.org and paperflow.&lt;/p&gt;

&lt;h3 id=&quot;fil-c&quot;&gt;Fil-C&lt;/h3&gt;

&lt;p&gt;Working with Tom Kent to add Fil-C (https://github.com/pizlonator/fil-c) testing to the regression matrix at https://regression.boost.org/.
Built a Fil-C container image based on the Drone images.
Debugged the build process. After a few roadblocks, the latest news is that Fil-C seems to be building successfully. This is not quite finished but should be online soon.&lt;/p&gt;

&lt;h3 id=&quot;boost-release-process-boostorgrelease-tools&quot;&gt;Boost release process boostorg/release-tools&lt;/h3&gt;

&lt;p&gt;The boostorg/boost CircleCI jobs often threaten to cross the 1-hour time limit. Increased parallel processes from 4 to 8. Increased instance size from medium to large.
And yet another adjustment: the releases use four compressed archive formats (gz, bz2, 7z, zip), and it is possible to find drop-in replacement programs that
go much faster than the standard ones by utilizing parallelization, such as lbzip2 and pigz. The substitute binaries were applied to publish-releases.py recently, and the same idea has now been applied in ci_boost_release.py. All of this reduced the CircleCI job time by many minutes.&lt;/p&gt;

&lt;p&gt;Certain boost library pull requests were finally merged after a long delay, allowing an upgrade of the Sphinx pip package. Tested a superproject container image for the CircleCI jobs with updated pip packages. Boost is currently in a code freeze, so this will not go live until after 1.91.0. Sphinx docs continue to deal with upgrade incompatibilities; I prepared another set of pull requests to send to boost libraries using Sphinx.&lt;/p&gt;

&lt;h3 id=&quot;doc-previews-and-doc-builds&quot;&gt;Doc Previews and Doc Builds&lt;/h3&gt;

&lt;p&gt;Antora docs usually show an “Edit this Page” link. Recently a couple of Alliance developers commented that the link didn’t quite work in some of the doc previews, which opened a topic of researching solutions and making the Antora edit-this-page feature more robust if possible. The issue is that Boost libraries are git submodules: when working as expected, submodules are checked out as “HEAD detached at a74967f0” rather than “develop”. If Antora’s edit-this-page code sees “HEAD detached at a74967f0”, it will default to the path HEAD, which is wrong on the GitHub side. A solution we found (credit to Ruben Perez) is to set the Antora config to edit_url: ‘{web_url}/edit/develop/{path}’, and not to leave a {ref}-style variable in the path.&lt;/p&gt;

&lt;p&gt;Rolling out the antora-downloads-extension to numerous boost and alliance repositories. It will retry the ui-bundle download.&lt;/p&gt;

&lt;p&gt;Refactored the release-tools build_docs scripts so that the gems and pip packages are organized into a format that matches Gemfile and requirements.txt files, instead of the script’s previous approach of running “gem install package” directly. By using a Gemfile, the script becomes compatible with other build systems, so content can be copy-pasted easily.&lt;/p&gt;

&lt;p&gt;CircleCI superproject builds use docbook-xml.zip, whose download URL broke. Switched the link address, and we are also hosting a copy of the file at https://dl.cpp.al/misc/docbook-xml.zip.&lt;/p&gt;

&lt;h3 id=&quot;boost-website-boostorgwebsite-v2&quot;&gt;Boost website boostorg/website-v2&lt;/h3&gt;

&lt;p&gt;Collaborated in the process of onboarding the consulting company Metalab, which is working on V3, the next iteration of the boost.org website.&lt;/p&gt;

&lt;p&gt;Disabled Fastly caching to assist Metalab developers.&lt;/p&gt;

&lt;p&gt;Gitflow workflow planning meetings.&lt;/p&gt;

&lt;p&gt;Discussions about how Tools should be present on the libraries pages.&lt;/p&gt;

&lt;p&gt;On the DB servers, adjusted the PostgreSQL authentication configuration from md5 to scram-sha-256 on all databases and in multiple Ansible roles. This turns out to be a largely superficial change, though still worth doing, because newer PostgreSQL versions use scram-sha-256 behind the scenes regardless.&lt;/p&gt;

&lt;p&gt;Wrote deploy-qa.sh, a script to enable metalab QA engineers to deploy a pull request onto a test server. The precise git SHA commit of any open pull request can be tested.&lt;/p&gt;

&lt;p&gt;Wrote upload-images.sh, a script to store Bob Ostrom’s boost cartoons in S3 instead of the github repo.&lt;/p&gt;

&lt;h3 id=&quot;mailman3&quot;&gt;Mailman3&lt;/h3&gt;

&lt;p&gt;Synced production lists to the staging server. Wrote a document in the cppalliance/boost-mailman repo explaining how the multi-step process of syncing can be done.&lt;/p&gt;

&lt;h3 id=&quot;boostorg&quot;&gt;boostorg&lt;/h3&gt;

&lt;p&gt;Migrated cppalliance/decimal to boostorg/decimal.&lt;/p&gt;

&lt;h3 id=&quot;jenkins&quot;&gt;Jenkins&lt;/h3&gt;

&lt;p&gt;The Jenkins server builds documentation previews for dozens of boostorg and cppalliance repositories, where each job is assigned its own “workspace” directory and then proceeds to install 1GB of node_modules. That was happening for every build and every pull request, and the disk on the server was filling up by yet another 100GB every few weeks. Rather than continue to resize the disk, or delete jobs too quickly, was there an opportunity for optimization? Yes: in the superproject container image, relocate the nodejs installation to /opt/nvm instead of root’s home directory. The /opt/nvm installation can now be “shared” by other jobs, which reduces space. Also, conditionally check if mermaid is needed and/or already available in /opt/nvm. With these modifications, each job no longer needs to install a large number of npm packages, so job size is drastically reduced.&lt;/p&gt;

&lt;p&gt;Upgraded server and all plugins. Necessary to fix spurious bugs in certain Jenkins jobs.&lt;/p&gt;

&lt;p&gt;While debugging Jenkins runners, set the subnet and zone in the cloud server configurations.&lt;/p&gt;

&lt;p&gt;Fixed lcov jobs, which need cxxstd=20.&lt;/p&gt;

&lt;p&gt;Migrated many administrative scripts from a local directory on the server to the jenkins-ci repository. Revised, cleaned, and discarded certain scripts.&lt;/p&gt;

&lt;p&gt;Dmitry contributed diff-reports that should now appear in every pull request which has been configured for LCOV previews.&lt;/p&gt;

&lt;p&gt;Implemented new flags in the lcov build scripts: [--skip-gcovr] [--skip-genhtml] [--skip-diff-report] [--only-gcovr].&lt;/p&gt;

&lt;p&gt;Ansible role task: install the check_jenkins_queue nagios plugin automatically.&lt;/p&gt;

&lt;h3 id=&quot;gha&quot;&gt;GHA&lt;/h3&gt;

&lt;p&gt;Completed a major upgrade of the Terraform installation which had lagged upstream code by nearly two years.&lt;/p&gt;

&lt;p&gt;Deployed a series of GitHub Actions runners for Joaquin’s latest benchmarks at https://github.com/boostorg/boost_hub_benchmarks. Installed the latest VS2026 and upgraded macOS to 26.3.&lt;/p&gt;

&lt;h3 id=&quot;drone&quot;&gt;Drone&lt;/h3&gt;

&lt;p&gt;Launched new MacOS 26 drone runners, and FreeBSD 15.0 drone runners.&lt;/p&gt;</content><author><name></name></author><category term="sam" /><summary type="html">Code Coverage Reports - designing new GCOVR templates A major effort this quarter and continuing on since it was mentioned in the last newsletter is the development of codecov-like coverage reports that run in GitHub Actions and are hosted on GitHub Pages. Instructions: Code Coverage with Github Actions and Github Pages. The process has really highlighted a phenomenon in open-source software where by publishing something to the whole community, end-users respond back with their own suggestions and fixes, and everything improves iteratively. It would not have happened otherwise. The upstream GCOVR project has taken an interest in the templates and we are working on merging them into the main repository for all gcovr users. Boost contributors and gcovr maintainers have suggested numerous modifications for the templates. Great work by Julio Estrada on the template development. Better full page scrolling of C++ source code files Include ‘functions’ listings on every page Optionally disable branch coverage Purposely restrict coverage directories to src/ and include/ Another scrolling bug fixed Both blue and green colored themes Codacy linting New forward and back buttons. Allows navigation to each “miss” and subsequent pages Server Hosting This quarter we decommissioned the Rackspace servers which had been in service 10-15 years. Rene provided a nice announcement: Farewell to Wowbagger - End of an Era for boost.org There was more to do then just delete servers, I built a new results.boost.org FTP server replacing the preexisting FTP server used by regression.boost.org. Configured and tested it. Inventoried the old machines, including a monitoring server. Built a replacement wowbagger called wowbagger2 to host a copy of the website - original.boost.org. 
The monthly cost of a small GCP Compute instance seems to be around 5% of the Rackspace legacy cloud server. Components: Ubuntu 24.04. Apache. PHP 5 PPA. “original.boost.org” continues to host a copy of the earlier boost.org website for comparison and development purposes which is interesting to check. Launched server instances for corosio.org and paperflow. Fil-C Working with Tom Kent to add Fil-C https://github.com/pizlonator/fil-c test into the regression matrix https://regression.boost.org/ . Built a Fil-C container image based on Drone images. Debugging the build process. After a few roadblocks, the latest news is that Fil-C seems to be successfully building. This is not quite finished but should be online soon. Boost release process boostorg/release-tools The boostorg/boost CircleCI jobs often threaten to cross the 1-hour time limit. Increased parallel processes from 4 to 8. Increased instance size from medium to large. And yet another adjustment: there are 4 compression algorithms used by the releases (gz, bz2, 7z, zip) and it is possible to find drop-in replacement programs that go much faster than the standard ones by utilizing parallelization. lbzip2 pigz. The substitute binaries were applied to publish-releases.py recently. Now the same idea in ci_boost_release.py. All of this reduced the CircleCI job time by many minutes. Certain boost library pull requests were finally merged after a long delay allowing an upgrade of the Sphinx pip package. Tested a superproject container image for the CircleCI jobs with updated pip packages. Boost is currently in a code freeze so this will not go live until after 1.91.0. Sphinx docs continue to deal with upgrade incompatibilities. I prepared another set of pull requests to send to boost libraries using Sphinx. Doc Previews and Doc Builds Antora docs usually show an “Edit this Page” link. 
Recently a couple of Alliance developers happened to comment the link didn’t quite work in some of the doc previews, and so that opened a topic to research solutions and make the Antora edit-this-page feature more robust if possible. The issue is that Boost libraries are git submodules. When working as expected submodules are checked out as “HEAD detached at a74967f0” rather than “develop”. If Antora’s edit-this-page code sees “HEAD detached at a74967f0” it will default to the path HEAD. That’s wrong on the GitHub side. A solution we found (credit to Ruben Perez) is to set the antora config to edit_url: ‘{web_url}/edit/develop/{path}’. Don’t leave a {ref} type of variable in the path. Rolling out the antora-downloads-extension to numerous boost and alliance repositories. It will retry the ui-bundle download. Refactored the release-tools build_docs scripts so that the gems and pip packages are organized into a format that matches Gemfile and requirement.txt files, instead of what the script was doing before “gem install package”. By using a Gemfile, the script becomes compatible with other build systems so content can be copy-pasted easily. CircleCI superproject builds use docbook-xml.zip, where the download url broke. Switched the link address. Also hosting a copy of the file at https://dl.cpp.al/misc/docbook-xml.zip Boost website boostorg/website-v2 Collaborated in the process of on-boarding the consulting company Metalab who are working on V3, the next iteration of the boost.org website. Disable Fastly caching to assist metalab developers. Gitflow workflow planning meetings. Discussions about how Tools should be present on the libraries pages. On the DB servers, adjusted postgresql authentication configurations from md5 to scram-sha-256 on all databases and multiple ansible roles. Actually this turns out to be a superficial change even though it should be done. The reason is that newer postgres will use scram-sha-256 behind-the-scenes regardless. 
Wrote deploy-qa.sh, a script to enable metalab QA engineers to deploy a pull request onto a test server. The precise git SHA commit of any open pull request can be tested. Wrote upload-images.sh, a script to store Bob Ostrom’s boost cartoons in S3 instead of the github repo. Mailman3 Synced production lists to the staging server. Wrote a document in the cppalliance/boost-mailman repo explaining how the multi-step process of syncing can be done. boostorg Migrated cppalliance/decimal to boostorg/decimal. Jenkins The Jenkins server is building documentation previews for dozens of boostorg and cppalliance repositories where each job is assigned its own “workspace” directory and then proceeds to install 1GB of node_modules. That was happening for every build and every pull request. The disk space on the server was filling up, every few weeks yet another 100GB. Rather than continue to resize the disk, or delete all jobs too quickly, was there the opportunity for optimization? Yes. In the superproject container image relocate the nodejs installation to /opt/nvm instead of root’s home directory. The /opt/nvm installation can now be “shared” by other jobs which reduces space. Conditionally check if mermaid is needed and/or if mermaid is already available in /opt/nvm. With these modifications, since each job doesn’t need to install a large amount of npm packages the job size is drastically reduced. Upgraded server and all plugins. Necessary to fix spurious bugs in certain Jenkins jobs. Debugging Jenkins runners, set subnet and zone on the cloud server configurations. Fixed lcov jobs, that need cxxstd=20 Migrated many administrative scripts from a local directory on the server to the jenkins-ci repository. Revise, clean, discard certain scripts. Dmitry contributed diff-reports that should now appear in every pull request which has been configured for LCOV previews. 
Implemented –flags in lcov build scripts [–skip-gcovr] [–skip-genhtml] [–skip-diff-report] [–only-gcovr] Ansible role task: install check_jenkins_queue nagios plugin automatically from Ansible. GHA Completed a major upgrade of the Terraform installation which had lagged upstream code by nearly two years. Deployed a series of GitHub Actions runners for Joaquin’s latest benchmarks at https://github.com/boostorg/boost_hub_benchmarks. Installed latest VS2026. MacOS upgrade to 26.3. Drone Launched new MacOS 26 drone runners, and FreeBSD 15.0 drone runners.</summary></entry><entry><title type="html">Statement from the C++ Alliance on WG21 Committee Meeting Support</title><link href="http://cppalliance.org/company/2026/03/27/WG21-Meeting-Support-Statement.html" rel="alternate" type="text/html" title="Statement from the C++ Alliance on WG21 Committee Meeting Support" /><published>2026-03-27T00:00:00+00:00</published><updated>2026-03-27T00:00:00+00:00</updated><id>http://cppalliance.org/company/2026/03/27/WG21-Meeting-Support-Statement</id><content type="html" xml:base="http://cppalliance.org/company/2026/03/27/WG21-Meeting-Support-Statement.html">&lt;p&gt;The C++ Alliance is proud to support attendance at WG21 committee meetings. We believe that facilitating the attendance of domain experts produces better outcomes for C++ and for the broader ecosystem, and we are committed to making participation more accessible.&lt;/p&gt;

&lt;p&gt;We want to be unequivocally clear: the C++ Alliance does not, and will never, direct or compel attendees to vote in any particular way. Our support comes with no strings attached. Those who attend are free and encouraged to exercise their independent judgment on every proposal before the committee.&lt;/p&gt;

&lt;p&gt;The integrity of the WG21 standards process depends on the independence of its participants. We respect that process deeply, and any suggestion to the contrary does not reflect our values or our program.&lt;/p&gt;

&lt;p&gt;If you are interested in learning more about our attendance program, please reach out to us at &lt;a href=&quot;mailto:info@cppalliance.org&quot;&gt;info@cppalliance.org&lt;/a&gt;.&lt;/p&gt;</content><author><name></name></author><category term="company" /><summary type="html">The C++ Alliance is proud to support attendance at WG21 committee meetings. We believe that facilitating the attendance of domain experts produces better outcomes for C++ and for the broader ecosystem, and we are committed to making participation more accessible. We want to be unequivocally clear: the C++ Alliance does not, and will never, direct or compel attendees to vote in any particular way. Our support comes with no strings attached. Those who attend are free and encouraged to exercise their independent judgment on every proposal before the committee. The integrity of the WG21 standards process depends on the independence of its participants. We respect that process deeply, and any suggestion to the contrary does not reflect our values or our program. If you are interested in learning more about our attendance program, please reach out to us at info@cppalliance.org.</summary></entry><entry><title type="html">Corosio Beta: Coroutine-Native Networking for C++20</title><link href="http://cppalliance.org/mark/2026/03/11/Corosio-Beta-Coroutine-Native-Networking.html" rel="alternate" type="text/html" title="Corosio Beta: Coroutine-Native Networking for C++20" /><published>2026-03-11T00:00:00+00:00</published><updated>2026-03-11T00:00:00+00:00</updated><id>http://cppalliance.org/mark/2026/03/11/Corosio-Beta-Coroutine-Native-Networking</id><content type="html" xml:base="http://cppalliance.org/mark/2026/03/11/Corosio-Beta-Coroutine-Native-Networking.html">&lt;h1 id=&quot;corosio-beta-coroutine-native-networking-for-c20&quot;&gt;Corosio Beta: Coroutine-Native Networking for C++20&lt;/h1&gt;

&lt;p&gt;&lt;em&gt;The C++ Alliance is releasing the Corosio beta, a networking library designed from the ground up for C++20 coroutines. We are inviting serious C++ developers to use it, break it, and tell us what needs to change before it goes to Boost formal review.&lt;/em&gt;&lt;/p&gt;

&lt;hr /&gt;

&lt;h2 id=&quot;the-gap-c20-left-open&quot;&gt;The Gap C++20 Left Open&lt;/h2&gt;

&lt;p&gt;C++20 gave us coroutines. It did not give us networking to go with them. Boost.Asio has added coroutine support over the years, but its foundations were laid for a world of callbacks and completion handlers. Retrofitting coroutines onto that model produces code that works, but never quite feels like the language you are writing in. We decided to find out what networking looks like when you start over.&lt;/p&gt;

&lt;hr /&gt;

&lt;h2 id=&quot;what-corosio-is&quot;&gt;What Corosio Is&lt;/h2&gt;

&lt;p&gt;Corosio is a coroutine-only networking library for C++20. It provides TCP sockets, acceptors, TLS streams, timers, and DNS resolution. Every operation is an awaitable. You write &lt;code&gt;co_await&lt;/code&gt; and the library handles executor affinity, cancellation, and frame allocation. No callbacks. No futures. No sender/receiver.&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;auto [socket] = co_await acceptor.async_accept();
auto n = co_await socket.async_read_some(buffer);
co_await socket.async_write(response);
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Corosio runs on Windows (IOCP), Linux (epoll), and macOS (kqueue). It targets GCC 12+, Clang 17+, and MSVC 14.34+, with no dependencies outside the standard library. Capy, its I/O foundation, is fetched automatically by CMake.&lt;/p&gt;

&lt;hr /&gt;

&lt;h2 id=&quot;built-on-capy&quot;&gt;Built on Capy&lt;/h2&gt;

&lt;p&gt;Corosio is built on Capy, a coroutine I/O foundation library that ships alongside it. The core insight driving Capy’s design comes from Peter Dimov: &lt;em&gt;an API designed from the ground up to use C++20 coroutines can achieve performance and ergonomics which cannot otherwise be obtained.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Capy’s &lt;em&gt;IoAwaitable&lt;/em&gt; protocol ensures coroutines resume on the correct executor after I/O completes, without thread-local globals, implicit context, or manual dispatch. Cancellation follows the same forward-propagation model: stop tokens flow from the top of a coroutine chain to the platform API boundary, giving you uniform cancellation across all operations. Frame allocation uses thread-local recycling pools to achieve zero steady-state heap allocations after warmup.&lt;/p&gt;
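&lt;p&gt;The forward-propagation model can be illustrated with standard C++20 facilities alone. The sketch below uses &lt;code&gt;std::stop_token&lt;/code&gt; to show the idea; it is not Capy’s actual API, and the function names are hypothetical:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;#include &amp;lt;stop_token&amp;gt;

// Hypothetical leaf operation: it checks the token just before the point
// where Capy would hand the operation to the platform API.
bool leaf_read(std::stop_token st) {
    if (st.stop_requested())
        return false;  // cancelled before the operation started
    return true;       // pretend the read completed
}

// A middle layer simply forwards the token it was given.
bool mid_layer(std::stop_token st) { return leaf_read(st); }

// At the top of the chain, one stop_source cancels everything below:
//   std::stop_source src;
//   mid_layer(src.get_token());  // runs normally
//   src.request_stop();
//   mid_layer(src.get_token());  // observed as cancelled
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Because every layer forwards the same token, a single &lt;code&gt;request_stop()&lt;/code&gt; at the top of the chain cancels uniformly, with no per-operation cancellation plumbing.&lt;/p&gt;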

&lt;hr /&gt;

&lt;h2 id=&quot;what-we-are-asking-for&quot;&gt;What We Are Asking For&lt;/h2&gt;

&lt;p&gt;We are looking for feedback on correctness, ergonomics, platform behavior, documentation, and performance under real workloads. Specifically:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Does the executor affinity model hold up under production conditions?&lt;/li&gt;
  &lt;li&gt;Does cancellation behave correctly across complex coroutine chains?&lt;/li&gt;
  &lt;li&gt;Are there platform-specific edge cases in the IOCP, epoll, or kqueue backends?&lt;/li&gt;
  &lt;li&gt;Does the zero-allocation model hold in your deployment scenarios?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We are inviting serious C++ developers, especially if you have shipped networking code, to use it, break it, and tell us what your experience was. The Boost review process rewards libraries that arrive having already faced serious scrutiny.&lt;/p&gt;

&lt;hr /&gt;

&lt;h2 id=&quot;get-it&quot;&gt;Get It&lt;/h2&gt;

&lt;pre&gt;&lt;code class=&quot;language-shell&quot;&gt;git clone https://github.com/cppalliance/corosio.git
cd corosio
cmake -S . -B build -G Ninja
cmake --build build
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Or with CMake FetchContent:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cmake&quot;&gt;include(FetchContent)
FetchContent_Declare(corosio
  GIT_REPOSITORY https://github.com/cppalliance/corosio.git
  GIT_TAG        develop
  GIT_SHALLOW    TRUE)
FetchContent_MakeAvailable(corosio)
target_link_libraries(my_app Boost::corosio)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Requires:&lt;/strong&gt; CMake 3.25+, GCC 12+ / Clang 17+ / MSVC 14.34+&lt;/p&gt;

&lt;hr /&gt;

&lt;h2 id=&quot;resources&quot;&gt;Resources&lt;/h2&gt;

&lt;p&gt;&lt;a href=&quot;https://github.com/cppalliance/corosio&quot;&gt;Corosio on GitHub&lt;/a&gt; – https://github.com/cppalliance/corosio&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://master.corosio.cpp.al/&quot;&gt;Corosio Docs&lt;/a&gt; – https://master.corosio.cpp.al/&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://github.com/cppalliance/capy&quot;&gt;Capy on GitHub&lt;/a&gt; – https://github.com/cppalliance/capy&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://master.capy.cpp.al/&quot;&gt;Capy Docs&lt;/a&gt; – https://master.capy.cpp.al/&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://github.com/cppalliance/corosio/issues&quot;&gt;File an Issue&lt;/a&gt; – https://github.com/cppalliance/corosio/issues&lt;/p&gt;</content><author><name></name></author><category term="mark" /><summary type="html">Corosio Beta: Coroutine-Native Networking for C++20 The C++ Alliance is releasing the Corosio beta, a networking library designed from the ground up for C++20 coroutines. We are inviting serious C++ developers to use it, break it, and tell us what needs to change before it goes to Boost formal review. The Gap C++20 Left Open C++20 gave us coroutines. It did not give us networking to go with them. Boost.Asio has added coroutine support over the years, but its foundations were laid for a world of callbacks and completion handlers. Retrofitting coroutines onto that model produces code that works, but never quite feels like the language you are writing in. We decided to find out what networking looks like when you start over. What Corosio Is Corosio is a coroutine-only networking library for C++20. It provides TCP sockets, acceptors, TLS streams, timers, and DNS resolution. Every operation is an awaitable. You write co_await and the library handles executor affinity, cancellation, and frame allocation. No callbacks. No futures. No sender/receiver. auto [socket] = co_await acceptor.async_accept(); auto n = co_await socket.async_read_some(buffer); co_await socket.async_write(response); Corosio runs on Windows (IOCP), Linux (epoll), and macOS (kqueue). It targets GCC 12+, Clang 17+, and MSVC 14.34+, with no dependencies outside the standard library. Capy, its I/O foundation, is fetched automatically by CMake. Built on Capy Corosio is built on Capy, a coroutine I/O foundation library that ships alongside it. The core insight driving Capy’s design comes from Peter Dimov: an API designed from the ground up to use C++20 coroutines can achieve performance and ergonomics which cannot otherwise be obtained. 
Capy’s IoAwaitable protocol ensures coroutines resume on the correct executor after I/O completes, without thread-local globals, implicit context, or manual dispatch. Cancellation follows the same forward-propagation model: stop tokens flow from the top of a coroutine chain to the platform API boundary, giving you uniform cancellation across all operations. Frame allocation uses thread-local recycling pools to achieve zero steady-state heap allocations after warmup. What We Are Asking For We are looking for feedback on correctness, ergonomics, platform behavior, documentation, and performance under real workloads. Specifically: Does the executor affinity model hold up under production conditions? Does cancellation behave correctly across complex coroutine chains? Are there platform-specific edge cases in the IOCP, epoll, or kqueue backends? Does the zero-allocation model hold in your deployment scenarios? We are inviting serious C++ developers, especially if you have shipped networking code, to use it, break it, and tell us what your experience was. The Boost review process rewards libraries that arrive having already faced serious scrutiny. Get It git clone https://github.com/cppalliance/corosio.git cd corosio cmake -S . 
-B build -G Ninja cmake --build build Or with CMake FetchContent: include(FetchContent) FetchContent_Declare(corosio GIT_REPOSITORY https://github.com/cppalliance/corosio.git GIT_TAG develop GIT_SHALLOW TRUE) FetchContent_MakeAvailable(corosio) target_link_libraries(my_app Boost::corosio) Requires: CMake 3.25+, GCC 12+ / Clang 17+ / MSVC 14.34+ Resources Corosio on GitHub – https://github.com/cppalliance/corosio Corosio Docs – https://develop.corosio.cpp.al/ Capy on GitHub – https://github.com/cppalliance/capy Capy Docs – https://develop.capy.cpp.al/ File an Issue – https://github.com/cppalliance/corosio/issues</summary></entry><entry><title type="html">A postgres library for Boost</title><link href="http://cppalliance.org/ruben/2026/01/23/Ruben2025Q4Update.html" rel="alternate" type="text/html" title="A postgres library for Boost" /><published>2026-01-23T00:00:00+00:00</published><updated>2026-01-23T00:00:00+00:00</updated><id>http://cppalliance.org/ruben/2026/01/23/Ruben2025Q4Update</id><content type="html" xml:base="http://cppalliance.org/ruben/2026/01/23/Ruben2025Q4Update.html">&lt;p&gt;Do you know Boost.MySQL? If you’ve been reading my posts, you probably do.
Many people have wondered ‘why not Postgres?’. Well, the time is now.
TL;DR: I’m writing the equivalent of Boost.MySQL, but for PostgreSQL.
You can find the code &lt;a href=&quot;https://github.com/anarthal/nativepg&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Since libPQ is already a good library, the NativePG project needs
to be more ambitious than Boost.MySQL to be worthwhile. In addition to the expected
Asio interface, I intend to provide a sans-io API that exposes primitives
like message serialization.&lt;/p&gt;

&lt;p&gt;Throughout this post, I will go over the intended library design and the
rationale behind it.&lt;/p&gt;

&lt;h2 id=&quot;the-lowest-level-message-serialization&quot;&gt;The lowest level: message serialization&lt;/h2&gt;

&lt;p&gt;PostgreSQL clients communicate with the server using
a binary protocol on top of TCP, termed &lt;a href=&quot;https://www.postgresql.org/docs/current/protocol.html&quot;&gt;the frontend/backend protocol&lt;/a&gt;.
The protocol defines a set of messages used for interactions. For example, when running a query, the following happens:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;┌────────┐                                    ┌────────┐
│ Client │                                    │ Server │
└───┬────┘                                    └───┬────┘
    │                                             │
    │  Query                                      │
    │ ──────────────────────────────────────────&amp;gt; │
    │                                             │
    │                        RowDescription       │
    │ &amp;lt;────────────────────────────────────────── │
    │                                             │
    │                              DataRow        │
    │ &amp;lt;────────────────────────────────────────── │
    │                                             │
    │                        CommandComplete      │
    │ &amp;lt;────────────────────────────────────────── │
    │                                             │
    │                        ReadyForQuery        │
    │ &amp;lt;────────────────────────────────────────── │
    │                                             │
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;In the lowest layer, this library provides functions to serialize and parse
such messages. The goal here is to be as efficient as possible.
Parsing functions are non-allocating and use an approach inspired by
Boost.Url collections.&lt;/p&gt;
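&lt;p&gt;As an illustration of that approach (a hypothetical type, not the library’s actual interface), a &lt;code&gt;DataRow&lt;/code&gt; body can be exposed as a lightweight view over the received buffer, decoding fields lazily and without copying. This sketch follows the real wire layout, a 16-bit big-endian field count followed by 32-bit big-endian length prefixes, but ignores NULL fields for brevity:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;#include &amp;lt;cstddef&amp;gt;
#include &amp;lt;cstdint&amp;gt;
#include &amp;lt;span&amp;gt;

// Hypothetical non-allocating view over a DataRow body. Fields are
// located by pointer arithmetic on demand; nothing is copied.
class data_row_view {
    std::span&amp;lt;const unsigned char&amp;gt; data_;

    static std::uint32_t be32(const unsigned char* p) {
        return (std::uint32_t(p[0]) &amp;lt;&amp;lt; 24) | (std::uint32_t(p[1]) &amp;lt;&amp;lt; 16) |
               (std::uint32_t(p[2]) &amp;lt;&amp;lt; 8)  |  std::uint32_t(p[3]);
    }

public:
    explicit data_row_view(std::span&amp;lt;const unsigned char&amp;gt; data) : data_(data) {}

    // 16-bit big-endian field count
    std::size_t size() const { return (std::size_t(data_[0]) &amp;lt;&amp;lt; 8) | data_[1]; }

    // Locate the i-th length-prefixed field by pointer arithmetic
    std::span&amp;lt;const unsigned char&amp;gt; field(std::size_t i) const {
        const unsigned char* p = data_.data() + 2;
        for (std::size_t k = 0; k &amp;lt; i; ++k)
            p += 4 + be32(p);
        return { p + 4, be32(p) };
    }
};
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Lookup here is a linear scan; the real collections would likely expose forward iterators over the fields instead.&lt;/p&gt;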

&lt;h2 id=&quot;parsing-database-types&quot;&gt;Parsing database types&lt;/h2&gt;

&lt;p&gt;The PostgreSQL type system is quite rich. In addition to the usual SQL built-in types,
it supports advanced scalars like UUIDs, arrays and user-defined aggregates.&lt;/p&gt;

&lt;p&gt;When running a query, libPQ exposes retrieved data as either raw text or bytes.
This is what the server sends in the &lt;code&gt;DataRow&lt;/code&gt; packets shown above.
To do something useful with the data, users need to parse and serialize
such types.&lt;/p&gt;

&lt;p&gt;The next layer of NativePG is in charge of providing such functions.
This will likely contain some extension points for users to plug in their types.
This is the general form of such functions:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;system::error_code parse(span&amp;lt;const std::byte&amp;gt; from, T&amp;amp; to, const connection_state&amp;amp;);
void serialize(const T&amp;amp; from, dynamic_buffer&amp;amp; to, const connection_state&amp;amp;);
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Note that some types might require access to session configuration.
For instance, dates may be expressed using different wire formats depending
on the connection’s runtime settings.&lt;/p&gt;

&lt;p&gt;At the time of writing, only ints and strings are supported,
but this will be extended soon.&lt;/p&gt;
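&lt;p&gt;For instance, in the binary wire format an &lt;code&gt;int4&lt;/code&gt; is four bytes in network (big-endian) order, so a parse function for &lt;code&gt;std::int32_t&lt;/code&gt; reduces to a bounds check and a byte swap. This sketch uses &lt;code&gt;std::errc&lt;/code&gt; as a placeholder for the library’s error reporting:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;#include &amp;lt;cstdint&amp;gt;
#include &amp;lt;span&amp;gt;
#include &amp;lt;system_error&amp;gt;

// Sketch: parse a binary-format int4 (4 bytes, big-endian).
// std::errc stands in for the system::error_code plumbing shown above.
std::errc parse_int4(std::span&amp;lt;const unsigned char&amp;gt; from, std::int32_t&amp;amp; to) {
    if (from.size() != 4)
        return std::errc::message_size;  // placeholder error code
    to = std::int32_t((std::uint32_t(from[0]) &amp;lt;&amp;lt; 24) |
                      (std::uint32_t(from[1]) &amp;lt;&amp;lt; 16) |
                      (std::uint32_t(from[2]) &amp;lt;&amp;lt; 8)  |
                       std::uint32_t(from[3]));
    return std::errc{};
}
&lt;/code&gt;&lt;/pre&gt;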

&lt;h2 id=&quot;composing-requests&quot;&gt;Composing requests&lt;/h2&gt;

&lt;p&gt;Efficiency in database communication is achieved with pipelining.
A network round-trip with the server is worth a thousand allocations in the client.
It is thus critical that:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;The protocol properly supports pipelining. This is the case with PostgreSQL.&lt;/li&gt;
  &lt;li&gt;The client exposes an interface to it and makes it easy to use.
libPQ does the first; NativePG intends to achieve the second.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;NativePG pipelines by default. In NativePG, a &lt;code&gt;request&lt;/code&gt; object is always
a pipeline:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;// Create a request
request req;

// These two queries will be executed as part of a pipeline
req.add_query(&quot;SELECT * FROM libs WHERE author = $1&quot;, {&quot;Ruben&quot;});
req.add_query(&quot;DELETE FROM libs WHERE author &amp;lt;&amp;gt; $1&quot;, {&quot;Ruben&quot;});
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Everything you may ask the server can be added to &lt;code&gt;request&lt;/code&gt;.
This includes preparing and executing statements, establishing
pipeline synchronization points, and so on.
It aims to be close enough to the protocol to be powerful,
while also exposing high-level functions to make things easier.&lt;/p&gt;

&lt;h2 id=&quot;reading-responses&quot;&gt;Reading responses&lt;/h2&gt;

&lt;p&gt;Like &lt;code&gt;request&lt;/code&gt;, the core response mechanism aims to be as close
to the protocol as possible. Since use cases here are much more varied,
there is no single &lt;code&gt;response&lt;/code&gt; class, but a concept, instead.
This is what a &lt;code&gt;response_handler&lt;/code&gt; looks like:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;struct my_handler {
    // Check that the handler is compatible with the request,
    // and prepare any required data structures. Called once at the beginning
    handler_setup_result setup(const request&amp;amp; req, std::size_t pipeline_offset);

    // Called once for every message received from the server
    // (e.g. `RowDescription`, `DataRow`, `CommandComplete`)
    void on_message(const any_request_message&amp;amp; msg);

    // The overall result of the operation (error_code + diagnostic string).
    // Called after the operation has finished.
    const extended_error&amp;amp; result() const;
};
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Note that &lt;code&gt;on_message&lt;/code&gt; is not allowed to report errors.
Even if a handler encounters a problem with a message
(imagine finding a &lt;code&gt;NULL&lt;/code&gt; for a field where the user isn’t expecting one),
this is a user error, rather than a protocol error.
Subsequent steps in the pipeline must not be affected by this.&lt;/p&gt;

&lt;p&gt;This is powerful but very low-level. Using this mechanism, the library
exposes an interface to parse the result of a query into a user-supplied
struct, using Boost.Describe:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;struct library
{
    std::int32_t id;
    std::string name;
    std::string cpp_version;
};
BOOST_DESCRIBE_STRUCT(library, (), (id, name, cpp_version))

// ...
std::vector&amp;lt;library&amp;gt; libs;
auto handler = nativepg::into(libs); // this is a valid response_handler
&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id=&quot;network-algorithms&quot;&gt;Network algorithms&lt;/h2&gt;

&lt;p&gt;Given a user request and response handler, how do we send these to the server?
We need a set of network algorithms to achieve this. Some of these are trivial:
sending a request to the server is an &lt;code&gt;asio::write&lt;/code&gt; on the request’s buffer.
Others, however, are more involved:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Reading a pipeline response needs to verify, for security, that the message
sequence is what we expected, and to handle errors gracefully.&lt;/li&gt;
  &lt;li&gt;The handshake algorithm, in charge of authentication when we connect to the
server, needs to respond to server authentication challenges, which may
come in different forms.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Writing these using &lt;code&gt;asio::async_compose&lt;/code&gt; is problematic because:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;They become tied to Boost.Asio.&lt;/li&gt;
  &lt;li&gt;They are difficult to test.&lt;/li&gt;
  &lt;li&gt;They result in long compile times and code bloat due to templating.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At the moment, these are written as finite state machines, similar to
how OpenSSL behaves in non-blocking mode:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;// Reads the response of a pipeline (simplified).
// This is a hand-wired generator.
class read_response_fsm {
public:
    // User-supplied arguments: request and response
    read_response_fsm(const request&amp;amp; req, response_handler_ref handler);

    // Yielded to signal that we should read from the server
    struct read_args { span&amp;lt;std::byte&amp;gt; buffer; };

    // Yielded to signal that we're done
    struct done_args { system::error_code result; };

    variant&amp;lt;read_args, done_args&amp;gt;
    resume(connection_state&amp;amp;, system::error_code io_result, std::size_t bytes_transferred);
};
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The idea is that higher-level code should call &lt;code&gt;resume&lt;/code&gt; until it returns
a &lt;code&gt;done_args&lt;/code&gt; value. This allows de-coupling from the underlying I/O runtime.&lt;/p&gt;
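&lt;p&gt;Driving such a machine is then generic over the I/O runtime. With a stand-in state machine (the real &lt;code&gt;read_response_fsm&lt;/code&gt; takes the types shown above; this toy version is purely illustrative), a blocking driver is just a loop, and an Asio-based driver would &lt;code&gt;co_await&lt;/code&gt; at the read step instead:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;#include &amp;lt;cstddef&amp;gt;
#include &amp;lt;variant&amp;gt;

// Stand-in fsm: asks for one read, then finishes. The shape mirrors
// read_response_fsm; the logic here is purely illustrative.
struct toy_fsm {
    struct read_args { std::size_t buffer_size; };
    struct done_args { int result; };
    int step = 0;
    std::variant&amp;lt;read_args, done_args&amp;gt; resume(std::size_t bytes_read) {
        if (step++ == 0)
            return read_args{64};           // &quot;please read for me&quot;
        return done_args{int(bytes_read)};  // finished; report the outcome
    }
};

// Generic driver: perform the requested I/O, feed the result back,
// repeat until the machine signals completion.
int drive(toy_fsm&amp;amp; fsm) {
    std::size_t transferred = 0;
    for (;;) {
        auto out = fsm.resume(transferred);
        if (auto* done = std::get_if&amp;lt;toy_fsm::done_args&amp;gt;(&amp;amp;out))
            return done-&amp;gt;result;
        transferred = 64;  // pretend a blocking read filled the buffer
    }
}
&lt;/code&gt;&lt;/pre&gt;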

&lt;p&gt;Since NativePG targets C++20, I’m considering rewriting this as a coroutine.
Boost.Capy (currently under development - hopefully part of Boost soon)
could be a good candidate for this.&lt;/p&gt;

&lt;h2 id=&quot;putting-everything-together-the-asio-interface&quot;&gt;Putting everything together: the Asio interface&lt;/h2&gt;

&lt;p&gt;At the end of the day, most users just want a &lt;code&gt;connection&lt;/code&gt; object they can easily
use. Once all the sans-io parts are working, writing it is straightforward.
This is what end user code looks like:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;// Create a connection
connection conn{co_await asio::this_coro::executor};

// Connect
co_await conn.async_connect(
    {.hostname = &quot;localhost&quot;, .username = &quot;postgres&quot;, .password = &quot;&quot;, .database = &quot;postgres&quot;}
);
std::cout &amp;lt;&amp;lt; &quot;Startup complete\n&quot;;

// Compose our request and response
request req;
req.add_query(&quot;SELECT * FROM libs WHERE author = $1&quot;, {&quot;Ruben&quot;});
std::vector&amp;lt;library&amp;gt; libs;

// Run the request
co_await conn.async_exec(req, into(libs));
&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id=&quot;auto-batch-connections&quot;&gt;Auto-batch connections&lt;/h2&gt;

&lt;p&gt;While &lt;code&gt;connection&lt;/code&gt; is good, experience has shown me that it’s still
too low-level for most users:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Connection establishment is manual with &lt;code&gt;async_connect&lt;/code&gt;.&lt;/li&gt;
  &lt;li&gt;No built-in reconnection or health checks.&lt;/li&gt;
  &lt;li&gt;No built-in concurrent execution of requests.
That is, &lt;code&gt;async_exec&lt;/code&gt; first writes the request, then reads the response.
Other requests may not be executed during this period.
This limits the connection’s throughput.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For this reason, NativePG will provide some higher-level interfaces
that will make server communication easier and more efficient.
To get a feel for what we need, we should first understand
the two main usage patterns that we expect.&lt;/p&gt;

&lt;p&gt;Most of the time, connections are used in a &lt;strong&gt;stateless&lt;/strong&gt; way.
For example, consider querying data from the server:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;request req;
req.add_query(&quot;SELECT * FROM libs WHERE author = $1&quot;, {&quot;Ruben&quot;});
co_await conn.async_exec(req, res);
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This query does not mutate connection state in any way.
Other queries could be inserted before and after it without
making any difference.&lt;/p&gt;

&lt;p&gt;I plan to add a higher-level connection type, similar to
&lt;code&gt;redis::connection&lt;/code&gt; in Boost.Redis, that automatically
batches concurrent requests and handles reconnection.
The key differences from &lt;code&gt;connection&lt;/code&gt; would be:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Several independent tasks can share an auto-batch connection.
This is an error for &lt;code&gt;connection&lt;/code&gt;.&lt;/li&gt;
  &lt;li&gt;If several requests are queued at the same time,
the connection may send them together to the server using a single system call.&lt;/li&gt;
  &lt;li&gt;There is no &lt;code&gt;async_connect&lt;/code&gt; in an auto-batch connection.
Reconnection is handled automatically.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Note that this pattern is not exclusive to read-only or
individual queries. Transactions can work by using protocol features:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;request req;
req.set_autosync(false); // All subsequent queries are part of the same transaction
req.add_query(&quot;UPDATE table1 SET x = $1 WHERE y = 2&quot;, {42});
req.add_query(&quot;UPDATE table2 SET x = $1 WHERE y = 42&quot;, {2});
req.add_sync(); // The two updates run atomically
co_await conn.async_exec(req, res);
&lt;/code&gt;&lt;/pre&gt;

&lt;h2 id=&quot;connection-pools&quot;&gt;Connection pools&lt;/h2&gt;

&lt;p&gt;I mentioned there were two main usage scenarios in the library.
Sometimes, it is required to use connections in a &lt;strong&gt;stateful&lt;/strong&gt; way:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;request req;
req.add_simple_query(&quot;BEGIN&quot;); // start a transaction manually
req.add_query(&quot;SELECT * FROM library WHERE author = $1 FOR UPDATE&quot;, {&quot;Ruben&quot;}); // lock rows
co_await conn.async_exec(req, lib);

// Do something in the client that depends on lib
if (lib.name == &quot;Boost.MySQL&quot;)
    co_return; // don't

// Now compose another request that depends on what we read from lib
req.clear();
req.add_query(&quot;UPDATE library SET status = 'deprecated' WHERE id = $1&quot;, {lib.id});
req.add_simple_query(&quot;COMMIT&quot;);
co_await conn.async_exec(req, ignore);
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The key point here is that this pattern requires exclusive access to &lt;code&gt;conn&lt;/code&gt;.
No other requests should be interleaved between the first and the second
&lt;code&gt;async_exec&lt;/code&gt; invocations.&lt;/p&gt;

&lt;p&gt;The best way to solve this is by using a connection pool.
This is what client code could look like:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;co_await pool.async_exec([&amp;amp;] (connection&amp;amp; conn) -&amp;gt; asio::awaitable&amp;lt;system::error_code&amp;gt; {
    request req;
    req.add_simple_query(&quot;BEGIN&quot;);
    req.add_query(&quot;SELECT balance, status FROM accounts WHERE user_id = $1 FOR UPDATE&quot;, {user_id});

    account_info acc;
    co_await conn.async_exec(req, into(acc));

    // Check if account has sufficient funds and is active
    if (acc.balance &amp;lt; payment_amount || acc.status != &quot;active&quot;)
        co_return error::insufficient_funds;

    // Call external payment gateway API - this CANNOT be done in SQL
    auto result = co_await payment_gateway.process_charge(user_id, payment_amount);

    // Compose next request based on the external API response
    req.clear();
    if (result.success) {
        req.add_query(
            &quot;UPDATE accounts SET balance = balance - $1 WHERE user_id = $2&quot;,
            {payment_amount, user_id}
        );
        req.add_simple_query(&quot;COMMIT&quot;);
    } else {
        req.add_simple_query(&quot;ROLLBACK&quot;); // don't leave the transaction open
    }
    co_await conn.async_exec(req, ignore);

    // The connection is automatically returned to the pool when this coroutine completes
    co_return result.success ? error_code{} : error::payment_failed;
});
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;I explicitly want to avoid having a &lt;code&gt;connection_pool::async_get_connection()&lt;/code&gt;
function, like in Boost.MySQL. This function returns a proxy object that grants access
to a free connection. When destroyed, the connection is returned to the pool.
This pattern looks great on paper, but runs into severe complications in
multi-threaded code. The proxy object’s destructor needs to mutate the pool’s state,
thus needing at least an &lt;code&gt;asio::dispatch&lt;/code&gt; to the pool’s executor, which may or may not
be a strand. It is so easy to get wrong that Boost.MySQL added a &lt;code&gt;pool_params::thread_safe&lt;/code&gt; boolean
option to take care of this automatically, adding extra complexity. Definitely something to avoid.&lt;/p&gt;

&lt;h2 id=&quot;sql-formatting&quot;&gt;SQL formatting&lt;/h2&gt;

&lt;p&gt;As we’ve seen, the protocol has built-in support for adding
parameters to queries (see placeholders like &lt;code&gt;$1&lt;/code&gt;). These placeholders
are expanded securely on the server.&lt;/p&gt;

&lt;p&gt;While this covers most cases, sometimes we need to generate SQL
that is too dynamic to be handled by the server. For instance,
a website might allow multiple optional filters, translating into
&lt;code&gt;WHERE&lt;/code&gt; clauses that might or might not be present.&lt;/p&gt;

&lt;p&gt;These use cases require SQL generated in the client. To do so,
we need a way of formatting user-supplied values without
running into SQL injection vulnerabilities. The final piece
of the library becomes a &lt;code&gt;format_sql&lt;/code&gt; function akin to the
one in Boost.MySQL.&lt;/p&gt;
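&lt;p&gt;The core requirement of such a function is escaping values rather than splicing them in. As a minimal sketch (the real API would mirror Boost.MySQL’s &lt;code&gt;format_sql&lt;/code&gt;, and a production version must also account for connection settings such as &lt;code&gt;standard_conforming_strings&lt;/code&gt;), quoting a string literal means doubling embedded single quotes:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;#include &amp;lt;string&amp;gt;
#include &amp;lt;string_view&amp;gt;

// Sketch: produce a quoted SQL string literal. Standard SQL escapes an
// embedded single quote by doubling it, which neutralizes injection
// attempts that try to terminate the literal early.
std::string quote_literal(std::string_view value) {
    std::string out = &quot;'&quot;;
    for (char c : value) {
        if (c == '\'') out += &quot;''&quot;;  // '' is an escaped quote
        else           out += c;
    }
    out += '\'';
    return out;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;With this, &lt;code&gt;quote_literal(&quot;x' OR '1'='1&quot;)&lt;/code&gt; yields a single harmless literal instead of terminating the string and injecting a clause.&lt;/p&gt;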

&lt;h2 id=&quot;final-thoughts&quot;&gt;Final thoughts&lt;/h2&gt;

&lt;p&gt;While the plan is clear, there is still much to be done here.
There are dedicated APIs for high-throughput data copying and
push notifications that need to be implemented. Some of the described
APIs have a solid working implementation, while others still need
some work. All in all, I hope that this library can soon reach a state
where it can be useful to people.&lt;/p&gt;</content><author><name></name></author><category term="ruben" /><summary type="html">Do you know Boost.MySQL? If you’ve been reading my posts, you probably do. Many people have wondered ‘why not Postgres?’. Well, the time is now. TL;DR: I’m writing the equivalent of Boost.MySQL, but for PostgreSQL. You can find the code here. Since libPQ is already a good library, the NativePG project intends to be more ambitious than Boost.MySQL. In addition to the expected Asio interface, I intend to provide a sans-io API that exposes primitives like message serialization. Throughout this post, I will go into the intended library design and the rationales behind its design. The lowest level: message serialization PostgreSQL clients communicate with the server using a binary protocol on top of TCP, termed the frontend/backend protocol. The protocol defines a set of messages used for interactions. For example, when running a query, the following happens: ┌────────┐ ┌────────┐ │ Client │ │ Server │ └───┬────┘ └───┬────┘ │ │ │ Query │ │ ──────────────────────────────────────────&amp;gt; │ │ │ │ RowDescription │ │ &amp;lt;────────────────────────────────────────── │ │ │ │ DataRow │ │ &amp;lt;────────────────────────────────────────── │ │ │ │ CommandComplete │ │ &amp;lt;────────────────────────────────────────── │ │ │ │ ReadyForQuery │ │ &amp;lt;────────────────────────────────────────── │ │ │ In the lowest layer, this library provides functions to serialize and parse such messages. The goal here is being as efficient as possible. Parsing functions are non-allocating, and use an approach inspired by Boost.Url collections: Parsing database types The PostgreSQL type system is quite rich. In addition to the usual SQL built-in types, it supports advanced scalars like UUIDs, arrays and user-defined aggregates. When running a query, libPQ exposes retrieved data as either raw text or bytes. 
This is what the server sends in the DataRow packets shown above. To do something useful with the data, users likely need parsing and serializing such types. The next layer of NativePG is in charge of providing such functions. This will likely contain some extension points for users to plug in their types. This is the general form of such functions: system::error_code parse(span&amp;lt;const std::byte&amp;gt; from, T&amp;amp; to, const connection_state&amp;amp;); void serialize(const T&amp;amp; from, dynamic_buffer&amp;amp; to, const connection_state&amp;amp;); Note that some types might require access to session configuration. For instance, dates may be expressed using different wire formats depending on the connection’s runtime settings. At the time of writing, only ints and strings are supported, but this will be extended soon. Composing requests Efficiency in database communication is achieved with pipelining. A network round-trip with the server is worth a thousand allocations in the client. It is thus critical that: The protocol properly supports pipelining. This is the case with PostgreSQL. The client should expose an interface to it, and make it very easy to use. libPQ does the first, and NativePG intends to achieve the second. NativePG pipelines by default. In NativePG, a request object is always a pipeline: // Create a request request req; // These two queries will be executed as part of a pipeline req.add_query(&quot;SELECT * FROM libs WHERE author = $1&quot;, {&quot;Ruben&quot;}); req.add_query(&quot;DELETE FROM libs WHERE author &amp;lt;&amp;gt; $1&quot;, {&quot;Ruben&quot;}); Everything you may ask the server can be added to request. This includes preparing and executing statements, establishing pipeline synchronization points, and so on. It aims to be close enough to the protocol to be powerful, while also exposing high-level functions to make things easier. 
Reading responses Like request, the core response mechanism aims to be as close to the protocol as possible. Since use cases here are much more varied, there is no single response class, but a concept, instead. This is what a response_handler looks like: struct my_handler { // Check that the handler is compatible with the request, // and prepare any required data structures. Called once at the beginning handler_setup_result setup(const request&amp;amp; req, std::size_t pipeline_offset); // Called once for every message received from the server // (e.g. `RowDescription`, `DataRow`, `CommandComplete`) void on_message(const any_request_message&amp;amp; msg); // The overall result of the operation (error_code + diagnostic string). // Called after the operation has finished. const extended_error&amp;amp; result() const; }; Note that on_message is not allowed to report errors. Even if a handler encounters a problem with a message (imagine finding a NULL for a field where the user isn’t expecting one), this is a user error, rather than a protocol error. Subsequent steps in the pipeline must not be affected by this. This is powerful but very low-level. Using this mechanism, the library exposes an interface to parse the result of a query into a user-supplied struct, using Boost.Describe: struct library { std::int32_t id; std::string name; std::string cpp_version; }; BOOST_DESCRIBE_STRUCT(library, (), (id, name, cpp_version)) // ... std::vector&amp;lt;library&amp;gt; libs; auto handler = nativepg::into(libs); // this is a valid response_handler Network algorithms Given a user request and response handler, how do we send these to the server? We need a set of network algorithms to achieve this. Some of these are trivial: sending a request to the server is an asio::write on the request’s buffer. Others, however, are more involved: Reading a pipeline response needs to verify that the message sequence is what we expected, for security, and handle errors gracefully. 
The handshake algorithm, in charge of authentication when we connect to the server, needs to respond to server authentication challenges, which may come in different forms. Writing these using asio::async_compose is problematic because: They become tied to Boost.Asio. They are difficult to test. They result in long compile times and code bloat due to templating. At the moment, these are written as finite state machines, similar to how OpenSSL behaves in non-blocking mode: // Reads the response of a pipeline (simplified). // This is a hand-wired generator. class read_response_fsm { public: // User-supplied arguments: request and response read_response_fsm(const request&amp;amp; req, response_handler_ref handler); // Yielded to signal that we should read from the server struct read_args { span&amp;lt;std::byte&amp;gt; buffer; }; // Yielded to signal that we're done struct done_args { system::error_code result; }; variant&amp;lt;read_args, done_args&amp;gt; resume(connection_state&amp;amp;, system::error_code io_result, std::size_t bytes_transferred); }; The idea is that higher-level code should call resume until it returns a done_args value. This allows de-coupling from the underlying I/O runtime. Since NativePG targets C++20, I’m considering rewriting this as a coroutine. Boost.Capy (currently under development - hopefully part of Boost soon) could be a good candidate for this. Putting everything together: the Asio interface At the end of the day, most users just want a connection object they can easily use. Once all the sans-io parts are working, writing it is pretty straight-forward. 
This is what end user code looks like: // Create a connection connection conn{co_await asio::this_coro::executor}; // Connect co_await conn.async_connect( {.hostname = &quot;localhost&quot;, .username = &quot;postgres&quot;, .password = &quot;&quot;, .database = &quot;postgres&quot;} ); std::cout &amp;lt;&amp;lt; &quot;Startup complete\n&quot;; // Compose our request and response request req; req.add_query(&quot;SELECT * FROM libs WHERE author = $1&quot;, {&quot;Ruben&quot;}); std::vector&amp;lt;library&amp;gt; libs; // Run the request co_await conn.async_exec(req, into(libs)); Auto-batch connections While connection is good, experience has shown me that it’s still too low-level for most users: Connection establishment is manual with async_connect. No built-in reconnection or health checks. No built-in concurrent execution of requests. That is, async_exec first writes the request, then reads the response. Other requests may not be executed during this period. This limits the connection’s throughput. For this reason, NativePG will provide some higher-level interfaces that will make server communication easier and more efficient. To get a feel of what we need, we should first understand the two main usage patterns that we expect. Most of the time, connections are used in a stateless way. For example, consider querying data from the server: request req; req.add_query(&quot;SELECT * FROM libs WHERE author = $1&quot;, {&quot;Ruben&quot;}); co_await conn.async_exec(req, res); This query is not mutating connection state in any way. Other queries could be inserted before and after it without making any difference. I plan to add a higher-level connection type, similar to redis::connection in Boost.Redis, that automatically batches concurrent requests and handles reconnection. The key differences with connection would be: Several independent tasks can share an auto-batch connection. This is an error for connection. 
If several requests are queued at the same time, the connection may send them together to the server using a single system call. There is no async_connect in an auto-batch connection. Reconnection is handled automatically. Note that this pattern is not exclusive to read-only or individual queries. Transactions can work by using protocol features: request req; req.set_autosync(false); // All subsequent queries are part of the same transaction req.add_query(&quot;UPDATE table1 SET x = $1 WHERE y = 2&quot;, {42}); req.add_query(&quot;UPDATE table2 SET x = $1 WHERE y = 42&quot;, {2}); req.add_sync(); // The two updates run atomically co_await conn.async_exec(req, res); Connection pools I mentioned there were two main usage scenarios in the library. Sometimes, it is required to use connections in a stateful way: request req; req.add_simple_query(&quot;BEGIN&quot;); // start a transaction manually req.add_query(&quot;SELECT * FROM library WHERE author = $1 FOR UPDATE&quot;, {&quot;Ruben&quot;}); // lock rows co_await conn.async_exec(req, lib); // Do something in the client that depends on lib if (lib.id == &quot;Boost.MySQL&quot;) co_return; // don't // Now compose another request that depends on what we read from lib req.clear(); req.add_query(&quot;UPDATE library SET status = 'deprecated' WHERE id = $1&quot;, {lib.id}); req.add_simple_query(&quot;COMMIT&quot;); co_await conn.async_exec(req, ignore); The key point here is that this pattern requires exclusive access to conn. No other requests should be interleaved between the first and the second async_exec invocations. The best way to solve this is by using a connection pool. 
This is what client code could look like: co_await pool.async_exec([&amp;amp;] (connection&amp;amp; conn) -&amp;gt; asio::awaitable&amp;lt;system::error_code&amp;gt; { request req; req.add_simple_query(&quot;BEGIN&quot;); req.add_query(&quot;SELECT balance, status FROM accounts WHERE user_id = $1 FOR UPDATE&quot;, {user_id}); account_info acc; co_await conn.async_exec(req, into(acc)); // Check if account has sufficient funds and is active if (acc.balance &amp;lt; payment_amount || acc.status != &quot;active&quot;) co_return error::insufficient_funds; // Call external payment gateway API - this CANNOT be done in SQL auto result = co_await payment_gateway.process_charge(user_id, payment_amount); // Compose next request based on the external API response req.clear(); if (result.success) { req.add_query( &quot;UPDATE accounts SET balance = balance - $1 WHERE user_id = $2&quot;, {payment_amount, user_id} ); req.add_simple_query(&quot;COMMIT&quot;); } co_await conn.async_exec(req, ignore); // The connection is automatically returned to the pool when this coroutine completes co_return result.success ? error_code{} : error::payment_failed; }); I explicitly want to avoid having a connection_pool::async_get_connection() function, like in Boost.MySQL. This function returns a proxy object that grants access to a free connection. When destroyed, the connection is returned to the pool. This pattern looks great on paper, but runs into severe complications in multi-threaded code. The proxy object’s destructor needs to mutate the pool’s state, thus needing at least an asio::dispatch to the pool’s executor, which may or may not be a strand. It is so easy to get wrong that Boost.MySQL added a pool_params::thread_safe boolean option to take care of this automatically, adding extra complexity. Definitely something to avoid. SQL formatting As we’ve seen, the protocol has built-in support for adding parameters to queries (see placeholders like $1). 
These placeholders are expanded in the server securely. While this covers most cases, sometimes we need to generate SQL that is too dynamic to be handled by the server. For instance, a website might allow multiple optional filters, translating into WHERE clauses that might or might not be present. These use cases require SQL generated in the client. To do so, we need a way of formatting user-supplied values without running into SQL injection vulnerabilities. The final piece of the library becomes a format_sql function akin to the one in Boost.MySQL. Final thoughts While the plan is clear, there is still much to be done here. There are dedicated APIs for high-throughput data copying and push notifications that need to be implemented. Some of the described APIs have a solid working implementation, while others still need some work. All in all, I hope that this library can soon reach a state where it can be useful to people.</summary></entry><entry><title type="html">Systems, CI Updates Q4 2025</title><link href="http://cppalliance.org/sam/2026/01/22/SamsQ4Update.html" rel="alternate" type="text/html" title="Systems, CI Updates Q4 2025" /><published>2026-01-22T00:00:00+00:00</published><updated>2026-01-22T00:00:00+00:00</updated><id>http://cppalliance.org/sam/2026/01/22/SamsQ4Update</id><content type="html" xml:base="http://cppalliance.org/sam/2026/01/22/SamsQ4Update.html">&lt;h3 id=&quot;doc-previews-and-doc-builds&quot;&gt;Doc Previews and Doc Builds&lt;/h3&gt;

&lt;p&gt;The pull request to isomorphic-git “Support git commands run in submodules” was merged and released in the latest version (see the previous post for an explanation). The commit modified 153 files: all of the git API commands, plus the tests applying to each one. The next step is for upstream Antora to adjust package.json to refer to the newer isomorphic-git so that it is distributed along with Antora. Since isomorphic-git is used well beyond Antora, its userbase is already field-testing the new version.&lt;/p&gt;

&lt;p&gt;Created an Antora extension, https://github.com/cppalliance/antora-downloads-extension, that retries ui-bundle downloads. The Boost Superproject builds sometimes fail because of Antora download failures. I am now rolling the extension out to all affected repositories; it must be included in each playbook that downloads the bundle as part of its build process.&lt;/p&gt;

&lt;p&gt;Adjusted doc previews to update the existing PR comment instead of posting many new ones, reducing the email spam effect. The job modifies a timestamp in the PR comment, which lets developers see the most recent build time and whether the pages rebuilt successfully. Implementing this required solving some puzzles, since Jenkins jobs are usually stateless and don’t know whether they previously posted a comment, or which comment should be modified across subsequent job runs. It turns out there is a “Build with Parameters” feature, and properties/parameters can be saved in the job.&lt;/p&gt;

&lt;h3 id=&quot;boost-website-boostorgwebsite-v2&quot;&gt;Boost website boostorg/website-v2&lt;/h3&gt;

&lt;p&gt;Lowered the CPU threshold on the horizontal pod autoscaler to scale pods more rapidly when there is increased traffic.&lt;/p&gt;

&lt;p&gt;Set the redirects to 301 “moved permanently” when web visitors go to the wrong domain or URL, and reduced the number of redirect hops by sending visitors directly to the final URL, www.boost.org.&lt;/p&gt;

&lt;p&gt;Investigated a bug where PDF files were timing out and crashing the server; such files should not be parsed by Beautiful Soup or lxml.&lt;/p&gt;

&lt;p&gt;During this quarter we published Boost 1.90.0. Worked closely with the release managers to resolve problems during the release, such as the boost.org website not fully updating after importing the new version.&lt;/p&gt;

&lt;p&gt;Attended meetings about the CMS feature and other topics, along with many general discussions about website issues.&lt;/p&gt;

&lt;h3 id=&quot;mailman3&quot;&gt;Mailman3&lt;/h3&gt;

&lt;p&gt;When unmoderating a new user on Mailman3, an administrator must click a drop-down and select “Default Processing” so that the subscriber may send emails directly to the list and is no longer moderated. I have started developing an enhancement in Postorius that adds a single “Accept and Unmoderate” button, streamlining the process. However, as often happens with new and radical ideas sent to the Mailman maintainers, they put up roadblocks. While I believe the new feature is promising, since it helps administrators unmoderate users quickly without skipping that step, the future of the pull request is uncertain.&lt;/p&gt;

&lt;h3 id=&quot;boost-ci&quot;&gt;boost-ci&lt;/h3&gt;

&lt;p&gt;Created a Fastly CDN mirror of keyserver.ubuntu.com at keyserver.boost.org. If keyserver.ubuntu.com experiences occasional outages but keys are cached on the CDN mirror, then CI jobs will be able to proceed without difficulty. Configured both Drone and boost-ci to use the CDN at keyserver.boost.org.&lt;/p&gt;

&lt;h3 id=&quot;jenkins&quot;&gt;Jenkins&lt;/h3&gt;

&lt;p&gt;Set up Beast2 doc previews, Capy previews, JSON lcov jobs, and OpenMethod doc previews.&lt;/p&gt;

&lt;p&gt;Modified email notifications to send ‘recovery’ messages after failed jobs, and made other enhancements to Jenkins jobs.&lt;/p&gt;

&lt;h3 id=&quot;boost-release-process-boostorgrelease-tools&quot;&gt;Boost release process boostorg/release-tools&lt;/h3&gt;

&lt;p&gt;publish-release.py now generates “nodocs” copies of the Boost releases and uploads them to archives.boost.io, and these versions are fully functional. If anyone would like to accelerate their CI build process, point the download URL at a nodocs archive such as: https://archives.boost.io/release/1.90.0/source-nodocs/boost_1_90_0.tar.gz .&lt;/p&gt;

&lt;p&gt;Migrated the release workstation instance from GCP to AWS so that during the next Boost release 1.91.0 the server will be closer to AWS S3, allowing file uploads to transfer faster. Designed a mechanism that resizes the server instance on a cron schedule via GHA. Most of the time it’s quite small, but during releases the server is allocated more CPU.&lt;/p&gt;

&lt;h3 id=&quot;drone&quot;&gt;Drone&lt;/h3&gt;

&lt;p&gt;Added a Microsoft Windows VS2026 container image.&lt;br /&gt;
Added an Ubuntu 25.10 container image.&lt;/p&gt;

&lt;h3 id=&quot;gha&quot;&gt;GHA&lt;/h3&gt;

&lt;p&gt;Added CI jobs to build “documentation” in the boostorg/container repository. GHA will test doc builds, which helps when debugging modifications to documentation.&lt;/p&gt;

&lt;p&gt;Fil-C is a “fanatically compatible memory-safe implementation of C and C++.” https://github.com/pizlonator/fil-c  Upon request, I composed an example Fil-C GitHub Actions job at https://github.com/sdarwin/fil-c-demo which was then applied by developers in other Boost repositories.&lt;/p&gt;</content><author><name></name></author><category term="sam" /><summary type="html">Doc Previews and Doc Builds The pull request to isomorphic-git “Support git commands run in submodules” was merged, and released in the latest version. (See previous post for an explanation). The commit modified 153 files, all the git api commands, and tests applying to each one. The next step is for upstream Antora to adjust package.json and refer to the newer isomorphic-git so it will be distributed along with Antora. Since isomorphic-git is more widely used than just Antora, their userbase is already field testing the new version. Created an antora extension https://github.com/cppalliance/antora-downloads-extension that will retry ui-bundle downloads. The Boost Superproject builds sometimes fail because of Antora download failures. I am now in the process of rolling out this extension to all affected repositories. It must be included in each playbook if that playbook downloads the bundle as part of the build process. Adjusted doc previews to update the existing PR comments instead of posting many new ones, to reduce the email spam effect. The job will modify a timestamp in the PR comment which allows developers to see the most recent build time and if the pages rebuilt successfully. I needed to solve some puzzles to implement this, since usually Jenkins jobs are stateless and don’t know if they previously posted a comment, or which comment it was that should be modified across subsequent jobs runs. It turns out there is a feature “Build with Parameters”, and properties/parameters can be saved in the job. 
Boost website boostorg/website-v2 Lowered the CPU threshold on the horizontal pod autoscaler to scale pods more rapidly when there is increased traffic. When web visitors go to the wrong domain or URL, set the redirects to 301 “moved permanently”. Reduced the number of redirect hops by sending visitors directly to the final URL www.boost.org. Investigated a bug where PDF files were timing out and crashing the server. Those should not be parsed by beautiful soup or lxml. During this quarter we published boost 1.90.0. Worked closely with the release managers to resolve problems during the release. The boost.org website was not fully updating after importing the new version. Meetings about CMS feature, other topics. Many general discussions about website issues. Mailman3 When unmoderating a new user on mailman3 an administrator must click a drop-down and select “Default Processing” so this subscriber may send emails directly to the list and not continue to be moderated. I have started developing an enhancement in Postorius whereby there is one simple button “Accept and Unmoderate” thus streamlining the process. However as often happens with new and radical ideas sent to the Mailman maintainers, they put up roadblocks. While I believe the new feature is promising, and it is helpful to quickly unmoderate users, without skipping that step, the future of the pull request is uncertain. boost-ci Created a Fastly CDN mirror of keyserver.ubuntu.com at keyserver.boost.org. If keyserver.ubuntu.com experiences occasional outages but keys are cached on the CDN mirror, then CI jobs will be able to proceed without difficulty. Configured both Drone and boost-ci to use the CDN at keyserver.boost.org. Jenkins Beast2 doc previews. Capy previews. JSON lcov jobs. Openmethod doc previews. Modified email notifications to send ‘recovery’ type messages after failed jobs. Other enhancements to Jenkins jobs. 
Boost release process boostorg/release-tools When building releases with publish-release.py, generate “nodocs” copies of the Boost releases and upload them to archives.boost.io. The “nodocs” versions are now functional. If anyone would like to accelerate their CI build process, set the target URL to nodocs such as: https://archives.boost.io/release/1.90.0/source-nodocs/boost_1_90_0.tar.gz . Migrated the release workstation instance from GCP to AWS so that during the next Boost release 1.91.0 the server will be closer to AWS S3, allowing file uploads to transfer faster. Designed a mechanism that resizes the server instance on a cron schedule via GHA. Most of the time it’s quite small, but during releases the server is allocated more CPU. Drone Microsoft Windows - VS2026 container image. Ubuntu 25.10 container image. GHA Added CI jobs to build “documentation” in the boostorg/container repository. GHA will test doc builds, which helps when debugging modifications to documentation. Fil-C is a “fanatically compatible memory-safe implementation of C and C++.” https://github.com/pizlonator/fil-c Upon request, I composed an example Fil-C GitHub Actions job at https://github.com/sdarwin/fil-c-demo which was then applied by developers in other Boost repositories.</summary></entry><entry><title type="html">Containers galore</title><link href="http://cppalliance.org/joaquin/2026/01/18/Joaquins2025Q4Update.html" rel="alternate" type="text/html" title="Containers galore" /><published>2026-01-18T00:00:00+00:00</published><updated>2026-01-18T00:00:00+00:00</updated><id>http://cppalliance.org/joaquin/2026/01/18/Joaquins2025Q4Update</id><content type="html" xml:base="http://cppalliance.org/joaquin/2026/01/18/Joaquins2025Q4Update.html">&lt;p&gt;During Q4 2025, I’ve been working in the following areas:&lt;/p&gt;

&lt;h3 id=&quot;boostbloom&quot;&gt;Boost.Bloom&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;Written &lt;a href=&quot;https://bannalia.blogspot.com/2025/10/bulk-operations-in-boostbloom.html&quot;&gt;an article&lt;/a&gt; explaining
the usage and implementation of the recently introduced bulk operations.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;boostunordered&quot;&gt;Boost.Unordered&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;Written maintenance fixes
&lt;a href=&quot;https://github.com/boostorg/unordered/pull/320&quot;&gt;PR#320&lt;/a&gt;,
&lt;a href=&quot;https://github.com/boostorg/unordered/pull/321&quot;&gt;PR#321&lt;/a&gt;,
&lt;a href=&quot;https://github.com/boostorg/unordered/pull/326&quot;&gt;PR#326&lt;/a&gt;,
&lt;a href=&quot;https://github.com/boostorg/unordered/pull/327&quot;&gt;PR#327&lt;/a&gt;,
&lt;a href=&quot;https://github.com/boostorg/unordered/pull/328&quot;&gt;PR#328&lt;/a&gt;,
&lt;a href=&quot;https://github.com/boostorg/unordered/pull/335&quot;&gt;PR#335&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;boostmultiindex&quot;&gt;Boost.MultiIndex&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;Refactored the library to use Boost.Mp11 instead of Boost.MPL (&lt;a href=&quot;https://github.com/boostorg/multi_index/pull/87&quot;&gt;PR#87&lt;/a&gt;),
remove pre-C++11 variadic argument emulation (&lt;a href=&quot;https://github.com/boostorg/multi_index/pull/88&quot;&gt;PR#88&lt;/a&gt;)
and remove all sorts of pre-C++11 polyfills (&lt;a href=&quot;https://github.com/boostorg/multi_index/pull/90&quot;&gt;PR#90&lt;/a&gt;).
These changes are explained in &lt;a href=&quot;https://bannalia.blogspot.com/2025/12/boostmultiindex-refactored.html&quot;&gt;an article&lt;/a&gt;
and will be shipped in Boost 1.91. Transition is expected to be mostly backwards
compatible, though two Boost libraries needed adjustments as they use MultiIndex
in rather advanced ways (see below).&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;boostflyweight&quot;&gt;Boost.Flyweight&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;Adapted the library to work with Boost.MultiIndex 1.91
(&lt;a href=&quot;https://github.com/boostorg/flyweight/pull/25&quot;&gt;PR#25&lt;/a&gt;).&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;boostbimap&quot;&gt;Boost.Bimap&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;Adapted the library to work with Boost.MultiIndex 1.91
(&lt;a href=&quot;https://github.com/boostorg/bimap/pull/50&quot;&gt;PR#50&lt;/a&gt;).&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;other-boost-libraries&quot;&gt;Other Boost libraries&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;Helped set up the Antora-based doc build chain for DynamicBitset
(&lt;a href=&quot;https://github.com/boostorg/dynamic_bitset/pull/96&quot;&gt;PR#96&lt;/a&gt;,
&lt;a href=&quot;https://github.com/boostorg/dynamic_bitset/pull/97&quot;&gt;PR#97&lt;/a&gt;,
&lt;a href=&quot;https://github.com/boostorg/dynamic_bitset/pull/98&quot;&gt;PR#98&lt;/a&gt;).&lt;/li&gt;
  &lt;li&gt;Same with OpenMethod
(&lt;a href=&quot;https://github.com/boostorg/openmethod/pull/40&quot;&gt;PR#40&lt;/a&gt;).&lt;/li&gt;
  &lt;li&gt;Fixed concept compliance of iterators provided by Spirit
(&lt;a href=&quot;https://github.com/boostorg/spirit/pull/840&quot;&gt;PR#840&lt;/a&gt;,
&lt;a href=&quot;https://github.com/boostorg/spirit/pull/841&quot;&gt;PR#841&lt;/a&gt;).&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;experiments-with-fil-c&quot;&gt;Experiments with Fil-C&lt;/h3&gt;

&lt;p&gt;&lt;a href=&quot;https://fil-c.org/&quot;&gt;Fil-C&lt;/a&gt; is a C and C++ compiler built on top of LLVM that adds run-time
memory-safety mechanisms preventing out-of-bounds and use-after-free accesses. 
I’ve been experimenting with compiling the Boost.Unordered test suite with Fil-C and running
some benchmarks to measure the resulting degradation in execution times and memory usage.
Results follow:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Articles
    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;https://bannalia.blogspot.com/2025/11/some-experiments-with-boostunordered-on.html&quot;&gt;Some experiments with Boost.Unordered on Fil-C&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;https://bannalia.blogspot.com/2025/11/comparing-run-time-performance-of-fil-c.html&quot;&gt;Comparing the run-time performance of Fil-C and ASAN&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;Repos
    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;https://github.com/joaquintides/fil-c_boost_unordered&quot;&gt;Compiling Boost.Unordered test suite with Fil-C&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;https://github.com/boostorg/boost_unordered_benchmarks/tree/boost_unordered_flat_map_fil-c&quot;&gt;Benchmarks of Fil-C and ASAN against baseline&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;https://github.com/boostorg/boost_unordered_benchmarks/tree/boost_unordered_flat_map_fil-c_memory&quot;&gt;Memory consumption of Fil-C and ASAN with respect to baseline&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;proof-of-concept-of-a-semistable-vector&quot;&gt;Proof of concept of a semistable vector&lt;/h3&gt;

&lt;p&gt;By “semistable vector” I mean that pointers to the elements may be invalidated
upon insertion and erasure (just like a regular &lt;code&gt;std::vector&lt;/code&gt;) but iterators
to non-erased elements remain valid throughout.
I’ve written a small &lt;a href=&quot;https://github.com/joaquintides/semistable_vector/&quot;&gt;proof of concept&lt;/a&gt;
of this idea and measured its performance against non-stable &lt;code&gt;std::vector&lt;/code&gt; and fully
stable &lt;code&gt;std::list&lt;/code&gt;. It is doubtful that such a container would be of interest for production
use, but the techniques explored are mildly interesting and could be adapted, for
instance, to write powerful safe iterator facilities.&lt;/p&gt;

&lt;h3 id=&quot;teaser-exploring-the-stdhive-space&quot;&gt;Teaser: exploring the &lt;code&gt;std::hive&lt;/code&gt; space&lt;/h3&gt;

&lt;p&gt;In short, &lt;code&gt;std::hive&lt;/code&gt; (coming in C++26) is a container with stable references/iterators
and fast insertion and erasure. The &lt;a href=&quot;https://github.com/mattreecebentley/plf_hive&quot;&gt;reference implementation&lt;/a&gt;
for this container relies on a rather convoluted data structure, and I started to wonder
if something simpler could deliver superior performance. Expect to see the results of
my experiments in Q1 2026.&lt;/p&gt;

&lt;h3 id=&quot;website&quot;&gt;Website&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;Filed issues
&lt;a href=&quot;https://github.com/boostorg/website-v2/issues/1936&quot;&gt;#1936&lt;/a&gt;,
&lt;a href=&quot;https://github.com/boostorg/website-v2/issues/1937&quot;&gt;#1937&lt;/a&gt;,
&lt;a href=&quot;https://github.com/boostorg/website-v2/issues/1984&quot;&gt;#1984&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;support-to-the-community&quot;&gt;Support to the community&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;I’ve been part of a task force with the C++ Alliance to review the entire
catalog of Boost libraries (170+) and categorize them according to their
maintenance status and relevance in light of additions to the C++
standard library over the years.&lt;/li&gt;
  &lt;li&gt;Supporting the community as a member of the Fiscal Sponsorship Committee (FSC).&lt;/li&gt;
&lt;/ul&gt;</content><author><name></name></author><category term="joaquin" /><summary type="html">During Q4 2025, I’ve been working in the following areas: Boost.Bloom Written an article explaining the usage and implementation of the recently introduced bulk operations. Boost.Unordered Written maintenance fixes PR#320, PR#321, PR#326, PR#327, PR#328, PR#335. Boost.MultiIndex Refactored the library to use Boost.Mp11 instead of Boost.MPL (PR#87), remove pre-C++11 variadic argument emulation (PR#88) and remove all sorts of pre-C++11 polyfills (PR#90). These changes are explained in an article and will be shipped in Boost 1.91. Transition is expected to be mostly backwards compatible, though two Boost libraries needed adjustments as they use MultiIndex in rather advanced ways (see below). Boost.Flyweight Adapted the library to work with Boost.MultiIndex 1.91 (PR#25). Boost.Bimap Adapted the library to work with Boost.MultiIndex 1.91 (PR#50). Other Boost libraries Helped set up the Antora-based doc build chain for DynamicBitset (PR#96, PR#97, PR#98). Same with OpenMethod (PR#40). Fixed concept compliance of iterators provided by Spirit (PR#840, PR#841). Experiments with Fil-C Fil-C is a C and C++ compiler built on top of LLVM that adds run-time memory-safety mechanisms preventing out-of-bounds and use-after-free accesses. I’ve been experimenting with compiling Boost.Unordered test suite with Fil-C and running some benchmarks to measure the resulting degradation in execution times and memory usage. 
Results follow: Articles Some experiments with Boost.Unordered on Fil-C Comparing the run-time performance of Fil-C and ASAN Repos Compiling Boost.Unordered test suite with Fil-C Benchmarks of Fil-C and ASAN against baseline Memory consumption of Fil-C and ASAN with respect to baseline Proof of concept of a semistable vector By “semistable vector” I mean that pointers to the elements may be invalidated upon insertion and erasure (just like a regular std::vector) but iterators to non-erased elements remain valid throughout. I’ve written a small proof of concept of this idea and measured its performance against non-stable std::vector and fully stable std::list. It is dubious that such container could be of interest for production use, but the techniques explored are mildly interesting and could be adapted, for instance, to write powerful safe iterator facilities. Teaser: exploring the std::hive space In short, std::hive (coming in C++26) is a container with stable references/iterators and fast insertion and erasure. The reference implementation for this container relies on a rather convoluted data structure, and I started to wonder if something simpler could deliver superior performance. Expect to see the results of my experiments in Q1 2026. Website Filed issues #1936, #1937, #1984. Support to the community I’ve been part of a task force with the C++ Alliance to review the entire catalog of Boost libraries (170+) and categorize them according to their maintainance status and relevance in light of additions to the C++ standard library over the years. 
Supporting the community as a member of the Fiscal Sponsorship Committee (FSC).</summary></entry><entry><title type="html">Decimal is Accepted and Next Steps</title><link href="http://cppalliance.org/matt/2026/01/15/Matts2025Q4Update.html" rel="alternate" type="text/html" title="Decimal is Accepted and Next Steps" /><published>2026-01-15T00:00:00+00:00</published><updated>2026-01-15T00:00:00+00:00</updated><id>http://cppalliance.org/matt/2026/01/15/Matts2025Q4Update</id><content type="html" xml:base="http://cppalliance.org/matt/2026/01/15/Matts2025Q4Update.html">&lt;p&gt;After two reviews the Decimal (&lt;a href=&quot;https://github.com/cppalliance/decimal&quot;&gt;https://github.com/cppalliance/decimal&lt;/a&gt;) library has been accepted into Boost.
Look for it to ship for the first time with Boost 1.91 in the Spring.
For current and prospective users, a new release series (v6) is available on the releases page of the library.
This major version change contains all of the bug fixes and addresses comments from the second review.
We have once again overhauled the documentation based on review feedback, significantly increasing the number of examples.
Between the &lt;code&gt;Basic Usage&lt;/code&gt; and &lt;code&gt;Examples&lt;/code&gt; tabs on the website we believe there’s now enough information to quickly make good use of the library.
One big quality-of-life improvement worth highlighting in this version is that it ships with pretty printers for both GDB and LLDB.
It is a huge release (1108 commits with a diff stat of &amp;gt;50k LOC), but the library is better than ever.
I expect that this is the last major version that will be released prior to moving to the Boost release cycle.&lt;/p&gt;

&lt;p&gt;Where to go from here?&lt;/p&gt;

&lt;p&gt;As I have mentioned in previous posts, the int128 (&lt;a href=&quot;https://github.com/cppalliance/int128&quot;&gt;https://github.com/cppalliance/int128&lt;/a&gt;) library started life as the backend for portable arithmetic and representation in the Decimal library.
It has since been expanded to include standard library features that are unnecessary for a back-end but useful to many people, such as &lt;code&gt;&amp;lt;format&amp;gt;&lt;/code&gt; support.
The last major update that I intend to make to the library prior to proposal for Boost is to add CUDA support.
This would not only add portability to another platform for many users, but would also open the door for Decimal to gain CUDA support.
I will also be looking at a few of our performance measures as I think there are still places for improvement (such as signed 128-bit division).&lt;/p&gt;

&lt;p&gt;Lastly, towards the end of this quarter (March 5 - March 15), I will be serving as the review manager for Alfredo Correa’s Multi (&lt;a href=&quot;https://github.com/correaa/boost-multi&quot;&gt;https://github.com/correaa/boost-multi&lt;/a&gt;) library.
Multi is a modern C++ library that provides manipulation and access of data in multidimensional arrays for both CPU and GPU memory.
Feel free to give the library a go now and comment on what you find. 
This is a very high quality library which should have an exciting review.&lt;/p&gt;</content><author><name></name></author><category term="matt" /><summary type="html">After two reviews the Decimal (https://github.com/cppalliance/decimal) library has been accepted into Boost. Look for it to ship for the first time with Boost 1.91 in the Spring. For current and prospective users, a new release series (v6) is available on the releases page of the library. This major version change contains all of the bug fixes and addresses comments from the second review. We have once again overhauled the documentation based on the review to include a significant increase in the number of examples. Between the Basic Usage and Examples tabs on the website we believe there’s now enough information to quickly make good use of the library. One big quality of life worth highlighting for this version is that it ships with pretty printers for both GDB and LLDB. It is a huge release (1108 commits with a diff stat of &amp;gt;50k LOC), but is be better than ever. I expect that this is the last major version that will be released prior to moving to the Boost release cycle. Where to go from here? As I have mentioned in previous posts, the int128 (https://github.com/cppalliance/int128) library started life as the backend for portable arithmetic and representation in the Decimal library. It has since been expanded to include more of the standard library features that are unnecessary as a back-end, but useful to many people like &amp;lt;format&amp;gt; support. The last major update that I intend to make to the library prior to proposal for Boost is to add CUDA support. This would not only add portability to another platform for many users, it would open the door for Decimal to also have CUDA support. I will also be looking at a few of our performance measures as I think there are still places for improvement (such as signed 128-bit division). 
Lastly, towards the end of this quarter (March 5 - March 15), I will be serving as the review manager for Alfredo Correa’s Multi (https://github.com/correaa/boost-multi) library. Multi is a modern C++ library that provides access to and manipulation of data in multidimensional arrays, for both CPU and GPU memory. Feel free to give the library a go now and comment on what you find. This is a very high-quality library, and it should have an exciting review.</summary></entry><entry><title type="html">From Prototype to Product: MrDocs in 2025</title><link href="http://cppalliance.org/alan/2025/10/28/Alan.html" rel="alternate" type="text/html" title="From Prototype to Product: MrDocs in 2025" /><published>2025-10-28T00:00:00+00:00</published><updated>2025-10-28T00:00:00+00:00</updated><id>http://cppalliance.org/alan/2025/10/28/Alan</id><content type="html" xml:base="http://cppalliance.org/alan/2025/10/28/Alan.html">&lt;p&gt;In 2024, the &lt;a href=&quot;https://www.mrdocs.com&quot;&gt;MrDocs&lt;/a&gt; project was a &lt;strong&gt;fragile prototype&lt;/strong&gt;. It documented Boost.URL, but the &lt;strong&gt;CLI&lt;/strong&gt;, &lt;strong&gt;configuration&lt;/strong&gt;, and &lt;strong&gt;build process&lt;/strong&gt; were unstable. Most users could not run it without direct help from the core group. That unstable baseline is the starting point for this report.&lt;/p&gt;

&lt;p&gt;In 2025, we moved the codebase to &lt;strong&gt;minimum-viable-product&lt;/strong&gt; shape. I led the releases that stabilized the pipeline, aligned the &lt;strong&gt;configuration model&lt;/strong&gt;, and documented the work in this report to support a smooth &lt;strong&gt;leadership transition&lt;/strong&gt;. This post summarizes the &lt;strong&gt;2024 gaps&lt;/strong&gt;, the &lt;strong&gt;2025 fixes&lt;/strong&gt;, and the &lt;strong&gt;recommended directions&lt;/strong&gt; for the next phase.&lt;/p&gt;

&lt;!-- prettier-ignore --&gt;
&lt;ul id=&quot;markdown-toc&quot;&gt;
  &lt;li&gt;&lt;a href=&quot;#system-overview&quot; id=&quot;markdown-toc-system-overview&quot;&gt;System Overview&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#2024-lessons-from-a-fragile-prototype&quot; id=&quot;markdown-toc-2024-lessons-from-a-fragile-prototype&quot;&gt;2024: Lessons from a Fragile Prototype&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#2025-from-prototype-to-mvp&quot; id=&quot;markdown-toc-2025-from-prototype-to-mvp&quot;&gt;2025: From Prototype to MVP&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#v003-enforcing-consistency&quot; id=&quot;markdown-toc-v003-enforcing-consistency&quot;&gt;v0.0.3: Enforcing Consistency&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#v004-establishing-the-foundation&quot; id=&quot;markdown-toc-v004-establishing-the-foundation&quot;&gt;v0.0.4: Establishing the Foundation&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#v005-stabilization-and-public-readiness&quot; id=&quot;markdown-toc-v005-stabilization-and-public-readiness&quot;&gt;v0.0.5: Stabilization and Public Readiness&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#2026-beyond-the-mvp&quot; id=&quot;markdown-toc-2026-beyond-the-mvp&quot;&gt;2026: Beyond the MVP&lt;/a&gt;    &lt;ul&gt;
      &lt;li&gt;&lt;a href=&quot;#strategic-prioritization&quot; id=&quot;markdown-toc-strategic-prioritization&quot;&gt;Strategic Prioritization&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#reflection&quot; id=&quot;markdown-toc-reflection&quot;&gt;Reflection&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#metadata&quot; id=&quot;markdown-toc-metadata&quot;&gt;Metadata&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#extensions-and-plugins&quot; id=&quot;markdown-toc-extensions-and-plugins&quot;&gt;Extensions and Plugins&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#dependency-resilience&quot; id=&quot;markdown-toc-dependency-resilience&quot;&gt;Dependency Resilience&lt;/a&gt;&lt;/li&gt;
      &lt;li&gt;&lt;a href=&quot;#follow-up-issues-for-v006&quot; id=&quot;markdown-toc-follow-up-issues-for-v006&quot;&gt;Follow-up Issues for v0.0.6&lt;/a&gt;&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#acknowledgments&quot; id=&quot;markdown-toc-acknowledgments&quot;&gt;Acknowledgments&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#conclusion&quot; id=&quot;markdown-toc-conclusion&quot;&gt;Conclusion&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;system-overview&quot;&gt;System Overview&lt;/h2&gt;

&lt;p&gt;&lt;a href=&quot;https://www.mrdocs.com&quot;&gt;MrDocs&lt;/a&gt; is a C++ documentation generator built on &lt;strong&gt;Clang&lt;/strong&gt;. It parses source with full language fidelity, links declarations to their comments, and produces reference documentation that reflects real program structure—&lt;strong&gt;templates&lt;/strong&gt;, &lt;strong&gt;constraints&lt;/strong&gt;, and &lt;strong&gt;overloads&lt;/strong&gt; included.&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;Traditional tools often approximate the AST. MrDocs uses the AST directly, so documentation matches the code and modern C++ features render correctly.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Unlike single-purpose generators, MrDocs separates the &lt;strong&gt;corpus&lt;/strong&gt; (semantic data) from the &lt;strong&gt;presentation layer&lt;/strong&gt;. Projects can choose among multiple &lt;strong&gt;output formats&lt;/strong&gt; or extend the system entirely: supply &lt;strong&gt;custom Handlebars templates&lt;/strong&gt; or script new generators using the &lt;strong&gt;plugin system&lt;/strong&gt;. The corpus is represented in the generators as a &lt;strong&gt;rich JSON-like DOM&lt;/strong&gt;. With schema files, MrDocs enables integration with &lt;strong&gt;build systems&lt;/strong&gt;, &lt;strong&gt;documentation frameworks&lt;/strong&gt;, or &lt;strong&gt;IDEs&lt;/strong&gt;.&lt;/p&gt;
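&lt;p&gt;To make the corpus/presentation split concrete, here is a purely illustrative sketch of the kind of JSON-like record a generator might receive for one symbol. The field names below are invented for illustration; they are not the actual MrDocs DOM schema.&lt;/p&gt;

```json
{
  "kind": "function",
  "name": "parse_uri",
  "namespace": ["boost", "urls"],
  "doc": {
    "brief": "Parse a URI string.",
    "params": [{ "name": "s", "description": "Input string" }]
  },
  "template": { "params": [], "constraints": [] },
  "overloads": 2
}
```

&lt;p&gt;A template or plugin walks records like this and decides how to render them; the semantic data itself is identical across output formats.&lt;/p&gt;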

&lt;p&gt;From the user’s perspective, MrDocs behaves like a &lt;strong&gt;well-engineered CLI utility&lt;/strong&gt;. It accepts &lt;strong&gt;configuration files&lt;/strong&gt;, supports &lt;strong&gt;relative paths&lt;/strong&gt;, accepts custom &lt;strong&gt;build options&lt;/strong&gt;, and reports &lt;strong&gt;warnings&lt;/strong&gt; in a controlled, &lt;strong&gt;compiler-like&lt;/strong&gt; fashion. For C++ teams transitioning from &lt;strong&gt;Doxygen&lt;/strong&gt;, the &lt;strong&gt;command structure&lt;/strong&gt; is somewhat familiar, but the &lt;strong&gt;internal model&lt;/strong&gt; is designed for &lt;strong&gt;reproducibility&lt;/strong&gt; and &lt;strong&gt;correctness&lt;/strong&gt;. Our goal is not just to render &lt;strong&gt;reference pages&lt;/strong&gt; but to provide a &lt;strong&gt;reliable pipeline&lt;/strong&gt; that any C++ project seeking &lt;strong&gt;modern documentation infrastructure&lt;/strong&gt; can adopt.&lt;/p&gt;
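&lt;p&gt;As a rough sketch of that workflow, a project drives the tool from a small YAML configuration file and a single command. Treat the keys below as illustrative rather than authoritative; option names vary by release, so check the configuration reference for the version you use.&lt;/p&gt;

```yaml
# Illustrative configuration sketch (key names may differ by version)
source-root: ..      # project sources; relative paths are supported
multipage: true      # one page per symbol instead of one large page
generate: html       # output format, e.g. html, adoc, xml
warnings: true       # compiler-like warning reporting
```

&lt;p&gt;The tool is then pointed at this file from the command line, and the generated reference lands in the configured output directory.&lt;/p&gt;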

&lt;script src=&quot;https://cdn.jsdelivr.net/npm/mermaid@11.12.0/dist/mermaid.min.js&quot;&gt;&lt;/script&gt;
&lt;div class=&quot;mermaid&quot;&gt;
graph LR
  A[Source] --&amp;gt; B[Clang]
  B --&amp;gt; C[Corpus]
  C --&amp;gt; D{Plugin Layer}
  subgraph Generator
    E[HTML]
    F[AsciiDoc]
    G[XML]
    G2[...]
  end
  D --&amp;gt; E
  D --&amp;gt; F
  D --&amp;gt; G
  D --&amp;gt; G2
  E --&amp;gt; H{Plugin Layer}
  H --&amp;gt; H2[Published Docs]
  F --&amp;gt; H
  G --&amp;gt; H
  G2 --&amp;gt; H
  C --&amp;gt; I[Schema Export]
  I --&amp;gt; J[Integrations&lt;br /&gt;IDEs &amp;amp; Build Systems]
&lt;/div&gt;

&lt;h2 id=&quot;2024-lessons-from-a-fragile-prototype&quot;&gt;2024: Lessons from a Fragile Prototype&lt;/h2&gt;

&lt;p&gt;MrDocs entered 2024 as a proof of concept built for Boost.URL. It could document one or two curated codebases and produce AsciiDoc pages for Antora, but the workflow stopped there. The CLI exposed only the scenarios we needed. Configuration options lived in internal notes. The only dependable build path was the script sequence we used inside the Alliance. External users hit errors and missing options almost immediately.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stability was just as fragile:&lt;/strong&gt; We had no &lt;strong&gt;sanitizers&lt;/strong&gt;, no &lt;strong&gt;warnings-as-errors&lt;/strong&gt;, and inconsistent &lt;strong&gt;CI hardware&lt;/strong&gt;. The binaries crashed as soon as they saw unfamiliar code. The pipeline worked only when the input looked like Boost.URL. Point it at slightly different code patterns and it would segfault. Each feature landed as a custom patch, so logic duplicated across generators, and fixing one path broke another.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Early releases:&lt;/strong&gt; Release &lt;code&gt;v0.0.1&lt;/code&gt; captured that prototype: the early Handlebars engine, the HTML generator, the DOM refactor, and a list of APIs that only the core team could drive. &lt;code&gt;v0.0.2&lt;/code&gt; added structured configuration, automatic &lt;code&gt;compile_commands.json&lt;/code&gt;, and better SFINAE handling, but the tool was still insider-only.&lt;/p&gt;
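&lt;p&gt;For context, &lt;code&gt;compile_commands.json&lt;/code&gt; is the standard Clang compilation database: a JSON array with one entry per translation unit, recording where and how that file is compiled. A minimal example (paths and flags illustrative):&lt;/p&gt;

```json
[
  {
    "directory": "/home/user/boost-url/build",
    "command": "clang++ -std=c++17 -Iinclude -c ../src/url.cpp",
    "file": "../src/url.cpp"
  }
]
```

&lt;p&gt;Generating this file automatically meant users no longer had to hand-assemble compile flags before running the tool.&lt;/p&gt;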

&lt;p&gt;&lt;strong&gt;Leadership transition:&lt;/strong&gt; Late in 2024 I became project lead with two initial priorities: &lt;strong&gt;document the gaps&lt;/strong&gt; and describe the &lt;strong&gt;true limits&lt;/strong&gt; of the system. That set the 2025 baseline—a functional prototype that needed &lt;strong&gt;coherence&lt;/strong&gt;, &lt;strong&gt;reproducibility&lt;/strong&gt;, and &lt;strong&gt;trust&lt;/strong&gt; before it could call itself a product.&lt;/p&gt;

&lt;p&gt;The weaknesses we saw here are exactly what 2025 set out to fix: configuration coherence, generator unification, schema validation, and basic options were all missing. The CLI, configuration files, and code drifted apart. Generators evolved independently with duplicated code and inconsistent naming. Editors had no schema to lean on. Extraction rules were ad hoc, which made the output incomplete. CI ran on an improvised matrix with no caching, sanitizers, or coverage, so regressions slipped through. That was the starting point.&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;Summary: 2024 produced a working demo, not a reproducible system. Each success exposed another weak link and clarified what had to change in 2025.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In short:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;2024 left us with a working prototype but no coherent architecture.&lt;/li&gt;
  &lt;li&gt;The system could demonstrate the concept, but not sustain or reproduce it.&lt;/li&gt;
  &lt;li&gt;Every improvement exposed another weak link, and every success demanded more structure than the system was built to handle.&lt;/li&gt;
  &lt;li&gt;It was a year of learning by exhaustion—and setting the stage for everything that came next.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Key 2024 checkpoints align with the timeline below:&lt;/p&gt;

&lt;div class=&quot;mermaid&quot;&gt;
%%{init: {&quot;theme&quot;: &quot;base&quot;, &quot;themeVariables&quot;: {&quot;primaryColor&quot;: &quot;#f7f9ff&quot;, &quot;primaryBorderColor&quot;: &quot;#9aa7e8&quot;, &quot;primaryTextColor&quot;: &quot;#1f2a44&quot;, &quot;lineColor&quot;: &quot;#b4bef2&quot;, &quot;secondaryColor&quot;: &quot;#fbf8ff&quot;, &quot;tertiaryColor&quot;: &quot;#ffffff&quot;, &quot;fontSize&quot;: &quot;14px&quot;}}}%%
timeline
  title Prototypes
  2024 Q1 : Boost.URL showcase
  2024 Q2 : CLI gaps
  2024 Q3 : Config + SFINAE fixes
  2024 Q4 : Leadership transition
&lt;/div&gt;

&lt;h2 id=&quot;2025-from-prototype-to-mvp&quot;&gt;2025: From Prototype to MVP&lt;/h2&gt;

&lt;p&gt;I started the year with a gap analysis that compared MrDocs to other C++ documentation pipelines. From that review I defined the minimum viable product and three priority tracks. &lt;strong&gt;Usability&lt;/strong&gt; covered workflows and surface area that make adoption simple. &lt;strong&gt;Stability&lt;/strong&gt; covered deterministic behavior, proper data structures, and CI discipline. &lt;strong&gt;Foundation&lt;/strong&gt; covered configuration and data models that keep code, flags, and documentation aligned. The 2025 releases followed those tracks and turned MrDocs from a proof of concept into a tool that other teams can adopt.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;v0.0.3 — Consistency.&lt;/strong&gt; We replaced ad-hoc behavior with a coherent system: a single source of truth for configuration kept CLI, config files, and docs in sync; generators and templates were unified so changes propagate by design; core semantic extraction (e.g., concepts, constraints, SFINAE) became reliable; and CI hardened around reproducible, tested outputs across HTML and Antora.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;v0.0.4 — Foundation.&lt;/strong&gt; We introduced precise warning controls and a family of &lt;code&gt;extract-*&lt;/code&gt; options to match established tooling, added a JSON Schema for configuration (enabling editor validation/autocomplete), delivered a robust reference system for documentation comments, brought initial inline formatting to generators, and simplified onboarding with a cross-platform bootstrap script. CI gained sanitizers, coverage checks, and modern compilers.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;v0.0.5 — Stabilization.&lt;/strong&gt; We redesigned documentation metadata to support recursive inline elements, enforced safer polymorphic types with optional references and non-nullable patterns, and added user-facing improvements (sorting, automatic compilation database detection, quick reference indices, improved namespace/overload grouping, LLDB formatters). The website and documentation UI were refreshed for accessibility and responsiveness, new demos (including self-documentation) were published, and CI was further tightened with stricter policies and cross-platform bootstrap enhancements.&lt;/li&gt;
&lt;/ul&gt;
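&lt;p&gt;The editor-validation benefit of the v0.0.4 configuration schema is easy to picture: a JSON Schema describes each option, so editors can autocomplete keys and flag typos before the tool ever runs. The snippet below is an illustrative fragment with an invented option, not the published MrDocs schema.&lt;/p&gt;

```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "properties": {
    "multipage": {
      "type": "boolean",
      "description": "Generate one page per symbol.",
      "default": true
    }
  },
  "additionalProperties": false
}
```

&lt;p&gt;With &lt;code&gt;additionalProperties: false&lt;/code&gt;, a misspelled key is reported immediately instead of being silently ignored.&lt;/p&gt;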

&lt;p&gt;Together, these releases executed the roadmap derived from the initial gap analysis: they &lt;strong&gt;aligned&lt;/strong&gt; the moving parts, &lt;strong&gt;closed&lt;/strong&gt; the most important capability gaps, and delivered a &lt;strong&gt;stable foundation&lt;/strong&gt; that future work can extend without re-litigating fundamentals.&lt;/p&gt;

&lt;div class=&quot;mermaid&quot;&gt;
%%{init: {&quot;theme&quot;: &quot;base&quot;, &quot;themeVariables&quot;: {
  &quot;primaryColor&quot;: &quot;#e4eee8&quot;,
  &quot;primaryBorderColor&quot;: &quot;#affbd6&quot;,
  &quot;primaryTextColor&quot;: &quot;#000000&quot;,
  &quot;lineColor&quot;: &quot;#baf9d9&quot;,
  &quot;secondaryColor&quot;: &quot;#f0eae4&quot;,
  &quot;tertiaryColor&quot;: &quot;#ebeaf4&quot;,
  &quot;fontSize&quot;: &quot;14px&quot;
}}}%%
mindmap
  root((MVP Evolution))
    v0.0.3
      Config sync
      Shared templates
      CI discipline
    v0.0.4
      Warning controls
      Schema
      Bootstrap
    v0.0.5
      Recursive docs
      Nav refresh
      Tooling polish
&lt;/div&gt;

&lt;h2 id=&quot;v003-enforcing-consistency&quot;&gt;v0.0.3: Enforcing Consistency&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;v0.0.3&lt;/code&gt; is where MrDocs stopped being a collection of one-off special cases and became a coherent system. Before this release, features landed in a single generator and drifted from the others; extraction handled only the narrowly requested pattern and crashed on nearby ones; and options were inconsistent—some hard-coded, some missing from CLI/config, with no mechanism to keep code, docs, and flags aligned.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What changed:&lt;/strong&gt; The &lt;code&gt;v0.0.3&lt;/code&gt; release fixes this foundation. We introduced a single source of truth for &lt;strong&gt;configuration options&lt;/strong&gt; with TableGen-style metadata: docs, the config file, and the CLI always stay in sync. We added essential Doxygen-like options to make basic projects immediately usable and filled obvious gaps in symbols and doc comments.&lt;/p&gt;
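&lt;p&gt;The single-source-of-truth idea can be sketched in a few lines: declare each option exactly once, with its type, default, and documentation, and derive the CLI, the config-file defaults, and the reference docs from that one table. This is an illustrative pattern in Python, not MrDocs’ actual TableGen-style implementation; the option names are invented.&lt;/p&gt;

```python
# Sketch of the single-source-of-truth pattern: one options table
# drives the CLI, the config defaults, and the generated docs, so
# the three surfaces can never drift apart.
import argparse

# Every option is declared exactly once (names here are illustrative).
OPTIONS = [
    {"name": "multipage", "type": bool, "default": True,
     "doc": "Generate one page per symbol instead of a single page."},
    {"name": "base-url", "type": str, "default": "",
     "doc": "Base URL prepended to cross-reference links."},
]

def make_cli():
    """Derive the argparse CLI from the options table."""
    parser = argparse.ArgumentParser(prog="docgen")
    for opt in OPTIONS:
        flag = "--" + opt["name"]
        if opt["type"] is bool:
            parser.add_argument(flag, action="store_true",
                                default=opt["default"], help=opt["doc"])
        else:
            parser.add_argument(flag, default=opt["default"], help=opt["doc"])
    return parser

def make_defaults():
    """Derive the config-file defaults from the same table."""
    return {opt["name"]: opt["default"] for opt in OPTIONS}

def make_docs():
    """Derive the reference documentation from the same table."""
    return "\n".join(f"{o['name']}: {o['doc']}" for o in OPTIONS)
```

&lt;p&gt;Adding an option means editing one table entry; the flag, the default, and the documentation line all appear automatically, which is the property the release needed.&lt;/p&gt;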

&lt;p&gt;We implemented metadata extraction for &lt;strong&gt;core symbol types&lt;/strong&gt; and their information—such as template constraints, &lt;strong&gt;concepts&lt;/strong&gt;, and &lt;strong&gt;automatic SFINAE&lt;/strong&gt; detection. We &lt;strong&gt;unified generators&lt;/strong&gt; and templates so changes propagate by design, added &lt;strong&gt;tagfile support&lt;/strong&gt; and “lightweight reflection” to documentation comments as &lt;strong&gt;lazy DOM objects&lt;/strong&gt; and arrays, and &lt;strong&gt;extended Handlebars&lt;/strong&gt; to power the new generators. These features allowed us to create the initial version of the &lt;strong&gt;website&lt;/strong&gt; and ensure the documentation is always in sync.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Build and testing discipline:&lt;/strong&gt; CI, builds, and tests were hardened. All generators were now tested, the &lt;strong&gt;LLVM caching&lt;/strong&gt; systems were improved, and we launched our first &lt;strong&gt;macOS release&lt;/strong&gt; (important for teams working on Antora UI bundles). This long tail of performance, correctness, and safety work turned “works on my machine” into repeatable, adoptable output across HTML and Antora.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;v0.0.3&lt;/code&gt; was the inflection point. For the first time, developers could depend on consistent configuration, &lt;strong&gt;shared templates&lt;/strong&gt;, and predictable behavior across generators. It aligned internal tools, eliminated duplicated effort, and replaced trial-and-error debugging with &lt;strong&gt;reproducible builds&lt;/strong&gt;. Every improvement in later versions built on this foundation.&lt;/p&gt;

&lt;details&gt;
  &lt;summary&gt;Categorized improvements for v0.0.3&lt;/summary&gt;

  &lt;ul&gt;
    &lt;li&gt;&lt;strong&gt;Configuration Options&lt;/strong&gt;: enforcing consistency, reproducible builds, and transparent reporting
      &lt;ul&gt;
        &lt;li&gt;Enforce configuration options are in sync with the JSON source of truth (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/a1fb8ec6f23ef0802626329d7ab1e5c4635c52a7&quot; title=&quot;refactor(generate-config-info): normalization via visitor&quot;&gt;a1fb8ec6&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/9daf71fe0539a3a6b926560a15e65fdbd6343356&quot; title=&quot;refactor: info nodes configuration file&quot;&gt;9daf71fe&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;File and symbol filters (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/1b67a847db83f329af6cb9f059da7fa071939593&quot; title=&quot;feat: file and symbol filters&quot;&gt;1b67a847&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/b352ba223db0ad0b3d5f7283072b5dffb95eab1e&quot; title=&quot;feat: symbol filters listed on docs&quot;&gt;b352ba22&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Reference and symbol configuration (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/a3e4477f699e1c5c4d489239ad559f9d51823272&quot; title=&quot;feat: reference, symbol options&quot;&gt;a3e4477f&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/30eaabc9a28aa3282bbe9e5b0c8b0e4a2c2c817f&quot; title=&quot;docs: reference, symbol options&quot;&gt;30eaabc9&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Extraction options (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/41411db2848e1fab628dc62ee2e1831628b5d4c7&quot; title=&quot;feat: extraction options&quot;&gt;41411db2&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/1214d94bcf3597bd69caacd5b2648f677d4d197d&quot; title=&quot;docs: extraction options&quot;&gt;1214d94b&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Reporting options (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/f994e47e318d852cc17cd026f7d7cdbcf3df0c5f&quot; title=&quot;feat: reporting options&quot;&gt;f994e47e&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/0dd9cb45cf0168dec028aeb276bd03a419ba3a12&quot; title=&quot;docs: reporting options&quot;&gt;0dd9cb45&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Configuration structure (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/c8662b35fc85dc142f0694f299bb000a0f8899be&quot; title=&quot;feat: use structured information for configuration&quot;&gt;c8662b35&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/dcf5beef5a4b8ea75b24364b9c8a8f2f56d5e6c8&quot; title=&quot;feat: generate config documentation&quot;&gt;dcf5beef&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/4bd3ea42420f20b6a45c545e7b61396567c3201f&quot; title=&quot;docs: configuration schema&quot;&gt;4bd3ea42&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;CLI workflows (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/a2dc4c7883917025f0b63b227be7476f3986fd1d&quot; title=&quot;feat: CLI orchestrator improvements&quot;&gt;a2dc4c78&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/3c0f90df53794a02d3c53d25aa4fa5c8a69fbaad&quot; title=&quot;docs: CLI quick reference&quot;&gt;3c0f90df&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Warnings (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/4eab1933ff58330fb2c6753a648a26fba3038118&quot; title=&quot;docs: warnings&quot;&gt;4eab1933&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/5e586f2b03dd7b1eb5a45e51c904d8cbf4f63661&quot; title=&quot;feat: warnings&quot;&gt;5e586f2b&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/0e2dd713ebde919bf0ebc231d9a5795eb99b0d25&quot; title=&quot;feat: warning when configuration references missing include directories&quot;&gt;0e2dd713&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;SettingsDB (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/225b2d50835485b746c766df8993e1bb66938d17&quot; title=&quot;feat: settings DB&quot;&gt;225b2d50&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/51639e77b629f00c02aa11afe41a01e12804ef63&quot; title=&quot;feat: settings db generator&quot;&gt;51639e77&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Deterministic configuration (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/b544974105efc225af0af7f9952ef96338fe4c44&quot; title=&quot;feat: deterministic configuration order&quot;&gt;b5449741&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Global configuration documentation (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/ec3dbf5c3d72b6a3cee6bea66f3002c59b398b80&quot; title=&quot;docs: global configuration reference&quot;&gt;ec3dbf5c&lt;/a&gt;)&lt;/li&gt;
      &lt;/ul&gt;
    &lt;/li&gt;
    &lt;li&gt;&lt;strong&gt;Generators&lt;/strong&gt;: unification, new features, and early refactoring
      &lt;ul&gt;
        &lt;li&gt;Antora/HTML generator consistency (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/e674182fd5b72a91f7acd74d2f93df13d1d604b3&quot; title=&quot;refactor: antora/HTML generator consistency&quot;&gt;e674182f&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/82e86a6cb1ced9c8aca8024f6314d1b4089f7cbd&quot; title=&quot;feat: unify Antora and HTML generation&quot;&gt;82e86a6c&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/9154b9c5957e4fa8aa4ad918b6d9e9cb61a2a08d&quot; title=&quot;feat: Antora generator templates&quot;&gt;9154b9c5&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;HTML generator improvements (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/a28cb2f7e2df935295b041b30c89ea2f0f7316a3&quot; title=&quot;feat: HTML generator improvements&quot;&gt;a28cb2f7&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/064ce55a568bf8adca76a56c16b918836147cee0&quot; title=&quot;feat(Handlebars): html generators&quot;&gt;064ce55a&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/5f6665d8f8c0c54f1a77a4a6d9447bb7a8c9e968&quot; title=&quot;feat: html nav helper&quot;&gt;5f6665d8&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Documentation for generators (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/2382e8cf095d8241d745e381042ec9cdb15f347d&quot; title=&quot;docs(generators): HTML and Antora&quot;&gt;2382e8cf&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/646a1e5bae94b295ffdbbe07d0a7de618f2ab422&quot; title=&quot;docs: Antora generator docs&quot;&gt;646a1e5b&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Supporting new output formats (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/58a79f748dcefc4a6d561755a60f012f921985fe&quot; title=&quot;feat: generator registry&quot;&gt;58a79f74&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/271dde577da0c48f19c6d7dce39ed7e827642850&quot; title=&quot;feat: xml generator&quot;&gt;271dde57&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/9d9f6652c8f247512c605bec097c1fd1f79afb57&quot; title=&quot;feat: xml generator docs&quot;&gt;9d9f6652&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Handlebars improvements (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/ebf4dbebc4b550321d0119b3372d856e56f5e41f&quot; title=&quot;feat: Handlebars improvements&quot;&gt;ebf4dbeb&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/be76fc073a95fdd2b4f69d0d68d03355e5caa0d1&quot; title=&quot;feat: handlebars helpers documentation&quot;&gt;be76fc07&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Generator tooling (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/00fc84cff9390743ecc1ff87f4d49d68e19698d7&quot; title=&quot;feat: generator tests&quot;&gt;00fc84cf&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/6a69747d86ea7117de64a559211a96d792f8f83a&quot; title=&quot;feat: generator harness&quot;&gt;6a69747d&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Navigation helpers (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/fdccad42c85358aed91c318ed3daa9d1113facde&quot; title=&quot;feat: navigation helpers&quot;&gt;fdccad42&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;DOM optimizations (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/9b41d2e44fcb17d383c8d926c9988ccc381315d7&quot; title=&quot;feat: DOM optimizations&quot;&gt;9b41d2e4&lt;/a&gt;)&lt;/li&gt;
      &lt;/ul&gt;
    &lt;/li&gt;
    &lt;li&gt;&lt;strong&gt;Libraries and metadata&lt;/strong&gt;: unification, fixes, and extraction enhancements
      &lt;ul&gt;
        &lt;li&gt;Info node visitor and traversal improvements (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/be86a08d4df00800004337b52844af1f8d76f9fb&quot; title=&quot;feat: info node visitor improvements&quot;&gt;be86a08d&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/58ab5a5ea28200bf26be8314ebb677cb5b87f106&quot; title=&quot;feat: traversal improvements&quot;&gt;58ab5a5e&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Metadata consistency (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/544ee37d11fa30537642abff3cf39e4beab8a7e2&quot; title=&quot;feat: metadata consistency&quot;&gt;544ee37d&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/62f8a2bd3f52eef902bb47e8106d3b8cf886fbac&quot; title=&quot;feat: metadata refactor&quot;&gt;62f8a2bd&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/bd9c704f87f40812d2b176143e7f24cc786ca7f0&quot; title=&quot;feat: metadata extraction&quot;&gt;bd9c704f&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Template and concept support (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/4b0b4a7198a270e21d73f6c024d0d3c6cf6f8bbf&quot; title=&quot;feat: concept extraction&quot;&gt;4b0b4a71&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/57cf74de0a87fd29496b8aa00f9b355a51443ed6&quot; title=&quot;feat: SFINAE detection improvements&quot;&gt;57cf74de&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/92aa76a4529919831e3e2b8802e9b47b68d5d447&quot; title=&quot;feat: template constraints extraction&quot;&gt;92aa76a4&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Symbol resolution and references (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/f64d4a06c17782fb8f75309cba3138ff9aa12f7d&quot; title=&quot;feat: symbol resolution improvements&quot;&gt;f64d4a06&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/aa9333d4c2eab4cc02c33ad4c7a0f8fb2c7cee25&quot; title=&quot;feat: reference handling improvements&quot;&gt;aa9333d4&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Documentation improvements (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/5d3f21c8c8d8235f57deeef78d9e4eab4607c6f9&quot; title=&quot;docs: metadata documentation&quot;&gt;5d3f21c8&lt;/a&gt;)&lt;/li&gt;
      &lt;/ul&gt;
    &lt;/li&gt;
    &lt;li&gt;&lt;strong&gt;Website and Documentation&lt;/strong&gt;: turning features into a showcase and simplifying workflows
      &lt;ul&gt;
        &lt;li&gt;Create website (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/05400c3c42c85c31a892d763cddcb2b562205c10&quot; title=&quot;docs: website landing page&quot;&gt;05400c3c&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/8fba2020cb971722fcb4c7942d11cd8f1cfcd866&quot; title=&quot;docs: landing page download link&quot;&gt;8fba2020&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Use the new features to create an HTML panel demos workflow (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/12ceadee834e3dbb133f6e5ed24f6d2aafacbdc3&quot; title=&quot;docs: website panels use embedded HTML&quot;&gt;12ceadee&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/d38d3e1a59983b09405b9accb47abb0f7d40a9d7&quot; title=&quot;docs(demos): enable HTML demos&quot;&gt;d38d3e1a&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/c46c4a9179abb7701a2f1c6f9446f29caae64350&quot; title=&quot;ci: enable html demos&quot;&gt;c46c4a91&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Unify Antora author mode playbook (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/999ea4f3468ba3ad920b0cb91b56b5227c48d5a2&quot; title=&quot;docs: unify author mode playbook&quot;&gt;999ea4f3&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Generator use cases and trade-offs (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/2307ca6aba463bc67417e929563d63fb037fe3b4&quot; title=&quot;docs(generators): use cases and trade-offs&quot;&gt;2307ca6a&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Correctness and simplification (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/4d884f43470596c69e500fe3ba55a2f504412056&quot; title=&quot;docs: simplify demos table&quot;&gt;4d884f43&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/55214d7242eaf4a5a8c5746d6b8779e82dbaeaf7&quot; title=&quot;docs: releases extension allows CI authentication and retries&quot;&gt;55214d72&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/b078beadd046bed6806604b96196f91a234e1140&quot; title=&quot;docs(Scope): include lookups in documentation&quot;&gt;b078bead&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/d8b7fcf4245e98d5935ebbb05c02d0aba62e3faa&quot; title=&quot;docs(usage): cmake example uses TMP_CPP_FILE&quot;&gt;d8b7fcf4&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/96484836ef67bba4b54df5f780c5caac3a255f68&quot; title=&quot;docs: libc++ compiler requirements&quot;&gt;96484836&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/62f361fb50ebb5c09e901b8a16b4cfa992bffcb1&quot; title=&quot;ci: remove info node support warnings&quot;&gt;62f361fb&lt;/a&gt;)&lt;/li&gt;
      &lt;/ul&gt;
    &lt;/li&gt;
    &lt;li&gt;&lt;strong&gt;Build, Testing, and Releases&lt;/strong&gt;: strengthening CI, improving LLVM caching workflow, and stabilizing releases
      &lt;ul&gt;
        &lt;li&gt;Templates are tested with golden tests (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/2bc09e65c916a0701ed3bf09ef11a7fb15d0abf1&quot; title=&quot;test: asciidoc golden tests&quot;&gt;2bc09e65&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/9eece731f3865f2ed50faf3ee36c8c308b1ff90&quot; title=&quot;test: html golden tests&quot;&gt;9eece731&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;LLVM caches and runners improvements (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/4c14e875b06ad995ed3206cd2979dea13f004bd6&quot; title=&quot;ci: no fallback for GHA LLVM cache&quot;&gt;4c14e875&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/bd54dc7c2562ec751e42fb161116468a4838cb6d&quot; title=&quot;ci: unify llvm parameters&quot;&gt;bd54dc7c&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/3d92071a351fb1ee59011d55ca90147762c62bb8&quot; title=&quot;ci: intermediary steps use actions&quot;&gt;3d92071a&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/8537d3dbc71878a3ea6e176b0d25af8d0d51e799&quot; title=&quot;ci: resolve llvm-root for cache@v4&quot;&gt;8537d3db&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/f3b33a473eb9b4d3abf6591e8aa49401efae7ba9&quot; title=&quot;ci(llvm-matrix): filter uses Node.js 20&quot;&gt;f3b33a47&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/5982cc7e8bcaeb67fe5287c507636302416a7613&quot; title=&quot;ci(llvm-releases): handle empty llvm releases matrix&quot;&gt;5982cc7e&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/93487669932e940115f9c6d827e301d41d2e9616&quot; title=&quot;ci(releases): test all releases&quot;&gt;93487669&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Enable macOS workflow (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/390159e34a91074627c333b6f0d09a25bf9d5452&quot; title=&quot;ci: enable macos&quot;&gt;390159e3&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Stabilize artifacts (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/5e0f628e5b1cdb06a3dd260e0e42069f12733353&quot; title=&quot;ci(releases): antora includes stacktraces&quot;&gt;5e0f628e&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/d1c3566ed55f0dfc2225d3f67224291367aa00f3&quot; title=&quot;ci: fix package asset uploads&quot;&gt;d1c3566e&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/62736e456f1e9e089822930e10a322ebadc89730&quot; title=&quot;ci: demos artifact path is relative&quot;&gt;62736e45&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Tests support individual file inputs, which considerably improved local test runs (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/75b1bc52d35648890f21c397fcfbcfb570d43d97&quot; title=&quot;Support file inputs&quot;&gt;75b1bc52&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Performance, correctness, and safety (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/a820ad790d4fb943516d9f676bf8d96e9d7fd374&quot; title=&quot;ci(llvm-releases): ssh uses relative user paths&quot;&gt;a820ad79&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/43e5f2520462b9ab2fd5c9d6558d3c299c1a4b1a&quot; title=&quot;ci: prevent redundant builds&quot;&gt;43e5f252&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/a382820f3adba34ca9b6d6c48924ee72fb6291b0&quot; title=&quot;ci: release packaging improvements&quot;&gt;a382820f&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/fbcb5b2d445df1fa746aac1d4735d10d5451d70f&quot; title=&quot;ci: move sanitizer workflows&quot;&gt;fbcb5b2d&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/6a2290cbde99556ac06f94c0c1e1cd2ea9f29a44&quot; title=&quot;ci: enforce formatting on generators&quot;&gt;6a2290cb&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/49f4125ff42e0e1d80a55df6d49c6940700ebab7&quot; title=&quot;ci: disable failing llvm tests temporarily&quot;&gt;49f4125f&lt;/a&gt;)&lt;/li&gt;
      &lt;/ul&gt;
    &lt;/li&gt;
  &lt;/ul&gt;

&lt;/details&gt;

&lt;h2 id=&quot;v004-establishing-the-foundation&quot;&gt;v0.0.4: Establishing the Foundation&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;v0.0.4&lt;/code&gt; completed the core capabilities we need for production. With the moving parts aligned in &lt;code&gt;v0.0.3&lt;/code&gt;, this release focused on the fundamentals. It added consistent &lt;strong&gt;warning options&lt;/strong&gt;, &lt;strong&gt;extraction controls&lt;/strong&gt; that match established tools, &lt;strong&gt;schema support&lt;/strong&gt; for IDE auto-completion, a complete &lt;strong&gt;reference system&lt;/strong&gt; for doc comments, and initial &lt;strong&gt;inline formatting&lt;/strong&gt; in the generators. The &lt;strong&gt;bootstrap script&lt;/strong&gt; became a one-step path to a working build. We also hardened the pipeline with modern &lt;strong&gt;CI&lt;/strong&gt; practices—sanitizers, coverage integration, and standardized presets.&lt;/p&gt;
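&lt;p&gt;As a rough sketch, the extraction controls might appear in a &lt;code&gt;mrdocs.yml&lt;/code&gt; along these lines. The &lt;code&gt;extract-*&lt;/code&gt; option names come from this release&amp;#39;s changelog; the values shown are illustrative, not authoritative defaults:&lt;/p&gt;

```yaml
# Illustrative mrdocs.yml fragment. Option names appear in the v0.0.4
# changelog; the values here are examples only, not the shipped defaults.
extract-public: true       # document public members
extract-protected: false   # skip protected members
extract-private: false     # skip private members, matching established tools
extract-inline: true       # whether inline entities are extracted (illustrative)
```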

&lt;details&gt;
  &lt;summary&gt;Categorized improvements for v0.0.4&lt;/summary&gt;

  &lt;ul&gt;
    &lt;li&gt;&lt;strong&gt;Configuration and Extraction&lt;/strong&gt;: structured configuration, extraction controls, and schema validation
      &lt;ul&gt;
        &lt;li&gt;Configuration schema (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/d9517e1d37c61b45a8df89d647abb12ca0582788&quot; title=&quot;feat: generate JSON schema for config&quot;&gt;d9517e1d&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/5f846c1c1d4be0aa18862b08d4f39b8a1c398058&quot; title=&quot;feat: config schema docs&quot;&gt;5f846c1c&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/ffa0d1a661cbf7ef6b49666f598d33490af65f05&quot; title=&quot;feat: schema validation&quot;&gt;ffa0d1a6&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Extraction filters (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/0a60bb989b1e60292b4e6fc8b5517fcd9e237ebd&quot; title=&quot;feat: extraction filter improvements&quot;&gt;0a60bb98&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/a7d7714db8268c6e1df4032ff889473f6d429847&quot; title=&quot;feat: extraction filters doc updates&quot;&gt;a7d7714d&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Reference configuration (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/d18a8ab3b0eeabac8d0a2ed880c1c1f196fedfbd&quot; title=&quot;feat: reference configuration updates&quot;&gt;d18a8ab3&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Documentation metadata (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/6676c1e8ed7d1f6d3828bcaf8b28577c88eb02e5&quot; title=&quot;feat: documentation metadata improvements&quot;&gt;6676c1e8&lt;/a&gt;)&lt;/li&gt;
      &lt;/ul&gt;
    &lt;/li&gt;
    &lt;li&gt;&lt;strong&gt;Warnings and Reporting&lt;/strong&gt;: consistent governance with CLI parity
      &lt;ul&gt;
        &lt;li&gt;Warning controls (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/2a29f0a04824c5b3d70755029766b1d19b8c5bcd&quot; title=&quot;feat: warning controls&quot;&gt;2a29f0a0&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/6d3c1f47d662d0ed9264f10dd3d9cc3229a48bc3&quot; title=&quot;docs: warning controls&quot;&gt;6d3c1f47&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Extract options (&lt;code&gt;extract-{public,protected,private,inline}&lt;/code&gt;) (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/aa5a6be3d1f9a87d2fd1941f0904ffa52c57d205&quot; title=&quot;feat: extract options align with Doxygen defaults&quot;&gt;aa5a6be3&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;CLI defaults (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/d85439c399c88113a69e01358fd9a63a64c6af38&quot; title=&quot;feat: CLI defaults and reporting updates&quot;&gt;d85439c3&lt;/a&gt;)&lt;/li&gt;
      &lt;/ul&gt;
    &lt;/li&gt;
    &lt;li&gt;&lt;strong&gt;Generators&lt;/strong&gt;: Javadoc, inline formatting, and reference improvements
      &lt;ul&gt;
        &lt;li&gt;Documentation reference system (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/4b430f9b1bd1c6b7df49bb004bb7961c6f215047&quot; title=&quot;feat: documentation reference system&quot;&gt;4b430f9b&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/73489e2b4be42d2b2c26cb013fe532d3fb4e9ff4&quot; title=&quot;docs: reference system docs&quot;&gt;73489e2b&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Javadoc metadata (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/8dd3af67bbbf0a0f1e57d9f351d10d160dfde0f4&quot; title=&quot;feat: Javadoc metadata extraction&quot;&gt;8dd3af67&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/f7e59d4c61d77c2587da9ba0fa808c5b1e366f3b&quot; title=&quot;docs: Javadoc metadata reference&quot;&gt;f7e59d4c&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Inline formatting (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/5c7490a3d5388551e68f6f021caa6e741d0f2f86&quot; title=&quot;feat: inline formatting support&quot;&gt;5c7490a3&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/d1d807456573e9350a13e01da27a8e8fc3d317fc&quot; title=&quot;fix: inline formatting edge cases&quot;&gt;d1d80745&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;XML generator alignment (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/9867e0d25fb16973109fec922dd068991de3d5af&quot; title=&quot;feat: XML generator schema alignment&quot;&gt;9867e0d2&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/0f890f2c1d2d471ffe9343d7b15b731afc93e8e2&quot; title=&quot;fix: XML generator synchronizes metadata&quot;&gt;0f890f2c&lt;/a&gt;)&lt;/li&gt;
      &lt;/ul&gt;
    &lt;/li&gt;
    &lt;li&gt;&lt;strong&gt;Build and CI&lt;/strong&gt;: sanitizers, coverage, and reproducible builds
      &lt;ul&gt;
        &lt;li&gt;Sanitizer integration (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/6257c74758f0f382d7c4d6cd430144bd7e7a1740&quot; title=&quot;ci: add asan clang Linux job&quot;&gt;6257c747&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/88954d7f00b1d7fb8de8824e422ddc8fd7081f39&quot; title=&quot;ci: add msan Linux job&quot;&gt;88954d7f&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Coverage reporting (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/bf195759192109cee82097cce91440d0155616b5&quot; title=&quot;ci: enable coverage validation for PRs&quot;&gt;bf195759&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Relocatable build (&lt;code&gt;std::format&lt;/code&gt;) (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/7b871032ae0fd34e69370e0ab45e910255f8f1c9&quot; title=&quot;feat: switch to std::format for relocatable build&quot;&gt;7b871032&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Bootstrap modernization (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/3eec9a48e7df379a43c2abaea65a74acc9bd733f&quot; title=&quot;build(bootstrap): find_tool also looks at prefixes&quot;&gt;3eec9a48&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/71afb87b3e3c397d0681da961f754cdfb50d4aad&quot; title=&quot;build(bootstrap): run configurations create paths with path.join&quot;&gt;71afb87b&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/524e7923750f2dd8e8e19d11cc468fa8dd49f70a&quot; title=&quot;build(bootstrap): visual studio run configurations and tasks&quot;&gt;524e7923&lt;/a&gt;)&lt;/li&gt;
      &lt;/ul&gt;
    &lt;/li&gt;
  &lt;/ul&gt;

&lt;/details&gt;

&lt;h2 id=&quot;v005-stabilization-and-public-readiness&quot;&gt;v0.0.5: Stabilization and Public Readiness&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;v0.0.5&lt;/code&gt; marked the transition toward a &lt;strong&gt;sustained development model&lt;/strong&gt; and prepared the project for &lt;strong&gt;handoff&lt;/strong&gt;. This release focused on &lt;strong&gt;presentation&lt;/strong&gt;, &lt;strong&gt;polish&lt;/strong&gt;, and &lt;strong&gt;reliability&lt;/strong&gt;—ensuring that MrDocs was ready not only for internal use but for public visibility. During this period, we expanded the set of &lt;strong&gt;public demos&lt;/strong&gt;, refined the &lt;strong&gt;website and documentation&lt;/strong&gt;, and stabilized the &lt;strong&gt;infrastructure&lt;/strong&gt; to support a growing user base. The goal was to leave the project in a state where it could continue evolving smoothly, with a stable core, clear development practices, and a professional public face.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Community and visibility&lt;/strong&gt;: Beyond the commits, this release reflected broader &lt;strong&gt;activity around the project&lt;/strong&gt;. We generated and published several &lt;strong&gt;new demos&lt;/strong&gt;, many of which revealed &lt;strong&gt;integration issues&lt;/strong&gt; that were subsequently fixed. As more external users began adopting MrDocs, the &lt;strong&gt;feedback loop accelerated&lt;/strong&gt;: bug reports, feature requests, and real-world &lt;strong&gt;edge cases&lt;/strong&gt; guided much of the work. New contributors joined the team, collaboration became more distributed, and visibility increased. Around the same time, I introduced MrDocs to developers at &lt;strong&gt;CppCon 2025&lt;/strong&gt;, where it received strongly positive feedback from library authors testing it on their own projects. The tool was beginning to gain recognition as a &lt;strong&gt;viable, modern alternative to Doxygen&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Technical progress&lt;/strong&gt;: This release focused on correctness. We redesigned the documentation comment data structures to support &lt;strong&gt;recursive inline elements&lt;/strong&gt; and render &lt;strong&gt;Markdown and HTML-style formatting&lt;/strong&gt; correctly. We moved to &lt;strong&gt;non-nullable polymorphic types&lt;/strong&gt; and &lt;strong&gt;optional references&lt;/strong&gt; so that invariants fail at compile time rather than at runtime. User-facing updates included new &lt;strong&gt;sorting options&lt;/strong&gt;, &lt;strong&gt;automatic compilation database detection&lt;/strong&gt;, a &lt;strong&gt;quick reference index&lt;/strong&gt;, broader namespace and overload grouping, and &lt;strong&gt;LLDB formatters&lt;/strong&gt; for Clang and MrDocs symbols. We &lt;strong&gt;refreshed the website and documentation UI&lt;/strong&gt; for accessibility and responsiveness, added new &lt;strong&gt;demos&lt;/strong&gt; (including the MrDocs self-reference), and tightened CI with more sanitizers, stricter warning policies, and cross-platform bootstrap improvements.&lt;/p&gt;

&lt;p&gt;Together, these improvements completed the transition from a &lt;strong&gt;developing prototype&lt;/strong&gt; to a &lt;strong&gt;dependable product&lt;/strong&gt;. &lt;code&gt;v0.0.5&lt;/code&gt; established a &lt;strong&gt;stable foundation&lt;/strong&gt; for others to build on—&lt;strong&gt;polished&lt;/strong&gt;, &lt;strong&gt;documented&lt;/strong&gt;, and &lt;strong&gt;resilient&lt;/strong&gt;—so future releases could focus on extending capabilities rather than consolidating them. With this release, the project reached a point where the &lt;strong&gt;handoff could occur naturally&lt;/strong&gt;, closing one chapter and opening another.&lt;/p&gt;

&lt;details&gt;
  &lt;summary&gt;Categorized improvements for v0.0.5&lt;/summary&gt;

  &lt;ul&gt;
    &lt;li&gt;&lt;strong&gt;Metadata&lt;/strong&gt;: documentation inlines and safety improvements
      &lt;ul&gt;
        &lt;li&gt;Recursive documentation inlines (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/51e2b655af43f36bc2fd3e9c369dbd48046d2de6&quot; title=&quot;feat(metadata): support recursive inline elements in documentation&quot;&gt;51e2b655&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Consistent sorting options for members and namespaces (&lt;code&gt;sort-members-by&lt;/code&gt;, &lt;code&gt;sort-namespace-members-by&lt;/code&gt;) (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/f0ba28dd3526144e8053aa01eb1bbe5e90b7a4f3&quot; title=&quot;feat: `sort-members-by` option&quot;&gt;f0ba28dd&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/a0f694dcf6c7d4fd0249f42f91592f65a5d78afd&quot; title=&quot;feat: `sort-namespace-members-by` option&quot;&gt;a0f694dc&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Non-nullable polymorphic types and optional references (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/c9f9ba132627696b2140a62e078ed128edb2ea31&quot; title=&quot;feat(lib): optional nullable traits&quot;&gt;c9f9ba13&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/8ef3ffaf8628f6c1c4109f2600061c7fb3778577&quot; title=&quot;feat(lib): optional references&quot;&gt;8ef3ffaf&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/bd3e1217e60f949c2bbf692917750fac3d9fad11&quot; title=&quot;refactor(lib): use mrdocs::Optional in public API&quot;&gt;bd3e1217&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/afa558a6dd834c10ba4153828d16340304d75c2c&quot; title=&quot;refactor(Corpus): enforce non-optional polymorphic types&quot;&gt;afa558a6&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/6ba8ef6bdc5dcbb60c7b09344d3839bd39e49325&quot; title=&quot;refactor(Corpus): valueless_after_move is asserted&quot;&gt;6ba8ef6b&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Consistent metadata class family hierarchy pattern (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/6d4954975bba75c184393b5d93f3f9f040311ed0&quot;&gt;6d495497&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;MrDocsSettings includes automatic compilation database support (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/9afededbfe293f2e47fa2d7266b80772b0d0cb04&quot; title=&quot;feat: MrDocsSettings compilation database&quot;&gt;9afededb&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/a1f289de6719d8004e11ebd066c3d2a49c4d28d4&quot; title=&quot;fix: use a distinct include guard in MrDocsSettingsDB.hpp&quot;&gt;a1f289de&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Quick reference index (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/68e029c17c51711c982c6e049510c8e47f5e4f66&quot; title=&quot;feat: quick reference index page&quot;&gt;68e029c1&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/940c33f47062b6d8f915bd5e92a3ce6f6e60d774&quot; title=&quot;feat: add close button to docs nav (#1033)&quot;&gt;940c33f4&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Namespace/using/overloads grouping includes using declarations and overloads as shadows (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/69e1c3bcd9607bc2037a50f865ecea976a72f5a6&quot; title=&quot;feat: namespace tranches include using declarations&quot;&gt;69e1c3bc&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/d722b7d09ee20479fcd06726f95376589c39cc85&quot; title=&quot;feat(handlebars): using declaration page includes shadows and briefs&quot;&gt;d722b7d0&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/2b59269cbe74bce8ee261552dd35a72cfb240b20&quot; title=&quot;feat: overload sets as shadow declarations&quot;&gt;2b59269c&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Conditional &lt;code&gt;explicit&lt;/code&gt; clauses in templated methods (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/2bff4e2fbf93e35a5eeb31e0505c0bde9bcf7c6d&quot; title=&quot;feat: conditionally explicit clauses in templated methods&quot;&gt;2bff4e2f&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Destructor overloads supported in class templates (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/336ad3190fac18a69481b166d72b2d647db129c9&quot; title=&quot;feat: destructor overloads in class templates&quot;&gt;336ad319&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Using declarations include all shadow variants (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/88a1cebf1e62b87551cf2fd6ec5e1705d3a4e34a&quot; title=&quot;test: test cases for all using declaration variants&quot;&gt;88a1cebf&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/9253fd8f228208d17d94a5dc34a75c8c6c5c542d&quot; title=&quot;test: using declaration shadows only include previous declarations&quot;&gt;9253fd8f&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/a7d5cf6a00874addd313f74b7e833e0df6df1aaa&quot; title=&quot;test: using declaration with mixed shadows&quot;&gt;a7d5cf6a&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;&lt;code&gt;show-enum-constants&lt;/code&gt; option (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/07b69e1c92eee1b8d4176a4076161c10759d8aaf&quot; title=&quot;feat: show-enum-constants option&quot;&gt;07b69e1c&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Custom LLDB formatters for Clang and MrDocs symbols (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/069bd8f4f6aa85f24c5d938542e42791ee91c46a&quot; title=&quot;feat(lldb): LLDB data formatters&quot;&gt;069bd8f4&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/f83eca17b1ca0ef593fd55373d48f48d101ec2cd&quot; title=&quot;fix(lldb): only handle Info types directly in mrdocs namespace&quot;&gt;f83eca17&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/1b39fdd76abb2d531ab28f46ee086571dd745e44&quot; title=&quot;fix(lldb): clang ast formatters&quot;&gt;1b39fdd7&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/aefc53c7b016d43a46c75b150d41fec2f82f00b4&quot; title=&quot;fix(lldb): consistent &amp;lt;unnamed&amp;gt; clang summary&quot;&gt;aefc53c7&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Performance, correctness, and safety (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/d1788049ffa6b8412869af319820328a05a24536&quot; title=&quot;feat: templates receive config via reflection&quot;&gt;d1788049&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/3bd94cff54039b60217ec14767f359bc54f168d1&quot; title=&quot;refactor(Config): config dom object update function&quot;&gt;3bd94cff&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/8a8115602137641f5dab378f292843bc9ad56f37&quot; title=&quot;fix: overloads finalizer preemptively emplaces members&quot;&gt;8a811560&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/3ff37448d207b89e83a27e3ff58e6401a76eaee3&quot; title=&quot;fix: legible names handle using declarations as shadow&quot;&gt;3ff37448&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/ad1e7baa611ba05f02759347d114b4cdb464a3c4&quot; title=&quot;Remove duplicate template argument list for excluded class template specialization&quot;&gt;ad1e7baa&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/b10b8aa3bd2917e35e480390e2ce47d5b8dc9d48&quot; title=&quot;fix: symbol shadows table has a single column&quot;&gt;b10b8aa3&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/482c0be836921577afa08e62a7ed1d1829fafc9a&quot; title=&quot;refactor: xml generator use config values directly&quot;&gt;482c0be8&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/d66da796fe9877bbece0bec7983b8c25bc16d1f5&quot; title=&quot;fix(handlebars): html code blocks start on the first line&quot;&gt;d66da796&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/ec8daa11085ba58b52628c468f5591ffc0340208&quot; title=&quot;fix(handlebars): starts_with helper validates arguments&quot;&gt;ec8daa11&lt;/a&gt;, &lt;a 
href=&quot;https://github.com/cppalliance/mrdocs/commit/5234b67cd0745048408705c50b2108cf4f09aedd&quot; title=&quot;fix(handlebars): recursively traversed namespaces do not include description&quot;&gt;5234b67c&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/5e879b102c76f416d0a40cae87ef16226ddc1431&quot; title=&quot;fix(handlebars): records include protected base classes&quot;&gt;5e879b10&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/35e14c93f27d323dea675bb83eb78d7077c8ad9d&quot; title=&quot;fix(ci,style): improve asset copying and enhance UI contrast for docs site (#979)&quot;&gt;35e14c93&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/d5a28a8973ef3b6d01f2d48b229c9e666e093d7d&quot; title=&quot;feat(handlebars): final specifier&quot;&gt;d5a28a89&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/6878c199920260f6ccca2f9be0f933bf08318398&quot; title=&quot;fix: `using` synopsis uses the nameinfo only&quot;&gt;6878c199&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/21ce3e74db3ff5fa9f7b0180530b95a3ef32a1d3&quot; title=&quot;fix: std::formatter for clang::mrdocs::SymbolID&quot;&gt;21ce3e74&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/2da2081b0b0724309fc7a68c071e830e7faa2da9&quot; title=&quot;fix: remove an unused `else if` in record.hbs&quot;&gt;2da2081b&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/b528ae11a46f6b8fda4e47a30e32bc8868cc9555&quot; title=&quot;fix: simplify the logic about base classes in record.hbs&quot;&gt;b528ae11&lt;/a&gt;)&lt;/li&gt;
      &lt;/ul&gt;
    &lt;/li&gt;
    &lt;li&gt;&lt;strong&gt;Website and Documentation&lt;/strong&gt;: new demos and a new website
      &lt;ul&gt;
        &lt;li&gt;New demos (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/cfa9eb7d1c7770ba6e1b6d12bf7322cb81afa4d2&quot; title=&quot;docs: fmt demo&quot;&gt;cfa9eb7d&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/1b930b863a7a7a763ef1349f51a1813769a84e41&quot; title=&quot;docs: fmt demo&quot;&gt;1b930b86&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/c18be83e355a1a6bdea95f21f911080869267a07&quot; title=&quot;docs: nlohmann.json demo&quot;&gt;c18be83e&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/177fae4a79f6d8d4665026f25aa2ce2482c59a09&quot; title=&quot;docs: extension sorts demos by release&quot;&gt;177fae4a&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/33275050025921c6aa6c241268899920f456e652&quot; title=&quot;docs: add range-v3 demo&quot;&gt;33275050&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Website and documentation refresh (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/35e14c93f27d323dea675bb83eb78d7077c8ad9d&quot; title=&quot;fix(ci,style): improve asset copying and enhance UI contrast for docs site (#979)&quot;&gt;35e14c93&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/a643774216d553b7f0f16c3e9b7380c17da7f0c1&quot; title=&quot;docs: redesign landing page&quot;&gt;a6437742&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Self-documentation (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/f2a5f77eb9d2273a15329f3d5c9963c1f48d9952&quot; title=&quot;docs: MrDocs documents itself&quot;&gt;f2a5f77e&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Antora enhancements (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/5ed0f48fda415df1d3f67bff4c8072921bffeb29&quot; title=&quot;docs: Antora enhancements&quot;&gt;5ed0f48f&lt;/a&gt;)&lt;/li&gt;
      &lt;/ul&gt;
    &lt;/li&gt;
    &lt;li&gt;&lt;strong&gt;Build, Testing, and Releases&lt;/strong&gt;: CI improvements and hardening
      &lt;ul&gt;
        &lt;li&gt;Toolchain and CI hardening (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/6257c74758f0f382d7c4d6cd430144bd7e7a1740&quot; title=&quot;ci: add asan clang Linux job&quot;&gt;6257c747&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/88954d7f00b1d7fb8de8824e422ddc8fd7081f39&quot; title=&quot;ci: add msan Linux job&quot;&gt;88954d7f&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/bf195759192109cee82097cce91440d0155616b5&quot; title=&quot;ci: enable coverage validation for PRs&quot;&gt;bf195759&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/ba0dcfd37dee134f363cd0365d435b39fd6b766b&quot; title=&quot;ci: treat warnings as errors&quot;&gt;ba0dcfd3&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Bootstrap improvements (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/3eec9a48e7df379a43c2abaea65a74acc9bd733f&quot; title=&quot;build(bootstrap): find_tool also looks at prefixes&quot;&gt;3eec9a48&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/71afb87b3e3c397d0681da961f754cdfb50d4aad&quot; title=&quot;build(bootstrap): run configurations create paths with path.join&quot;&gt;71afb87b&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/524e7923750f2dd8e8e19d11cc468fa8dd49f70a&quot; title=&quot;build(bootstrap): visual studio run configurations and tasks&quot;&gt;524e7923&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/4b79ef4136fabd8673d63361a0ba0412ed94330f&quot; title=&quot;build(bootstrap): probe vcvarsall environment&quot;&gt;4b79ef41&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/7d27204ee78255d557a384e4031688fe51a58779&quot; title=&quot;build(bootstrap): Boost documentation run configuration folder&quot;&gt;7d27204e&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/988e9ebc576690c8885def76ee8ec4796764703&quot; title=&quot;build(bootstrap): config info for docs&quot;&gt;988e9ebc&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/94a5b799543e7b62802c8a18ca26ec156086ad24&quot; title=&quot;build(bootstrap): remove dependency build directories after installation&quot;&gt;94a5b799&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/be7332cf2a9b727fc8b4913c8b4303842505caa2&quot; title=&quot;build: presets use optimizeddebug to match bootstrap&quot;&gt;be7332cf&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/4d705c96be5daa974f0fc3417383b86eb3a9608d&quot; title=&quot;build(bootstrap): ensure git symlinks&quot;&gt;4d705c96&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/f48bbd2fc9ee9e77120ed374997ba3ded4a6963d&quot; 
title=&quot;build: bootstrap enables libcxx hardening mode&quot;&gt;f48bbd2f&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/f93634610e131c0ab9ec6c45d4644eed4a16186d&quot; title=&quot;fix: bootstrap uses latest clang include directory&quot;&gt;f9363461&lt;/a&gt;)&lt;/li&gt;
        &lt;li&gt;Performance, correctness, and safety (&lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/5aa714b21e11dbc64e51f81d7097adda59cd7cb4&quot; title=&quot;build: custom target to test all generators&quot;&gt;5aa714b2&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/469f41ee79957525e5fd52e1e3838624d03458f1&quot; title=&quot;remove_bad_files script does not rely on mapfile&quot;&gt;469f41ee&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/629f184895a04117b057602df9785cd23661f139&quot; title=&quot;build: quote genexp for target_include_directories&quot;&gt;629f1848&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/2f0dd8c1c4dfd1a01e9543049ad00ca2bc9df984&quot; title=&quot;ci: antora workflow uses full clone&quot;&gt;2f0dd8c1&lt;/a&gt;, &lt;a href=&quot;https://github.com/cppalliance/mrdocs/commit/acf7c10709a1f3a4436101522d799718415ebad8&quot; title=&quot;ci: debug level for antora generation and copy&quot;&gt;acf7c107&lt;/a&gt;)&lt;/li&gt;
      &lt;/ul&gt;
    &lt;/li&gt;
  &lt;/ul&gt;

&lt;/details&gt;

&lt;h1 id=&quot;2026-beyond-the-mvp&quot;&gt;2026: Beyond the MVP&lt;/h1&gt;

&lt;p&gt;MrDocs now ships a working MVP, but significant &lt;strong&gt;foundational work&lt;/strong&gt; remains. The priority framework is the same: start with &lt;strong&gt;gap analysis&lt;/strong&gt;, shape an &lt;strong&gt;MVP&lt;/strong&gt; (or, at this stage, simply a viable product), and rank follow-on work against that baseline. In 2025 we invested in &lt;strong&gt;presentation&lt;/strong&gt; ahead of &lt;strong&gt;infrastructure&lt;/strong&gt;. That inversion still raises costs: each foundational change forces rework across the user-facing pieces built on top of it.&lt;/p&gt;

&lt;p&gt;I do not know how the leadership model will evolve in 2026. The team might keep a single coordinator or move to shared stewardship. Regardless, the project only succeeds if we continue investing in &lt;strong&gt;foundational capabilities&lt;/strong&gt;. The steps below outline the &lt;strong&gt;recommendations&lt;/strong&gt; I believe will help keep MrDocs &lt;strong&gt;sustainable over the long term&lt;/strong&gt;.&lt;/p&gt;

&lt;script src=&quot;https://cdn.jsdelivr.net/npm/mermaid@11.12.0/dist/mermaid.min.js&quot;&gt;&lt;/script&gt;
&lt;div class=&quot;mermaid&quot;&gt;
%%{init: {&quot;theme&quot;: &quot;base&quot;, &quot;themeVariables&quot;: {
  &quot;primaryColor&quot;: &quot;#f2eadf&quot;,
  &quot;primaryBorderColor&quot;: &quot;#ffe8c6&quot;,
  &quot;primaryTextColor&quot;: &quot;#000000&quot;,
  &quot;lineColor&quot;: &quot;#ffe8c8&quot;,
  &quot;secondaryColor&quot;: &quot;#e8ebf3&quot;,
  &quot;tertiaryColor&quot;: &quot;#eceaf4&quot;,
  &quot;fontSize&quot;: &quot;14px&quot;
}}}%%
mindmap
  root((2026 Priorities))
    Reflection
      Describe symbols
      Shared walkers
    Metadata
      Recursive docs
      Stable names
      Typed expressions
    Extensions
      Script helpers
      Plugin ABI
    Dependencies
      Curated toolchain
      Opt-in stubs
    Community
      Integration demos
      Outreach cadence
&lt;/div&gt;

&lt;h2 id=&quot;strategic-prioritization&quot;&gt;Strategic Prioritization&lt;/h2&gt;

&lt;p&gt;Aligning &lt;strong&gt;priorities&lt;/strong&gt; is itself the highest priority. At the start of my tenure as project lead we followed a strict sequence—&lt;strong&gt;gap analysis&lt;/strong&gt;, then an &lt;strong&gt;MVP&lt;/strong&gt;, then a set of &lt;strong&gt;priorities&lt;/strong&gt;—but that model exposed limitations once work began to land. The &lt;strong&gt;issue tracker&lt;/strong&gt; does not reflect how priorities relate to each other, and as individual tickets close the priority stack does not adjust automatically. The project’s &lt;strong&gt;complexity&lt;/strong&gt; now amplifies the risk: without a clear view of &lt;strong&gt;dependencies&lt;/strong&gt; we can assign a high-value engineer to a task that drags several teammates into the same bottleneck, resulting in net-negative progress. Defining priorities therefore includes understanding the team’s &lt;strong&gt;skills&lt;/strong&gt;, mapping how they &lt;strong&gt;collaborate&lt;/strong&gt;, and making sure no one becomes a &lt;strong&gt;sink&lt;/strong&gt; that blocks everyone else. &lt;strong&gt;Alignment&lt;/strong&gt; across roles remains essential so the plan reflects the people who actually execute it.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;tooling&lt;/strong&gt; already exists to put this into practice. &lt;strong&gt;GitHub&lt;/strong&gt; now lets us mark issues as &lt;strong&gt;blocked by&lt;/strong&gt; or &lt;strong&gt;blocking&lt;/strong&gt; others and model &lt;strong&gt;parent/child relationships&lt;/strong&gt;. We can use those relationships to &lt;strong&gt;reorganize the priorities programmatically&lt;/strong&gt;. Once the relationships are encoded, &lt;strong&gt;priorities gain semantic meaning&lt;/strong&gt; because we can explain why a small ticket matters in the larger story. Priorities become the &lt;strong&gt;byproduct of higher-level goals&lt;/strong&gt;—narratives about the product—rather than a short-term &lt;strong&gt;static wish list&lt;/strong&gt; of individual features.&lt;/p&gt;

&lt;p&gt;We also need to strengthen the &lt;strong&gt;operational tools&lt;/strong&gt; that keep the team coordinated. &lt;strong&gt;Coverage&lt;/strong&gt; in CI is still far below our other C++ Alliance projects, and the gap shows up as crashes whenever a new library explores an untested path in the codebase. Improving coverage is a priority in its own right. We can pair that effort with &lt;strong&gt;automation&lt;/strong&gt; and &lt;strong&gt;analysis tools&lt;/strong&gt; like &lt;strong&gt;ReviewDog&lt;/strong&gt; to accelerate code-review feedback, &lt;strong&gt;Danger.js&lt;/strong&gt; to enforce pull-request policies, &lt;strong&gt;CodeClimate&lt;/strong&gt; or similar services for &lt;strong&gt;static analysis&lt;/strong&gt;, and &lt;strong&gt;clang-tidy&lt;/strong&gt; checks to catch issues earlier. Finally, we can invite other collaborators to revisit the &lt;strong&gt;gap analysis&lt;/strong&gt; and &lt;strong&gt;MVP&lt;/strong&gt;, including C++ Alliance colleagues who specialize in &lt;strong&gt;marketing&lt;/strong&gt;. Their perspective will help us assign priorities that reflect both &lt;strong&gt;technical dependencies&lt;/strong&gt; and the project’s &lt;strong&gt;broader positioning&lt;/strong&gt;.&lt;/p&gt;

&lt;h2 id=&quot;reflection&quot;&gt;Reflection&lt;/h2&gt;

&lt;p&gt;The corpus keeps drifting out of sync because every important path in MrDocs duplicates representation logic by hand. Almost every subsystem reflects data from one format to another, and almost every internal operation traverses those structures. Each time we adjust a field we have to edit dozens of call sites, and even small mistakes create inconsistent state—different copies of the “truth” that evolve independently. Reflection eliminates this churn. If we can describe the corpus once and let the code iterate over those descriptions, the boilerplate disappears, the traversals remain correct, and we stop fighting the same battle.&lt;/p&gt;

&lt;p&gt;A lightweight option would be to define the corpus in JSON the way we treat configuration, but the volume of metadata in the AST makes that impractical. Instead, we lean on &lt;strong&gt;compile-time reflection utilities&lt;/strong&gt; such as &lt;strong&gt;Boost.Describe&lt;/strong&gt; and &lt;strong&gt;Boost.mp11&lt;/strong&gt;. With those libraries we can convert the corpus to any representation, and each generator—including future &lt;strong&gt;binary&lt;/strong&gt; or &lt;strong&gt;JSON&lt;/strong&gt; targets—sees the same schema automatically. MrDocs can even emit the schema that powers each generator, keeping the schema, DOM, and documentation in sync. This approach also fixes the long-standing lag in the &lt;strong&gt;XML generator&lt;/strong&gt;, where updates have historically been manual and error-prone.&lt;/p&gt;

&lt;p&gt;The following sequence diagram illustrates how reflection consolidates data flow without duplicating logic:&lt;/p&gt;

&lt;div class=&quot;mermaid&quot;&gt;
sequenceDiagram
  participant AST as Clang AST
  participant Corpus as Typed Corpus
  participant Traits as Reflect Traits
  participant DOM as Corpus DOM
  participant Generators as Generators
  participant Clients as Integrations
  AST-&amp;gt;&amp;gt;Corpus: Extract symbols
  Corpus-&amp;gt;&amp;gt;Traits: Publish descriptors
  Traits-&amp;gt;&amp;gt;DOM: Build type-erased nodes
  DOM-&amp;gt;&amp;gt;Generators: Supply normalized schema
  Generators-&amp;gt;&amp;gt;Clients: Deliver outputs
  Clients-&amp;gt;&amp;gt;Generators: Provide feedback
  Generators-&amp;gt;&amp;gt;Traits: Request updates
&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Process:&lt;/strong&gt; We can start by describing the &lt;strong&gt;Symbols&lt;/strong&gt;, &lt;strong&gt;Javadoc&lt;/strong&gt;, and related classes, shipping each refactor as a dedicated PR so reviews stay contained. Each description removes custom specializations, reverts to &lt;code&gt;= default&lt;/code&gt; where possible, and replaces old logic with &lt;strong&gt;static asserts&lt;/strong&gt; that enforce invariants. We generalize the main merge logic first, then update callers such as the &lt;strong&gt;AST visitor&lt;/strong&gt; that walks &lt;code&gt;RecordTranche&lt;/code&gt;, ensuring the &lt;strong&gt;comments data structure&lt;/strong&gt; matches the new descriptions. A &lt;code&gt;MRDOCS_DESCRIBE_DERIVED&lt;/code&gt; helper can enumerate derived classes so every visit routine becomes generic. Once the C++ side is described, we rebuild the lazy DOM objects on top of Describe so their types mirror the DOM layout directly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use cases:&lt;/strong&gt; Redundant non-member functions like &lt;code&gt;tag_invoke&lt;/code&gt;, &lt;code&gt;operator&amp;lt;=&amp;gt;&lt;/code&gt;, &lt;code&gt;toString&lt;/code&gt;, and &lt;code&gt;merge&lt;/code&gt; collapse into &lt;strong&gt;shared implementations&lt;/strong&gt; that use traits unless real customization is required. New generators—binary, JSON, or otherwise—drop in with minimal code because the schema and traversal logic already exist. The XML generator stops maintaining a private representation and simply reads the described elements. We can finally standardize &lt;strong&gt;naming conventions&lt;/strong&gt; (kebab-case or camelCase) because the schema enforces them. Generating the &lt;strong&gt;RELAX NG Compact&lt;/strong&gt; file becomes just another output produced from the same description. A metadata walker can then discover auxiliary objects and emit &lt;strong&gt;DOM documentation automatically&lt;/strong&gt;. As a side effect of integrating Boost.mp11, we can extend the &lt;code&gt;tag_invoke&lt;/code&gt; context protocol with tuple-based helpers for &lt;code&gt;mrdocs::FromValue&lt;/code&gt;, further narrowing the gap between concrete and DOM objects.&lt;/p&gt;

&lt;h2 id=&quot;metadata&quot;&gt;Metadata&lt;/h2&gt;

&lt;p&gt;MrDocs still carries metadata gaps that are too large to ignore. The subsections below highlight the three extraction areas that demand sustained effort; each of them blocks the rest of the system from staying consistent.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Recursive blocks and inlines.&lt;/strong&gt; Release 0.0.5 introduced the data structures for recursive Javadoc elements, but we still do not parse all of those structures. The fix is straightforward in concept—extend the CommonMark-based parser so every block and inline variant becomes a first-class node—but the implementation is long because there are many element types. We can ship this incrementally by opening issues and sub-issues, tackling one structure per PR, and starting with block elements before moving to inlines. The existing post-process documentation finalizer already contains the mechanics; we just need to wire each rule into the new documentation nodes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Legible names.&lt;/strong&gt; The current name generator appends hash fragments to differentiate symbols lazily, which makes references unstable and awkward. We need a stable allocator that remembers which symbols claimed which names. The highest-priority symbol should receive the base name, and suffixes should cascade to less critical overloads so the visible entries stay predictable. Moving the generator into the extraction phase and storing the assignments there ensures anchors remain stable, lets us update artifacts such as the Boost.URL tagfile, and produces names that actually read well.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Populate expressions.&lt;/strong&gt; Whenever the extractor fails to recognize an expression, it falls back to the raw source string. That shortcut prevents us from applying the usual transformations, especially inside requires-expressions where implementation-defined symbols appear. We should introduce typed representations for the constructs we already understand and continue to store strings for the expressions we have not modeled yet. As coverage grows, more expressions flow through the structured pipeline, and the remaining string-based nodes shrink to the truly unknown cases.&lt;/p&gt;

&lt;h2 id=&quot;extensions-and-plugins&quot;&gt;Extensions and Plugins&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Extensions&lt;/strong&gt; and &lt;strong&gt;plugins&lt;/strong&gt; aim at the same outcome—letting projects &lt;strong&gt;customize MrDocs&lt;/strong&gt;—but they operate at different layers. Extensions run &lt;strong&gt;inside the application&lt;/strong&gt;, usually through &lt;strong&gt;interpreters&lt;/strong&gt; we bundle. We already ship &lt;strong&gt;Lua&lt;/strong&gt; and &lt;strong&gt;Duktape&lt;/strong&gt;, yet today they only power a handful of &lt;strong&gt;Handlebars helpers&lt;/strong&gt;. The plan is to widen that surface: add more interpreters where it makes sense, extend helper support so extensions can participate in &lt;strong&gt;escaping&lt;/strong&gt; and &lt;strong&gt;formatting&lt;/strong&gt;, and give extensions the ability to &lt;strong&gt;consume the entire corpus&lt;/strong&gt;. With that access, an extension can list every symbol, emit metadata in formats we do not yet support, or transform the corpus before it reaches a native generator. The same mechanism enables &lt;strong&gt;quality-of-life utilities&lt;/strong&gt;, such as a generator extension that checks whether a library’s public API changed according to a policy defined in code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Plugins&lt;/strong&gt;, by contrast, are &lt;strong&gt;compiled artifacts&lt;/strong&gt;. They unlock similar customization goals, but their &lt;strong&gt;ABI must stay stable&lt;/strong&gt;, and platform differences mean a plugin built on one system will not run on another. To keep the surface manageable we should expose a &lt;strong&gt;narrow wrapper&lt;/strong&gt;: pass plugins a set of &lt;strong&gt;DOM proxies&lt;/strong&gt; so they never depend on the underlying &lt;strong&gt;Info classes&lt;/strong&gt;, use &lt;strong&gt;traits&lt;/strong&gt; or &lt;strong&gt;versioned interfaces&lt;/strong&gt; to handle incompatibilities, and &lt;strong&gt;plan the API carefully&lt;/strong&gt; before release.&lt;/p&gt;

&lt;h2 id=&quot;dependency-resilience&quot;&gt;Dependency Resilience&lt;/h2&gt;

&lt;p&gt;Working with &lt;strong&gt;dependent libraries&lt;/strong&gt; is still the most fragile part of the MrDocs workflow. &lt;strong&gt;Environments drift&lt;/strong&gt;, &lt;strong&gt;transitive dependencies change&lt;/strong&gt; without notice, and heavyweight projects force us to install &lt;strong&gt;toolchains&lt;/strong&gt; we do not actually need. In &lt;strong&gt;Boost.URL&lt;/strong&gt; alone we watch upstream Boost libraries evolve every few weeks; sometimes the code truly breaks, but just as often a new release exercises an untested path in MrDocs and triggers a crash because our &lt;strong&gt;coverage&lt;/strong&gt; is still thin. Other ecosystems push the cost even higher: documenting a library that depends on &lt;strong&gt;LLVM&lt;/strong&gt; can turn a three-second render into an hours-long process because the transitive LLVM &lt;strong&gt;headers&lt;/strong&gt; MrDocs needs are generated at build time, so we must compile and install LLVM merely to obtain include files. &lt;strong&gt;CI environments&lt;/strong&gt; regularly fail for the same reason.&lt;/p&gt;

&lt;p&gt;We already experimented with &lt;strong&gt;mitigation strategies&lt;/strong&gt; and should refine them rather than abandon the ideas. Shipping a &lt;strong&gt;curated standard library&lt;/strong&gt; with MrDocs removes one entire category of instability. The option will soon be disabled by default, but users can still enable it or even combine it with the system library when &lt;strong&gt;reproducibility&lt;/strong&gt; matters more than access to system libraries. This mirrors how &lt;strong&gt;Clang&lt;/strong&gt; ships &lt;strong&gt;libc++&lt;/strong&gt;: the option does not permit invalid code; it simply guarantees a known baseline.&lt;/p&gt;

&lt;p&gt;On top of that, we have preliminary support for &lt;strong&gt;user-defined stubs&lt;/strong&gt;. &lt;strong&gt;Configuration files&lt;/strong&gt; can provide short descriptions of expected symbols from hard-to-build dependencies, and MrDocs can &lt;strong&gt;inject those during extraction&lt;/strong&gt;. For predictable patterns we can &lt;strong&gt;auto-generate stubs&lt;/strong&gt; when the user opts in, synthesizing symbols rather than failing immediately. None of this accepts invalid code—the compiler still diagnoses real errors—but it shields projects from breakage when a &lt;strong&gt;transitive dependency&lt;/strong&gt; tweaks implementation details or when generated headers are unavailable. The features remain &lt;strong&gt;optional&lt;/strong&gt;, so teams can disable synthesis to debug the underlying issue and still benefit from the faster path when schedules are tight. Even if the project moves in another direction we should &lt;strong&gt;document the proposal&lt;/strong&gt; and remove the existing stub hooks deliberately rather than letting them linger undocumented.&lt;/p&gt;

&lt;p&gt;The payoffs are clear. &lt;strong&gt;Boost libraries&lt;/strong&gt; could generate documentation without cloning the entire super-project, relying on &lt;strong&gt;SettingsDB&lt;/strong&gt; to produce a &lt;strong&gt;compilation database&lt;/strong&gt; and skipping &lt;strong&gt;CMake&lt;/strong&gt; entirely. MrDocs itself could publish reference docs without building &lt;strong&gt;LLVM&lt;/strong&gt; because the required symbols would come from stubs. &lt;strong&gt;Releases&lt;/strong&gt; would stop breaking every time a transitive dependency changes, and developers would regain hours currently spent firefighting. These are the &lt;strong&gt;stability&lt;/strong&gt; and &lt;strong&gt;reproducibility&lt;/strong&gt; gains we need if we want MrDocs to be the &lt;strong&gt;default tooling&lt;/strong&gt; for large C++ ecosystems.&lt;/p&gt;

&lt;h2 id=&quot;follow-up-issues-for-v006&quot;&gt;Follow-up Issues for v0.0.6&lt;/h2&gt;

&lt;p&gt;To keep this post focused on the big-picture transition, I spun the tactical tasks into GitHub issues for the 0.0.6 milestone. They’re queued up and ready for execution whenever the team circles back to implementation.&lt;/p&gt;

&lt;details&gt;
  &lt;summary&gt;List of follow-up issues for v0.0.6&lt;/summary&gt;

  &lt;ul&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/issues/1081&quot;&gt;#1081&lt;/a&gt; Support custom stylesheets in the HTML generator&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/issues/1082&quot;&gt;#1082&lt;/a&gt; Format-agnostic Handlebars generator extension&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/issues/1083&quot;&gt;#1083&lt;/a&gt; Allow SettingsDB to describe a single source file&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/issues/1084&quot;&gt;#1084&lt;/a&gt; Guard against invalid source links&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/issues/1085&quot;&gt;#1085&lt;/a&gt; Complete tests for all using declaration forms&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/issues/1086&quot;&gt;#1086&lt;/a&gt; Explore a recursive project layout&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/issues/1087&quot;&gt;#1087&lt;/a&gt; Convert ConfigOptions.json into a schema file&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/issues/1088&quot;&gt;#1088&lt;/a&gt; Separate parent context and parent page&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/issues/1089&quot;&gt;#1089&lt;/a&gt; List deduction guides on the record page&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/issues/1090&quot;&gt;#1090&lt;/a&gt; Expand coverage for Friends&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/issues/1091&quot;&gt;#1091&lt;/a&gt; Remove dependency symbols after finalization&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/issues/1092&quot;&gt;#1092&lt;/a&gt; Review Bash Commands Parser&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/issues/1093&quot;&gt;#1093&lt;/a&gt; Review NameInfoVisitor&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/issues/1094&quot;&gt;#1094&lt;/a&gt; Improve overload-set documentation&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/issues/1095&quot;&gt;#1095&lt;/a&gt; CI uses the bootstrap script&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/issues/1096&quot;&gt;#1096&lt;/a&gt; Connect Antora extensions&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/issues/1097&quot;&gt;#1097&lt;/a&gt; Handlebars: optimize render state&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/issues/1098&quot;&gt;#1098&lt;/a&gt; Handlebars: explore template compilation&lt;/li&gt;
    &lt;li&gt;&lt;a href=&quot;https://github.com/cppalliance/mrdocs/issues/1099&quot;&gt;#1099&lt;/a&gt; Handlebars: investigate incremental rendering&lt;/li&gt;
  &lt;/ul&gt;

&lt;/details&gt;

&lt;h1 id=&quot;acknowledgments&quot;&gt;Acknowledgments&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;Matheus Izvekov&lt;/strong&gt; and &lt;strong&gt;Krystian Stasiowski&lt;/strong&gt; kept the Clang integration moving. Their expertise cleared issues that would have stalled us. &lt;strong&gt;Gennaro Prota&lt;/strong&gt; and &lt;strong&gt;Fernando Pelliccioni&lt;/strong&gt; handled the maintenance load that kept the project on schedule. They took on the long tasks and followed them through.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Robert Beeston&lt;/strong&gt; and &lt;strong&gt;Julio Estrada&lt;/strong&gt; delivered the public face of MrDocs. The site we ship today exists because they turned open-ended goals into a complete experience.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Vinnie Falco&lt;/strong&gt;, &lt;strong&gt;Louis Tatta&lt;/strong&gt;, and &lt;strong&gt;Sam Darwin&lt;/strong&gt; formed the backbone of my daily support. &lt;strong&gt;Vinnie&lt;/strong&gt; trusted the direction and backed the plan when decisions were difficult. &lt;strong&gt;Louis&lt;/strong&gt; made sure I had space to return after setbacks. &lt;strong&gt;Sam&lt;/strong&gt; kept the Alliance infrastructure running so the team always had what it needed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ruben Perez&lt;/strong&gt;, &lt;strong&gt;Klemens Morgenstern&lt;/strong&gt;, &lt;strong&gt;Peter Dimov&lt;/strong&gt;, and &lt;strong&gt;Peter Turcan&lt;/strong&gt; offered honest feedback whenever we needed another perspective. Their observations sharpened the product and kept collaboration positive.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Joaquín M López Muñoz&lt;/strong&gt; and &lt;strong&gt;Arnaud Bachelier&lt;/strong&gt; guided me through the people side of leadership. Their advice turned complex situations into workable plans.&lt;/p&gt;

&lt;p&gt;Working alongside everyone listed here has been a privilege. Their contributions made this year possible.&lt;/p&gt;

&lt;h1 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h1&gt;

&lt;p&gt;The 2025 releases unified the generators, locked the configuration model, added sanitizers and coverage to CI, and introduced features that make the tool usable outside Boost.URL. The project is ready for new contributors because they can extend the code without rebuilding the basics, and downstream teams can run the CLI on large codebases and expect predictable output.&lt;/p&gt;

&lt;p&gt;While we delivered those releases, I learned that engineering progress depends on steady communication. Remote discussions often sound negative even when people agree on the goals, so I schedule short check-ins, add light signals like emojis, and keep space for conversations that are not task-driven. I also protect time to listen and ask for help when the workload gets heavy; if I lose that time, every deadline slips anyway.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Final Reflections&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Technical conversations start negative by default, so add clear signals when you agree or appreciate the work.&lt;/li&gt;
  &lt;li&gt;Assume terse feedback comes from the medium, not the person, and respond with patience.&lt;/li&gt;
  &lt;li&gt;Keep informal connection habits—buddy calls, breaks, or quick chats—to maintain trust.&lt;/li&gt;
  &lt;li&gt;Look after your own health and use outside support when needed.&lt;/li&gt;
  &lt;li&gt;Never allow the schedule to block real listening time; reset your calendar when that happens.&lt;/li&gt;
&lt;/ul&gt;</content><author><name></name></author><category term="alan" /><summary type="html">In 2024, the MrDocs project was a fragile prototype. It documented Boost.URL, but the CLI, configuration, and build process were unstable. Most users could not run it without direct help from the core group. That unstable baseline is the starting point for this report. In 2025, we moved the codebase to minimum-viable-product shape. I led the releases that stabilized the pipeline, aligned the configuration model, and documented the work in this report to support a smooth leadership transition. This post summarizes the 2024 gaps, the 2025 fixes, and the recommended directions for the next phase. System Overview 2024: Lessons from a Fragile Prototype 2025: From Prototype to MVP v0.0.3: Enforcing Consistency v0.0.4: Establishing the Foundation v0.0.5: Stabilization and Public Readiness 2026: Beyond the MVP Strategic Prioritization Reflection Metadata Extensions and Plugins Dependency Resilience Follow-up Issues for v0.0.6 Acknowledgments Conclusion System Overview MrDocs is a C++ documentation generator built on Clang. It parses source with full language fidelity, links declarations to their comments, and produces reference documentation that reflects real program structure—templates, constraints, and overloads included. Traditional tools often approximate the AST. MrDocs uses the AST directly, so documentation matches the code and modern C++ features render correctly. Unlike single-purpose generators, MrDocs separates the corpus (semantic data) from the presentation layer. Projects can choose among multiple output formats or extend the system entirely: supply custom Handlebars templates or script new generators using the plugin system. The corpus is represented in the generators as a rich JSON-like DOM. With schema files, MrDocs enables integration with build systems, documentation frameworks, or IDEs. 
From the user’s perspective, MrDocs behaves like a well-engineered CLI utility. It accepts configuration files, supports relative paths, accepts custom build options, and reports warnings in a controlled, compiler-like fashion. For C++ teams transitioning from Doxygen, the command structure is somewhat familiar, but the internal model is designed for reproducibility and correctness. Our goal is not just to render reference pages but to provide a reliable pipeline that any C++ project seeking modern documentation infrastructure can adopt. graph LR A[Source] --&amp;gt; B[Clang] B --&amp;gt; C[Corpus] C --&amp;gt; D{Plugin Layer} subgraph Generator E[HTML] F[AsciiDoc] G[XML] G2[...] end D --&amp;gt; E D --&amp;gt; F D --&amp;gt; G D --&amp;gt; G2 E --&amp;gt; H{Plugin Layer} H --&amp;gt; H2[Published Docs] F --&amp;gt; H G --&amp;gt; H G2 --&amp;gt; H C --&amp;gt; I[Schema Export] I --&amp;gt; J[IntegrationsIDEs &amp;amp; Build Systems] 2024: Lessons from a Fragile Prototype MrDocs entered 2024 as a proof-of-concept built for Boost.URL. It could document one or two curated codebases and produce asciidoc pages for Antora, but the workflow stopped there. The CLI exposed only the scenarios we needed. Configuration options lived in internal notes. The only dependable build path was the script sequence we used inside the Alliance. External users hit errors and missing options almost immediately. Stability was just as fragile: We had no sanitizers, no warnings-as-errors, and inconsistent CI hardware. The binaries crashed as soon as they saw unfamiliar code. The pipeline worked only when the input looked like Boost.URL. Point it at slightly different code patterns and it would segfault. Each feature landed as a custom patch, so logic duplicated across generators, and fixing one path broke another. Early releases: Release v0.0.1 captured that prototype: the early Handlebars engine, the HTML generator, the DOM refactor, and a list of APIs that only the core team could drive. 
v0.0.2 added structured configuration, automatic compile_commands.json, and better SFINAE handling, but the tool was still insider-only. Leadership transition: Late in 2024 I became project lead with two initial priorities: document the gaps and describe the true limits of the system. That set the 2025 baseline—a functional prototype that needed coherence, reproducibility, and trust before it could call itself a product. What 2025 later fixed were the weaknesses we saw here: configuration coherence, generator unification, schema validation, and basic options were all missing. The CLI, configuration files, and code drifted apart. Generators evolved independently with duplicated code and inconsistent naming. Editors had no schema to lean on. Extraction rules were ad hoc, which made the output incomplete. CI ran on an improvised matrix with no caching, sanitizers, or coverage, so regressions slipped through. That was the starting point. Summary: 2024 produced a working demo, not a reproducible system. Each success exposed another weak link and clarified what had to change in 2025. In short: 2024 left us with a working prototype but no coherent architecture. The system could demonstrate the concept, but not sustain or reproduce it. Every improvement exposed another weak link, and every success demanded more structure than the system was built to handle. It was a year of learning by exhaustion—and setting the stage for everything that came next. 
Key 2024 checkpoints align with the timeline below: %%{init: {&quot;theme&quot;: &quot;base&quot;, &quot;themeVariables&quot;: {&quot;primaryColor&quot;: &quot;#f7f9ff&quot;, &quot;primaryBorderColor&quot;: &quot;#9aa7e8&quot;, &quot;primaryTextColor&quot;: &quot;#1f2a44&quot;, &quot;lineColor&quot;: &quot;#b4bef2&quot;, &quot;secondaryColor&quot;: &quot;#fbf8ff&quot;, &quot;tertiaryColor&quot;: &quot;#ffffff&quot;, &quot;fontSize&quot;: &quot;14px&quot;}}}%% timeline title Prototypes 2024 Q1 : Boost.URL showcase 2024 Q2 : CLI gaps 2024 Q3 : Config + SFINAE fixes 2024 Q4 : Leadership transition 2025: From Prototype to MVP I started the year with a gap analysis that compared MrDocs to other C++ documentation pipelines. From that review I defined the minimum viable product and three priority tracks. Usability covered workflows and surface area that make adoption simple. Stability covered deterministic behavior, proper data structures, and CI discipline. Foundation covered configuration and data models that keep code, flags, and documentation aligned. The 2025 releases followed those tracks and turned MrDocs from a proof of concept into a tool that other teams can adopt. v0.0.3 — Consistency. We replaced ad-hoc behavior with a coherent system: a single source of truth for configuration kept CLI, config files, and docs in sync; generators and templates were unified so changes propagate by design; core semantic extraction (e.g., concepts, constraints, SFINAE) became reliable; and CI hardened around reproducible, tested outputs across HTML and Antora. v0.0.4 — Foundation. We introduced precise warning controls and a family of extract-* options to match established tooling, added a JSON Schema for configuration (enabling editor validation/autocomplete), delivered a robust reference system for documentation comments, brought initial inline formatting to generators, and simplified onboarding with a cross-platform bootstrap script. 
CI gained sanitizers, coverage checks, and modern compilers. v0.0.5 — Stabilization. We redesigned documentation metadata to support recursive inline elements, enforced safer polymorphic types with optional references and non-nullable patterns, and added user-facing improvements (sorting, automatic compilation database detection, quick reference indices, improved namespace/overload grouping, LLDB formatters). The website and documentation UI were refreshed for accessibility and responsiveness, new demos (including self-documentation) were published, and CI was further tightened with stricter policies and cross-platform bootstrap enhancements. Together, these releases executed the roadmap derived from the initial gap analysis: they aligned the moving parts, closed the most important capability gaps, and delivered a stable foundation that future work can extend without re-litigating fundamentals. %%{init: {&quot;theme&quot;: &quot;base&quot;, &quot;themeVariables&quot;: { &quot;primaryColor&quot;: &quot;#e4eee8&quot;, &quot;primaryBorderColor&quot;: &quot;#affbd6&quot;, &quot;primaryTextColor&quot;: &quot;#000000&quot;, &quot;lineColor&quot;: &quot;#baf9d9&quot;, &quot;secondaryColor&quot;: &quot;#f0eae4&quot;, &quot;tertiaryColor&quot;: &quot;#ebeaf4&quot;, &quot;fontSize&quot;: &quot;14px&quot; }}}%% mindmap root((MVP Evolution)) v0.0.3 Config sync Shared templates CI discipline v0.0.4 Warning controls Schema Bootstrap v0.0.5 Recursive docs Nav refresh Tooling polish v0.0.3: Enforcing Consistency v0.0.3 is where MrDocs stopped being a collection of one-off special cases and became a coherent system. Before this release, features landed in a single generator and drifted from the others; extraction handled only the narrowly requested pattern and crashed on nearby ones; and options were inconsistent—some hard-coded, some missing from CLI/config, with no mechanism to keep code, docs, and flags aligned. What changed: The v0.0.3 release fixes this foundation. 
We introduced a single source of truth for configuration options with TableGen-style metadata: docs, the config file, and the CLI always stay in sync. We added essential Doxygen-like options to make basic projects immediately usable and filled obvious gaps in symbols and doc comments. We implemented metadata extraction for core symbol types and their information—such as template constraints, concepts, and automatic SFINAE detection. We unified generators and templates so changes propagate by design, added tagfile support and “lightweight reflection” to documentation comments as lazy DOM objects and arrays, and extended Handlebars to power the new generators. These features allowed us to create the initial version of the website and ensure the documentation is always in sync.

Build and testing discipline: CI, builds, and tests were hardened. All generators were now tested, LLVM caching systems improved, and we launched our first macOS release (important for teams working on Antora UI bundles). All of this long tail of performance, correctness, and safety work turned “works on my machine” into repeatable, adoptable output across HTML and Antora.

v0.0.3 was the inflection point. For the first time, developers could depend on consistent configuration, shared templates, and predictable behavior across generators. It aligned internal tools, eliminated duplicated effort, and replaced trial-and-error debugging with reproducible builds. Every improvement in later versions built on this foundation.
Categorized improvements for v0.0.3

Configuration Options: enforcing consistency, reproducible builds, and transparent reporting
Enforce configuration options are in sync with the JSON source of truth (a1fb8ec6, 9daf71fe)
File and symbol filters (1b67a847, b352ba22)
Reference and symbol configuration (a3e4477f, 30eaabc9)
Extraction options (41411db2, 1214d94b)
Reporting options (f994e47e, 0dd9cb45)
Configuration structure (c8662b35, dcf5beef, 4bd3ea42)
CLI workflows (a2dc4c78, 3c0f90df)
Warnings (4eab1933, 5e586f2b, 0e2dd713)
SettingsDB (225b2d50, 51639e77)
Deterministic configuration (b5449741)
Global configuration documentation (ec3dbf5c)

Generators: unification, new features, and early refactoring
Antora/HTML generator consistency (e674182f, 82e86a6c, 9154b9c5)
HTML generator improvements (a28cb2f7, 064ce55a, 5f6665d8)
Documentation for generators (2382e8cf, 646a1e5b)
Supporting new output formats (58a79f74, 271dde57, 9d9f6652)
Handlebars improvements (ebf4dbeb, be76fc07)
Generator tooling (00fc84cf, 6a69747d)
Navigation helpers (fdccad42)
DOM optimizations (9b41d2e4)

Libraries and metadata: unification, fixes, and extraction enhancements
Info node visitor and traversal improvements (be86a08d, 58ab5a5e)
Metadata consistency (544ee37d, 62f8a2bd, bd9c704f)
Template and concept support (4b0b4a71, 57cf74de, 92aa76a4)
Symbol resolution and references (f64d4a06, aa9333d4)
Documentation improvements (5d3f21c8)

Website and Documentation: turning features into a showcase and simplifying workflows
Create website (05400c3c, 8fba2020)
Use the new features to create an HTML panel demos workflow (12ceadee, d38d3e1a, c46c4a91)
Unify Antora author mode playbook (999ea4f3)
Generator use cases and trade-offs (2307ca6a)
Correctness and simplification (4d884f43, 55214d72, b078bead, d8b7fcf4, 96484836, 62f361fb)

Build, Testing, and Releases: strengthening CI, improving LLVM caching workflow, and stabilizing releases
Templates are tested with golden tests (2bc09e65, 9eece731)
LLVM caches and runners improvements (4c14e875, bd54dc7c, 3d92071a, 8537d3db, f3b33a47, 5982cc7e, 93487669)
Enable macOS workflow (390159e3)
Stabilize artifacts (5e0f628e, d1c3566e, 62736e45)
Tests support individual file inputs, which improved local tests considerably (75b1bc52)
Performance, correctness, and safety (a820ad79, 43e5f252, a382820f, fbcb5b2d, 6a2290cb, 49f4125f)

v0.0.4: Establishing the Foundation

v0.0.4 completed the core capabilities we need for production. With the moving parts aligned in v0.0.3, this release focused on the fundamentals. It added consistent warning options, extraction controls that match established tools, schema support for IDE auto-completion, a complete reference system for doc comments, and initial inline formatting in the generators. The bootstrap script became a one-step path to a working build. We also hardened the pipeline with modern CI practices—sanitizers, coverage integration, and standardized presets.

Categorized improvements for v0.0.4

Configuration and Extraction: structured configuration, extraction controls, and schema validation
Configuration schema (d9517e1d, 5f846c1c, ffa0d1a6)
Extraction filters (0a60bb98, a7d7714d)
Reference configuration (d18a8ab3)
Documentation metadata (6676c1e8)

Warnings and Reporting: consistent governance with CLI parity
Warning controls (2a29f0a0, 6d3c1f47)
Extract options (extract-{public,protected,private,inline}) (aa5a6be3)
CLI defaults (d85439c3)

Generators: Javadoc, inline formatting, and reference improvements
Documentation reference system (4b430f9b, 73489e2b)
Javadoc metadata (8dd3af67, f7e59d4c)
Inline formatting (5c7490a3, d1d80745)
XML generator alignment (9867e0d2, 0f890f2c)

Build and CI: sanitizers, coverage, and reproducible builds
Sanitizer integration (6257c747, 88954d7f)
Coverage reporting (bf195759)
Relocatable build (std::format) (7b871032)
Bootstrap modernization (3eec9a48, 71afb87b, 524e7923)

v0.0.5: Stabilization and Public Readiness

v0.0.5 marked the transition toward
a sustained development model and prepared the project for handoff. This release focused on presentation, polish, and reliability—ensuring that MrDocs was ready not only for internal use but for public visibility. During this period, we expanded the set of public demos, refined the website and documentation, and stabilized the infrastructure to support a growing user base. The goal was to leave the project in a state where it could continue evolving smoothly, with a stable core, clear development practices, and a professional public face. Community and visibility: Beyond the commits, this release reflected broader activity around the project. We generated and published several new demos, many of which revealed integration issues that were subsequently fixed. As more external users began adopting MrDocs, the feedback loop accelerated: bug reports, feature requests, and real-world edge cases guided much of the work. New contributors joined the team, collaboration became more distributed, and visibility increased. Around the same time, I introduced MrDocs to developers at CppCon 2025, where it received strong feedback from library authors testing it on their own projects. The tool was beginning to gain recognition as a viable, modern alternative to Doxygen. Technical progress: This release focused on correctness. We redesigned the documentation comment data structures to support recursive inline elements and render Markdown and HTML-style formatting correctly. We moved to non-nullable polymorphic types and optional references so that invariants fail at compile time rather than at runtime. User-facing updates included new sorting options, automatic compilation database detection, a quick reference index, broader namespace and overload grouping, and LLDB formatters for Clang and MrDocs symbols. 
We refreshed the website and documentation UI for accessibility and responsiveness, added new demos (including the MrDocs self-reference), and tightened CI with more sanitizers, stricter warning policies, and cross-platform bootstrap improvements. Together, these improvements completed the transition from a developing prototype to a dependable product. v0.0.5 established a stable foundation for others to build on—polished, documented, and resilient—so future releases could focus on extending capabilities rather than consolidating them. With this release, the project reached a point where the handoff could occur naturally, closing one chapter and opening another.

Categorized improvements for v0.0.5

Metadata: documentation inlines and safety improvements
Recursive documentation inlines (51e2b655)
Consistent sorting options for members and namespaces (sort-members-by, sort-namespace-members-by) (f0ba28dd, a0f694dc)
Non-nullable polymorphic types and optional references (c9f9ba13, 8ef3ffaf, bd3e1217, afa558a6, 6ba8ef6b)
Consistent metadata class family hierarchy pattern (6d495497)
MrDocsSettings includes automatic compilation database support (9afededb, a1f289de)
Quick reference index (68e029c1, 940c33f4)
Namespace/using/overloads grouping includes using declarations and overloads as shadows (69e1c3bc, d722b7d0, 2b59269c)
Conditional explicit clauses in templated methods (2bff4e2f)
Destructor overloads supported in class templates (336ad319)
Using declarations include all shadow variants (88a1cebf, 9253fd8f, a7d5cf6a)
show-enum-constants option (07b69e1c)
Custom LLDB formatters for Clang and MrDocs symbols (069bd8f4, f83eca17, 1b39fdd7, aefc53c7)
Performance, correctness, and safety (d1788049, 3bd94cff, 8a811560, 3ff37448, ad1e7baa, b10b8aa3, 482c0be8, d66da796, ec8daa11, 5234b67c, 5e879b10, 35e14c93, d5a28a89, 6878c199, 21ce3e74, 2da2081b, b528ae11)

Website and Documentation: new demos and a new website
New demos (cfa9eb7d, 1b930b86, c18be83e, 177fae4a, 33275050)
Website and documentation refresh (35e14c93, a6437742)
Self-documentation (f2a5f77e)
Antora enhancements (5ed0f48f)

Build, Testing, and Releases: improvements and hardening CI
Toolchain and CI hardening (6257c747, 88954d7f, bf195759, ba0dcfd3)
Bootstrap improvements (3eec9a48, 71afb87b, 524e7923, 4b79ef41, 7d27204e, 988e9ebc, 94a5b799, be7332cf, 4d705c96, f48bbd2f, f9363461)
Performance, correctness, and safety (5aa714b2, 469f41ee, 629f1848, 2f0dd8c1, acf7c107)

2026: Beyond the MVP

MrDocs now ships a working MVP, but significant foundational work remains. The priority framework is the same: start with gap analysis, shape an MVP (or now just a viable product), and rank follow-on work against that baseline. In 2025 we invested in presentation earlier than infrastructure. That inversion still raises costs: each foundational change forces rework across user-facing pieces. I do not know how the leadership model will evolve in 2026. The team might keep a single coordinator or move to shared stewardship. Regardless, the project only succeeds if we continue investing in foundational capabilities. The steps below outline the recommendations I believe will help keep MrDocs sustainable over the long term.

%%{init: {&quot;theme&quot;: &quot;base&quot;, &quot;themeVariables&quot;: { &quot;primaryColor&quot;: &quot;#f2eadf&quot;, &quot;primaryBorderColor&quot;: &quot;#ffe8c6&quot;, &quot;primaryTextColor&quot;: &quot;#000000&quot;, &quot;lineColor&quot;: &quot;#ffe8c8&quot;, &quot;secondaryColor&quot;: &quot;#e8ebf3&quot;, &quot;tertiaryColor&quot;: &quot;#eceaf4&quot;, &quot;fontSize&quot;: &quot;14px&quot; }}}%%
mindmap
  root((2026 Priorities))
    Reflection
      Describe symbols
      Shared walkers
    Metadata
      Recursive docs
      Stable names
      Typed expressions
    Extensions
      Script helpers
      Plugin ABI
    Dependencies
      Curated toolchain
      Opt-in stubs
    Community
      Integration demos
      Outreach cadence

Strategic Prioritization

Aligning priorities is itself the highest priority.
At the start of my tenure as project lead we followed a strict sequence—gap analysis, then an MVP, then a set of priorities—but that model exposed limitations once work began to land. The issue tracker does not reflect how priorities relate to each other, and as individual tickets close the priority stack does not adjust automatically. The project’s complexity now amplifies the risk: without a clear view of dependencies we can assign a high-value engineer to a task that drags several teammates into the same bottleneck, resulting in net-negative progress. Defining priorities therefore includes understanding the team’s skills, mapping how they collaborate, and making sure no one becomes a sink that blocks everyone else. Alignment across roles remains essential so the plan reflects the people who actually execute it. The tooling already exists to put this into practice. GitHub now lets us mark issues as blocked by or blocking others and to model parent/child relationships. We can use those relationships to reorganize the priorities programmatically. Once the relationships are encoded, priorities gain semantic meaning because we can explain why a small ticket matters in the larger story. Priorities become the byproduct of higher-level goals— narratives about the product—rather than a short-term static wish list of individual features. We also need to strengthen the operational tools that keep the team coordinated. Coverage in CI is still far below our other C++ Alliance projects, and the gap shows up as crashes whenever a new library explores an untested path in the codebase. Improving coverage is a priority in its own right. We can pair that effort with automation and analysis tools like ReviewDog to accelerate code-review feedback, Danger.js to enforce pull-request policies, CodeClimate or similar services for static analysis, and clang-tidy checks to catch issues earlier. 
Finally, we can invite other collaborators to revisit the gap analysis and MVP, including C++ Alliance colleagues who specialize in marketing. Their perspective will help us assign priorities that reflect both technical dependencies and the project’s broader positioning.

Reflection

The corpus keeps drifting out of sync because every important path in MrDocs duplicates representation by hand. Almost every subsystem reflects data from one format to another, and almost every internal operation traverses those structures. Each time we adjust a field we have to edit dozens of call sites, and even small mistakes create inconsistent state—different copies of the “truth” that evolve independently. Reflection eliminates this churn. If we can describe the corpus once and let the code iterate over those descriptions, the boilerplate disappears, the traversals remain correct, and we stop fighting the same battle. A lightweight option would be to define the corpus in JSON the way we treat configuration, but the volume of metadata in the AST makes that impractical. Instead, we lean on compile-time reflection utilities such as Boost.Describe and Boost.mp11. With those libraries we can convert the corpus to any representation, and each generator—including future binary or JSON targets—sees the same schema automatically. MrDocs can even emit the schema that powers each generator, keeping the schema, DOM, and documentation in sync. This approach also fixes the long-standing lag in the XML generator, where updates have historically been manual and error-prone.
The following sequence diagram illustrates how reflection consolidates data flow without duplicating logic:

sequenceDiagram
    participant AST as Clang AST
    participant Corpus as Typed Corpus
    participant Traits as Reflect Traits
    participant DOM as Corpus DOM
    participant Generators as Generators
    participant Clients as Integrations
    AST-&amp;gt;&amp;gt;Corpus: Extract symbols
    Corpus-&amp;gt;&amp;gt;Traits: Publish descriptors
    Traits-&amp;gt;&amp;gt;DOM: Build type-erased nodes
    DOM-&amp;gt;&amp;gt;Generators: Supply normalized schema
    Generators-&amp;gt;&amp;gt;Clients: Deliver outputs
    Clients-&amp;gt;&amp;gt;Generators: Provide feedback
    Generators-&amp;gt;&amp;gt;Traits: Request updates

Process: We can start by describing the Symbols, Javadoc, and related classes, shipping each refactor as a dedicated PR so reviews stay contained. Each description removes custom specializations, reverts to = default where possible, and replaces old logic with static asserts that enforce invariants. We generalize the main merge logic first, then update callers such as the AST visitor that walks RecordTranche, ensuring the comments data structure matches the new descriptions. A MRDOCS_DESCRIBE_DERIVED helper can enumerate derived classes so every visit routine becomes generic. Once the C++ side is described, we rebuild the lazy DOM objects on top of Describe so their types mirror the DOM layout directly.

Use cases: Redundant non-member functions like tag_invoke, operator&lt;=&gt;, toString, and merge collapse into shared implementations that use traits unless real customization is required. New generators—binary, JSON, or otherwise—drop in with minimal code because the schema and traversal logic already exist. The XML generator stops maintaining a private representation and simply reads the described elements. We can finally standardize naming conventions (kebab-case or camelCase) because the schema enforces them.
Generating the Relax NG Compact file becomes just another output produced from the same description. A metadata walker can then discover auxiliary objects and emit DOM documentation automatically. As a side effect of integrating Boost.mp11, we can extend the tag_invoke context protocol with tuple-based helpers for mrdocs::FromValue, further narrowing the gap between concrete and DOM objects.

Metadata

MrDocs still carries metadata gaps that are too large to ignore. The subsections below highlight the three extraction areas that demand sustained effort; each of them blocks the rest of the system from staying consistent.

Recursive blocks and inlines. Release 0.0.5 introduced the data structures for recursive Javadoc elements, but we still do not parse all of those structures. The fix is straightforward in concept—extend the CommonMark-based parser so every block and inline variant becomes a first-class node—but the implementation is long because there are many element types. We can ship this incrementally by opening issues and sub-issues, tackling one structure per PR, and starting with block elements before moving to inlines. The existing post-process documentation finalizer already contains the mechanics; we just need to wire each rule into the new documentation nodes.

Legible names. The current name generator appends hash fragments to differentiate symbols lazily, which makes references unstable and awkward. We need a stable allocator that remembers which symbols claimed which names. The highest-priority symbol should receive the base name, and suffixes should cascade to less critical overloads so the visible entries stay predictable. Moving the generator into the extraction phase and storing the assignments there ensures anchors remain stable, lets us update artifacts such as the Boost.URL tagfile, and produces names that actually read well.

Populate expressions. Whenever the extractor fails to recognize an expression, it falls back to the raw source string.
That shortcut prevents us from applying the usual transformations, especially inside requires-expressions where implementation-defined symbols appear. We should introduce typed representations for the constructs we already understand and continue to store strings for the expressions we have not modeled yet. As coverage grows, more expressions flow through the structured pipeline, and the remaining string-based nodes shrink to the truly unknown cases.

Extensions and Plugins

Extensions and plugins aim at the same outcome—letting projects customize MrDocs—but they operate at different layers. Extensions run inside the application, usually through interpreters we bundle. We already ship Lua and Duktape, yet today they only power a handful of Handlebars helpers. The plan is to widen that surface: add more interpreters where it makes sense, extend helper support so extensions can participate in escaping and formatting, and give extensions the ability to consume the entire corpus. With that access, an extension can list every symbol, emit metadata in formats we do not yet support, or transform the corpus before it reaches a native generator. The same mechanism enables quality-of-life utilities, such as a generator extension that checks whether a library’s public API changed according to a policy defined in code.

Plugins, by contrast, are compiled artifacts. They unlock similar customization goals, but their ABI must stay stable, and platform differences mean a plugin built on one system will not run on another. To keep the surface manageable we should expose a narrow wrapper: pass plugins a set of DOM proxies so they never depend on the underlying Info classes, use traits or versioned interfaces to handle incompatibilities, and plan the API carefully before release.

Dependency Resilience

Working with dependent libraries is still the most fragile part of the MrDocs workflow.
Environments drift, transitive dependencies change without notice, and heavyweight projects force us to install toolchains we do not actually need. In Boost.URL alone we watch upstream Boost libraries evolve every few weeks; sometimes the code truly breaks, but just as often a new release exercises an untested path in MrDocs and triggers a crash because our coverage is still thin. Other ecosystems push the cost even higher: documenting a library that depends on LLVM can turn a three-second render into an hours-long process because the transitive LLVM headers MrDocs needs are generated at build time, so we must compile and install LLVM merely to obtain include files. CI environments regularly fail for the same reason. We already experimented with mitigation strategies and should refine them rather than abandon the ideas. Shipping a curated standard library with MrDocs removes one entire category of instability. The option will soon be disabled by default, but users can still enable it or even combine it with the system library when reproducibility matters more than access to system libraries. This mirrors how Clang ships libc++; it does not allow invalid code, it simply guarantees a known baseline. On top of that, we have preliminary support for user-defined stubs. Configuration files can provide short descriptions of expected symbols from hard-to-build dependencies, and MrDocs can inject those during extraction. For predictable patterns we can auto-generate stubs when the user opts in, synthesizing symbols rather than failing immediately. None of this accepts invalid code—the compiler still diagnoses real errors—but it shields projects from breakage when a transitive dependency tweaks implementation details or when generated headers are unavailable. The features remain optional, so teams can disable synthesis to debug the underlying issue and still benefit from the faster path when schedules are tight. 
Even if the project moves in another direction we should document the proposal and remove the existing stub hooks deliberately rather than letting them linger undocumented. The payoffs are clear. Boost libraries could generate documentation without cloning the entire super-project, relying on SettingsDB to produce a compilation database and skipping CMake entirely. MrDocs itself could publish reference docs without building LLVM because the required symbols would come from stubs. Releases would stop breaking every time a transitive dependency changes, and developers would regain hours currently spent firefighting. These are the stability and reproducibility gains we need if we want MrDocs to be the default tooling for large C++ ecosystems.

Follow-up Issues for v0.0.6

To keep this post focused on the big-picture transition, I spun the tactical tasks into GitHub issues for the 0.0.6 milestone. They’re queued up and ready for execution whenever the team circles back to implementation.

List of follow-up issues for v0.0.6
#1081 Support custom stylesheets in the HTML generator
#1082 Format-agnostic Handlebars generator extension
#1083 Allow SettingsDB to describe a single source file
#1084 Guard against invalid source links
#1085 Complete tests for all using declaration forms
#1086 Explore a recursive project layout
#1087 Convert ConfigOptions.json into a schema file
#1088 Separate parent context and parent page
#1089 List deduction guides on the record page
#1090 Expand coverage for Friends
#1091 Remove dependency symbols after finalization
#1092 Review Bash Commands Parser
#1093 Review NameInfoVisitor
#1094 Improve overload-set documentation
#1095 CI uses the bootstrap script
#1096 Connect Antora extensions
#1097 Handlebars: optimize render state
#1098 Handlebars: explore template compilation
#1099 Handlebars: investigate incremental rendering

Acknowledgments

Matheus Izvekov and Krystian Stasiowski kept the Clang integration moving.
Their expertise cleared issues that would have stalled us. Gennaro Prota and Fernando Pelliccioni handled the maintenance load that kept the project on schedule. They took on the long tasks and followed them through. Robert Beeston and Julio Estrada delivered the public face of MrDocs. The site we ship today exists because they turned open-ended goals into a complete experience. Vinnie Falco, Louis Tatta, and Sam Darwin formed the backbone of my daily support. Vinnie trusted the direction and backed the plan when decisions were difficult. Louis made sure I had space to return after setbacks. Sam kept the Alliance infrastructure running so the team always had what it needed. Ruben Perez, Klemens Morgenstern, Peter Dimov, and Peter Turcan offered honest feedback whenever we needed another perspective. Their observations sharpened the product and kept collaboration positive. Joaquín M López Muñoz and Arnaud Bachelier guided me through the people side of leadership. Their advice turned complex situations into workable plans. Working alongside everyone listed here has been a privilege. Their contributions made this year possible.

Conclusion

The 2025 releases unified the generators, locked the configuration model, added sanitizers and coverage to CI, and introduced features that make the tool usable outside Boost.URL. The project is ready for new contributors because they can extend the code without rebuilding the basics, and downstream teams can run the CLI on large codebases and expect predictable output. While we delivered those releases, I learned that engineering progress depends on steady communication. Remote discussions often sound negative even when people agree on the goals, so I schedule short check-ins, add light signals like emojis, and keep space for conversations that are not task-driven. I also protect time to listen and ask for help when the workload gets heavy; if I lose that time, every deadline slips anyway.
Final Reflections

Technical conversations start negative by default, so add clear signals when you agree or appreciate the work. Assume terse feedback comes from the medium, not the person, and respond with patience. Keep informal connection habits—buddy calls, breaks, or quick chats—to maintain trust. Look after your own health and use outside support when needed. Never allow the schedule to block real listening time; reset your calendar when that happens.</summary></entry><entry><title type="html">Making the Clang AST Leaner and Faster</title><link href="http://cppalliance.org/mizvekov,/clang/2025/10/20/Making-Clang-AST-Leaner-Faster.html" rel="alternate" type="text/html" title="Making the Clang AST Leaner and Faster" /><published>2025-10-20T00:00:00+00:00</published><updated>2025-10-20T00:00:00+00:00</updated><id>http://cppalliance.org/mizvekov,/clang/2025/10/20/Making-Clang-AST-Leaner-Faster</id><content type="html" xml:base="http://cppalliance.org/mizvekov,/clang/2025/10/20/Making-Clang-AST-Leaner-Faster.html">&lt;p&gt;Modern C++ codebases — from browsers to GPU frameworks — rely heavily on templates, and that often means &lt;em&gt;massive&lt;/em&gt; abstract syntax trees. Even small inefficiencies in Clang’s AST representation can add up to noticeable compile-time overhead.&lt;/p&gt;

&lt;p&gt;This post walks through a set of structural improvements I recently made to Clang’s AST that make type representation smaller, simpler, and faster to create — leading to measurable build-time gains in real-world projects.&lt;/p&gt;

&lt;hr /&gt;

&lt;p&gt;A couple of months ago, I landed &lt;a href=&quot;https://github.com/llvm/llvm-project/pull/147835&quot;&gt;a large patch&lt;/a&gt; in Clang that brought substantial compile-time improvements for heavily templated C++ code.&lt;/p&gt;

&lt;p&gt;For example, in &lt;a href=&quot;https://github.com/NVIDIA/stdexec&quot;&gt;stdexec&lt;/a&gt; — the reference implementation of the &lt;code&gt;std::execution&lt;/code&gt; &lt;a href=&quot;http://wg21.link/p2300&quot;&gt;feature slated for C++26&lt;/a&gt; — the slowest test (&lt;a href=&quot;https://github.com/NVIDIA/stdexec/blob/main/test/stdexec/algos/adaptors/test_on2.cpp&quot;&gt;&lt;code&gt;test_on2.cpp&lt;/code&gt;&lt;/a&gt;) saw a &lt;strong&gt;7% reduction in build time&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The &lt;a href=&quot;https://www.chromium.org/Home/&quot;&gt;Chromium&lt;/a&gt; build also showed a &lt;strong&gt;5% improvement&lt;/strong&gt; (&lt;a href=&quot;https://github.com/llvm/llvm-project/pull/147835#issuecomment-3278893447&quot;&gt;source&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;At a high level, the patch makes the Clang AST &lt;em&gt;leaner&lt;/em&gt;: it reduces the memory footprint of type representations and lowers the cost of creating and uniquing them.&lt;/p&gt;

&lt;p&gt;These improvements will ship with &lt;strong&gt;Clang 22&lt;/strong&gt;, expected in the next few months.&lt;/p&gt;

&lt;hr /&gt;

&lt;h2 id=&quot;how-elaboration-and-qualified-names-used-to-work&quot;&gt;How elaboration and qualified names used to work&lt;/h2&gt;

&lt;p&gt;Consider this simple snippet:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;namespace NS {
  struct A {};
}
using T = struct NS::A;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The type of &lt;code&gt;T&lt;/code&gt; (&lt;code&gt;struct NS::A&lt;/code&gt;) carries two pieces of information:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;It’s &lt;em&gt;elaborated&lt;/em&gt; — the &lt;code&gt;struct&lt;/code&gt; keyword appears.&lt;/li&gt;
  &lt;li&gt;It’s &lt;em&gt;qualified&lt;/em&gt; — &lt;code&gt;NS::&lt;/code&gt; acts as a &lt;a href=&quot;https://eel.is/c++draft/expr.prim.id.qual#:nested-name-specifier&quot;&gt;&lt;em&gt;nested-name-specifier&lt;/em&gt;&lt;/a&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Here’s how the &lt;a href=&quot;https://compiler-explorer.com/z/WEWc4817x&quot;&gt;AST dump&lt;/a&gt; looked before this patch:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;ElaboratedType 'struct NS::A' sugar
`-RecordType 'NS::A'
  `-CXXRecord 'A'
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The &lt;code&gt;RecordType&lt;/code&gt; represents a direct reference to the previously declared &lt;code&gt;struct A&lt;/code&gt; — a kind of &lt;em&gt;canonical&lt;/em&gt; view of the type, stripped of syntactic details like &lt;code&gt;struct&lt;/code&gt; or namespace qualifiers.&lt;/p&gt;

&lt;p&gt;Those syntactic details were stored separately in an &lt;code&gt;ElaboratedType&lt;/code&gt; node that wrapped the &lt;code&gt;RecordType&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Interestingly, an &lt;code&gt;ElaboratedType&lt;/code&gt; node existed even when no elaboration or qualification appeared in the source (&lt;a href=&quot;https://compiler-explorer.com/z/ncW5bzWrc&quot;&gt;example&lt;/a&gt;). This was needed to distinguish between an explicitly unqualified type and one that lost its qualifiers through template substitution.&lt;/p&gt;

&lt;p&gt;However, this design was expensive: every &lt;code&gt;ElaboratedType&lt;/code&gt; node consumed &lt;strong&gt;48 bytes&lt;/strong&gt;, and creating one required extra work to uniquify it — an important step for Clang’s fast type comparisons.&lt;/p&gt;

&lt;hr /&gt;

&lt;h2 id=&quot;a-more-compact-representation&quot;&gt;A more compact representation&lt;/h2&gt;

&lt;p&gt;The new approach removes &lt;code&gt;ElaboratedType&lt;/code&gt; entirely. Instead, elaboration and qualifiers are now stored &lt;strong&gt;directly inside &lt;code&gt;RecordType&lt;/code&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The &lt;a href=&quot;https://compiler-explorer.com/z/asz5q5YGj&quot;&gt;new AST dump&lt;/a&gt; for the same example looks like this:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;RecordType 'struct NS::A' struct
|-NestedNameSpecifier Namespace 'NS'
`-CXXRecord 'A'
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The &lt;code&gt;struct&lt;/code&gt; elaboration now fits into previously unused bits within &lt;code&gt;RecordType&lt;/code&gt;, while the qualifier is &lt;em&gt;tail-allocated&lt;/em&gt; when present — making the node variably sized.&lt;/p&gt;
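&lt;p&gt;A hand-rolled sketch of the tail-allocation idea (Clang implements this pattern with &lt;code&gt;llvm::TrailingObjects&lt;/code&gt;; the type names here are illustrative): the optional qualifier lives in memory immediately after the node itself, so nodes without one allocate nothing extra:&lt;/p&gt;

```cpp
#include <cassert>
#include <cstddef>
#include <new>
#include <string>

// Illustrative only: Clang uses llvm::TrailingObjects for this layout.
struct Qualifier { const char* name; };

// alignas ensures the address just past the node is suitably aligned
// for the tail-allocated Qualifier.
struct alignas(Qualifier) Node {
    bool hasQualifier;

    // Allocate the node and, when present, a Qualifier directly behind it,
    // making the allocation variably sized.
    static Node* create(const Qualifier* qual) {
        std::size_t size = sizeof(Node) + (qual ? sizeof(Qualifier) : 0);
        void* mem = ::operator new(size);
        Node* n = new (mem) Node{qual != nullptr};
        if (qual)
            new (n + 1) Qualifier(*qual); // tail-allocated copy
        return n;
    }

    const Qualifier* getQualifier() const {
        return hasQualifier ? reinterpret_cast<const Qualifier*>(this + 1)
                            : nullptr;
    }

    void destroy() { ::operator delete(this); }
};
```

The price of this layout is that the node's size is no longer fixed at compile time, which is why creation goes through a factory function rather than plain `new`.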

&lt;p&gt;This change both shrinks the memory footprint and eliminates one level of indirection when traversing the AST.&lt;/p&gt;

&lt;hr /&gt;

&lt;h2 id=&quot;representing-nestednamespecifier&quot;&gt;Representing &lt;code&gt;NestedNameSpecifier&lt;/code&gt;&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;NestedNameSpecifier&lt;/code&gt; is Clang’s internal representation for name qualifiers.&lt;/p&gt;

&lt;p&gt;Before this patch, it was represented by a pointer (&lt;code&gt;NestedNameSpecifier*&lt;/code&gt;) to a uniqued structure that could describe:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;The global namespace (&lt;code&gt;::&lt;/code&gt;)&lt;/li&gt;
  &lt;li&gt;A named namespace (including aliases)&lt;/li&gt;
  &lt;li&gt;A type&lt;/li&gt;
  &lt;li&gt;An identifier naming an unknown entity&lt;/li&gt;
  &lt;li&gt;A &lt;code&gt;__super&lt;/code&gt; reference (Microsoft extension)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For all but cases (1) and (5), each &lt;code&gt;NestedNameSpecifier&lt;/code&gt; also held a &lt;em&gt;prefix&lt;/em&gt; — the qualifier to its left.&lt;/p&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;pre&gt;&lt;code class=&quot;language-cpp&quot;&gt;Namespace::Class::NestedClassTemplate&amp;lt;T&amp;gt;::XX
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This would be stored as a linked list:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;[id: XX] -&amp;gt; [type: NestedClassTemplate&amp;lt;T&amp;gt;] -&amp;gt; [type: Class] -&amp;gt; [namespace: Namespace]
&lt;/code&gt;&lt;/pre&gt;
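&lt;p&gt;The prefix-linked structure above can be sketched as follows (field and type names are illustrative, not Clang’s API): each node points at the qualifier to its left, and recovering the source spelling means walking the chain right to left:&lt;/p&gt;

```cpp
#include <cassert>
#include <string>
#include <vector>

// Illustrative sketch of the pre-patch representation: a singly linked
// list of qualifier nodes, each holding a pointer to its left-hand prefix.
struct OldNNS {
    std::string name;
    const OldNNS* prefix; // qualifier to the left, or nullptr at the start
};

// Recover the left-to-right source spelling by walking the prefix chain.
std::vector<std::string> spell(const OldNNS* nns) {
    std::vector<std::string> parts;
    for (; nns; nns = nns->prefix)
        parts.insert(parts.begin(), nns->name);
    return parts;
}
```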

&lt;p&gt;Internally, that meant &lt;strong&gt;seven allocations&lt;/strong&gt; totaling around &lt;strong&gt;160 bytes&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;&lt;code&gt;NestedNameSpecifier&lt;/code&gt; (identifier) – 16 bytes&lt;/li&gt;
  &lt;li&gt;&lt;code&gt;NestedNameSpecifier&lt;/code&gt; (type) – 16 bytes&lt;/li&gt;
  &lt;li&gt;&lt;code&gt;TemplateSpecializationType&lt;/code&gt; – 48 bytes&lt;/li&gt;
  &lt;li&gt;&lt;code&gt;QualifiedTemplateName&lt;/code&gt; – 16 bytes&lt;/li&gt;
  &lt;li&gt;&lt;code&gt;NestedNameSpecifier&lt;/code&gt; (type) – 16 bytes&lt;/li&gt;
  &lt;li&gt;&lt;code&gt;RecordType&lt;/code&gt; – 32 bytes&lt;/li&gt;
  &lt;li&gt;&lt;code&gt;NestedNameSpecifier&lt;/code&gt; (namespace) – 16 bytes&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The real problem wasn’t just size — it was the &lt;em&gt;uniquing cost&lt;/em&gt;: every prospective node had to be looked up in a hash table to check for a pre-existing instance.&lt;/p&gt;

&lt;p&gt;To make matters worse, &lt;code&gt;ElaboratedType&lt;/code&gt; nodes sometimes leaked into these chains, which wasn’t supposed to happen and led to &lt;a href=&quot;https://github.com/llvm/llvm-project/issues/43179&quot;&gt;several&lt;/a&gt; &lt;a href=&quot;https://github.com/llvm/llvm-project/issues/68670&quot;&gt;long-standing&lt;/a&gt; &lt;a href=&quot;https://github.com/llvm/llvm-project/issues/92757&quot;&gt;bugs&lt;/a&gt;.&lt;/p&gt;

&lt;hr /&gt;

&lt;h2 id=&quot;a-new-smarter-nestednamespecifier&quot;&gt;A new, smarter &lt;code&gt;NestedNameSpecifier&lt;/code&gt;&lt;/h2&gt;

&lt;p&gt;After this patch, &lt;code&gt;NestedNameSpecifier&lt;/code&gt; becomes a &lt;strong&gt;compact, tagged pointer&lt;/strong&gt; — just one machine word wide.&lt;/p&gt;

&lt;p&gt;The pointer uses 8-byte alignment, leaving three spare bits. Two bits are used for kind discrimination, and one remains available for arbitrary use.&lt;/p&gt;

&lt;p&gt;When non-null, the tag bits encode:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;A type&lt;/li&gt;
  &lt;li&gt;A declaration (either a &lt;code&gt;__super&lt;/code&gt; class or a namespace)&lt;/li&gt;
  &lt;li&gt;A namespace prefixed by the global scope (&lt;code&gt;::Namespace&lt;/code&gt;)&lt;/li&gt;
  &lt;li&gt;A special object combining a namespace with its prefix&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;When null, the tag bits instead encode:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;An empty nested name (the terminator)&lt;/li&gt;
  &lt;li&gt;The global scope (&lt;code&gt;::&lt;/code&gt;)&lt;/li&gt;
  &lt;li&gt;An invalid/tombstone entry (for hash tables)&lt;/li&gt;
&lt;/ol&gt;
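&lt;p&gt;The tagging scheme can be sketched as a minimal tagged pointer (the kind names below are illustrative, not the real discriminated set): with 8-byte-aligned pointees, the low three address bits are always zero, so two of them can carry the kind while the whole thing stays one machine word wide:&lt;/p&gt;

```cpp
#include <cassert>
#include <cstdint>

// Illustrative kinds only; the post describes the real encoding above.
enum class Kind : std::uintptr_t { Type = 0, Decl = 1, GlobalNamespace = 2, Composite = 3 };

class TaggedPtr {
    std::uintptr_t bits;
    static constexpr std::uintptr_t TagMask = 0x7;  // three spare low bits
    static constexpr std::uintptr_t KindMask = 0x3; // two used for the kind
public:
    TaggedPtr(const void* ptr, Kind k)
        : bits(reinterpret_cast<std::uintptr_t>(ptr) |
               static_cast<std::uintptr_t>(k)) {
        // Only valid because the pointee is at least 8-byte aligned.
        assert((reinterpret_cast<std::uintptr_t>(ptr) & TagMask) == 0);
    }
    Kind kind() const { return static_cast<Kind>(bits & KindMask); }
    const void* pointer() const {
        return reinterpret_cast<const void*>(bits & ~TagMask);
    }
};
```

Because both the kind and the pointer fit in a single word, copying and comparing these specifiers is as cheap as copying a plain pointer.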

&lt;p&gt;Other changes include:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;The “unknown identifier” case is now represented by a &lt;code&gt;DependentNameType&lt;/code&gt;.&lt;/li&gt;
  &lt;li&gt;Type prefixes are handled directly in the type hierarchy.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Revisiting the earlier example, after the patch its AST dump becomes:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;DependentNameType 'Namespace::Class::NestedClassTemplate&amp;lt;T&amp;gt;::XX' dependent
`-NestedNameSpecifier TemplateSpecializationType 'Namespace::Class::NestedClassTemplate&amp;lt;T&amp;gt;' dependent
  `-name: 'Namespace::Class::NestedClassTemplate' qualified
    |-NestedNameSpecifier RecordType 'Namespace::Class'
    | |-NestedNameSpecifier Namespace 'Namespace'
    | `-CXXRecord 'Class'
    `-ClassTemplate NestedClassTemplate
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This representation now requires only &lt;strong&gt;four allocations (152 bytes total):&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;&lt;code&gt;DependentNameType&lt;/code&gt; – 48 bytes&lt;/li&gt;
  &lt;li&gt;&lt;code&gt;TemplateSpecializationType&lt;/code&gt; – 48 bytes&lt;/li&gt;
  &lt;li&gt;&lt;code&gt;QualifiedTemplateName&lt;/code&gt; – 16 bytes&lt;/li&gt;
  &lt;li&gt;&lt;code&gt;RecordType&lt;/code&gt; – 40 bytes&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That’s almost half the number of nodes.&lt;/p&gt;

&lt;p&gt;While &lt;code&gt;DependentNameType&lt;/code&gt; is larger than the previous 16-byte “identifier” node, the additional space isn’t wasted — it holds cached answers to common queries such as “does this type reference a template parameter?” or “what is its canonical form?”.&lt;/p&gt;

&lt;p&gt;These caches make those operations significantly cheaper, further improving performance.&lt;/p&gt;
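&lt;p&gt;As a minimal illustration of that caching idea (not Clang’s actual layout; the names are invented for the sketch), a node can compute such answers once at construction and serve later queries from stored fields:&lt;/p&gt;

```cpp
#include <cassert>

// Illustrative sketch: answers to common queries are computed when the node
// is built and cached in it, so each later call is a field read rather than
// a recursive walk over the type structure.
struct TypeSketch {
    bool dependentBit;            // cached: references a template parameter?
    const TypeSketch* canonical;  // cached canonical form (self if canonical)

    TypeSketch(bool dependent, const TypeSketch* canon)
        : dependentBit(dependent), canonical(canon ? canon : this) {}

    bool isDependent() const { return dependentBit; }
    const TypeSketch* getCanonical() const { return canonical; }
};
```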

&lt;hr /&gt;

&lt;h2 id=&quot;wrapping-up&quot;&gt;Wrapping up&lt;/h2&gt;

&lt;p&gt;There’s more in the patch than what I’ve covered here, including:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;code&gt;RecordType&lt;/code&gt; now points directly to the declaration found at creation, enriching the AST without measurable overhead.&lt;/li&gt;
  &lt;li&gt;&lt;code&gt;RecordType&lt;/code&gt; nodes are now created lazily.&lt;/li&gt;
  &lt;li&gt;The redesigned &lt;code&gt;NestedNameSpecifier&lt;/code&gt; simplified several template instantiation transforms.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each of these could warrant its own write-up, but even this high-level overview shows how careful structural changes in the AST can lead to tangible compile-time wins.&lt;/p&gt;

&lt;p&gt;I hope you found this deep dive into Clang’s internals interesting — and that it gives a glimpse of the kind of small, structural optimizations that add up to real performance improvements in large C++ builds.&lt;/p&gt;</content><author><name></name></author><category term="mizvekov," /><category term="clang" /><summary type="html">Modern C++ codebases — from browsers to GPU frameworks — rely heavily on templates, and that often means massive abstract syntax trees. Even small inefficiencies in Clang’s AST representation can add up to noticeable compile-time overhead. This post walks through a set of structural improvements I recently made to Clang’s AST that make type representation smaller, simpler, and faster to create — leading to measurable build-time gains in real-world projects.</summary></entry></feed>