Ossification and Hawaii: Impressions of a C++ working group

I've recently interacted informally with the mailing list of the ISO C++ Working Group. I've tried to float the following ideas.

Aggregated exceptions. I think I came up with a neat and complete proposal, but it's too ambitious for an existing language, given the changes it proposes and its relatively niche concern. We've migrated to VS 2015, so I've begrudgingly accepted noexcept destructors. And since C++11, lambdas provide a practical way to solve problems where one might previously want to use complex destructors.

So I guess we can live without multi-exceptions. Okay.

I then tried to float an is_relocatable property. A shortcoming of C++ right now is that it allows an object's contents to be moved, but it doesn't allow moving the object itself. Even though anywhere from 50% to 100% of the objects we store in containers could be moved with memcpy, doing so is formally undefined behavior. This is a problem for container resizing, which requires inefficient deep copying when noexcept move construction or destruction isn't available — even though the objects could be moved with a trivial memcpy. Various libraries work around this by implementing their own "is relocatable" concept: Qt has Q_MOVABLE_TYPE, EASTL has has_trivial_relocate, BSL has IsBitwiseMovable, Folly has IsRelocatable. I also saw this need, and rolled my own version of this concept in EST (not yet published), and in a previous version of Atomic/Integral (to be published — hopefully, soon).

The need for a standardized concept is apparent. What I would most like to see is fairly simple:
  • In type_traits, a standard is_relocatable property.
  • A way to declare a type relocatable without exceedingly ugly syntax. My favorite:

    class A relocatable { ...
  • To avoid unnecessary declarations leading to developer mistakes, a way for the compiler to infer that an obviously relocatable type is in fact relocatable. For instance, if type Str is relocatable, then the following should be also:

    struct A { Str x; };

    It is possible to infer this in a safe way by limiting this inference to types where (1) the compiler declares an implicit move constructor, and (2) all direct bases and non-static members are relocatable.
Do you think I was successful?

There were people who staunchly opposed even adding a property — even though this one is needed, and current type_traits is littered with them (and those are also useful).

In fact — there had been an attempt to propose this as is_trivially_destructive_movable. This was shot down by the C++ committee because it would require conceptualizing the idea that object lifetime is "ended" at one address, and "begun" again at another address. This is too much of a conceptual problem. (Even though object lifetime doesn't end — it just continues...)

Not to mention the wall of opposition to any kind of compiler inference of a relocatable property — notwithstanding that this would be purely an improvement; it wouldn't break anything; and it would allow the optimization to kick in with everyday use.

Exasperated with this failure to find support for what seemed like a modest and innocuous improvement, I tried the least contentious possible idea. Can we just have a version of realloc — we could call this version try_realloc — that tries to expand the memory in place, and fails if it's unable? In the absence of a relocatable property, containers could at least use try_realloc to try to expand existing memory in place, before committing to a potentially deep copy.

Everyone agrees this is a good idea, and it turns out something similar had been proposed.

But it didn't go anywhere. Why not?

Well, the person who wrote that proposal couldn't afford to take a week off to travel to a C++ working group meeting to champion and defend the proposal. And so, it died that way.

Open standards — if you attend our meeting in Hawaii

Nominally, the C++ standardization process is open. In practice, it's gated by who can justify sending representatives to week-long meetings in far-off places. These meetings take place two or three times a year, and the next one takes place in Kona, Hawaii.

It goes without saying that, if you're not present at at least one of the full WG meetings, your chances of making an impact on the C++ language are slim, to say the least. As the preceding anecdote shows — if you don't attend, forget about even something as trivial as a realloc improvement.

This wastes resources on travel; excludes people who could contribute; and silences worthwhile ideas that don't happen to have a champion with disposable time and money.

About a decade ago, I participated in SSH standardization. Some members of that group did travel, but travel had no bearing on a person's ability to affect the direction of the standard, or to have their voice heard. The Internet Engineering Task Force, which supervised standardization of SSH, does organize regular meetings; but attending them is in no way required to publish RFCs, or to contribute to them.

Holding face-to-face meetings is an inefficient and exclusionary process that became unnecessary around the year 2000. Yet it persists. I wonder if this is because most of the people who would vote to change it enjoy being sponsored by their companies to travel. After all, it must be necessary — everyone does it...

When I voiced this concern, members of the group were of course up in arms. It really is to get work done!

But the next WG21 meeting is being held this October at the absurd location of Kona, Hawaii. This is 5,000 miles from New York, 7,400 miles from Berlin, and 2,400 miles from San Francisco.

It would be too rushed to arrange this as a fly-in and fly-out 2-3 day event. If that were the case, it might as well be held in Bentonville, AR, in a nondescript Walmart office building. To allow work to get done, it has to be a leisurely occasion of 5 nights and 6 days. This allows for convenient weekend arrival or departure, which I'm sure no one will use to treat themselves to a volcano trip — or a leisurely day at the beach.

The average fuel economy of long-distance air travel is upwards of 3 L of jet fuel per 100 miles per passenger seat. With 90-100 attendees each traveling an average of 5,000 miles each way, this is going to involve burning some 27,000 liters of jet fuel, and releasing 68 metric tons of CO2 into the atmosphere.

All of this must happen 2-3 times per year, because otherwise it's impossible to advance the language.

Some members of the group said they've tried to attend remotely, but that it just doesn't work as well. Well, of course it doesn't work, when the event is being set up so that you're a second class participant.

With meetings held in places like Hawaii, attendees are spending at least $2,000 per person per event. The annual cost is $400,000-$600,000, just in what participants themselves pay. You could get absolutely amazing teleconferencing if your budget for it was $500,000 per year. And that's just one ISO working group. How many other active groups are there? What tech could you get for a $2 or $10 million annual budget?

But of course — that would make sense if you wanted open standards, where anyone could contribute.

As opposed to a week in Hawaii, each year...


Love and function

Someone asked the following conundrum, in the context of whether it's "shallow" for a person to refuse another as a partner based solely on sexual incompatibility:

"If your love is predicated on sex then is it really love or is it just two people using each other?"

Loving someone, and being useful to them, are not opposites. The two work together. To love someone is to offer yourself to be useful to them. It is to serve them gladly, with the expectation that this will be appreciated and returned. To accept being loved is to welcome this offer; to return it, and appreciate it.

Love is a willingness to serve: without coercion, and without feeling coerced.

"Love" and "relationship", though, are different things. Every one of us can love everyone, hypothetically. However, we can't have functional relationships with people who can't meet our needs.

Relationships are love + function. If you take away the function, the love remains. However, without function, love alone is not enough for a relationship.

This is why relationships based on compatibility can work. We can love everyone — if there's no reason against it. So when two people are complementary, there's no reason for love not to arise. But the reverse is not true: two people who feel deep and passionate love for each other can simply not be compatible.

"Good, traditional" traits

I find that "good" and "traditional" don't exactly go hand in hand.

If it makes sense, it's not called tradition. It's called common sense. If it's called tradition, it means that at some level, it doesn't make sense. It's being practiced despite it.

It does not make a person good if they follow imperatives that violate sense. It makes them compliant.

Being compliant makes sense, to an extent. However, being overly compliant makes you a tool. At best, you're a tool for nonsense. At worst, you're a tool for perpetuation of suffering and hardship.

Attractiveness is not shallow

There are large groups of people online — mostly men — who consider themselves unattractive, adopt this as their identity, and use it as an embittered perch from which to carp about life.

If you're an unattractive man — or woman — stop the lifestyle that makes you feel and look that way.

Most people can look great if they invest the effort. You aren't going to get taller, and you aren't going to grow a bigger penis. But you can fix almost anything else: lose fat, gain muscle, develop a sense of style, and develop the self-confidence that comes from results you have achieved.

None of this is beyond anyone's reach save a handful of really unfortunate people. Chances are that you're not one of those. Chances are that, if you think of yourself as unattractive, it's a result of a lifetime of ugly thoughts leading to disrespect and neglect of yourself and your body.

Now tell me: who wants a person who chooses a lifetime of these ugly thoughts? Who has the option to invest the effort and improve his self-confidence and looks, but avoids doing so in favor of whining about people, insulting them for their choices, and continuing to neglect his body?

Attractiveness is not shallow. You're not being judged for something outside your control. It's not in your genes, and you're not "big boned". No one's depriving you of self-confidence. No one but yourself dressed you poorly.

Attractiveness is 99% a consequence of mental habits, attitudes, and lifestyle. Most people don't have to be unattractive if they adopt a healthy inner life. So if you are, and you don't have to be, that speaks volumes.


VS 2015 projects: "One or more errors occurred"

For the most part, I find that Visual Studio 2015 is awesome. However, it did ship with kinks that need to be worked out. Not least, it has crashed on me from time to time, while working on a large solution.

I recently experienced a problem where I couldn't open Visual C++ projects (.vcxproj) after I copied them to another location. When trying to open the projects, VS 2015 would give me nothing more than this "extremely useful" error: "One or more errors occurred."

That is some talent, right there. That must have taken a lot of work to get this right.

After trying various things, and having researched this without success for a few hours, I finally got the idea to try opening the projects in Visual Studio 2013, instead.

Behold: VS 2013 actually displayed a useful error, which informed me that my projects had Import dependencies that I forgot to move.

So, yeah. If VS 2015 helpfully tells you that "one or more errors occurred" — try VS 2013.


Algorithm for selective archival or aggregation of records

Given the absurdity of the US patent system, it comes to mind that I should publish all technical ideas, no matter how inane, when I think of them — simply to provide a prior art reference, in case anyone claims that idea in a future patent.

The ideas I publish are arrived at independently, and based on publicly available information. This does not mean that existing and currently "valid" patents don't exist that cover parts of my ideas. It does, however, mean that such patents are obvious, and that the patent system is legalized enabling of extortion.

Without further ado, here's an idea for how to archive old log files to:
  • maintain required storage under a manageable threshold;
  • store the latest data, as well as older data in the event of undetected corruption;
  • avoid predictability to an attacker who might rely on a particular pattern of deletion.
We assume that log files are being archived in regular periods. This period may be an hour, a day, a week, or whatever works for an application.

We will store the log files in ranks, such that rank 0 are the most recent log files, and they get bumped to a higher rank as each rank fills. We allow for an unlimited number of ranks.

We define a maximum number of log files per rank. We call this M, and this number shall be 3 or more. It could be, for example, 4.

We determine the proportion of log files that we will keep each time as we move log files up the ranks. We call this number K, and it is a fraction between 0 and 1. A natural choice of K would be 1/2 or 1/3.

Each rank will start empty, and will with time become populated with log files.

When generated, every log file is first added to rank 0. Then, for each rank, starting with rank 0:
  • We test if the number of log files in that rank has reached M.
  • If it has:
    • We determine a subset S of log files in this rank that contains K*M of the oldest log files. If K is a fraction 1/N, e.g. 1/2, it would be natural to round the number of log files in the subset down to a multiple of N. For example: if K=0.5, S might contain 2 log files.
    • To avoid predictability of which log files are kept, we randomly discard a K proportion of log files in subset S. For example, if K = 1/2, we randomly discard one of the two log files. If predictability is desired, we discard instead based on a predetermined rule. For example, we could discard every other log file.
    • We graduate the log files in S that have not been discarded to the next rank. By this, I mean that we remove them from the current rank, and add them to the next.
We repeat this process with each rank, until we reach one that is empty.

This process keeps around the most recent log files; keeps around the older ones, going all the way back to origin; randomizes which log files are discarded; and uses O(log(T)) storage.

This process works not only for log files, but for any other kind of historical data storage. Instead of discarding information when graduating records from each rank, information can also be aggregated, while reducing the number of records. For example, when selecting the subset of records S that will be graduated to the next rank, instead of deleting a K proportion of records, all the records in S could be aggregated into a single record, which is then propagated to the next rank.

This would allow storage of information such as statistics and totals that is more granular for recent information, and more sparse for older information - yet does not lose information about overall totals.

A natural extension of the above principles is to define ranks to match natural periods of time. For example, rank 0 could be days, rank 1 could be weeks, rank 2 could be months, and rank 3 could be years. The maximum number of records per rank, M, and the proportion of records kept between ranks, K, would then be rank-dependent.

Exceptions in destructors and Visual Studio 2015

If you're migrating code to Visual Studio 2015, you may have run into the following warning:
warning C4297: 'A::~A': function assumed not to throw an exception but does
note: destructor or deallocator has a (possibly implicit)
    non-throwing exception specification
You may not have seen this warning with GCC or Clang, so you may think VS 2015 is just bothering you. Wrong! GCC should be warning you even more so than Visual Studio (I'll explain why), but it does not.

You may also think that throwing an exception from a destructor in C++ is inherently undefined behavior. Wrong! Throwing an exception from a destructor in C++ is extremely well defined.

In C++03, throwing an exception from a destructor works as follows:
  • If there is no exception in flight, the destructor is being called through forward progress. In this case, an exception in the destructor begins unwinding — backward progress — just as if the exception had been thrown anywhere else. The object whose destructor threw is still destroyed in an orderly manner: all the subobject destructors are still called, and operator delete is still called if the object's destruction was triggered by the delete keyword.
  • If there is an exception in flight, a destructor can still throw. However, an exception thrown from a destructor cannot meet with the exception in flight. It can still pass out of a destructor and be caught by another destructor if the throwing destructor was called recursively. However, C++ does not support exception aggregation. If the two exceptions meet, such that they would have to be joined to unwind together, the program is instead terminated abnormally.
In C++11 and later:
  • Everything works exactly as above, except that destructors now have an implicit noexcept specification, which is deduced to match that of the destructor the compiler would generate. This means that a user-defined destructor is noexcept(true) by default, unless it is explicitly declared noexcept(false), or unless a base class or a non-static member declares its destructor explicitly as noexcept(false).
  • If an exception leaves a noexcept(true) destructor, the C++ standard now requires std::terminate to be called. GCC does this; Clang does this; Visual Studio 2015 does this unless you enable optimization — which of course you will for production code. If you enable optimization, then against the spec, Visual Studio 2015 appears to ignore noexcept, and allows the exception to pass through.
Even though, like other compilers, GCC will call std::terminate if an exception leaves a noexcept destructor; and even though GCC does so more consistently than VS 2015 (the behavior doesn't go away with -O2); GCC produces absolutely no warning about this, even with -Wall.

In this case, therefore, we have Visual Studio 2015 producing a useful warning which exposes code incorrectness, which GCC does not produce.

Why the change in C++11?

Mostly, move semantics and containers. Exceptions from destructors of stack-allocated objects are usually not problematic, assuming the destructor checks std::uncaught_exception to see if it can throw. However, because C++ supports neither exception aggregation nor a relocatable object property, a throwing move constructor or destructor makes it next to impossible to provide a strong exception safety guarantee when, e.g., resizing a vector.

It is possible that relocatable may be supported in the future, allowing objects to be moved via trivial memcpy instead of move construction + destruction. This would make it possible to safely resize a vector containing objects whose destructors may throw. But that leaves the question of what to do when multiple destructors throw when destroying or erasing the vector. That would require exception aggregation, which in turn would be ineffective without making developers aware; and at this time, that seems not to be feasible.

It seems likely we may get relocatable some time, but probably not multi-exceptions any time soon. Planning for the next 10 years, it's best to design your code to have noexcept destructors.

What to do?

If you have code that currently throws from destructors, plausible things to do are:
  1. Band-aid to restore C++03 behavior: declare destructors noexcept(false). Not only those that trigger the warning, but also those that may call throwing code. This addresses the VS 2015 warning, and fixes behavior with compilers that should issue a warning, but do not. This is safe to do if destructors are checking std::uncaught_exception before throwing.
  2. Destructor redesign: you can comply with the spirit of C++11, and change destructors to not throw. Any errors encountered by destructors should then be logged using some logging facility, perhaps a C-style global handler. The destructor must either call only noexcept code, or must catch exceptions from throwing code.
Long-term, option 2 is more consistent with current C++ direction. In this case, the following macro may come in handy, to ensure that any code called from destructors is noexcept:
#define NoExcept(EXPR) \
    ([&]() { static_assert(noexcept(EXPR), "Expression can throw"); }, (EXPR))
This is unfortunately necessary because otherwise, you have to resort to extensive code duplication. When used as an operator, noexcept returns a boolean, so you have to test it like this:
static_assert(noexcept(INSERT_VERY_LONG_EXPRESSION), "Can throw");


The main asset of the US is its value system

And what the US is doing, with respect to spying and whistleblowing, compromises it.

I was following a thread discussing the benefits of having the NSA and CIA, when they didn't even stop the Chattanooga shooter — who shot four marines, but not before writing about Islam on a public blog. The discussion evolved toward the idea that maybe the benefits are economic: that maybe these agencies don't stop most bad things from happening, but the US reaps benefits from stealing secrets from other countries.

I find it disturbing to talk about stealing as something acceptable just because it's done between countries instead of between people.

How would stealing from other countries benefit the people of the US? To make use of what you've stolen, you have to give it to some company. Does that really help the average American, or just tycoons with ties to spooks?

To the extent that the US is doing things right, it has been, and should continue to be, the country to be stolen from; not a country that needs to resort to stealing. If you have to steal from other countries, it means you're behind.

There's no reason to steal from various countries around the world if you can just be a nice country into which smart people all want to immigrate, and live there. You don't get to be that with an intrusive, spying government.

A principal contributor to the US being great has been attracting smart people with integrity and constructive values. Einstein, Schwarzenegger, Tesla — born in Europe. Elon Musk — born in South Africa. Sergey Brin — born in Moscow.

You may not like some of them, but these are examples of millions of capable individuals, who would not have come to the US if the US was just like Russia. These people, and their parents, were attracted to the US because of its value system; because it's not like oppressive regimes abroad. And they go on to build the country.

Don't fall victim to the Dunning-Kruger effect, and think that without them, you would do just as well.

The more you adopt a value system of a mediocre country, the more this engine will stop, and the more you'll become mediocre yourselves. For countries as much as individuals, stealing costs; it's short-term gain for long-term loss.

See my previous post on this topic: Whistleblowing policy proposal


Whistleblowing policy proposal

We face the problem of what to do with people like Assange, Manning, and Snowden, to encourage justified whistleblowing, and yet for secrecy to still be available to projects that really need it. (And let's be mindful here that many projects that believe they need secrecy, really do not.)

I propose that any act of whistleblowing done demonstrably out of idealism, and in good faith, should be protected and given immunity, one time in a person's life. People who have used this card should lose their career, and no longer be employable in this or any other career that requires clearance. But they should be able to blow the whistle that one time.

Everyone currently employed in careers that require clearance would have to have this card still available to them — to avoid someone stacking a team full of people who've already been made to spend their card on purpose.

Everyone in powerful positions would then expect that everyone around them has this one-time get-out-of-jail-free card to report truly problematic issues, at cost of irrevocable career loss.

All secret projects would then be highly concerned with making sure that what they're doing is not against the inner conscience of anyone involved.

To the extent that we can also make sure that careers requiring confidentiality do not, and cannot, employ psychopaths, we would have people with a conscience at all levels of government, making sure that even when something is done in secret, it is being done in line with the wider society's values.

And, perhaps quite rightfully — this would lead to many fewer secret projects.

SFTP and OpenSSH's "do as we please"

It bugs me that OpenSSH pull the same crap for which Microsoft was vilified years ago.

They sabotaged the SFTP standardization process. They are "embracing and extending" as they please, and leveraging their market share in unilateral decisions that ignore the needs of other implementations.

Their quality is also not as awesome as their contributors sometimes seem to believe. They seem to be doing better recently — but historically, they have a nice long list of security issues. This is not to mention bugs that are just bugs, and require workarounds. Just the latest of these is an SFTP data stream corruption if there are commands producing output in .bashrc; because you know, it makes sense for OpenSSH to launch their SFTP subsystem in a way that runs .bashrc; and it makes sense to send garbage output from .bashrc like it was part of the SFTP session to the client. ;)

So that's one workaround we're adding in our next Bitvise SSH Client version.

But this is not to say that OpenSSH are the worst. That would certainly be unfair. There are other implementations (recently, coughciscocough; years ago, coughipswitchcough) which have given us much more trouble. OpenSSH at least does care about fixing stuff that's unambiguously broken, which is more than I can say for some vendors.

So my beef with OpenSSH isn't the quality. It's how they pull the same crap Internet Explorer did; but no one blinks an eye. They avoid responsibility and custodianship that would be expected of a private project with their market share, because they're held to a different standard.

A decade ago, they made a public announcement that they're not going to support SFTP versions higher than version 3: for the simple reason that their interest is BSD and Linux, and they just don't care to implement extensions needed by other platforms. Since they're the major implementation, this stopped SFTP standardization in its tracks. We therefore now have SFTP version 3, implemented by OpenSSH; and then we have SFTP version 6, implemented by most reasonable people.

Because of this, we also remain without an SFTP RFC. Instead, the de facto standards are past drafts.
But it wasn't enough for OpenSSH to refuse to support SFTP versions that address the needs of other platforms. Instead, when they need functionality specified by SFTPv6 — they implement their own extensions to SFTPv3.

Just the latest instance, which I found today, is OpenSSH's decision not to implement the space-available extension from SFTPv6; and instead to implement their own statvfs@openssh.com.

Fine. That's okay. We'll implement statvfs@openssh.com to accommodate a client that expects it. But geez, if only OpenSSH didn't act quite like Microsoft, fifteen years ago.

"We are a world unto ourselves, self-sufficient. We do not care for the needs of others."


Aggregated exceptions: Proposal summary

Based on my previous post about errors in destructors and aggregated exceptions, I first made a Reddit thread, and then a proposal in the ISO C++ Future Proposals group. Based on feedback I've received, the proposal has evolved, and would benefit from a page where its current state is published.

I summarize the proposal's state. I will update this if I get further feedback prompting change.


There is a problem which I believe is limiting C++ in its potential for powerful expression. The problem is the single-exception policy, and its direct consequence: the marginalization of destructors, exemplified in recent years by how they're now marked by default noexcept.

I believe this problem is currently viewed incorrectly by many, and I wish to propose a solution. The solution is aggregated exceptions. I contend these are conceptually simple; resolve the Gordian Knot of destructor exceptions; are backward compatible, and straightforward to implement. :-)

There is a widespread belief, held passionately by many, which I believe is conceptually in error. This is that destructors are supposed to be nothing more than little cleanup fairies. That they should only:
  • quietly release resources, and
  • kindly shut up about any errors.
I find this a limiting and restrictive view, which does not permit full expression of destructors as a way to:
  1. schedule code for execution;
  2. determine the order of execution; but
  3. not dictate the exact trigger for execution to take place.
I propose that the limiting view of destructors is not inherently obvious, but is an accidental ideology. It arises not because we freely choose it, but because of a flaw that has plagued C++ since the introduction of exceptions. This flaw is the single-exception policy. This has prevented answers to questions such as:
  • What to do when a destructor throws, and an exception is already in flight?
  • What to do if we're destroying (part of) a container, and destructors throw for 2 or more of the contained objects?
I propose that we should not have to cope with not having answers for these questions in this day and age; and that support for unlimited aggregated exceptions answers them straightforwardly.

The support I propose:
  • Is conceptually simple.
  • Legitimizes exceptions in destructors.
  • Provides means for containers to handle, aggregate, and relay such exceptions.
  • Imposes no costs on applications that do not use this.
  • Provides a way for destructors to report errors. This is something for which there is currently no solid language support, outside of std::terminate.
  • Emancipates destructors as a way to schedule code for execution. This is to say any code; even code that may throw. This is a frequent usage pattern e.g. in database libraries, whose destructors must rollback; and rollback may involve exceptions.
The use of destructors for general execution scheduling, rather than only cleanup, is recognized as something the language reluctantly needs to support. C++ has always supported throwing from destructors. Even in the latest C++ versions, you can do so by declaring them noexcept(false). However, you had better not throw if an exception is already in flight; and you had better not store such objects in containers. My proposal addresses this in a way that the noexcept approach does not.


Core changes:
  1. In a running program, the internal representation of an exception in flight is changed from a single exception to a list of exceptions. Let's call this the exception-list.
  2. std::exception_ptr now points to the beginning of the exception-list, rather than a single exception. Methods are added to std::exception_ptr allowing a catch handler, or a container in the process of aggregating exceptions, to walk and manage the exception-list.
  3. When the stack is being unwound due to an exception in flight; and a destructor exits with another exception; instead of calling std::terminate, the new exception is simply added to the end of the exception-list. Execution continues as it would if the destructor exited normally.

Catch handlers

Traditional catch handlers:
  • To maintain the meaning of existing programs as much as possible, a traditional catch handler cannot receive an exception-list that contains more than one exception. If an aggregated exception meets a traditional catch handler, then to preserve current behavior, std::terminate must be called. This means we need a new catch handler to handle multi-exceptions.
  • Notwithstanding the above, catch (...) must still work. This is often used in finalizer-type patterns that catch and rethrow, and do not care what they're rethrowing. This type of catch handler should therefore be able to catch and rethrow exception-lists with multiple exceptions. It also provides a method to catch and handle an exception-list as a whole. This can be done via std::current_exception, and new methods added to std::exception_ptr.
We introduce the following new catch handler type:
catch* (<exception-type>) { ... }
We call this a "catch-any" handler. It has the following characteristics:
  • It matches every exception of a matching type in an exception-list. This means it can be called repeatedly, multiple times per scope, if there are multiple matches. We cannot call traditional handlers multiple times, because traditional handlers are not necessarily multi-exception aware, and do not expect to be invoked multiple times in a row.
  • All catch-any handlers must appear before any traditional catch handlers in the same scope. This is because the catch-any handlers filter the list of exceptions, and can be executed multiple times and in any order, whereas the traditional catch handler will be the ultimate handler if it matches. Also, the traditional handler will call std::terminate if it encounters an exception-list with more than one exception remaining.
  • If there are multiple catch-any handlers in the same scope, they will be called potentially repeatedly, and in an order that depends on the order of exceptions in the exception-list.
  • If a catch-any handler throws or re-throws, the new exception is placed back into the list of exceptions currently being processed, at the same position as the exception that triggered the handler. If there remain exceptions in the list, the search of catch-any handlers continues, and the same catch-any handler might again be executed for another exception in the list.
  • If a catch-any handler exits without exception, the exception that matched the handler is removed from the exception-list. If this was the last exception, forward progress resumes outside of catch handlers. If more exceptions remain in the list, other catch-any handlers at the current scope are tested; then any traditional catch handlers at the current scope are tested; and if there's no match, unwinding continues at the next scope.

Exception aggregation with try-aggregate and try-defer

For handling and aggregation of exceptions, we introduce two constructs: try-aggregate and try-defer.
  • Try-aggregate starts a block in which there can be one or more try-defer statements that aggregate exceptions.
  • At the end of a try-aggregate block, any accumulated exceptions are thrown as a group.
  • If there are no aggregated exceptions, execution continues.

The following code is currently unsafe if the A::~A() destructor is declared noexcept(false):
struct D {
    A *a1, *a2;
    ~D() { dispose(a1); dispose(a2); }
};

template <typename T> void dispose(T* ptr) { ptr->~T(); }
Problems with this code are as follows:
  • dispose() does not use SFINAE to require that T satisfy std::is_nothrow_destructible. Therefore, dispose() must take exceptions from T::~T() into account — and it does not.
  • The D::~D() destructor makes two calls to dispose(), which is a function that may throw. If disposal of the first member throws, the second member will not be properly disposed.
To allow this type of code to work, C++11 pushes toward making destructors noexcept. But this leaves a hole: a destructor can still be declared noexcept(false), and then the above code will not work.

With exception aggregation, the above situation can be handled using try-aggregate and try-defer. To avoid introducing contextual keywords, I use try* for try-aggregate, and try+ for try-defer:
struct D {
    A *a1, *a2;
    ~D() {
        try* {
            try+ { dispose(a1); }
            try+ { dispose(a2); }
        }
    }
};

template <typename T> void dispose(T* ptr) {
    try* {
        try+ { ptr->~T(); }
    }
}
This performs all aspects of destruction properly, while catching and forwarding any errors in a controlled and orderly manner. The syntax is clear, and easy to use.


With this support, a container can now handle any number of destructor exceptions gracefully. If a container is destroying 1000 objects, and 10 of them throw, the container can aggregate those exceptions using try* and try+, relaying them seamlessly once the container's task has completed.

Since containers are written with templates, this does not need to impose any cost on elements with noexcept destructors. If the element type has a noexcept destructor, exception aggregation can be omitted. This can be done currently using SFINAE, or in the future with a static_if — assuming one is introduced.

Users who previously stored objects with throwing destructors in containers were doing so unsafely. With aggregated exceptions, and containers that support them, such types of use become safe.

What are the uses?

  • Simple resource-freeing destructors can now throw; as opposed to being coerced, via lack of support, to either abort the program or ignore errors.
  • Destructors are now suitable for complex error mitigation, such as database or installation rollback. Currently, it is unsafe to use a destructor to trigger rollback. It forces you to either ignore rollback errors, or abort if one happens — even if there are actions you would want to take instead of aborting.
  • You can now run any number of parallel tasks, and use exceptions as a mechanism to collect and relay errors from them. Under a single-exception policy, you have to rely on ad-hoc mechanisms to collect and relay such errors.

Limited memory environments

Implementation of an aggregated exception-list will most likely require dynamic memory. This raises the question of what to do if memory runs out. In that case, I propose that std::terminate be called when memory for exception aggregation cannot be secured.

For applications that need to guarantee that exception unwinding will succeed in all circumstances, we can expose a function to pre-reserve a sufficient amount of memory. For example:
bool std::reserve_exception_memory(size_t max_count, size_t max_bytes);
If this is a guarantee that your program must have:
  1. You analyze the program to find the maximum number of exceptions it may need to handle concurrently.
  2. You add a call to the above function to reserve memory at start.
I do not see this as much different than reserving a large enough stack — a similar problem that limited memory applications must already consider.

For applications that cannot make an estimate, or are not in a position to pre-allocate, we also introduce the following:
template <typename T> bool std::can_throw();
With aggregated exceptions, this provides functionality similar to what std::uncaught_exception() provides currently. It gives destructors a way to detect a circumstance where throwing an exception would result in std::terminate(), and in that case allows the destructor to adapt.

When std::reserve_exception_memory() has been called with parameters appropriate for the program, std::can_throw<T>() would always return true. It would also always return true outside of destructors.

A program that doesn't wish to use any of this could also continue to use existing mechanics with no change in behavior. A program can still use noexcept destructors. If it uses destructors that are noexcept(false), it can still call std::uncaught_exception() and not throw if an exception is in progress. To avoid aggregated exceptions from containers, the program can still avoid using containers to store objects whose destructors are noexcept(false) — which is currently the only safe option.

If the program adheres to all the same limitations that we have in place today, it will experience no shortcomings. However, a function like std::reserve_exception_memory() would make it safe to use aggregated exceptions in limited memory environments.


Q. If you have some class Derived : Base, and the destructor of Derived throws an exception, what do you do with Base?

This is supported by C++ as-is, and remains unchanged in this proposal. If the Derived destructor exits with an exception, the Base destructor is still called. If this is a heap-allocated object being destroyed via delete, then operator delete is still called.

Q. Every destructor call is going to have to check for these deferred exceptions. Aren't you adding a bunch of conditional branch instructions to a lot of code?

When a destructor is called, this conditional branching is already there. Currently, it calls std::terminate. With multi-exceptions, it would call something like std::aggregate_exceptions.

Q. Suppose I have struct A, whose destructor always throws. Then I have struct B { A a1, a2; }. What happens when B is destroyed?
  1. a2.~A() is called, and throws. If B is being destroyed due to an exception in progress, the exception from ~A() is added to the existing exception-list. If there is no exception in progress, a new exception-list is created, and holds this first exception.
  2. a1.~A() is called. This throws, and its exception is appended to the existing exception-list.

Q. Suppose I have struct A, whose destructor always throws "whee!". Then I call a function void F() { A a; throw 42; }. What happens?
  1. throw 42 is called, creating an exception-list with a single exception, int = 42.
  2. a.~A() is called, which throws, and appends its exception to the existing exception-list. The exception-list now has two exceptions: (1) int = 42; and (2) char const* = "whee!".

What's wrong with noexcept?

Forcing destructors to be noexcept is a kludge. It is an architectural misunderstanding — a patch to cover up a defect.

There is no reason the language can't handle multiple exceptions in stack unwinding. Just add them to a list, and require catch-any handlers to process them all. If any single exception remains, it can be handed to a traditional catch handler. All code can now safely throw. Containers can aggregate exceptions.

This is a focused change that fixes the problem at a fundamental level, emancipates destructors, and allows handling of parallel exceptions. Any number of errors can be handled seamlessly, from the same thread or multiple, and there's no longer code that can't throw.

Instead of fixing a hole in the road, forcing destructors to be noexcept is a sign that says: "Avoid this hole!" Instead of fixing the road so it can be traveled on, noexcept creates a bottleneck, blocking an entire lane from use.