Apple design is anti-nerd

I figured out why I dislike Apple, and why the average person loves it.

The Apple design is anti-nerd.

It allows people to use technology without being ashamed that their skill is rudimentary. It puts regular folks on equal footing with skilled users. By design, the product's uses are limited and basic; it lacks advanced features and flexibility, so it cannot be used more effectively by being good at it. It's not just the low skill threshold; it's the low skill ceiling that makes people want to use it. Some uncool, nerdy person can't outskill you at iTunes.

Then, it's pricey and stylish, and therefore a fashion statement: something nerds don't know how to use.

It turns the tables on nerds. Therefore, normal people love it. Genius. :)


Science, spirituality, and the limits of the materialist paradigm

"The first gulp from the glass of natural sciences will turn you into an atheist, but at the bottom of the glass God is waiting for you."
This quote is attributed to Werner Heisenberg – a pioneer of quantum mechanics, known for the Heisenberg uncertainty principle.

A number of my friends oppose religion, for good reasons that I also used to champion. Such people are very much miffed by Heisenberg's mention of "God". They argue the concept means nothing; that it's useless. "God" and "spirituality" are just labels we put on things we don't understand. Anything we don't yet know, science will eventually explain. Until it does, it is useless to guess.

I would argue that guessing is an essential part of the scientific process. Most progress starts with a guess. But beyond that, I wish to address the assertion that spirituality is useless.

The materialist paradigm exists for a reason. If it's what makes a person happy and makes them comfortable, who am I to tell them that they "need" something different? If someone is happy with that understanding of the world, that's fine.

But the fact is that the materialist paradigm is false. I know it is false from experience.

In this, I find the words of Morpheus appropriate:

"Unfortunately, no one can be told what the Matrix is. You have to see it for yourself."

Experiences exist, and are available from time to time, which can provide you with subjective evidence that materialism is false. The thing with these experiences is, however, that they are not available on demand. This means they aren't easily reproducible.

An often glossed-over property of the scientific method is that, by necessity, it simplifies the unsimplifiable. This is necessary to make any progress at all. However, it is done literally by throwing crucial data away. What science cannot explain, it dismisses as if it never existed. A data point on a graph that doesn't fit the equation is not pursued relentlessly to find an explanation; it is dismissed as measurement error.

If you're a software developer – like I am – you may have done your share of debugging. You may have had the experience of seeing a weird bug happen once, and then being unable to reproduce it. This is the weird data point on the graph. You may have dismissed this bug, pretended you did not observe it; treated it as a "measurement error". And you might not see it again for months. But the bug is there.

In time, if your software is used enough, users will observe the effects of that bug, and you may be reminded of its existence. Just because you didn't chase it down, it didn't go away. If you pay attention, then in time, you might collect enough data to find that bug, and finally fix it. But you'll never collect that data if you don't pay attention; if you keep believing that the bug "shouldn't" be there.

A shortcoming of science is how often it doesn't do that. All scientific measurement is riddled with these inexplicable phenomena, but for the most part, they're continually being dismissed. Much science, though not all, is an attempt to "understand" by shoehorning the world to fit an equation. It's a pretense that the world obeys rules we are comfortable with - whereas in fact, it very much may not.

So – many people, including friends of mine, believe spirituality is useless. But it is science that is in fact useless, if certain assumptions that we take for granted about the world happen to be false. We are trusting science to eventually provide us with ultimate answers. But the scientific method can only provide us with ultimate answers if those answers can be found within the world.

If the world is in fact an illusion; if the gateway out of this illusion is in fact the mind; then making measurements using contraptions that are part of the illusion will not provide us with an understanding of what is outside.

If you investigate yourself; if you investigate the mind; and by that I mean, paying attention to your mind; not by taking EEG measurements of someone's brain, or poking in there with a scalpel; because the brain is not the mind, and is most likely only a projection, an extension, an outer layer of the mind;

... if you pay attention to your mind, then you may find answers today, instead of waiting hundreds of years before science can conclusively tell you: "Sorry - it turns out you just needed to look into yourself."

Science is a tap, yes. But what comes out of this tap is just more information about the world, which makes sense within the world. If the world is an illusion, chances are that science will never give us information about the outside of the illusion, because all science takes place within it.

But if our minds exist outside of the illusion – then there's potential to access this knowledge directly.


The ethics of non-consensual monogamy: coercion and dead bedrooms

Here's a hypothesis.

Monogamy is only ethical if both partners continue to choose it. Not just once, but every day; and without guilting each other into it. Each of the partners has to continue to choose it, and the choice has to be truly free; without conditions or attached strings.

Folks have begun to warm up to the idea that open relationships can sometimes work, for a few weird people. However, even among people accepting of this – even among those who are poly – the idea of physical loyalty remains sacrosanct. The idea remains dominant that, if you made monogamous vows, it is your duty to uphold them. No one respectable should cheat. Cheaters are literally worse than... racism.

Consider this, though.

Monogamy boils down to the expectation that you won't use your genitals in a way that isn't useful to, and approved by, your partner.

This is objectification. It is abrogation of each partner's individuality. It is dismissal of a person's independent sexual nature. It is a forced reduction of that nature to whatever might be acceptable to the other partner, and a dismissal of unmet needs that this forced reduction may create.

This is not love. Love is not forcing someone to shrink to a form in which they can't fully express themselves, based purely on your comfort and convenience.

Love is not something you give conditionally. That is trade. Love is given unconditionally. Except in jest, love does not involve statements such as: "I swear I'm going to cut off your X if I ever find you cheating!" That's not love, that's a threat of abuse. (Notice how it's only ever cute if it's said by a lovely woman?)

Many people live, and suffer, in non-consensual monogamy. This is monogamy to which a person once agreed, but might no longer agree to, if they could give it up without losing something important. Many of these are "dead bedroom" relationships; relationships that aren't even monogamous, as much as they are celibacy in a couple. Where one partner desires sex, and the other doesn't, so the sex happens once in a blue moon – and if it does, reluctantly.

This wouldn't have to be a problem, if the partner who doesn't want sex didn't expect the other to "just deal with it". They may have no interest in their partner's genitals – but they sure as hell expect no one else to touch them. If someone does – holy betrayal: may the vengeance of hell be upon thee!

I contend that this is objectification of the partner whose needs aren't being met. It's a dismissal of this person's independent sexual nature, and a reduction of their sexuality to a small fraction of what it naturally would be. Yet, people argue: "You made marriage vows – you better stick to them."

Well, no. If people have to stick to their agreements, it is a necessary stipulation that those agreements also be fair; they have to actually meet everyone's needs. Contrary to the broken moral compasses of the monogamous majority, a person cannot actually sign away their individuality with marriage.

We can make vows, and those vows have legitimacy as an expression of a couple's hopes and aspirations. However, marriage vows cannot be a contract. They cannot be a contract for the same reason that we would never, in this day and age, consider legitimate an agreement where a person becomes a slave of another; or where they become an indentured servant. Individuality is something you cannot give away. Not even with marriage.

The monogamous majority's assumption that their partner's genitals are theirs to own is simply false. It cannot be true, because we cannot contract away our individuality.

Not infrequently, this false belief smashes headlong into reality, and survives about as well as a glass bottle crashing into rock. People realize that, despite their assumptions, despite their vows, they cannot actually own their partner. They never could; and this realization utterly destroys them.

Monogamy, in practice, can be beautiful. However, it cannot be beautiful to the extent that it's based on a false belief of owning a person. In order to work, monogamy has to be chosen; not by one partner, imposing it on the other, but by both. It has to be chosen not just once, but freely, every day. It has to not involve hostage-taking and coercion. There can't be any "You can't have sex, with me or anyone – or I'll make sure you never again see your children."

When monogamy is chosen by both partners, without strings; and continues to be chosen every day – such monogamy is beautiful, and healthy.

Previous similar post: Against the hating of cheaters


How the Yugoslav army dealt with liabilities

This is an anecdote my wife tells occasionally.

Jana and I are from Slovenia, which used to be part of communist Yugoslavia. My wife's grandmother had a sister who used to work in Belgrade, in the headquarters of the Yugoslavian army, as an assistant or secretary. She was close to where important things happened.

As a hobby, she was into sewing, tailoring, and knitting, and for this reason she purchased West German magazines that were ubiquitous at the time – thick, heavy catalogs for people into this hobby; Burda was one of them. The army kept tabs on people working in its headquarters, so they knew about her reading these magazines, and this was suspicious. She was interrogated about it more than once.

Eventually – some time in her middle age; not soon enough for retirement – she wanted a change of scenery, to move back home, and quit. At this point she became untrusted, a liability; and the way they dealt with that was to have her interned in a psychiatric hospital and subjected to electroshocks and lobotomy, until she was hardly aware of herself; a shadow of her former self.

She lived out the remainder of her life, up to age 80 or so, in this state. She spent these years in a home for assisted living, not far from where Jana's family lives. Most of the time, she could not tell you the date.


Ossification and Hawaii: Impressions of a C++ working group

I've recently interacted informally with the mailing list of the ISO C++ Working Group. I've tried to float the following ideas.

Aggregated exceptions. I think I came up with a neat and complete proposal, but it's too ambitious for an existing language, given the changes it proposes and its relatively niche concern. We've migrated to VS 2015, so I've begrudgingly accepted noexcept destructors. And since C++11, lambdas provide a practical way to solve problems where one might previously want to use complex destructors.

So I guess we can live without multi-exceptions. Okay.

I then tried to float an is_relocatable property. A shortcoming of C++ right now is that it allows an object's contents to be moved, but it doesn't allow moving the object itself. Anywhere from 50% to 100% of the types we store in containers could safely be moved with memcpy — but formally, this is undefined behavior. This is a problem for container resizing, which requires an inefficient deep copy when noexcept move construction or destruction isn't available — even though the objects could be moved with a trivial memcpy. Various libraries work around this by implementing their own "is relocatable" concept: Qt has Q_MOVABLE_TYPE, EASTL has has_trivial_relocate, BSL has IsBitwiseMovable, Folly has IsRelocatable. I also saw this need, and rolled my own version of this concept in EST (not yet published), and in a previous version of Atomic/Integral (to be published — hopefully, soon).

The need for a standardized concept is apparent. What I would most like to see is fairly simple:
  • In type_traits, a standard is_relocatable property.
  • A way to declare a type relocatable without exceedingly ugly syntax. My favorite:

    class A relocatable { ...
  • To avoid unnecessary declarations leading to developer mistakes, a way for the compiler to infer that an obviously relocatable type is in fact relocatable. For instance, if type Str is relocatable, then the following should be also:

    struct A { Str x; };

    It is possible to infer this in a safe way by limiting this inference to types where (1) the compiler declares an implicit move constructor, and (2) all direct bases and non-static members are relocatable.
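As an illustration of what such a trait looks like in libraries today, here is a hand-rolled sketch in the spirit of Folly's IsRelocatable. The trait name, the opt-in macro, and the relocate helper are my own hypothetical names; nothing here is standard:

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>
#include <new>
#include <type_traits>
#include <utility>

// Conservative default: only trivially copyable types are assumed
// relocatable. Other types can opt in explicitly via the macro.
template <class T>
struct is_relocatable : std::is_trivially_copyable<T> {};

#define DECLARE_RELOCATABLE(T) \
    template <> struct is_relocatable<T> : std::true_type {}

// A container can then pick the cheap path when moving elements to
// a new buffer, e.g. during a resize.
template <class T>
void relocate(T* dst, T* src, std::size_t n) {
    if (is_relocatable<T>::value) {
        // Relocate: one memcpy, no constructor/destructor pairs.
        std::memcpy(static_cast<void*>(dst), static_cast<void*>(src),
                    n * sizeof(T));
    } else {
        // Fallback: move-construct each element, then destroy the source.
        for (std::size_t i = 0; i != n; ++i) {
            ::new (static_cast<void*>(dst + i)) T(std::move(src[i]));
            src[i].~T();
        }
    }
}
```

A standard is_relocatable would let containers do exactly this, without every library reinventing the trait.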
Do you think I was successful?

There were people who staunchly opposed even adding a property — even though this one is needed, and current type_traits is littered with them (and those are also useful).

In fact — there had been an attempt to propose this as is_trivially_destructive_movable. This was shot down by the C++ committee because it would require conceptualizing the idea that object lifetime is "ended" at one address, and "begun" again at another address. This is too much of a conceptual problem. (Even though object lifetime doesn't end — it just continues...)

Not to mention the wall of opposition to any kind of compiler inference of a relocatable property. Notwithstanding that this would be purely an improvement; it wouldn't break anything; and it would allow this optimization to fit in with everyday use.

Exasperated with this failure to find support for what seemed like a modest and innocuous improvement, I tried the least contentious possible idea. Can we just have a version of realloc — we could call this version try_realloc — that tries to expand the memory in place, and fails if it's unable? In the absence of a relocatable property, containers could at least use try_realloc to try to expand existing memory in place, before committing to a potentially deep copy.
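To sketch what I mean: the signature below is hypothetical (nothing like try_realloc exists in the standard library today), and the fallback body simply declines, which is always a conforming answer. The point is how a container would use it:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdlib>
#include <cstring>

// Hypothetical: try to expand the allocation at p to newSize bytes
// *in place*; return false instead of relocating the block.
// A portable placeholder can always just decline.
bool try_realloc(void* p, std::size_t newSize) {
    (void) p;
    (void) newSize;
    return false;
}

// How a vector-like container could use it when growing its buffer.
// The memcpy fallback here assumes T is trivially copyable.
template <class T>
T* grow_buffer(T* data, std::size_t count, std::size_t newCount) {
    if (try_realloc(data, newCount * sizeof(T)))
        return data;  // expanded in place: no element was touched
    T* bigger = static_cast<T*>(std::malloc(newCount * sizeof(T)));
    std::memcpy(bigger, data, count * sizeof(T));
    std::free(data);
    return bigger;
}
```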

Everyone agrees this is a good idea, and it turns out something similar had been proposed.

But it didn't go anywhere. Why not?

Well, the person who wrote that proposal couldn't afford to take a week off to travel to a C++ working group meeting to champion and defend the proposal. And so, it died that way.

Open standards — if you attend our meeting in Hawaii

Nominally, the C++ standardization process is open. In practice, it's gated by who can justify sending representatives to week-long meetings in far-off places. These meetings take place two or three times a year, and the next one takes place in Kona, Hawaii.

It goes without saying that, if you're not present at at least one of the full WG meetings, your chances of making an impact on the C++ language are slim at best. As the preceding anecdote shows — if you don't attend, forget about even something as trivial as a realloc improvement.

This wastes resources on travel; excludes people who could contribute; and silences worthwhile ideas that don't happen to have a champion with disposable time and money.

About a decade ago, I participated in SSH standardization. Some members of that group did travel, but this had no impact on a person's ability to affect the direction of the standard, or to have their voice heard. The Internet Engineering Task Force, which supervised standardization of SSH, does organize regular meetings; but attending them is in no way required to publish RFCs, or to contribute to them.

Holding face-to-face meetings is an inefficient and exclusionary process that became unnecessary around the year 2000. Yet it persists. I wonder if this is because most people who would vote to change it enjoy being sponsored by their companies to travel. After all, if everyone does it, it must be necessary...

When I voiced this concern, members of the group were of course up in arms. The travel really is necessary to get work done!

But the next WG21 meeting is being held this October at the absurd location of Kona, Hawaii. This is 5,000 miles from New York, 7,400 miles from Berlin, and 2,400 miles from San Francisco.

It would be too rushed to arrange this as a fly-in and fly-out 2-3 day event. If that were the case, it might as well be held in Bentonville, AR, in a nondescript Walmart office building. To allow work to get done, it has to be a leisurely occasion of 5 nights and 6 days. This allows for convenient weekend arrival or departure, which I'm sure no one will use to treat themselves to a volcano trip — or a leisurely day at the beach.

The average fuel economy of long-distance travel is >3L of jet fuel per 100 miles per passenger seat. With 90 - 100 attendees traveling an average of 5,000 miles each, return trip, this is going to involve the burning of 27,000 liters of jet fuel, and the release of 68 metric tons of CO2 into the atmosphere.

All of this must happen 2-3 times per year, because otherwise it's impossible to advance the language.

Some members of the group said they've tried to attend remotely, but that it just doesn't work as well. Well, of course it doesn't work, when the event is being set up so that you're a second class participant.

With meetings held in places like Hawaii, they're spending at least $2,000 per person per event. Annual cost is $400,000 - $600,000, just in what each participant pays. You could get absolutely amazing teleconferencing if your budget for it was $500,000 per year. And that's just one ISO working group. How many other active groups are there? What tech could you get for a $2 or $10 million annual budget?

But of course — that would make sense if you wanted open standards, where anyone could contribute.

As opposed to a week in Hawaii, each year...


Love and function

Someone asked the following conundrum, in the context of whether it's "shallow" for a person to refuse another as a partner based solely on the fact that they aren't sexually compatible:

"If your love is predicated on sex then is it really love or is it just two people using each other?"

Loving someone, and being useful to them, are not opposites. The two work together. To love someone is to offer yourself to be useful to them. It is to serve them gladly, with the expectation that this will be appreciated and returned. To accept being loved is to welcome this offer; to return it, and appreciate it.

Love is a willingness to serve: without coercion, and without feeling coerced.

"Love" and "relationship", though, are different things. Every one of us can love everyone, hypothetically. However, we can't have functional relationships with people who can't meet our needs.

Relationships are love + function. If you take away the function, the love remains. However, without function, love alone is not enough for a relationship.

This is why relationships based on compatibility can work. We can love everyone — if there's no reason against it. So when two people are complementary, there's no reason for love not to arise. But the reverse is not true: two people who feel deep and passionate love for each other can simply not be compatible.

"Good, traditional" traits

I find that "good" and "traditional" don't exactly go hand in hand.

If it makes sense, it's not called tradition; it's called common sense. If it's called tradition, that means at some level it doesn't make sense. It's being practiced despite that.

It does not make a person good if they follow imperatives that violate sense. It makes them compliant.

Being compliant makes sense, to an extent. However, being overly compliant makes you a tool. At best, you're a tool for nonsense. At worst, you're a tool for perpetuation of suffering and hardship.

Attractiveness is not shallow

There are large groups of men online — they're mostly men — who consider themselves unattractive, and adopt this as their identity, and an embittered perch from which to carp about life.

If you're an unattractive man — or woman — stop the lifestyle that makes you feel and look that way.

Most people can look great if they invest the effort. You aren't going to get taller, and you aren't going to grow a bigger penis. But you can fix almost anything else: lose fat, gain muscle, develop a sense of style, and build self-confidence from the results you achieve.

None of this is beyond anyone's reach save a handful of really unfortunate people. Chances are that you're not one of those. Chances are that, if you think of yourself as unattractive, it's a result of a lifetime of ugly thoughts leading to disrespect and neglect of yourself and your body.

Now tell me. Who wants a person who chooses a lifetime of these ugly thoughts? Who has the option to invest the effort and improve his self-confidence and looks, but avoids doing so in favor of whining about people, insulting them for their choices, and continuing to neglect his body?

Attractiveness is not shallow. You're not being judged for something outside your control. It's not in your genes, you're not "big boned". No one's depriving you of self-confidence. No one but yourself dressed you poorly.

Attractiveness is 99% a consequence of mental habits, attitudes, and lifestyle. Most people don't have to be unattractive if they adopt a healthy inner life. So if you are unattractive, and you don't have to be, that speaks volumes.


VS 2015 projects: "One or more errors occurred"

For the most part, I find that Visual Studio 2015 is awesome. However, it did ship with kinks that need to be worked out. Not least, it has crashed on me from time to time, while working on a large solution.

I recently experienced a problem where I couldn't open Visual C++ projects (.vcxproj) after I copied them to another location. When trying to open the projects, VS 2015 would give me nothing more than this "extremely useful" error:

    One or more errors occurred

That is some talent, right there. It must have taken a lot of work to get this right.

After trying various things, and having researched this without success for a few hours, I finally got the idea to try opening the projects in Visual Studio 2013, instead.

Behold: VS 2013 actually displayed a useful error, which informed me that my projects had Import dependencies that I forgot to move.

So, yeah. If VS 2015 helpfully tells you that "one or more errors occurred" — try VS 2013.


Algorithm for selective archival or aggregation of records

Given the absurdity of the US patent system, it comes to mind that I should publish all technical ideas, no matter how inane, when I think of them — simply to provide a prior art reference, in case anyone claims that idea in a future patent.

The ideas I publish are arrived at independently, and based on publicly available information. This does not mean that existing and currently "valid" patents don't exist that cover parts of my ideas. It does, however, mean that such patents are obvious, and that the patent system is legalized enabling of extortion.

Without further ado, here's an idea for how to archive old log files to:
  • maintain required storage under a manageable threshold;
  • store the latest data, as well as older data in the event of undetected corruption;
  • avoid predictability to an attacker who might rely on a particular pattern of deletion.
We assume that log files are being archived in regular periods. This period may be an hour, a day, a week, or whatever works for an application.

We will store the log files in ranks, such that rank 0 are the most recent log files, and they get bumped to a higher rank as each rank fills. We allow for an unlimited number of ranks.

We define a maximum number of log files per rank. We call this M, and this number shall be 3 or more. It could be, for example, 4.

We determine the proportion of log files that we will keep each time as we move log files up the ranks. We call this number K, and it is a fraction between 0 and 1. A natural choice of K would be 1/2 or 1/3.

Each rank will start empty, and will with time become populated with log files.

When generated, every log file is first added to rank 0. Then, for each rank, starting with rank 0:
  • We test if the number of log files in that rank has reached M.
  • If it has:
    • We determine a subset S of log files in this rank that contains K*M of the oldest log files. If K is a fraction 1/N, e.g. 1/2, it would be natural to round the number of log files in the subset down to a multiple of N. For example: if K=0.5, S might contain 2 log files.
    • To avoid predictability of which log files are kept, we randomly discard a (1-K) proportion of the log files in subset S, and keep the rest. For example, if K = 1/2, we randomly discard one of the two log files. If predictability is desired, we discard instead based on a predetermined rule; for example, we could discard every other log file.
    • We graduate the log files in S that have not been discarded to the next rank. By this, I mean that we remove them from the current rank, and add them to the next.
We repeat this process with each rank, until we reach one that is empty.

This process keeps around the most recent log files; keeps around older ones, going all the way back to origin; randomizes which log files are discarded; and uses O(log(T)) storage, where T is the total number of log files ever generated.
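The steps above can be sketched in code. This is a minimal illustration with M = 4 and K = 1/2, using the deterministic "discard every other" rule so the result is reproducible; the names (Archive, add) are mine. Entries are log file IDs, and rank 0 holds the most recent files:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Rank-based selective archival: each rank holds at most M files;
// when a rank fills, its K*M oldest files are removed, and half of
// them (K = 1/2) graduate to the next rank while the rest are dropped.
struct Archive {
    std::size_t M;                          // max files per rank
    std::vector<std::vector<int>> ranks;    // ranks[0] = newest files

    explicit Archive(std::size_t m) : M(m) {}

    void add(int fileId) {
        if (ranks.empty()) ranks.emplace_back();
        ranks[0].push_back(fileId);
        for (std::size_t r = 0; r < ranks.size(); ++r) {
            if (ranks[r].size() < M) break;       // rank not full: done
            std::size_t s = M / 2;                // |S| = K*M oldest files
            if (r + 1 == ranks.size()) ranks.emplace_back();
            // Deterministic rule: keep every other file in S and
            // graduate the survivors to the next rank.
            for (std::size_t i = 0; i < s; i += 2)
                ranks[r + 1].push_back(ranks[r][i]);
            ranks[r].erase(ranks[r].begin(),
                           ranks[r].begin() + static_cast<std::ptrdiff_t>(s));
        }
    }
};
```

Each rank feeds the next at half the rate it receives files, which is where the logarithmic storage bound comes from.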

This process works not only for log files, but for any other kind of historical data storage. Instead of discarding information when graduating records from each rank, information can also be aggregated, while reducing the number of records. For example, when selecting the subset of records S that will be graduated to the next rank, instead of deleting a K proportion of records, all the records in S could be aggregated into a single record, which is then propagated to the next rank.

This would allow storage of information such as statistics and totals that is more granular for recent information, and more sparse for older information - yet does not lose information about overall totals.
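The aggregation variant just described can be sketched along the same lines. Records here are simple counters, with M = 4 and K = 1/2; the names (AggArchive, Record) are mine, for illustration:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct Record { long total; };

// Same rank structure as before, but when a rank fills, the subset S
// of oldest records is merged into one record instead of being
// partially discarded -- so overall totals are never lost.
struct AggArchive {
    std::size_t M;
    std::vector<std::vector<Record>> ranks;

    explicit AggArchive(std::size_t m) : M(m) {}

    void add(Record rec) {
        if (ranks.empty()) ranks.emplace_back();
        ranks[0].push_back(rec);
        for (std::size_t r = 0; r < ranks.size(); ++r) {
            if (ranks[r].size() < M) break;
            std::size_t s = M / 2;                  // |S| = K*M oldest
            Record merged = {0};
            for (std::size_t i = 0; i < s; ++i)     // aggregate instead
                merged.total += ranks[r][i].total;  //   of deleting
            ranks[r].erase(ranks[r].begin(),
                           ranks[r].begin() + static_cast<std::ptrdiff_t>(s));
            if (r + 1 == ranks.size()) ranks.emplace_back();
            ranks[r + 1].push_back(merged);
        }
    }

    long grand_total() const {                      // nothing is lost
        long t = 0;
        for (const auto& rank : ranks)
            for (const auto& rec : rank) t += rec.total;
        return t;
    }
};
```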

A natural extension of the above principles is to define ranks to match natural periods of time. For example, rank 0 could be days, rank 1 could be weeks, rank 2 could be months, rank 3 could be years. The maximum number of records per rank, M; and the proportion of records kept between ranks, K; would then be rank-dependent.

Exceptions in destructors and Visual Studio 2015

If you're migrating code to Visual Studio 2015, you may have run into the following warning:
warning C4297: 'A::~A': function assumed not to throw an exception but does
note: destructor or deallocator has a (possibly implicit)
    non-throwing exception specification
You may not have seen this warning with GCC or Clang, so you may think VS 2015 is just bothering you. Wrong! GCC should be warning you even more so than Visual Studio (I'll explain why), but it does not.

You may also think that throwing an exception from a destructor in C++ is inherently undefined behavior. Wrong! Throwing an exception from a destructor in C++ is extremely well defined.

In C++03, throwing an exception from a destructor works as follows:
  • If there is no exception in flight, it means the destructor is being called through forward progress. In this case, an exception in the destructor causes the beginning of unwinding — backward progress — just as if the exception had been thrown anywhere else. The object whose destructor threw continues to be destroyed in an orderly manner: all the subobject destructors are still called, and operator delete is still called if the object's destruction was triggered by the delete keyword.
  • If there is an exception in flight, a destructor can still throw. However, an exception thrown from a destructor cannot meet with the exception in flight. It can still pass out of a destructor and be caught by another destructor if the throwing destructor was called recursively. However, C++ does not support exception aggregation. If the two exceptions meet, such that they would have to be joined to unwind together, the program is instead terminated abnormally.
In C++11 and later:
  • Everything works exactly the same as above, except that destructors now have an implicit noexcept specification, deduced to be the same as that of the destructor the compiler would generate. This means that a user-defined destructor is noexcept(true) by default, unless it is explicitly declared noexcept(false), or a base class or non-static member declares its destructor noexcept(false).
  • If an exception leaves a noexcept(true) destructor, the C++ standard now requires std::terminate to be called. GCC does this; Clang does this; Visual Studio 2015 does this unless you enable optimization — which of course you will for production code. If you enable optimization, then against the spec, Visual Studio 2015 appears to ignore noexcept, and allows the exception to pass through.
Even though, like other compilers, GCC will call std::terminate if an exception leaves a noexcept destructor; and even though GCC does so more consistently than VS 2015 (the behavior doesn't go away with -O2); GCC produces absolutely no warning about this, even with -Wall.
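The deduction rule can be checked directly with is_nothrow_destructible. A small illustration (the type names are mine):

```cpp
#include <cassert>
#include <type_traits>

struct Quiet { ~Quiet() {} };                   // implicitly noexcept(true)
struct Loud  { ~Loud() noexcept(false) {} };    // explicitly opts back in
struct HasLoud { Loud member; ~HasLoud() {} };  // member makes it noexcept(false)

static_assert(std::is_nothrow_destructible<Quiet>::value,
              "user-defined destructors are noexcept by default");
static_assert(!std::is_nothrow_destructible<Loud>::value,
              "noexcept(false) restores the C++03 behavior");
static_assert(!std::is_nothrow_destructible<HasLoud>::value,
              "the deduction propagates from members and bases");
```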

In this case, therefore, we have Visual Studio 2015 producing a useful warning which exposes code incorrectness, which GCC does not produce.

Why the change in C++11?

Mostly, move semantics and containers. Exceptions from destructors in stack-allocated objects are usually not problematic, assuming the destructor checks std::uncaught_exception to see if it can throw. However, because C++ supports neither exception aggregation nor a relocatable object property, a throwing move constructor or destructor makes it next to impossible to provide a strong exception safety guarantee when e.g. resizing a vector.

It is possible that relocatable may be supported in the future, allowing objects to be moved via trivial memcpy instead of move construction + destruction. This would make it possible to safely resize a vector containing objects whose destructors may throw. But that leaves the question of what to do when multiple destructors throw when destroying or erasing the vector. That would require exception aggregation, which in turn would be ineffective without making developers aware; and at this time, that seems not to be feasible.

It seems likely we may get relocatable at some point, but probably not multi-exceptions any time soon. Planning for the next 10 years, it's best to design your code to have noexcept destructors.

What to do?

If you have code that currently throws from destructors, plausible things to do are:
  1. Band-aid to restore C++03 behavior: declare destructors noexcept(false). Not only those that trigger the warning, but also those that may call throwing code. This addresses the VS 2015 warning, and fixes behavior with compilers that should issue a warning, but do not. This is safe to do if destructors are checking std::uncaught_exception before throwing.
  2. Destructor redesign: you can comply with the spirit of C++11, and change destructors to not throw. Any errors encountered by destructors should then be logged using some logging facility, perhaps a C-style global handler. The destructor must either call only noexcept code, or must catch exceptions from throwing code.
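Option 2 might look like this in practice. A minimal sketch, where last_error stands in for whatever logging facility you actually use:

```cpp
#include <cassert>
#include <exception>
#include <stdexcept>
#include <string>

// Stand-in for a real logging facility, so the example is runnable.
static std::string last_error;

void report_error(const char* what) noexcept { last_error = what; }

struct Connection {
    void flush() { throw std::runtime_error("flush failed"); } // may throw
    ~Connection() {                  // implicitly noexcept(true) in C++11
        try {
            flush();
        } catch (const std::exception& e) {
            report_error(e.what());  // swallow and log instead of throwing
        } catch (...) {
            report_error("unknown error in ~Connection");
        }
    }
};
```

The destructor never lets an exception escape, so the implicit noexcept(true) specification is honored and std::terminate is never reached.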
Long-term, option 2 is more consistent with current C++ direction. In this case, the following macro may come in handy, to ensure that any code called from destructors is noexcept:
#define NoExcept(EXPR) \
    ([&]() { static_assert(noexcept(EXPR), "Expression can throw"); }, (EXPR))
This is unfortunately necessary because otherwise, you have to resort to extensive code duplication. When used as an operator, noexcept returns a boolean, so you have to test it like this:
static_assert(noexcept(INSERT_VERY_LONG_EXPRESSION), "Can throw");