Transaction ethics as root of social/libertarian disagreement

In order for two parties to enter a transaction, both have to benefit from it. However, the benefit can be distributed unequally. A potential transaction has a minimum price where it's still acceptable for the seller, and a maximum price where it's still acceptable for the buyer. The total value gained from a transaction is the spread between the maximum and the minimum.
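To make the arithmetic concrete, here is a toy sketch (the numbers and names are invented for illustration):

```cpp
#include <cassert>

// Toy model: the seller's minimum acceptable price and the buyer's maximum
// acceptable price bound the total value (the "spread") a transaction creates.
struct Transaction {
    double sellerMin;   // seller walks away below this price
    double buyerMax;    // buyer walks away above this price
};

// Value captured by each side at a given negotiated price.
double SellerGain(Transaction t, double price) { return price - t.sellerMin; }
double BuyerGain (Transaction t, double price) { return t.buyerMax - price;  }
```

If the seller's minimum is 100 and the buyer's maximum is 200, the spread is 100. At a price of 110, the seller captures 10 and the buyer captures 90; at 150, the gain is split evenly.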

An unfair, but still voluntary, transaction is one where the negotiated price lands close to one of the extremes, rather than near the middle. The result is that one party captures most of the benefit of the transaction. A transaction where one person gets 90% of the value and the other gets 10% is widely seen as unethical, and is described as "robbing" the other person.

You can see that people consider such transactions unethical in ultimatum game experiments. Most people will accept a reward with a 40-60 split. But when offered a 10-90 split, a large proportion will choose 0% (no reward) just to punish the proposer.

The whole social democracy vs. brutally free markets disagreement rests on people recognizing, or not recognizing, the unfairness of unbalanced transactions. The brutal free market people argue that 10-90 transactions are voluntary, and therefore okay. Social democrats argue that it's unethical to foist such transactions on people just because you have negotiating power.


Janez Janša

Those outside observers who even know about the existence of Slovenia may be wondering:
  • Why Slovenia saw fit to hold a referendum on whether people should have equal rights, after a law had already been passed to make rights equal.
  • Why those who voted saw fit to overwhelmingly vote against equal rights.
The reason is that there are political points to be gained from undoing a law backed by a currently unpopular government.

For decades, Slovenia has had a prominent but divisive political figure, Janez Janša, who has served as Prime Minister but is now in opposition. Mr. Janša builds support for himself by fomenting division. He has figured out that, if properly cultivated, Slovenia can be divided into two groups of people. On the one hand are the secularists: left-leaning, socially liberal, economically statist descendants of people who were privileged while Slovenia was part of Yugoslavia, until 1991. On the other hand are the Catholics: socially conservative, economically liberal, predominantly religious descendants of people who were underdogs. Not only were the underdogs treated poorly under socialism; tens of thousands were summarily executed when Tito came to power after WWII. This makes for a wonderful source of division.

As leader of the "underdog" group, he does whatever he can to get people to identify more strongly either with his group, or against it. Stronger identification on both sides makes it more likely that those who identify with his group will feel compelled to vote for him. This means:
  • Cultivating pet grievances of his group. One way is to constantly bring up the people killed 70 years ago.
  • When in power, deliberately pissing off the secularists, e.g. by bringing Catholicism into education. This gets the secularists to identify against the Catholics. This strengthens Janša's political leadership of the Catholics, and the perceived need for this leadership.
  • When in opposition, leveraging things like popular opposition to gay marriage. It was overwhelmingly his camp that voted against.
He foments division, because without division, he has nothing. He would be a faction leader without a faction.

Yet, consider this. Mr. Janša was himself a political prisoner, for six months, during the final years of the Yugoslav regime. His behavior is consistent with that of a person who has never forgiven his captors, and who holds a grudge against everyone who ideologically inherits their position. As such, Mr. Janša cannot be a leader of Slovenia as a country. He does not represent all people. He represents one group of people against the others. Being like this, he can only ever lead a faction.

Suppose Mr. Janša forgave. Suppose that, after 25 years at the top of Slovenian politics, he came to realize he doesn't need to be cultivating that grudge, and its associated baggage.

Imagine the leader he could be, in that case.


The above is, of course, not nearly all of the story. Janša's supporters argue he was again politically persecuted when he was embroiled in the Patria scandal. He was found guilty of corruption in connection with a large arms deal, and sentenced to two years in prison. In the eyes of his supporters, this made him a martyr. After a stint behind bars, the Constitutional Court overturned the judgment, on the grounds that he did not receive a fair trial. But an overturned conviction is not the same as innocence.

Supporters further argue that other political leaders have been no less partisan. That's true; there hasn't been a leader most people could tolerate since probably Janez Drnovšek. But this is a tu quoque fallacy. Just because everyone else engages in tribalism doesn't mean it's ethical to score political points by organizing a referendum to take away a minority's rights.

In this vote, the worst of Slovenia was on full display: a leader who thrives on division; hate- and fear-based propaganda; and homophobia in small towns and the countryside.

And guess what the campaign was called?

"For children."


Survey of SSL/TLS use in SMTP mail exchangers

Some eight years ago, I made a logic error. Last month, I discovered it, proceeded to slap my forehead, and fixed it. We prepared new releases containing the fix, and began the process of notifying our users.

I now, therefore, have rough statistics about the prevalence and availability of TLS/SSL when sending email. Our notifications went to tens of thousands of unique addresses, with the following rough results. Percentages are based on the number of unique addresses in each category, so large email services (Google, Outlook, Yahoo) are represented according to their size. However, the results are filtered through the self-selected lens that is our users:

16% NoTls The SMTP server did not offer STARTTLS (large majority), or TLS handshake failed due to protocol error (small fraction; errors encountered were SEC_E_INVALID_TOKEN and SEC_E_ILLEGAL_MESSAGE).
20% Tls_NoHostAuth The SMTP server offered TLS, but could not be authenticated. By far the most common reason was SEC_E_WRONG_PRINCIPAL, followed by SEC_E_UNTRUSTED_ROOT. A ten times less common reason was SEC_E_CERT_EXPIRED. A rare reason was TRUST_E_CERT_SIGNATURE.
50% Tls_AnyServer The SMTP server was authenticated as the intended MX DNS name, but the MX name itself is not part of the recipient's email domain. Security of the connection hinges on how much you trust DNS MX lookup results, which are completely unauthenticated. Most large email services (e.g. Google, Outlook, Yahoo) are in this category, but many results could be bumped into Tls_DomainMatch by applying common knowledge. For example, one could trust *.google.com servers as being representative of @gmail.com addresses, but this requires having that knowledge.
14% Tls_DomainMatch The SMTP server was authenticated as the intended MX DNS name, and the MX name itself is a subdomain of (or equals the domain of) the target email address. This provides stronger assurance that the destination SMTP server is representative of the recipient domain.

To the extent that our user base is representative (or isn't), a large majority of recipients (84%) can receive email via SMTP using opportunistic encryption, which may protect against passive eavesdropping; but only a minority (14%) employ setups resistant to man-in-the-middle attack without the benefit of additional, external knowledge.

After you have exchanged email with a particular recipient, you can add their mail exchanger to a list of known mail servers for that recipient, and trust that host more highly in future deliveries. Assuming this strategy, up to two thirds of recipients (categories Tls_AnyServer + Tls_DomainMatch) could be considered resistant to a non-pervasive man-in-the-middle attack, as long as change of destination mail server identity triggers some kind of audit or warning.
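The "known mail servers" strategy in the last paragraph is essentially trust-on-first-use. A minimal sketch (the class and method names here are mine, not from any real mail software):

```cpp
#include <cassert>
#include <map>
#include <string>

// Trust-on-first-use sketch: remember the MX host first seen for each
// recipient domain, and flag deliveries where that identity later changes.
// A real implementation would persist the map and raise an audit/warning.
class KnownMxList {
public:
    // True if the MX matches what we saw before (or the domain is new);
    // false means the mail exchanger identity changed and needs auditing.
    bool CheckAndRecord(std::string const& domain, std::string const& mxHost) {
        auto it = m_known.find(domain);
        if (it == m_known.end()) { m_known[domain] = mxHost; return true; }
        return it->second == mxHost;
    }
private:
    std::map<std::string, std::string> m_known;
};
```

This defends against a man-in-the-middle who was not present at first contact, which is what makes the attack have to be pervasive to succeed.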


Republicanism is fascism

US Republicanism is not just "like" fascism. It is fascism. It fits the criteria of Umberto Eco's 1995 definition.

Based on Wikipedia:
  • The Cult of Tradition, combining cultural syncretism with a rejection of modernism.
  • The Cult of Action for Action's Sake, which dictates that action is of value in itself, and should be taken without intellectual reflection. This, says Eco, is connected with anti-intellectualism and irrationalism, and often manifests in attacks on modern culture and science.
  • Disagreement Is Treason - fascism devalues intellectual discourse and critical reasoning as barriers to action.
  • Fear of Difference, which fascism seeks to exploit and exacerbate, often in the form of racism or an appeal against foreigners and immigrants.
  • Appeal to a Frustrated Middle Class, fearing economic pressure from the demands and aspirations of lower social groups.
  • Obsession with a Plot and the hyping-up of an enemy threat; this often involves an appeal to xenophobia with an identification of there being an internal security threat; Eco cites Pat Robertson's book The New World Order as a prominent example of a plot obsession.
  • Pacifism is Trafficking with the Enemy - because "Life is Permanent Warfare", there must always be an enemy to fight. This leads to a fundamental contradiction within fascism: the incompatibility of ultimate triumph with perpetual war.
  • Contempt for the Weak - there is not much empathy for the poor, the mentally ill, the disabled.
  • Selective Populism - the people have a common will, which is not delegated; this casts doubt upon democratic institutions, where leaders and government are seen to "no longer represent the will of the people".
  • Newspeak - fascism employs an impoverished vocabulary in order to limit critical reasoning.
  • Non-truths - lying and spreading of propaganda.


The doghouse: Vagaries of the Excel COM interface

Wow. It's been a while since I've seen a more fragrant turd than the COM interface exposed by Excel, which would theoretically allow you to interact with workbooks from e.g. PowerShell.

Theoretically is the key word, I'm sad to say. In practice, it's so buggy and poorly designed that you might save more time if you did what you wanted by hand.

Let's start with the simplest possible script:
  $xl = New-Object -com Excel.Application
  $wb = $xl.Workbooks.Open("C:\Temp\file.xls")

What does this do?

Each invocation of this script creates a copy of Excel which then just hangs around. Indefinitely.

If you just created the COM object, Excel would exit. But since you opened a workbook, it does not. Not even closing the PowerShell window exits these instances. You have to actually go and kill them using a task manager application.

Okay. Let's try modifying the script to close the workbook:

  $xl = New-Object -com Excel.Application
  $wb = $xl.Workbooks.Open("C:\Temp\file.xls")
  $wb.Close()

Does this work?

Har har har. No.

But according to the docs, there's supposed to be a method named Close. Why is it not there? Hmmm.

Let's try this:

  $xl = New-Object -com Excel.Application
  $wb = $xl.Workbooks.Open("C:\Temp\file.xls")
  Start-Sleep -Seconds 1
  $wb.Close($false)

Yeah. We have to wait unspecified amounts of time in unspecified places. This increases the chances that the script will succeed.

This gives you an impression of the type of quality and robustness we are dealing with here.

But the Excel processes are still hanging around. What else do we need to do for them to quit?

  $xl = New-Object -com Excel.Application
  $wb = $xl.Workbooks.Open("C:\Temp\file.xls")
  Start-Sleep -Seconds 1
  $wb.Close($false)
  $xl.Quit()
  [System.Runtime.InteropServices.Marshal]::ReleaseComObject($xl)

Ah! Now the processes are cleaning up. It's like a magic incantation: we have to tell Excel three times for it to leave.

Note that the $false parameter to Workbook.Close is necessary. Otherwise, Excel opens an interactive dialog asking whether we want to save changes to the unmodified file that the script opened.

Non-performance, non-concurrency

I eventually got a script running that would process workbooks and extract information from specific columns. As I type this, the script is chugging along, at a leisurely pace of... 8 cells per second.

It's 2015; we have multi-core computers running at GHz speeds. We have gigabytes of RAM, and solid-state hard drives. And the architecture of PowerShell, combined with the Excel COM interface, allows me to extract information out of workbooks at about the speed of a dot-matrix printer in 1990.

I saw in Process Explorer that the Excel instance spawned by the script is using nearly 100% of a single core. Computers have multiple cores, so I thought I'd tweak the script to run several concurrent instances, and get the job done 4x or 8x faster.

Nope. After launching 5 concurrent instances, instead of a single Excel process consuming 11% of total CPU, I had five Excel processes consuming 2% of total CPU each. So not only is it slow; there's also a global lock somewhere that prevents concurrency.


"auto" considered (often) harmful

Edit: Thanks to Drammon, Simon Brand, and Nicola Gigante for important corrections (see comments).

It's now in vogue to write C++ code like this:

  auto const& container = Function();
  for (auto const& element : container)
      auto const& member = element.AccessMember();

const& is necessary because auto strips references and const/volatile qualifiers. This is good: it is apparent when you're not copying a whole object, even if its type is hidden from view.
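The stripping behavior can be checked directly with type traits (a standalone illustration, compilable on its own):

```cpp
#include <cassert>
#include <type_traits>

int const value = 42;
int const& ref = value;

auto copy = ref;          // deduced as int: makes a copy, drops const and &
auto const& alias = ref;  // deduced as int const&: no copy is made

static_assert(std::is_same<decltype(copy), int>::value,
              "plain auto strips references and cv-qualifiers");
static_assert(std::is_same<decltype(alias), int const&>::value,
              "writing const& restores them explicitly");
```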

But please don't do this (too much).

The value of strong typing in C++ is not only in ensuring consistency at compile-time, it's also to document what the code is doing.

The above snippet requires that the reader knows:
  1. what Function returns
  2. what container.begin() or begin(container) returns
  3. what element.AccessMember() returns
in order to just know that the code is doing this:
  Container const& container = Function();
  for (Container::Element const& element : container)
      Member const& member = element.AccessMember();

This reduces readability of the code. By using auto this way, you're throwing away an important self-documenting property of the language.

There's an argument that this improves maintainability because if the return value of Function changes to a type that behaves the same, you need to make fewer code changes.

But maintainability is not just about reducing code changes; it's about ensuring their correctness. A developer needs to understand the changes they are making, and that a change is propagated correctly throughout the program. auto makes code harder to understand, and hides places affected by changes.

auto is definitely needed with lambdas:
  auto lambda = [&] (Seq x) -> bool

In this case, auto is the right thing to do – otherwise, you're wrapping the unnamed, compiler-specific raw lambda type into an std::function, and doubly declaring the function signature.

If you find that the classes you are using require really obtuse syntax to use explicit types:
  std::map<unsigned int, std::string> const& container = Function();
  for (std::map<unsigned int, std::string>::value_type const& element : container)

... then maybe that's the fault of an unfriendly design of the library you are using. In cases like that, I much prefer that, instead of auto, we use a suitable type alias:
  using MyMap = std::map<unsigned int, std::string>;

  MyMap const& container = Function();
  for (MyMap::value_type const& element : container)


Our bitching and moaning: Why the Middle East cannot have peace

In a nutshell: Terrorism and wars in the Middle East are not about Islam vs. the West. Instead, there cannot be peace in the Middle East as long as major world powers – the US and its allies on the one hand, and Russia on the other – have strong and opposite preferences about the price of oil; and as long as that price is controlled by whoever has more influence over the large oil-producing countries.

It's easy to see ourselves as enlightened, and to perceive Middle Easterners as these backwards people with fundamentalist beliefs, who can't stop fighting each other and us, regardless of our "noble" attempts to "free" them. We try to "liberate" them from the Taliban, Hussein, and al-Assad – and they attack us!

Except we have never intervened in the Middle East to liberate, or to help build anyone's peaceful country. We only intervene to stir the shit – but secular values require peace to take root. As we stir shit, unrest and extremism prosper. After World War II, the Middle East had perfectly good trends toward democracy and secularism, and might have been peaceful and enlightened today. But then we helped!

We could bring peace to the Middle East if we invested into 30 years of governing and rebuilding each country. But that would be colonialism. So instead we shoot things up and leave. Preferably, from air.

We have our allies in the Middle East, one of which is Saudi Arabia. Just today, The Guardian wrote about how a Saudi court sentenced a poet to death for renouncing Islam. There is no shortage of this: just two months ago, they sentenced a young man to death by crucifixion for protesting the government.

Our allies in our war against terrorism. So much better than terrorists!

The explanation is not as clear-cut, or as favorable to us, as we would like. We are allies with Saudi Arabia because they help prop up the petrodollar; support our military presence in the Middle East; and are willing to sacrifice economically to help the US and its allies. The price of gas at US pumps wouldn't be under $2 right now if the Saudis weren't pumping oil to sink the price, which costs them both short-term revenue and long-term reserves. The official reason is to "protect global market share", which makes as little sense as it sounds. Demand for oil is inelastic, so supply only needs to be cut by 1-3% to produce a 10% increase in price. But they can't exactly come out and say it's due to US pressure, or that a big reason is to clip the wings of Putin – whose strength is backed by Russian oil and gas exports, and who has been invading countries the West would have wanted as allies.

Why does the US want to topple al-Assad – a secular, but brutal dictator? To free the people? Or because he's allied with Russia, and provides their only Mediterranean base in the Syrian port of Tartus?

It's a geopolitical game in which you can't always pick all your friends if you're invested in the outcome. Saudi Arabia decapitates protesters, but they want to be our friend? Saudi Arabia is our friend now.

This pragmatism is not limited to Saudi Arabia. Consider the endemic rape of young boys in Afghanistan, where the US army silently condones allied local men using young boys as sex slaves.

Of course, instead of all this screwing over the Middle East, which does nothing to liberate anyone, or to bring peace to any country, we could also invest in nuclear energy and renewables, to wean our Western economies from oil dependency. But that would require either not so much panic against nuclear power – because terrorism and wars in the Middle East are not better, but most people don't connect the two – or a breakthrough in energy storage, so that renewables could sustain constant power supply.

For example, the Germans will panic against nuclear energy, and force a plan to shut down all their nuclear plants. But they need energy from somewhere, and the Middle East controls the price of this energy. So then the entire Middle East is at war, and Germany has to accept Syrian refugees. And then they moan about the state of the world, and blame Americans, but fail to see that the refugees they're taking in are a not-so-indirect consequence of their opposition to nuclear power.

On the other side of the Atlantic, Americans will bitch and moan about nuclear plants requiring so much government investment, because Americans are individualist and everything must be privately run. But then they also blame Obama if the price at the gas pump is high. So Obama, like George W. before him, brings the full might of the US military to bear on the Middle East, and fixes global oil prices. And then Americans bitch and moan about the expense of paying for all this military.

We can have peace in the world. But first, we must come to terms with our desire to push problems away. The attitudes "not with my tax dollars" and "not in my backyard" may seem to provide short-term relief. But in the long run, they only make big problems bigger. Not least our problem with the climate – which has helped create the Syrian crisis.


The advantages of Seq, and the demerits of std::string const&

A few weeks ago, I was reviewing some code, and found something similar to this:
  std::string host = ...;

  int port = 23;
  size_t pos = host.find(":");
  if (pos != host.npos)
  {
      port = atoi(host.substr(pos + 1).c_str());
      host = host.substr(0, pos);
  }

Looks reasonable enough. This code parses a parameter in “host:port” format. It's perfectly decent where it appears. It will run rarely, once per process invocation, so performance is irrelevant. It’s an acceptable way to achieve what’s intended.

But suppose this code was in a tight inner loop. Suppose it was part of a parser that needs to digest hundreds of megabytes of data, and performance is relevant.

In that case, this code does two suboptimal things:
  • A heap allocation to copy the “port” portion of the string into. Why not just read the number from the original string?
  • Another heap allocation to copy the “host” portion of the string. Again, why not just read from the original string?

Seq as an improvement over std::string const&

There are two purposes for which C++ programmers commonly use strings:
  1. To store and own character content. This is what string and wstring do.
  2. To pass character content without passing ownership. This is what string const& does.
I argue that passing string const& is almost always a mistake. It constrains the string provider in ways that aren't necessary for the consumer to read the string. All you really need is to pass a pointer and a length. You need a lightweight Seq object.

A Seq object, essentially, is this:
  struct Seq {
      byte const* p { nullptr };
      size_t      n { 0 };
  };

In practice, a useful Seq implementation will also contain numerous methods that a user can use to read from the Seq. My implementation has these, among others:
  struct Seq {
      uint   ReadByte             (...);
      uint   ReadHexEncodedByte   (...);
      uint   ReadUtf8Char         (...);
      Seq    ReadBytes            (...);
      Seq    ReadUtf8_MaxBytes    (...);
      Seq    ReadUtf8_MaxChars    (...);
      Seq    ReadToByte           (...);
      Seq    ReadToFirstOf        (...);
      Seq    ReadToFirstOfType    (...);
      Seq    ReadToFirstNotOf     (...);
      Seq    ReadToFirstNotOfType (...);
      Seq    ReadToString         (...);
      Seq    ReadLeadingNewLine   (...);
      uint64 ReadNrUInt64         (...);
      int64  ReadNrSInt64         (...);
      uint32 ReadNrUInt32         (...);
      uint16 ReadNrUInt16         (...);
      byte   ReadNrByte           (...);
      uint64 ReadNrUInt64Dec      (...);
      int64  ReadNrSInt64Dec      (...);
      uint32 ReadNrUInt32Dec      (...);
      uint16 ReadNrUInt16Dec      (...);
      byte   ReadNrByteDec        (...);
      double ReadDouble           (...);
      Time   ReadIsoStyleTimeStr  (...);
  };

You get the idea. All the basic primitives you'd need to read character content belong in Seq.

The basic benefit of Seq is that it's lightweight, containing only a pointer and a length, and can point not just to a whole string, but also a substring. It does not require unnecessary functionality, like a whole string object, just to pass a sequence of characters without ownership.

A second, arguably even more important, benefit is that it serves as a focal point for a powerful set of string reading methods that leverage each other, allowing for both elegant and efficient string reading.

Using Seq, the earlier "host:port" example can be rewritten like this:
  Seq hostPort = ...;

  Seq host = hostPort.ReadToByte(':');
  uint32 port = 23;
  if (hostPort.n)
      port = hostPort.Drop(1).ReadNrUInt32Dec();

This is not more complex than the string version. Yet this version does its task without unnecessary heap allocations, and would be much more efficient if implemented where performance matters.
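For illustration, here is a minimal, self-contained Seq, with just enough methods to run the "host:port" example. The method names follow the article; the internals are my own guesses, not the author's implementation:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <cstring>

typedef unsigned char byte;

// Minimal Seq: a non-owning pointer + length over character content.
// Read methods consume from the front and return the consumed part as a Seq.
struct Seq {
    byte const* p { nullptr };
    size_t      n { 0 };

    Seq() {}
    Seq(char const* z) : p((byte const*) z), n(strlen(z)) {}
    Seq(byte const* p_, size_t n_) : p(p_), n(n_) {}

    // Return a Seq with the first "count" bytes dropped (or empty if shorter).
    Seq Drop(size_t count) const {
        size_t k = count < n ? count : n;
        return Seq(p + k, n - k);
    }

    // Consume up to (not including) the first occurrence of b, and return the
    // part read. If b is absent, this consumes the entire remaining Seq.
    Seq ReadToByte(byte b) {
        size_t i = 0;
        while (i < n && p[i] != b) ++i;
        Seq read(p, i);
        p += i; n -= i;
        return read;
    }

    // Consume leading decimal digits and return their value.
    uint32_t ReadNrUInt32Dec() {
        uint32_t v = 0;
        while (n && *p >= '0' && *p <= '9') {
            v = v * 10 + uint32_t(*p - '0');
            ++p; --n;
        }
        return v;
    }
};
```

With this in place, the example parses as described: after ReadToByte(':'), the remaining Seq is either empty (no port given) or starts with ':', which Drop(1) skips before the number is read.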

So, it's like std::string_view?

The proposed C++ extension std::string_view implements a similar concept. Main differences:
  • std::string_view is mainly the lightweight reference. It lacks a powerful library of string reading methods. Seq, as the example above shows, emphasizes stream-like reading: read methods consume part of the Seq, and return the part that was read as another Seq. A fully useful Seq implementation covers the basic primitives of string reading in an elegant way.
  • std::string_view is an std::long_inconvenient_name. However, this is understandable given a standard library designed by dark warlocks whose mystical powers derive from conjuring, and causing the world to use, long inconvenient names. :)
I emphasize the use of Seq as a default for string passing and reading, not a special case. This is encouraged by giving it a practical name, and by building a library of string reading methods around it.

std::string_view could do the same, but it needs more power than just remove_prefix and remove_suffix.
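For comparison, here is the "host:port" parse done with std::string_view and free-standing helpers (this assumes a C++17 compiler; std::from_chars stands in for a Seq-style ReadNr method):

```cpp
#include <cassert>
#include <charconv>
#include <string_view>

// The same parse with std::string_view: workable, but the reading logic
// (find, substr, from_chars) lives outside the view type, instead of being
// methods on it like Seq::ReadToByte and Seq::ReadNrUInt32Dec.
struct HostPort { std::string_view host; unsigned port; };

HostPort ParseHostPort(std::string_view s) {
    HostPort r { s, 23 };   // default port, as in the example above
    size_t pos = s.find(':');
    if (pos != std::string_view::npos) {
        r.host = s.substr(0, pos);
        std::string_view digits = s.substr(pos + 1);
        unsigned v = 0;
        std::from_chars(digits.data(), digits.data() + digits.size(), v);
        r.port = v;
    }
    return r;
}
```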


Can we stop with this idiocy of private courts?

There are smart people out there – people in many ways a lot like me, i.e. borderline idiot savants – who are attracted to the idea that the world needs to be saved through some kind of easy, adversarial revolution, rather than through a huge amount of incremental and cooperative effort. I suspect this is because cooperation seems dull; imposing one's will on others with violence seems fun; and plain old effort is hard and boring. Frequently, these are white middle-class Americans who do not recognize just how damn good they have it, and how much worse things are in many other parts of the world.

Not in all parts for all things, of course. There are specific things that are genuinely better in other places. But overall, things are pretty great in the US for the middle and upper class. And yet, some of these same people can't stop going on about how awful everything is; and how everything could be much better if we just overthrew the entire system, and replaced it with something completely different. Like, for example: Let us fix real problems in our justice system by replacing it with private courts!

This is all based on the libertarian delusion, the foundation of which is to pretend that some obvious facts of life do not exist; and then coming up with solutions that might work in a fictional world that conforms to those assumptions. In this way, libertarianism and communism are the same mistake expressed in different ways. Both are ideologies that make bullheaded assumptions – about man, about the world – and then try to shoehorn people into it.

The fact of life that people are ignoring here is that not everyone has equal power. If we are to measure power and influence, some people have not just thousands, but millions of times more than others.

Private courts are essentially arbitration. Arbitration can work for equally powerful parties. But that's the only situation where it works.

When both sides get equal say in which arbitrators they're willing to deal with, arbitration overwhelmingly favors the bigger party, because the bigger party controls a much larger share of the arbitrators' future business than any small party does.

Allowing large businesses to dictate terms of dispute resolution effectively prevents class action lawsuits, which are an important way to hold large businesses accountable over systemic abuse.

As the small party in arbitration, you have no choice. You either go with the corrupt arbitrators chosen by the large corporation, whose decisions favor the large business; or you don't get service. If there are any payouts, they are small enough not to hurt the corporation, which can continue systemic abuse as a business model, because paying out small amounts from time to time is cheaper.

You either get to agree to use their courts, or you don't get service. Good luck.


On Daylight Saving Time

Around this time of year, complaints pop up about the need to adjust clocks by an hour. I am usually one of the complainers, so I wondered: why do we still change clocks to DST and back, when apparently, most people hate it? If most of us prefer a single time throughout the year - what are the obstacles to making this happen?

After some research, I suspect the answer is:
  • We do in fact prefer more daylight during evenings in summer. Because of this, we might prefer permanent DST.
  • However, we can only borrow the evening daylight from mornings during summer. In winter, there's no light to borrow, unless we want mornings to be black.
In northern places, like Seattle, permanent DST would mean sunrise at 9 am at the end of December and in early January. Even as far south as Dallas, with permanent DST, sunrise in January would be at 8:30 am.

It appears that permanent DST would be great for those of us who get up late (more daylight!); but it would make winter mornings dreary for people who need to be at work or school at 8 am or 9 am.

Conversely, permanent standard time would make summer evenings end early, and in exchange we'd get the sun waking us up at 4:15 am.

It follows that there is wisdom in the current arrangement, resulting from forces of nature. sigh

Interestingly, the British did try permanent DST from 1968 to 1971. Apparently, mornings were dreary. Though note that Britain is further north than most of the US.

Russia – even further north – also tried year-round DST in 2011, but moved to year-round standard time in 2014 after people wearied of dark winter mornings. And in summer, their sunsets are at 9 pm, anyway.


Modern-day pyramid building

A commonly understood truth is that consumer spending is good for the economy. A common misunderstanding is that this is true for all forms of consumer spending: even $500 million superyachts; private jets; and the building and maintenance of luxurious villas that most of the year, no one lives in.

A superyacht is absolutely no better than an Egyptian tomb. Both spend vast time and resources to build a huge and expensive thing for one person (and their close friends), instead of building something more widely useful. To think one is better than a pyramid is to fool ourselves with economic sophistry. Anyone who looks at this from a 1,000 year perspective will consider us foolish.

To say that superyachts and private jets are "good for the economy" is equivalent to the Broken window fallacy. Superyachts, jets, and villas that no one lives in, are extremely expensive broken windows.

Progressive consumption tax

I do not suggest defining some arbitrary categories of products as "luxury goods", and taxing them out of existence. We shouldn't be in the business of defining what is luxury and what isn't.

Instead, I suggest a progressive personal consumption tax, where all personal spending is taxed at 0% up to some annual threshold, then at 5%, then 10%, and so on. Ideally, a smooth formula would be used, such that the tax percentage just keeps growing, indefinitely. The more you spend on personal consumption, the higher the consumption tax you pay.
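The bracketed version of this scheme can be sketched as follows (the rates and thresholds are invented for illustration; a real proposal would tune them, or replace the brackets with a smooth formula):

```cpp
#include <cassert>
#include <cmath>

// Progressive consumption tax sketch: each bracket taxes only the spending
// that falls within it, so the marginal rate grows with total consumption.
double ConsumptionTax(double spending) {
    struct Bracket { double upTo; double rate; };  // upTo < 0 means "no upper limit"
    Bracket const brackets[] = {
        {  20000.0, 0.00 },   // first $20k of annual spending untaxed
        {  50000.0, 0.05 },
        { 100000.0, 0.10 },
        {     -1.0, 0.25 },   // everything above $100k
    };
    double tax = 0.0, taxedSoFar = 0.0;
    for (Bracket const& b : brackets) {
        double cap = (b.upTo < 0.0 || b.upTo > spending) ? spending : b.upTo;
        if (cap > taxedSoFar) { tax += (cap - taxedSoFar) * b.rate; taxedSoFar = cap; }
    }
    return tax;
}
```

Under these made-up brackets, someone spending $10,000 pays nothing; someone spending $60,000 pays 5% on the $20k-$50k slice plus 10% on the next $10k, i.e. $2,500 total.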

The tax rate would automatically rise to something resembling 900% if your annual personal consumption is in the hundreds of millions. It doesn't matter what you spend it on – one superyacht, one hundred supercars, or 1,000 paintings.
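As an illustration of such a smooth formula – with entirely made-up parameters, chosen only so that the marginal rate starts near 0% and reaches roughly 900% in the hundreds of millions; this is a sketch, not a concrete policy proposal – a power-law marginal rate could look like this:

```python
# Illustrative parameters only -- not a policy proposal.
K = 0.10       # marginal rate at the reference spending level
T = 100_000.0  # reference annual spending level, in dollars
P = 0.56       # growth exponent; higher = more steeply progressive

def marginal_rate(spent):
    """Marginal tax rate after `spent` dollars of annual personal
    consumption. Starts at 0% and grows without bound."""
    return K * (spent / T) ** P

def total_tax(spending):
    """Total tax owed on `spending` dollars of annual consumption:
    the integral of marginal_rate from 0 to `spending`."""
    return spending * marginal_rate(spending) / (P + 1)

print(f"{marginal_rate(10_000):.1%}")       # small spender: a few percent
print(f"{marginal_rate(100_000):.1%}")      # reference level: 10.0%
print(f"{marginal_rate(300_000_000):.1%}")  # superyacht territory: ~900%
```

Because the marginal rate is a power law, its integral has the closed form above (spending × rate ÷ (P + 1)); any smooth, increasing curve would serve the same purpose.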

This does require spending to be classified as personal spending, an investment, or a business expense. However, this adds no complexity that income tax administration does not already require.

But what of the lost jobs?

When you impose a tax on luxury goods that causes some jobs to go away, that of course produces tangible, noticeable pain when workers producing those goods are no longer needed. When the economy restructures to reabsorb those people, the relief is diffuse, distributed, and subtle. You notice the pain, but not the relief. This may lead a person to think that damage was done by discouraging the production of luxuries.

But this is not so. Even outside of tax collected, the very fact that pointless things are not being produced is a benefit to the economy. The tax is meant to reduce this type of consumption, which means it will cause job losses in the short term. But longer term, the money will go somewhere else, and jobs lost will be reabsorbed. The real benefit is in the long run (generations), not the short run (next few tax years).

But consumption moves abroad

Unfortunately, progressively taxing consumption achieves little when people can simply board their jets, and take their yachting to another country, without taxation. A consumption tax is subject to avoidance even more than taxation of income. Who hasn't taken advantage of state or country borders to shop in a city with no sales tax? When it comes to superyachts, jurisdiction shopping is a no-brainer. Of course the yachts are going to be built and used where taxes are minimal.

To the extent that consumption can move abroad, any progressive taxation of personal consumption therefore has to be accompanied by measures to discourage avoidance. Most Western economies already tax worldwide income. Perhaps taxation of worldwide spending is not so inconceivable.


Spec author "fixes mistake", wastes everyone's man-year

I came across the Referrer Policy proposal, which is already implemented in Chrome and Firefox, and allows sites to provide more privacy for their users by restricting referrer information sent to other websites when users follow links.

For example, with default browser behavior, if you are browsing the following page:
... then if you click any links on that page which take you to a third-party site, for example:
... your browser will kindly send to that site the full address of the "Big Dicks" page you came from.

This has some unfortunate privacy implications, so finally, browsers (except Microsoft's, of course) are allowing sites to exert more control over what referrer information is sent with outgoing links.

One of the nice new policies a site can choose is origin-when-cross-origin. Or is it? In 2014, the First Public Working Draft of the spec made a "mistake", defining this policy as origin-when-crossorigin – without the third dash. In 2015, this was noticed, and the spec author decided to "fix" it, adding the third dash in later Editor's Drafts.

This has resulted in a situation where Firefox versions 36–40 implement the previous spelling (two dashes), and versions 41+ implement the new spelling (three dashes). As of today, the Mozilla Developer Network still documents the old, two-dash spelling. FxSiteCompat (not affiliated with Mozilla) documents the fix, and states: "The legacy wrong value will no longer be supported in the future."

Meanwhile, announcements like this one continue to link to the old version of the spec, and the old version is still what you will find if you look up the spec on W3.org. If you want to use this feature, you'll spend an hour figuring it out. And if you don't, you are likely to use the old version instead of the new one – possibly leading to your referrer policy breaking in the future.

Ahh – the great results of "fixing" things that already shipped, and perhaps were not even broken. :-)


This ex-libertarian endorses Sanders

I have less than billionaires, or even wealthy millionaires, but more than most people. What I have, I built "myself": that is to say, with much personal effort; but also with critical, non-negligible components of luck, and considerable help and work done by the right other people. My own personal work has been indispensable – but on its own, it wouldn't have been enough.

From my little perch, it seems to me that claiming one's little empire and yelling "I built all of this myself!" is nothing short of hypocrisy and egocentrism. There's no way one person builds an empire. Hundreds, or even thousands, of people build it. The leadership provided by one person or a few is critical, but you are not building the empire yourself. Other people are building it for you, with your partial guidance.

For there to be thousands of people who can help you build your empire, there has to be infrastructure before your business even starts. There have to be schools for your future employees to be educated. There have to be roads and telecommunications. There have to be hospitals and doctors. There has to be peace and order. There has to be a community.

Standing on top of this empire, and yelling to the world how "I built all of this myself! I want all of the rewards for me, and only me! I am an island! Any taxation is stealing!" seems to me so utterly... blind; hypocritical; narrow-minded; self-absorbed; and oblivious to other people's contributions.

So, yeah. I definitely believe that US politics would benefit a lot from moving in the direction of Sanders.

Full disclosure:

I'm originally from a high-tax European country, where taxation and bureaucracy felt oppressive. I was libertarian, and moved to Costa Rica, where I currently benefit from no taxation on foreign income. I have spent years in multiple countries where I saw the results of low taxation and low public investment. I'm now applying to move to the US, and if accepted, I expect to pay significant income tax there, in exchange for a better lifestyle, mostly due to those taxes.

I used to be libertarian before I succeeded (relatively speaking: I can't afford a jet, or anything like that). Now that I have, it seems fairly obvious how much of other people's work it takes. Not just to succeed, but to have an environment in which it is worth succeeding.


When monospace fonts aren't: The Unicode character width nightmare

Some things haven't changed since the 1970s. Programming is still done in text files; and though we have syntax highlighting and code completion, source code is still best displayed in monospace.

Other aspects of computing also work best with monospace: the Unix shells; PowerShell; the Windows Command Prompt. Email is still sent with a copy in plaintext, which has to be wrapped at a fixed column width. Not least, this persists because HTML email is excessively difficult to render securely, and there are user agents that still work better with plaintext.

In all of these situations, the originator has to anticipate how the text will be rendered. You cannot just send text and expect the recipient to flow it. You have to predict the effects of Tab characters correctly, and word wrap the text in advance, often without knowing the software that will be used for display. In terminal emulation – e.g. xterm via SSH – when the server sends the client a character to render, the server and the client need to agree on how many positions to advance the cursor. If they disagree, the whole screen can become corrupted.

As long as you stick to precomposed Unicode characters, and Western scripts, things are relatively straightforward. Whether it's A or Å, S or Š – so long as there are no combining marks, you can count a single Unicode code point as one character width. So the following works:
Nice and neat, right?
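This is easy to verify programmatically. Sketched with Python's standard unicodedata module (the strings here are my own examples, not from the original post): so long as the text contains no combining marks, counting code points gives the display width.

```python
import unicodedata

def is_precomposed(s):
    """True if the string contains no combining marks, so that -- for
    Western scripts -- each code point takes exactly one column."""
    return all(unicodedata.combining(ch) == 0 for ch in s)

def naive_width(s):
    """Monospace width under the one-code-point-per-column assumption."""
    assert is_precomposed(s), "combining marks break the assumption"
    return len(s)

print(naive_width("ASCII"), naive_width("ÅŠÑÉØ"))  # 5 5 -- both line up

# The same accented text built with combining marks (NFD) breaks it:
decomposed = unicodedata.normalize("NFD", "ÅŠÑÉØ")
print(is_precomposed(decomposed))  # False: len() would overcount
```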

Unfortunately, problems appear with Asian characters. When displayed in monospace, many Asian characters occupy two character widths. How do we know which ones?

Our problems would be solved if the Unicode standard included this information. Unfortunately – as far as I can tell – the Unicode Consortium takes the stance that display issues are completely the renderer's problem, and makes no effort to include information about monospace character widths. (Edit – incorrect: see update below.)

If you're on Unix, you may have access to wcwidth. However: "This function was removed from the final ISO/IEC 9899:1990/Amendment 1:1995 (E), and the return value for a non-printable wide character is not specified." What this means is that the results of wcwidth are system-specific.

In 2007, Markus Kuhn implemented a generic version of wcwidth, which we now use in the graphical SSH terminal console in Bitvise SSH Client. However, this is more than 8 years old at this point, and is based on Unicode 5.0, whereas the current version is 8.0.

So I had the idea that maybe we could "just" extract up-to-date information from Windows. It's 2015, the following should render well, right?
	台北1234		(leading characters should be 2 spaces each)
	ＱＲＳ12		(fullwidth latin; should be 2 spaces each)
	アイウ1234		(halfwidth kana; should be 1 space each)
It turns out – no. Perhaps you have an operating system with proper monospace fonts, which displays all of the above lined up. On my Windows 8.1, the problem looks like this:

[Screenshot: the test text rendered in IE, Chrome, Firefox, Notepad, and VS 2015]

Note how nothing lines up: not in Internet Explorer; not in Chrome; not in Firefox; not in Notepad; not in the latest version of Visual Studio – the environment in which Windows is developed (Edit: apparently not - see comments). Half-width kana are displayed kinda correctly by the Consolas font used in Notepad and Visual Studio; but that's it.

It turns out, when locale is set to English (United States), Windows just doesn't seem to use monospace fonts for Asian characters. Indeed, setting the Windows locale to Chinese (Simplified) produces this:

This is better; but now, the half-width kana are borked. sigh

Note that the above isn't a Windows problem only. This is how the same text displays on Android:

It boggles my mind that it's 2015, and we still don't have a single, authoritative answer to this question: how many character positions should each Unicode character occupy in a monospace font?


Because I'm providing examples of incorrect character rendering, this may offer the misleading impression that this is just a font problem.

This isn't just a font problem. It's that there's no standard monospace character width information, independent of font used.

The above incorrect renderings involve systems using non-monospace fallback fonts. However:
  • Even if you only have a fallback font that's not mono, you can coerce it into the right character positions if you know the character widths. The above examples could work correctly – although the renderings might be less than perfect – if software knew the intended character widths.
  • Even if you do not have a fallback font, and are just displaying placeholder boxes – you still need to know character widths to render the rest of the text properly, and for Tab characters to work.
Operating systems could work around this problem by providing better font support. We now have terabyte hard drives, so there's no reason all cultures shouldn't be simultaneously supported. However, that still leaves the underlying issue – that we need standardized monospace character widths.

Update and additional information

It turns out that Unicode does in fact provide character width information for East-Asian characters. It's just not as neat as one number. When is it ever? :)

The information is in EastAsianWidth.txt, which is part of the Unicode character database. The data provides an East_Asian_Width property, which is explained in this technical report.

This is basically what is needed... with some unfortunate limitations:
  • Hundreds of characters are categorized as ambiguous width (property value A). These characters include anything from U+00A1 (inverted exclamation mark, ¡) to U+2010 (hyphen, ‐) to U+FFFD (replacement character, �). Many of these characters (but not all!) have different widths depending on system locale. For example, U+00F7 (division character, ÷) has a width of 1 on Windows under English (United States), but a width of 2 under Chinese (Simplified, China).
  • In some cases, width can differ even between fonts under the same locale. For example, on Windows under Chinese (Simplified, China), U+FFFD (replacement character) renders as narrow (1 position) with a raster font, and wide (2 positions) with a TrueType font.
  • Some characters categorized as one width are still displayed as another width by certain systems. For example, U+20A9 (Won sign, ₩) has width property value H (half-width), but is displayed as wide (two positions) by Windows under locale Chinese (Simplified, China). It is displayed as narrow under locale English (United States).
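The property is easy to query from Python's standard unicodedata module. A minimal width function built on it – a sketch that hard-codes the non-East-Asian interpretation of ambiguous characters – shows both the useful part and the limitations above:

```python
import unicodedata

def char_width(ch):
    """Monospace columns for one character, per East_Asian_Width.
    W (wide) and F (fullwidth) take two columns; H (halfwidth),
    Na (narrow) and N (neutral) take one. A (ambiguous) is locale-
    dependent; here we assume a non-East-Asian locale (one column)."""
    if unicodedata.combining(ch):
        return 0  # combining marks take no column of their own
    return 2 if unicodedata.east_asian_width(ch) in ("W", "F") else 1

def display_width(s):
    return sum(char_width(ch) for ch in s)

print(display_width("台北1234"))    # 2+2+1+1+1+1 = 8
print(display_width("ＱＲＳ12"))    # fullwidth latin: 2+2+2+1+1 = 8
print(display_width("アイウ1234"))  # halfwidth kana: 1+1+1+4 = 7

# The problem cases discussed above:
print(unicodedata.east_asian_width("\u00f7"))  # '÷'  -> 'A' (ambiguous)
print(unicodedata.east_asian_width("\ufffd"))  # '�' -> 'A' (ambiguous)
print(unicodedata.east_asian_width("\u20a9"))  # '₩'  -> 'H' (halfwidth)
```

In a real terminal or editor, the ambiguous class would have to switch to two columns under East-Asian locales – which is exactly where the renderers in the screenshots disagree.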
There are also scripts like Devanagari that just don't seem to have a monospace representation. I was unable to get Windows to display Devanagari characters in console. They do display in Notepad, but they don't obey any kind of monospace font rules, at all.

There are other efforts to provide information on character widths, including the utf8proc library that's part of Julia. Interestingly, this library derives its information by extracting it from Unifont. Unifont, in turn, is an impressive open source Unicode font with a huge coverage of characters.


C++ Relocator proposal

Last month, I spent two weeks working on the following formal proposal for a new C++ feature:

Relocator: Efficiently moving objects

After incorporating much feedback in the C++ Proposals forum, I believe this proposal represents not only my ideas, but something close to a consensus of everyone who expressed interest in this feature. I believe the document is fairly polished. I have submitted it as proposal P0023 via Ville Voutilainen, chair of the Evolution section of the ISO C++ working group (WG21).

It so happens that Ville is also the person who most vocally disagreed with my observations last month about problems in C++ standardization – to the extent of us colliding in a somewhat fiery altercation:

Ossification and Hawaii: Impressions of a C++ working group

When I submitted this proposal, Ville reiterated his position that I need to find a champion to represent it in the next WG21 meeting in October in Hawaii. This is how the Working Group goes about its work.

So far – despite considerable positive feedback – no one has volunteered to actually go to Hawaii for this. It seems most likely that I, too, will not be in a position to make the trip.

I think this will provide a data point about whether, and to what extent, ISO C++ has a problem.

I make the following prediction:

If I don't go to Hawaii; and no one else goes to champion this proposal; then it will not even be considered, despite support and interest in the C++ Proposals forum, and relative lack of opposition.

I'm not saying that the proposal should be accepted. I'm saying: if no one goes to Hawaii for it, it will not even be considered. It will be as though the work was never done; and the proposal never existed.

And if this happens, as I think it will – then, most unfortunately, it will reinforce the idea that decent proposals, with decent support, are being lost simply due to the nature of the standardization process.

Acetaminophen (paracetamol, Tylenol) more toxic than thought

Real-time monitoring of oxygen uptake in hepatic bioreactor shows CYP450-independent mitochondrial toxicity of acetaminophen and amiodarone

Prediction of drug-induced toxicity is complicated by the failure of animal models to extrapolate human response, especially during assessment of repeated dose toxicity for cosmetic or chronic drug treatments. [...] Importantly, exposure to widely used analgesic, acetaminophen, caused an immediate, reversible, dose-dependent loss of oxygen uptake followed by a slow, irreversible, dose-independent death, with a TC50 of 12.3 mM. Transient loss of mitochondrial respiration was also detected below the threshold of acetaminophen toxicity.
I posted previously about a paper suggesting that acetaminophen may cause autism and ADHD in vulnerable children.


Apple design is anti-nerd

I figured out why I dislike Apple, and the average person loves it.

The Apple design is anti-nerd.

It allows people to use technology without being ashamed that their skill is rudimentary. It puts regular folks on equal footing with skilled users. The product is intended to have limited, basic uses. By design, it lacks advanced features and flexibility, so that it cannot be used more effectively by being good at it. It's not just the low skill threshold; it is the low skill ceiling that makes people want to use it. Some uncool, nerdy person can't outskill you at iTunes.

Then, it's pricey and stylish, and therefore a fashion statement: something nerds don't know how to use.

It turns the tables on nerds. Therefore, normal people love it. Genius. :)


Science, spirituality, and the limits of the materialist paradigm

"The first gulp from the glass of natural sciences will turn you into an atheist, but at the bottom of the glass God is waiting for you."
This is a quote attributed to Werner Heisenberg – a pioneer of quantum mechanics, known for the Heisenberg uncertainty principle.

A number of my friends oppose religion, for good reasons that I also used to champion. Such people are very much miffed by Heisenberg's mention of "God". They argue the concept means nothing; that it's useless. "God" and "spirituality" are just labels we put on things we don't understand. Anything we don't yet know, science will eventually explain. Until it does, it is useless to guess.

I would argue guessing is an essential part of the scientific process. Most progress started first by guessing. However, more than just this – I wish to address the assertion of spirituality being useless.

The materialist paradigm exists for a reason. If it's what makes a person happy and makes them comfortable, who am I to tell them that they "need" something different? If someone is happy with that understanding of the world, that's fine.

But the fact is that the materialist paradigm is false. I know it is false from experience.

In this, I find the words of Morpheus appropriate:

"Unfortunately, no one can be told what the Matrix is. You have to see it for yourself."

Experiences exist, and are available from time to time, which can provide you with subjective evidence that materialism is false. The thing with these experiences, however, is that they are not available on demand, which means they aren't easily reproducible.

An often glossed-over property of the scientific method is that, by necessity, it simplifies the unsimplifiable. This is necessary to make any progress at all. However, it is done literally by throwing crucial data away. What science cannot explain, it dismisses as if it never existed. A data point on a graph that doesn't fit the equation is not pursued relentlessly to find an explanation; it is dismissed as measurement error.

If you're a software developer – like I am – you may have done your share of debugging. You may have seen a weird bug happen once, and then been unable to reproduce it. This is the weird data point on the graph. You may have dismissed this bug, pretended you did not observe it, treated it as a "measurement error". And you might not see it again for months. But the bug is there.

In time, if your software is used enough, users will observe the effects of that bug, and you may be reminded of its existence. Just because you didn't chase it down, it didn't go away. If you pay attention, then in time, you might collect enough data to find that bug, and finally fix it. But you'll never collect that data if you don't pay attention; if you keep believing that the bug "shouldn't" be there.

A shortcoming of science is how often it doesn't do that. All scientific measurement is riddled with these inexplicable phenomena, but for the most part, they're continually being dismissed. Much science, though not all, is an attempt to "understand" by shoehorning the world to fit an equation. It's a pretense that the world obeys rules we are comfortable with - whereas in fact, it very much may not.

So – many people, including friends of mine, believe spirituality is useless. But it is science that is in fact useless, if certain assumptions that we take for granted about the world happen to be false. We are trusting science to eventually provide us with ultimate answers. But the scientific method can only provide us with ultimate answers if those answers can be found within the world.

If the world is in fact an illusion; if the gateway out of this illusion is in fact the mind; then making measurements using contraptions that are part of the illusion will not provide us with an understanding of what is outside.

If you investigate yourself; if you investigate the mind; and by that I mean, paying attention to your mind; not by taking EEG measurements of someone's brain, or poking in there with a scalpel; because the brain is not the mind, and is most likely only a projection, an extension, an outer layer of the mind;

... if you pay attention to your mind, then you may find answers today, instead of waiting hundreds of years before science can conclusively tell you: "Sorry - it turns out you just needed to look into yourself."

Science is a tap, yes. But what comes out of this tap is just more information about the world, which makes sense within the world. If the world is an illusion, chances are that science will never give us information about the outside of the illusion, because all science takes place within it.

But if our minds exist outside of the illusion – then there's potential to access this knowledge directly.


The ethics of non-consensual monogamy: coercion and dead bedrooms

Here's a hypothesis.

Monogamy is only ethical if both partners continue to choose it. Not just once, but every day; and without guilting each other into it. Each of the partners has to continue to choose it, and the choice has to be truly free; without conditions or attached strings.

Folks have begun to warm up to the idea that open relationships can sometimes work, for a few weird people. However, even among people accepting of this – even among those who are poly – the idea of physical loyalty remains sacrosanct. The idea remains dominant that, if you made monogamous vows, it is your duty to uphold them. No one respectable should cheat. Cheaters are literally worse than... racism.

Consider this, though.

Monogamy boils down to the expectation that you won't use your genitals in a way that isn't useful to, and approved by, your partner.

This is objectification. It is abrogation of each partner's individuality. It is dismissal of a person's independent sexual nature. It is a forced reduction of that nature to whatever might be acceptable to the other partner, and a dismissal of unmet needs that this forced reduction may create.

This is not love. Love is not forcing someone to shrink to a form in which they can't fully express themselves, based purely on your comfort and convenience.

Love is not something you give conditionally. That is trade. Love is given unconditionally. Except in jest, love does not involve statements such as: "I swear I'm going to cut off your X if I ever find you cheating!" That's not love, that's a threat of abuse. (Notice how it's only ever cute if it's said by a lovely woman?)

Many people live, and suffer, in non-consensual monogamy. This is monogamy to which a person once agreed, but might no longer agree to, if they could give it up without losing something important. Many of these are "dead bedroom" relationships; relationships that aren't even monogamous, as much as they are celibacy in a couple. Where one partner desires sex, and the other doesn't, so the sex happens once in a blue moon – and if it does, reluctantly.

This wouldn't have to be a problem, if the partner who doesn't want sex didn't expect the other to "just deal with it". They may have no interest in their partner's genitals – but they sure as hell expect no one else to touch them. If someone does – holy betrayal: may the vengeance of hell be upon thee!

I contend that this is objectification of the partner whose needs aren't being met. It's a dismissal of this person's independent sexual nature, and a reduction of their sexuality to a small fraction of what it naturally would be. Yet, people argue: "You made marriage vows – you better stick to it."

Well, no. If people have to stick to their agreements, it is a necessary stipulation that those agreements also be fair; they have to actually meet everyone's needs. Contrary to the broken moral compasses of the monogamous majority, a person cannot actually sign away their individuality with marriage.

We can make vows, and those vows have legitimacy as an expression of a couple's hopes and aspirations. However, marriage vows cannot be a contract. They cannot be a contract for the same reason that we would never, in this day and age, consider legitimate an agreement where a person becomes a slave of another; or where they become an indentured servant. Individuality is something you cannot give away. Not even with marriage.

The assumption of the monogamous majority, that their partner's genitals are theirs to own, is implicitly false. It cannot be true, because we cannot contract away our individuality.

Not infrequently, this false belief smashes headlong into reality, and survives this like a glass bottle crashing into rock. People realize that, despite their assumptions; despite their vows; they cannot actually own their partner. They never could; and this realization utterly destroys them.

Monogamy, in practice, can be beautiful. However, it cannot be beautiful to the extent that it's based on a false belief of owning a person. In order to work, monogamy has to be chosen; not by one partner, imposing it on the other, but by both. It has to be chosen not just once, but freely, every day. It has to not involve hostage-taking and coercion. There can't be any "You can't have sex, with me or anyone – or I'll make sure you never again see your children."

When monogamy is chosen by both partners, without strings; and continues to be chosen every day – such monogamy is beautiful, and healthy.

Previous similar post: Against the hating of cheaters


How the Yugoslav army dealt with liabilities

This is an anecdote told by my wife occasionally.

Jana and I are from Slovenia, which used to be part of communist Yugoslavia. My wife's grandmother had a sister who worked in Belgrade, in the headquarters of the Yugoslav army, as an assistant or secretary. She was close to where important things happened.

As a hobby, she was into sewing, tailoring, and knitting, and for this reason she purchased West German magazines which were ubiquitous at the time – thick, heavy catalogs for people into this hobby; Burda was one of them. The army supervised people working in its headquarters, so they knew she was reading these magazines, and this was suspicious. She was interrogated about it more than once.

Eventually – some time in her middle years, not soon enough for retirement – she wanted a change of scenery, to move back home, and quit. At this point, she became untrusted and a liability. The way they dealt with that was to have her interned in a psychiatric hospital, and subjected to electroshocks and lobotomy, until she was hardly aware of herself; a shadow of her former self.

She lived out the remainder of her life, up to age 80 or so, in this state. She spent these years in a home for assisted living, not far from where Jana's family lives. Most of the time, she could not tell you the date.


Ossification and Hawaii: Impressions of a C++ working group

I've recently interacted informally with the mailing list of the ISO C++ Working Group. I've tried to float the following ideas.

Aggregated exceptions. I think I came up with a neat and complete proposal, but it's too ambitious for an existing language, given the changes it proposes and its relatively niche concern. We've migrated to VS 2015, so I've begrudgingly accepted noexcept destructors. And since C++11, lambdas provide a practical way to solve problems where one might previously have wanted to use complex destructors.

So I guess we can live without multi-exceptions. Okay.

I then tried to float an is_relocatable property. A shortcoming of C++ right now is that it allows object content to be moved, but it doesn't allow movement of the objects themselves. Even though anywhere from 50% to 100% of the objects we store in containers could be moved with memcpy, formally this is undefined behavior. This is a problem for container resizing, which requires inefficient deep copying when noexcept move construction or destruction aren't available – even though the objects could be moved with a trivial memcpy. Various libraries work around this by implementing their own "is relocatable" concept: Qt has Q_MOVABLE_TYPE, EASTL has has_trivial_relocate, BSL has IsBitwiseMovable, Folly has IsRelocatable. I also saw this need, and rolled my own version of this concept in EST (not yet published), and in a previous version of Atomic/Integral (to be published – hopefully, soon).

The need for a standardized concept is apparent. What I would most like to see is fairly simple:
  • In type_traits, a standard is_relocatable property.
  • A way to declare a type relocatable without exceedingly ugly syntax. My favorite:

    class A relocatable { ...
  • To avoid unnecessary declarations leading to developer mistakes, a way for the compiler to infer that an obviously relocatable type is in fact relocatable. For instance, if type Str is relocatable, then the following should be also:

    struct A { Str x; };

    It is possible to infer this in a safe way by limiting this inference to types where (1) the compiler declares an implicit move constructor, and (2) all direct bases and non-static members are relocatable.
Do you think I was successful?

There were people who staunchly opposed even adding a property — even though this one is needed, and current type_traits is littered with them (and those are also useful).

In fact — there had been an attempt to propose this as is_trivially_destructive_movable. This was shot down by the C++ committee because it would require conceptualizing the idea that object lifetime is "ended" at one address, and "begun" again at another address. This is too much of a conceptual problem. (Even though object lifetime doesn't end — it just continues...)

Not to mention the wall of opposition to any kind of compiler inference of a relocatable property. Notwithstanding that this would be purely an improvement; wouldn't break anything; and would allow this optimization to fit in with every day use.

Exasperated with this failure to find support for what seemed like a modest and innocuous improvement, I tried the least contentious possible idea. Can we just have a version of realloc — we could call this version try_realloc — that tries to expand the memory in place, and fails if it's unable? In the absence of a relocatable property, containers could at least use try_realloc to try to expand existing memory in place, before committing to a potentially deep copy.

Everyone agrees this is a good idea, and it turns out something similar had been proposed.

But it didn't go anywhere. Why not?

Well, the person who wrote that proposal couldn't afford to take a week off to travel to a C++ working group meeting to champion and defend the proposal. And so, it died that way.

Open standards — if you attend our meeting in Hawaii

Nominally, the C++ standardization process is open. In practice, it's gated by who can justify sending representatives to week-long meetings in far-off places. These meetings take place two or three times a year, and the next one takes place in Kona, Hawaii.

It goes without saying that, if you aren't present at at least some of the full WG meetings, your chances of making an impact on the C++ language are slim at best. As the preceding anecdote shows — if you don't attend, forget about even something as trivial as a realloc improvement.

This wastes resources on travel; excludes people who could contribute; and silences worthwhile ideas that don't happen to have a champion with disposable time and money.

About a decade ago, I participated in SSH standardization. Some members of that group did travel, but this had no bearing on a person's ability to affect the direction of the standard, or to have their voice heard. The Internet Engineering Task Force, which supervised standardization of SSH, does organize regular meetings; but attending them is in no way required to publish RFCs, or to contribute to them.

Holding face-to-face meetings is an inefficient and exclusionary process that became unnecessary around the year 2000. Yet it persists. I wonder if this is because most of the people who would vote to change it enjoy being sponsored by their companies to travel. After all, it must be necessary, if people keep doing it...

When I voiced this concern, members of the group were of course up in arms. It really is to get work done!

But the next WG21 meeting is being held this October at the absurd location of Kona, Hawaii. This is 5,000 miles from New York, 7,400 miles from Berlin, and 2,400 miles from San Francisco.

It would be too rushed to arrange this as a fly-in, fly-out 2-3 day event. If that were the case, it might as well be held in Bentonville, AR, in a nondescript Walmart office building. To allow work to get done, it has to be a leisurely occasion of 5 nights and 6 days. This allows for convenient weekend arrival or departure, which I'm sure no one will use to treat themselves to a volcano trip — or a leisurely day at the beach.

The average fuel economy of long-distance air travel is upwards of 3 L of jet fuel per 100 miles per passenger seat. With 90 to 100 attendees each traveling an average of 5,000 miles each way, the round trips will involve burning some 27,000 liters of jet fuel, releasing 68 metric tons of CO2 into the atmosphere.

All of this must happen 2-3 times per year, because otherwise it's impossible to advance the language.

Some members of the group said they've tried to attend remotely, but that it just doesn't work as well. Well, of course it doesn't work, when the event is set up so that you're a second-class participant.

With meetings held in places like Hawaii, attendees are spending at least $2,000 per person per event. That's an annual cost of $400,000 to $600,000, just in what the participants themselves pay. You could get absolutely amazing teleconferencing if your budget for it was $500,000 per year. And that's just one ISO working group. How many other active groups are there? What tech could you get for a $2 or $10 million annual budget?

But of course — that would make sense if you wanted open standards, where anyone could contribute.

As opposed to a week in Hawaii, each year...


Love and function

Someone posed the following conundrum, in the context of whether it's "shallow" for a person to refuse another as a partner based solely on the fact that the two aren't sexually compatible:

"If your love is predicated on sex then is it really love or is it just two people using each other?"

Loving someone, and being useful to them, are not opposites. The two work together. To love someone is to offer yourself to be useful to them. It is to serve them gladly, with the expectation that this will be appreciated and returned. To accept being loved is to welcome this offer; to return it, and appreciate it.

Love is a willingness to serve: without coercion, and without feeling coerced.

"Love" and "relationship", though, are different things. Every one of us can love everyone, hypothetically. However, we can't have functional relationships with people who can't meet our needs.

Relationships are love + function. If you take away the function, the love remains. However, without function, love alone is not enough for a relationship.

This is why relationships based on compatibility can work. We can love everyone — if there's no reason against it. So when two people are complementary, there's no reason for love not to arise. But the reverse is not true: two people who feel deep and passionate love for each other can simply not be compatible.

"Good, traditional" traits

I find that "good" and "traditional" don't exactly go hand in hand.

If it makes sense, it's not called tradition. It's called common sense. If it's called tradition, it means that at some level, it doesn't make sense. It's being practiced despite it.

It does not make a person good if they follow imperatives that violate sense. It makes them compliant.

Being compliant makes sense, to an extent. However, being overly compliant makes you a tool. At best, you're a tool for nonsense. At worst, you're a tool for perpetuation of suffering and hardship.

Attractiveness is not shallow

There are large groups of men online — they're mostly men — who consider themselves unattractive, and adopt this as their identity, and an embittered perch from which to carp about life.

If you're an unattractive man — or woman — stop the lifestyle that makes you feel and look that way.

Most people can look great if they invest the effort. You aren't going to get taller, and you aren't going to grow a bigger penis. But you can fix almost anything else: lose fat, gain muscle, develop a sense of style, and build self-confidence from the results you've achieved.

None of this is beyond anyone's reach save a handful of really unfortunate people. Chances are that you're not one of those. Chances are that, if you think of yourself as unattractive, it's a result of a lifetime of ugly thoughts leading to disrespect and neglect of yourself and your body.

Now tell me: who wants a person who chooses a lifetime of these ugly thoughts? Who has the option to invest the effort and improve his self-confidence and looks, but avoids doing so in favor of whining about people, insulting them for their choices, and continuing to neglect his body?

Attractiveness is not shallow. You're not being judged for something outside your control. It's not in your genes, you're not "big boned". No one's depriving you of self-confidence. No one but yourself dressed you poorly.

Attractiveness is 99% a consequence of mental habits, attitudes, and lifestyle. Most people don't have to be unattractive if they adopt a healthy inner life. So if you are, and you don't have to be, that speaks volumes.


VS 2015 projects: "One or more errors occurred"

For the most part, I find that Visual Studio 2015 is awesome. However, it did ship with kinks that need to be worked out. Not least, it has crashed on me from time to time, while working on a large solution.

I recently experienced a problem where I couldn't open Visual C++ projects (.vcxproj) after I copied them to another location. When trying to open the projects, VS 2015 would give me nothing more than this "extremely useful" error: "One or more errors occurred."

That is some talent, right there. It must have taken a lot of work to get this right.

After trying various things, and having researched this without success for a few hours, I finally got the idea to try opening the projects in Visual Studio 2013, instead.

Behold: VS 2013 actually displayed a useful error, which informed me that my projects had Import dependencies that I forgot to move.

So, yeah. If VS 2015 helpfully tells you that "one or more errors occurred" — try VS 2013.


Algorithm for selective archival or aggregation of records

Given the absurdity of the US patent system, it comes to mind that I should publish all technical ideas, no matter how inane, when I think of them — simply to provide a prior art reference, in case anyone claims that idea in a future patent.

The ideas I publish are arrived at independently, based on publicly available information. This does not mean there are no existing, currently "valid" patents covering parts of these ideas. It does, however, mean that any such patents are obvious, and that the patent system is legalized enablement of extortion.

Without further ado, here's an idea for how to archive old log files to:
  • maintain required storage under a manageable threshold;
  • store the latest data, as well as older data in the event of undetected corruption;
  • avoid predictability to an attacker who might rely on a particular pattern of deletion.
We assume that log files are being archived in regular periods. This period may be an hour, a day, a week, or whatever works for an application.

We will store the log files in ranks, such that rank 0 holds the most recent log files, and files are bumped to a higher rank as each rank fills. We allow for an unlimited number of ranks.

We define a maximum number of log files per rank. We call this M, and this number shall be 3 or more. It could be, for example, 4.

We determine the proportion of log files that we will keep each time as we move log files up the ranks. We call this number K, and it is a fraction between 0 and 1. A natural choice of K would be 1/2 or 1/3.

Each rank will start empty, and will with time become populated with log files.

When generated, every log file is first added to rank 0. Then, for each rank, starting with rank 0:
  • We test if the number of log files in that rank has reached M.
  • If it has:
    • We determine a subset S of log files in this rank that contains K*M of the oldest log files. If K is a fraction 1/N, e.g. 1/2, it would be natural to round the number of log files in the subset down to a multiple of N. For example: if K=0.5, S might contain 2 log files.
    • To avoid predictability of which log files are kept, we randomly discard a K proportion of log files in subset S. For example, if K = 1/2, we randomly discard one of the two log files. If predictability is desired, we discard instead based on a predetermined rule. For example, we could discard every other log file.
    • We graduate the log files in S that have not been discarded to the next rank. By this, I mean that we remove them from the current rank, and add them to the next.
We repeat this process with each rank, until we reach one that is empty.
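The steps above can be sketched as follows, using M = 4, K = 1/2, and the predictable "keep every other file" rule so the example is deterministic. The class and method names are illustrative, and each log file is represented by its sequence number:

```cpp
#include <cstddef>
#include <vector>

// Ranked archival with M = 4 files per rank and K = 1/2. When a rank
// fills, the K*M = 2 oldest files are taken; every other one is
// discarded and the rest graduate to the next rank.
class RankedArchive {
public:
    void add(int logFile) {
        if (ranks_.empty()) ranks_.emplace_back();
        ranks_[0].push_back(logFile);             // new files enter rank 0
        for (std::size_t r = 0; r < ranks_.size(); ++r) {
            if (ranks_[r].size() < M) break;      // rank not full: done
            if (r + 1 == ranks_.size()) ranks_.emplace_back();
            // S = the K*M oldest files of this rank; discard a K
            // proportion of S, graduate the rest to rank r+1.
            for (std::size_t i = 0; i < KM; ++i) {
                int f = ranks_[r].front();
                ranks_[r].erase(ranks_[r].begin());
                if (i % 2 == 0) ranks_[r + 1].push_back(f);
            }
        }
    }

    std::size_t totalStored() const {
        std::size_t n = 0;
        for (const auto& rank : ranks_) n += rank.size();
        return n;
    }

    const std::vector<std::vector<int>>& ranks() const { return ranks_; }

private:
    static constexpr std::size_t M  = 4;  // max files per rank
    static constexpr std::size_t KM = 2;  // files taken per graduation
    std::vector<std::vector<int>> ranks_;
};
```

After adding files 0..99, rank 0 still holds the newest files, higher ranks hold progressively sparser history back to file 0, and total storage stays around M times the number of ranks.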

This process keeps around the most recent log files; keeps older ones going all the way back to origin; randomizes which log files are discarded; and uses O(log T) storage, where T is the total number of log files ever generated.

This process works not only for log files, but for any other kind of historical data storage. Instead of discarding information when graduating records from each rank, information can also be aggregated, while reducing the number of records. For example, when selecting the subset of records S that will be graduated to the next rank, instead of deleting a K proportion of records, all the records in S could be aggregated into a single record, which is then propagated to the next rank.

This would allow storage of information such as statistics and totals that is more granular for recent data and sparser for older data, yet loses nothing about overall totals.
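A minimal sketch of this aggregating variant, again with M = 4 and K*M = 2, where each record carries a count and a total (the Record layout is an assumption for illustration):

```cpp
#include <cstddef>
#include <vector>

struct Record {
    long count = 0;
    long total = 0;
};

// Aggregating variant: when a rank fills (M = 4), its two oldest
// records are merged into a single record that graduates to the next
// rank. Nothing is discarded, so older ranks hold coarser aggregates
// while the grand total is preserved exactly.
class AggregatingArchive {
public:
    void add(const Record& rec) {
        if (ranks_.empty()) ranks_.emplace_back();
        ranks_[0].push_back(rec);
        for (std::size_t r = 0; r < ranks_.size(); ++r) {
            if (ranks_[r].size() < 4) break;      // rank not full: done
            if (r + 1 == ranks_.size()) ranks_.emplace_back();
            Record merged;                        // aggregate S into one
            for (int i = 0; i < 2; ++i) {
                merged.count += ranks_[r].front().count;
                merged.total += ranks_[r].front().total;
                ranks_[r].erase(ranks_[r].begin());
            }
            ranks_[r + 1].push_back(merged);
        }
    }

    long grandTotal() const {
        long t = 0;
        for (const auto& rank : ranks_)
            for (const auto& rec : rank) t += rec.total;
        return t;
    }

private:
    std::vector<std::vector<Record>> ranks_;
};
```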

A natural extension of the above principles is to define ranks to match natural periods of time. For example, rank 0 could be days, rank 1 could be weeks, rank 2 could be months, rank 3 could be years. The maximum number of records per rank, M; and the proportion of records kept between ranks, K; would then be rank-dependent.