Freedom and weapons

(Originally published as a response to an amusing video on Libertarec.)

Some people defend a universal right to bear arms with the hypothesis that an environment in which everyone is armed would be much more polite, and consequently less dangerous.

In their book Freakonomics, Levitt and Dubner say that evidence for this hypothesis is lacking, and that the evidence which is available points rather in the opposite direction.

If we take this "right" to its extreme, consider why we shouldn't let just anyone own the most powerful military explosives. Perhaps even a small atomic bomb. Or chemical weapons. After all, such things can occasionally come in handy for civilian purposes too.

If you can afford a small atomic bomb, you can probably also afford a small mountain. And if you want to level that mountain, because you'd rather have a golf course in its place, a small atomic bomb would come in quite handy. Right? The potential for civilian use is therefore evident. That we are not allowed to own atomic bombs is a violation of human rights!

Similarly with biological weapons, say. Suppose that, in your free time, you perform humane experiments on 1,000 rats. By law, each rat must be humanely killed after one experiment. That is most easily done with a suitable gas. So why shouldn't you be allowed to keep such gases at home? Again, a violation of human rights!

Weapons are not for everyday use; they come in handy in special circumstances. Those circumstances are usually stressful, and the people involved are under pressure. When a weapon is present in such circumstances, it can easily happen that consequences which would otherwise have been temporary become permanent.

Freedom is a good principle, because more freedom usually means more enjoyment, less harm, more prosperity in life. Freedom is important because, without it, the crowd, shod in cement shoes, drags us all back. But the principle of freedom holds where, and only where, it holds - not by some magic everywhere, absolutely and always. It does not hold for chemical weapons and atomic bombs, for instance. And according to the evidence available to us, it likewise does not hold for rifles and handguns.


Newcomb's problem

I just recently read again Eliezer's article about Newcomb's problem.

To summarize the "problem":

It's Christmas, and a superintelligent being called Omega from another dimension comes to your living room and leaves you 2 boxes. The boxes are rigged as follows:
  1. Box A is transparent and contains $1,000.
  2. Box B is opaque and contains either $1,000,000 or nothing.
  3. You can take either both boxes or only box B.
  4. Omega has filled box B with a million dollars if, and only if, it has predicted that you will take only box B. If Omega predicts that you will take both boxes, then box B contains nothing.
  5. Omega is not present when you make your decision. It has already left, and will not return to you again.
  6. However, Omega is superintelligent. It has been observed delivering boxes like this before, and has never been observed to predict incorrectly. People who take only box B always get $1,000,000, and people who take both boxes always find box B empty, netting them $1,000.

So where's the dilemma? You take only box B and pocket the million, right? Why doubt the superintelligence?

Well, there are some confused people that would like to persuade you that the rational thing is to take both boxes. Here is how they argue. Omega has already left, so the state of box B is already determined. It is either full, or it is empty. If it is full, then taking both boxes nets you $1,001,000, as opposed to $1,000,000 if you only take box B. But if box B is empty, then taking both nets you $1,000, which is more than $0 if you take only box B in this case (being empty).

So you should take both boxes. Then, because Omega has predicted you will do so, box B is empty, and you get only $1,000.

I am writing this because, apparently, intelligent people have actually spent considerable time arguing about whether it is "rational" to take only box B, or whether a rational person "should" take both boxes.

How people can get genuinely confused about this eludes me. Quite obviously, the way the problem is framed, there are only two possible futures to choose from. Either there's future F1 where you take box B, and it contains a million, because Omega always predicts correctly. Or there's future F2 where you take both boxes, and you get $1,000. The very framing of the problem dictates that future F3, where you take both boxes and find both of them full, is impossible or very implausible. Likewise impossible or very implausible is F4, where you take only box B and find it empty.

So then the supposed "rationalists" come and say, hey, we don't believe the framing of the problem. Omega has already departed, so future F3 must be possible. So we take both boxes. But hey, we believe the framing of the problem after all. Omega knew that I would pick both boxes, so box B is empty. What a paradox!

Well, yes, usually, if you try to believe two mutually exclusive things simultaneously, you get yourself into a paradox. Either you believe the framing of the problem, or you don't. If you believe that Omega's predictions are always correct, you take only box B. If you believe that Omega is correct X% of the time, then your decision depends on your estimate of X, and there's no paradox either way.
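The dependence on X can be made concrete. A minimal sketch (the payoffs are from the problem statement; the break-even accuracy falls out of the algebra):

```python
# Expected payoffs as a function of x, your estimate of the
# probability that Omega's prediction is correct.

def ev_one_box(x):
    # With probability x, Omega correctly predicted one-boxing: box B is full.
    return x * 1_000_000

def ev_two_box(x):
    # With probability x, Omega correctly predicted two-boxing: box B is empty.
    # With probability 1 - x, the prediction was wrong and both boxes are full.
    return x * 1_000 + (1 - x) * 1_001_000

for x in (1.0, 0.9, 0.5):
    choice = "only box B" if ev_one_box(x) > ev_two_box(x) else "both boxes"
    print(f"X = {x}: take {choice}")
```

The break-even point is X = 1,001,000 / 2,000,000 = 0.5005. For any estimate of Omega's accuracy meaningfully above a coin flip, one-boxing wins; for a perfectly reliable Omega, there is no contest at all.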

But you don't simultaneously believe that Omega could be wrong, but then again, it must always be right by definition. Believing both is simply stupid.

And as for those who say that it is rational to pick both boxes even believing that Omega's predictions are always and unfailingly correct... well. I rest my case.

Chinese dishonesty

Freakonomics publishes a Q&A with Leslie Chang, author of a recent book Factory Girls, a closeup of the lives of workers in China. I found the following a fascinating part of the dialog:
Q. You followed students for a semester at a school that teaches factory girls how to become “white-collar” workers. A major part of the curriculum teaches students how to lie effectively. How do the concepts and values being taught in these classes affect the manufacturing economy that these women make up?

A. A major part of the curriculum involved how to lie your way through job interviews into an office position. This ultra-pragmatism is pervasive in Chinese society today; people are less concerned with abstract notions of right and wrong than with getting things done. In economic terms, this fosters a business climate in which companies copy each others’ products, steal employees and business plans, and compete ruthlessly over tiny profit margins. But with little trust or sense of long-term planning and investment, they find it hard to grow and develop their businesses.

This system also takes an emotional toll on individuals. Everyone I knew in Dongguan had stories of being cheated and robbed and lied to, and over and over people told me, “You can only rely on yourself.” But even though this is a world marked by corruption and deceit, it is at the same time highly functional. It just functions by its own set of rules.
The latter two sentences (my italics) might be interpreted as bias against the unseen. I believe Leslie Chang when she says a world like that is functional, but just how much more functional could it be if one didn't have to expect outright deceit at every turn?

Here's another "gem":
Self-help gurus like Ding Yuanzhi have a large following among China’s migrant workers. Ding, whose book Square and Round has sold around six million copies, gives the following advice to migrant workers:

Now I will talk about copying. I think copying is very important. Everyone always talks about how innovation is important. But you need to invest a lot of time to innovate and the risk is high. Why not take things that have already been proven to work in other places? That is copying.
With respect to this paragraph, we could observe that economic growth has two components: originating new ideas, and spreading them. There is no growth without either. The two mechanisms, however, are partly in conflict. What good is working hard at originating an idea, if it will be copied so fast that you can't take advantage of it?

What Ding Yuanzhi is proposing here is maximizing the spreading of ideas in such a way as to disincentivize the origination of new ideas in the first place. This can only work if there's another market, say like the United States, which respects origination of ideas, and provides innovators ways to get returns on their investments at least there.

There are claims that the United States was quite like this in the 19th century as well, and that such endemic hustling is merely a phase in the evolution of a high-growth economy. But is it a necessary phase?

Would the Chinese not benefit more if they could actually trust each other? Would they not receive more investment if foreigners could trust them?

Would the Chinese not receive investment of a different quality, allowing them to perform more demanding and higher paid tasks, if they could be trusted not to sell pretty much any plans and information they can access, to pretty much anyone?

Endemic dishonesty is a burden. The problem is, it is a burden that's embedded in their culture, and it's a burden that a single-party system promotes. When allegiance to the Party is a precondition for any career, you can be sure that people who have careers are people who fake their allegiance.

Create a system in which honesty is a handbrake, and guess what: it's going to be populated by the dishonest.


Tao te ching

One of my favorite wisdoms:
A man is born gentle and flexible.
At his death he is hard and stiff.
Green plants are tender and filled with sap.
At their death they are withered and dry.

So it is that the stiff and unbending is the disciple of death.
The gentle and yielding is the disciple of life.

Thus an army without flexibility never wins a battle.
A tree that is unbending is easily broken.

The hard and strong will fall.
The soft and flexible will overcome.
Since translations from Chinese vary widely, I took some liberty with the translation to reflect the proper meaning as I perceive it. Specifically, I replaced "weak" with "flexible". Other translations use "lithe", or "supple", which are less clumsy, but would not be as easily understood by my non-English friends.

To be sure, the Tao te ching otherwise also contains a lot of crap.



Have you noticed that the lyrics to Shania Twain's Ka-Ching! now have a whole different feel? They suddenly sound so... appropriate, and prophetic. :-)
We've created us a credit card mess
We spend the money that we don't possess
Our religion is to go and blow it all
So it's shoppin' every Sunday at the mall


When you're broke go and get a loan
Take out another mortgage on your home
Consolidate so you can afford
To go and spend some more when you get bored


All we ever want is more
A lot more than we had before
So take me to the nearest store

Can you hear it ring
It makes you wanna sing
You'll live like a king
With lots of money and things


The ineffectiveness of economic stimulus

A number of venerable economists believe in the Keynesian governmental economic stimulus concept, in which big government spending is supposed to boost a flagging economy.

To summarize roughly: when everyone across the board starts saving too much, this causes a fall in consumption, which causes a fall in production, which causes a fall in investment, which causes a fall in economic growth, all of which generally harms human well-being.

Many economists believe that, when this happens, the cure is for the government to print money and spend it. This ought to have a multiplier effect on the economy: for every newly created dollar the government thus spends, the recipient might save 20 cents, but spend 80 cents. The next person down the line might do the same, saving 16 cents and spending 64 cents, and so on until, ultimately, each $1 thus created ought to result in $5 of trickle-down spending.

Venerable economists think that this ought to boost the economy and get the GDP right back on track.

Except, it doesn't. The multiplier as practically measured is not 5. It is more like 1 to 1.4, at best. Each dollar the government spends like this raises GDP by... one dollar. Maybe $1.40.

At first, this seems counterintuitive. Assuming that the $1 the government created didn't previously exist - assuming that it's new money that would not have been present in the economy - then it ought to circulate in the economy like any other money. If the savings rate in times like this is 20%, this money too should be saved at a savings rate of 20%, so there should be a multiplier effect. Why don't we see one?

Let's take a look at what actually happens. Suppose that 90% of people who want work have work, and 10% do not. If government spending goes to existing businesses, then it's going to people who already have work. Now they have a bit more work. But what do they do with the extra money from the government that they would not have received otherwise?

Given the empirical measurements, it would appear that people treat the extra money as just that - extra money. Even if their average savings rate is 20%, the extra money is not saved at the same rate; instead, since it is surplus and the times are bad, it is 80% saved and 20% spent. This leads to a meagre multiplier of no more than 1.4, as observed. The money doesn't trickle down to the economy.
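The arithmetic on both sides of this argument is the same geometric series; a minimal sketch (the function name is mine):

```python
def multiplier(mpc, rounds=1000):
    """Total spending generated by $1 of new spending, when each
    recipient in turn spends fraction `mpc` of what they receive.
    The series 1 + mpc + mpc^2 + ... converges to 1 / (1 - mpc)."""
    total, injection = 0.0, 1.0
    for _ in range(rounds):
        total += injection      # this round's spending adds to GDP
        injection *= mpc        # and mpc of it is spent in the next round
    return total

print(round(multiplier(0.8), 2))  # spend 80 cents per dollar received: 5.0
print(round(multiplier(0.2), 2))  # spend only 20 cents per dollar: 1.25
```

Spending 80 cents of every dollar received compounds to the textbook multiplier of 5; spending only 20 cents, as the behavior described above suggests, compounds to about 1.25 - right in the observed range of 1 to 1.4.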

This suggests that stimulus might be more effective if it was used to purchase work from people who are currently unemployed. People who lack sufficient income to live reasonably in the first place would spend more of this windfall on consumption, likely increasing the multiplier more than if the money is given to existing businesses where people, obviously, already have work. But still, the multiplier will not be much more than 2, because when the formerly unemployed spend their new money, they will spend it with businesses where people already have work, and those people and businesses will save most of the extra income. The trickling down stops fast in hard times.

Meanwhile, Christina Romer and David Romer find that the multiplier from tax cuts is about 3: every $1 of tax cuts appears to raise GDP by $3. Still, this too may hold in good economic times more than in hard ones.

Ever done indoor skydiving? You enter a vertical tunnel where you stand on a mesh while a huge fan blows wind up at 100 mph or so. You wear an oversized suit and you need to adopt a certain position in order for the wind to lift you. If you are gripped with fear, you reach down to protect from falling. Your form now fails to capture the wind, and indeed you fall. As long as you adopt the form of fear, the wind can't lift you.

So it seems to be with the economy. As long as people are afraid, they're going to save most of their extra income. This is something that only very radical government policies might be powerful enough to change; and I mean policies far more radical than tax cuts or deficit spending.

We are better off without those policies. The economy is undergoing some reconfiguring; it is now apparent to everyone that some parts didn't work, and they need to be thrown away. Not knowing how we are connected to those parts makes things uncertain, and in times like that, it's normal that most people will save. But as long as governments do not meddle too much, a new order will arise, and the economy will soar again.


The falsity of formally proven software

Eric Drexler falls into the hole of "imagine we can prove programs correct":
Why does this matter to us ordinary mortals? Because proof methods can be applied to digital systems, and in particular, will be able to verify the correctness (with respect to a formal specification) of compilers [pdf], microprocessor designs [pdf] (at the digital-abstraction level), and operating system microkernels [...] If this doesn’t seem important, it may be because we’re so accustomed to living with systems that have built on foundations made of mud, and thinking about a future likewise based on mud. All of us have difficulty imagining what could be developed in a world where computers didn’t crash, were guaranteed to be immune from virus attack, and could safely download code written by the devil himself, and where crucial pieces of software could be guaranteed to not leak data.
The bolding is mine. Eric Drexler is missing that, if you have a formal specification for a program, then you have the program. Programming is writing a formal specification. In the absence of hardware error, programs always behave according to their formal specification - their source code.

But what if the formal specification, when actually put to work, turns out to specify things that we didn't really want? Ah... Therein lies the rub.
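A toy illustration of where the rub lies (the example is mine, not Drexler's): a function can provably satisfy its formal specification and still do the wrong thing, because the specification itself encodes the wrong intent.

```python
def clamp(value, low, high):
    """Formal specification: return min(high, max(low, value)).
    The code below satisfies this spec exactly -- it *is* the spec."""
    return min(high, max(low, value))

print(clamp(5, 0, 10))   # 5 - what the programmer intended
print(clamp(5, 10, 0))   # 0 - the spec is still satisfied perfectly,
                         #     but swapped bounds were never the intent,
                         #     and no correctness proof will flag it
```

A verifier can confirm that the code matches the specification; it cannot confirm that the specification matches what anyone actually wanted.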

The reliability of software can be improved (a lot!) by designing and adopting new programming languages which restrain the programmer's freedom of expression in subtle and wise ways, so that errors are avoided as much as possible while the ability to get things done is preserved. But that is a far cry from ever producing programs that are error-free.

Bugs are in the mind of the programmer. The computer will always do what the programmer tells it to do. Bugs indicate the programmer's lack of awareness, confusion about what he wants, conflicting ideas. As long as human minds are faulty, the formal specifications they produce will be faulty as well - and this is the eternal source of bugs.