## Climbing Levels of Complexity

December 16, 2009

Considering, for example, group theory or complex analysis, one quickly realizes that successful mathematical theories come with plenty of examples. Therefore, when taking a step into uncharted territory, it is never a bad idea to start such an enterprise with an example.

In my last post on the ‘origin of life’ polymath project I observed that people expect models explaining the emergence of life to have ‘levels of complexity’. Of course, I was very vague about what ‘levels of complexity’ actually are. Everybody is vague about that.

The goal of this post is to recall and discuss a well-known example of emergence and to shed some light on the problem.

At the beginning of the last century physicists recognized classical mechanics as emergent. For centuries Newton’s laws provided a solid foundation for science, and then, (essentially) all of a sudden, everything changed. Classical mechanics was recognized to be a ‘limit’ of quantum mechanics. This is commonly known as the correspondence principle, and its formulation is remarkably vague. In what sense is this limit to be taken?

A theorem of Ehrenfest tells us that if we consider means of observables we recover Newton’s laws quite easily. Classical momentum (position) is interpreted as the mean of the momentum (position) observable of quantum mechanics. Since the mean is just some (well-defined) limit, everything seems fine. However, that is not the problem. Let me just ask:
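Ehrenfest’s ‘in the mean’ statement can even be checked on a computer. Below is a minimal finite-difference sketch (with ℏ = m = 1; the grid, the harmonic potential, and the wavepacket are my own illustrative choices, not part of any argument above) verifying that the mean position obeys the classical relation d⟨x⟩/dt = ⟨p⟩/m:

```python
import numpy as np

# Finite-difference illustration of Ehrenfest's theorem (hbar = m = 1):
# the Heisenberg equation of motion gives d<x>/dt = <i[H, x]>, which
# should agree with the classical <p>/m up to discretization error.
N, L = 400, 20.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

# Position operator: diagonal.  Momentum: central difference for -i d/dx.
X = np.diag(x)
off = np.ones(N - 1)
P = (-1j / (2 * dx)) * (np.diag(off, 1) - np.diag(off, -1))

# Harmonic Hamiltonian H = p^2/2 + x^2/2 (any smooth potential works,
# since the potential commutes with X).
H = P @ P / 2 + np.diag(x ** 2 / 2)

# A smooth normalized wavepacket with nonzero mean momentum.
psi = np.exp(-(x - 1.0) ** 2 / 2) * np.exp(0.5j * x)
psi /= np.linalg.norm(psi)

def mean(A, s):
    """Mean value <A s | s> of an observable A at state s."""
    return np.vdot(s, A @ s).real

# d<x>/dt = <i[H, X]>, compared with <p>/m.
dxdt = mean(1j * (H @ X - X @ H), psi)
print(dxdt, mean(P, psi))
```

The two printed numbers agree up to the discretization error of the central-difference momentum operator.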

What is it, that has classical momentum?

Is there a cannonball (and not just some quantum state)? I would certainly say so if one were flying towards me, and I would not start to calculate the probability of it tunneling through me. In a way the cannonball has become independent of the underlying quantum laws and now just obeys the new classical ‘in the mean’ laws.

For me this is the essence of climbing ‘levels of complexity’. We do not just get new laws, which are, suitably interpreted, means of fundamental laws. We also get new objects or states governed by the new laws and no longer by the fundamental ones. There is no tunneling of cannonballs; instead I get shot to pieces, something quantum theory does not cover adequately. Quantum theory is simply not supposed to tell me how to stop the bleeding.

Forget about my clouded comments. Is there something we can learn from this example? I think so. The lesson is that there is at least one important example of ‘climbing the levels of complexity’ in which the state space changes with the level. Sure, a reductionist could argue that in principle we can describe the cannonball by a quantum state. For all practical purposes, however, the state space changes. This observation is not sufficiently appreciated.

For my taste this was not enough mathematics. Next time I return to set theory. There are still two paradoxes left to discuss, namely Cantor’s paradox and the Burali-Forti paradox.

## Origin of Life – Revisited

December 11, 2009

Roughly one month ago I reported on Tim Gowers’s polymath project to model the origin of life. Now it is time to go back and see what has happened.

It is in the nature of polymath, which is, simply put, ‘more of the same’, that progress on such a fundamental question is slow. Organizers and participants should let that simmer at low heat. After all, it is not just a math Olympiad problem with a solution known to exist.

What have we got so far (according to my frail understanding)?

A first observation is that the players seem to be interested in implementation and representation rather than in the underlying symmetries or rules of the problem. They have in mind a cellular automaton, like a sand-pile model or Conway’s Game of Life. This is interesting insofar as the best mathematicians tell us first to learn about the problem, state the laws governing it, and only then, as a last step, search for representations as known objects. Is it obvious that the problem has a solution among Turing machines? Not to me, at least. Why, then, should we restrict ourselves to Turing machines? One reason might be that they are well understood. A second might be that if you believe in the Church-Turing thesis, Turing machines are all you need.

This summarizes the first observation: the players apparently presuppose the Church-Turing thesis (albeit without mentioning it).

But what else could they do instead of looking for Turing machines?

They could go for the laws, and they did. It was rapidly recognized that Gowers’s item 5, or even the one before it, plays a central role.

4. It should have a tendency to produce identifiable macroscopic structures.

What does that mean? In his comment Tim Gowers explains:

My impression is that there are several levels of complexity and that each level provides a sort of primeval soup for the next level up, if that makes sense.

Anyhow, a sort of answer is that from any model I would look for the potential to produce objects that could serve as the basic building blocks for a higher-level model.

This last sentence seems to be a recurring motif among the players. There seems to be a consensus that such levels should be part of a model. But if you start with some bottom level, how do you proceed from one level to the next? That is indeed a problem.

Some believe, for example, that if you put electrons and other elementary particles together in the right combination you will end up with a human. Like a watchmaker making a watch. The watch obeys the same laws as its parts. In this spirit a human obeys the same laws as, say, an electron. That line of thinking almost inevitably leads to the ‘free will vs. determinism’ controversy.

Others believe that there is no evidence for the above. There is essentially only one known process that ‘produces’ humans: the natural one. This process is radically different from ‘putting pieces together’, and it is not obvious that the laws governing the pieces hold for the whole. There is, let us call it, a ‘barrier’ separating the level of humans from the level of electrons. This is hard to imagine for a reductionist.

If we take the existence of ‘levels of complexity’ as a given, then the decisive question currently seems to be:

• What are the rules to go from one ‘level of complexity’ to the next?

I am very excited about what answers they will come up with …

## An Uncertainty Principle for Markets

December 9, 2009

Today our goal is to derive an exact formulation of an uncertainty principle for markets. To that end we established in earlier posts a commutation relation between the demand ${d_i}$ and the price ${p_i}$ of a good ${i}$ in a market. I state it again:

Prices ${p_i}$ and demands ${d_j}$ interact according to

$\displaystyle [p_i,d_j]=i \mu_i p_i \delta_{i,j} \ \ \ \ \ (1)$

for a fixed real ${\mu_i\in\mathbb{R}}$.

What I have not told you so far is how the measurement of market observables is supposed to work. Let me close this gap. Measuring an observable, e.g. the price of good ${i}$, in a market in state ${\xi}$ (in this case, e.g., by selling a small quantity of good ${i}$) makes the market jump into a new state ${\zeta}$, an eigenvector of the observable. The outcome of the measurement is a real number ${\zeta_i}$ (e.g. the price), namely the eigenvalue of the observable corresponding to ${\zeta}$, which is obtained with probability

$\displaystyle \textnormal{prob }(\zeta_i)=\frac{\left\langle \xi|\zeta \right\rangle \left\langle \zeta|\xi\right\rangle}{\|\xi\|^2}.$
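In a finite-dimensional toy state space this measurement rule is easy to spell out. Here is a sketch (the 3×3 observable and the market state are arbitrary illustrative choices, not part of the model):

```python
import numpy as np

# Sketch of the measurement rule stated above: for a Hermitian observable
# with orthonormal eigenvectors zeta, the outcome zeta_i occurs with
# probability <xi|zeta><zeta|xi> / ||xi||^2.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
A = A + A.T                       # a Hermitian "market observable"
eigvals, eigvecs = np.linalg.eigh(A)

xi = np.array([1.0, 2.0, -1.0])   # an (unnormalized) market state

# One probability per eigenvector; the columns of eigvecs are orthonormal.
probs = np.abs(eigvecs.conj().T @ xi) ** 2 / np.linalg.norm(xi) ** 2
print(eigvals)    # the possible measurement outcomes
print(probs)      # their probabilities; these sum to 1
```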

For an observable ${a}$ on ${X}$ one can show that its mean value at state ${\xi\in X}$ is given as

$\displaystyle \overline{a_\xi}=\frac{\left\langle a \xi|\xi\right\rangle}{\|\xi\|^2}.$

The dispersion of an observable ${a}$ on ${X}$ is given as

$\displaystyle \overline{\left(\triangle a \right)^2_\xi}= \frac{\left\langle\left(a-\overline{a_\xi} \text{id}_X \right)^2 \xi|\xi\right\rangle}{\|\xi\|^2 }.$
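For a finite-dimensional ${X}$ both formulas are one line of code each. A small sketch (observable and state are randomly chosen for illustration), including the standard sanity check that the dispersion equals the mean of the square minus the square of the mean:

```python
import numpy as np

# The mean and dispersion formulas above, spelled out for X = C^n.
rng = np.random.default_rng(1)
n = 4
a = rng.standard_normal((n, n))
a = a + a.T                        # Hermitian observable on X
xi = rng.standard_normal(n) + 1j * rng.standard_normal(n)

norm2 = np.vdot(xi, xi).real

# <a xi | xi> / ||xi||^2
mean = np.vdot(xi, a @ xi).real / norm2

# <(a - mean * id_X)^2 xi | xi> / ||xi||^2
shifted = a - mean * np.eye(n)
dispersion = np.vdot(xi, shifted @ shifted @ xi).real / norm2

# Sanity check: dispersion equals <a^2> - <a>^2.
mean_sq = np.vdot(xi, a @ a @ xi).real / norm2
print(dispersion, mean_sq - mean ** 2)
```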

Now we are in shape to state the uncertainty principle for markets. In essence it claims that the price and the demand of a good cannot both be measured with arbitrary precision. Moreover, it gives an explicit lower bound on the maximal simultaneous precision. Its proof is essentially a straightforward application of the Cauchy-Schwarz inequality.

Proposition. For a market in state ${\xi}$ the dispersions of ${p_i}$ and ${d_i}$ satisfy

$\displaystyle \overline{\left(\triangle p_i \right)^2_\xi} \, \overline{\left(\triangle d_i \right)^2_\xi} \geq \frac{\mu_i^2}{4} \| \sqrt{p_i} \xi\|^4.$

In the asymmetric case ${\mu_i\neq0}$, the right-hand side is strictly larger than zero.

Proof. Since dispersion and mean do not depend on the norm of a state we can, without loss of generality, assume that ${\|\xi\|=1}$ and obtain

$\displaystyle \overline{\left(\triangle p_i \right)^2_\xi} \, \overline{\left(\triangle d_i \right)^2_\xi} = \left\langle\left(p_i - \overline{p_i} \text{id}_X \right)^2\xi|\xi\right\rangle \left\langle\left(d_i - \overline{d_i} \text{id}_X \right)^2\xi|\xi\right\rangle.$

Now the Cauchy-Schwarz inequality implies

$\displaystyle \begin{array}{rcl} \overline{\left(\triangle p_i \right)^2_\xi}\, \overline{\left(\triangle d_i \right)^2_\xi} & \geq & \left\langle\left(p_i - \overline{p_i} \text{id}_X \right) \left(d_i - \overline{d_i} \text{id}_X \right) \xi|\xi\right\rangle \\ & & \qquad \times \left\langle\left(d_i - \overline{d_i} \text{id}_X \right) \left(p_i - \overline{p_i} \text{id}_X \right) \xi|\xi\right\rangle. \end{array}$

Since ${ab = \frac{1}{2}[a,b]_+ + \frac{1}{2i}i[a,b]}$ with ${[a,b]_+=ab+ba}$ we obtain

$\displaystyle \begin{array}{rcl} \overline{\left(\triangle p_i \right)^2_\xi} \, \overline{\left(\triangle d_i \right)^2_\xi} & \geq & \left\langle\frac{1}{2} [d_i - \overline{d_i} \text{id}_X, p_i - \overline{p_i} \text{id}_X]_+\xi|\xi\right\rangle^2 \\ & & \qquad +\left\langle\frac{1}{2i} [d_i - \overline{d_i} \text{id}_X, p_i - \overline{p_i} \text{id}_X]\xi|\xi\right\rangle^2 \end{array}$

and, since the first term is nonnegative,

$\displaystyle \begin{array}{rcl} \overline{\left(\triangle p_i \right)^2_\xi} \, \overline{\left(\triangle d_i \right)^2_\xi} & \geq & \left\langle\frac{1}{2i} [d_i - \overline{d_i} \text{id}_X, p_i - \overline{p_i} \text{id}_X]\xi|\xi\right\rangle^2 \\ & \geq & \left\langle\frac{1}{2i} [d_i,p_i]\xi|\xi\right\rangle^2. \end{array}$

Now (1) and the fact that positive observables have a square root yield the final inequality

$\displaystyle \begin{array}{rcl} \overline{\left(\triangle p_i \right)^2_\xi} \, \overline{\left(\triangle d_i \right)^2_\xi} & \geq & \frac{\mu_i^2}{4} \|\sqrt{p_i} \xi\|^4. \end{array}$

Since, for a strictly positive price observable ${p_i}$, the term ${\mu_i^2 \|\sqrt{p_i} \xi\|^4}$ can only be zero if ${\mu_i}$ is zero, the proposition is proved.
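As an aside, relation (1) cannot be realized by finite matrices with a positive ${p_i}$ and ${\mu_i \neq 0}$ (taking the trace of (1) forces ${\mathrm{tr}\, p_i = 0}$). But the Cauchy-Schwarz step of the proof holds for arbitrary Hermitian observables, so it can at least be checked numerically. A sketch (random matrices and state are my own illustrative choices):

```python
import numpy as np

# Numerical check of the key inequality from the proof,
#   disp(p) * disp(d) >= <(1/2i)[d, p] xi | xi>^2,
# for arbitrary Hermitian p and d on a 5-dimensional state space.
# (This only tests the Cauchy-Schwarz step, not relation (1) itself.)
rng = np.random.default_rng(2)
n = 5

def hermitian():
    m = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (m + m.conj().T) / 2

p, d = hermitian(), hermitian()
xi = rng.standard_normal(n) + 1j * rng.standard_normal(n)
xi /= np.linalg.norm(xi)          # normalized, as in the proof

def mean(a):
    return np.vdot(xi, a @ xi).real

def disp(a):
    s = a - mean(a) * np.eye(n)
    return np.vdot(xi, s @ s @ xi).real

comm = d @ p - p @ d              # the commutator [d, p]
rhs = mean(comm / 2j) ** 2        # <(1/2i)[d, p] xi | xi>^2
lhs = disp(p) * disp(d)
print(lhs, rhs)                   # lhs >= rhs
```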

Admittedly, that was a bit dry, but it does the job, and that is sometimes all that is necessary in mathematics. Now the pace picks up, and we are heading in giant leaps towards the time evolution equations for markets …

## A New Look

December 4, 2009

This blog got overhauled.

If the content is not clear, then at least the presentation should be.

## Russell’s Paradox and Fixed Points of Involutions

December 2, 2009

The n-Category Café is a place for creative and inspiring discussions. Admittedly, I do not understand enough category theory to contribute, so I was really happy when, in the recent discussion about Feferman set theory, I saw the following two quotes.

At the end of his remark John Baez states:

I don’t know any good way to deal with Russell’s paradox, but I believe there is one.  I believe someday we’ll find it.  But I don’t think we’ll find it by trying to ‘weasel out’ of the problem. Somehow we need to think a new way — a clearer way, that makes the paradox just disappear.

To this Tom Leinster replies:

I’ve pretty much entirely moved over to a point of view in which it doesn’t make sense to ask of any two sets A and B the question ‘is A∈B?’  For example, I don’t think it makes sense to ask whether ∅∈ℚ.  And once you’ve adopted this point of view, the ‘paradox’ dissolves.

These two statements are crystal clear. John Baez is not satisfied with the state of the discussion of the paradox, and Tom Leinster answers by disallowing certain questions. His idea, in a way, exemplifies how we usually deal with this problem: we disallow things.

What if we go the other way? As with the introduction of the complex numbers: instead of disallowing the square root of a negative number, we take it and see how far we get.

In our situation this means we allow the question ‘is A∈B?’ for all objects under consideration and then rapidly run into Russell’s paradox (and some others). From this paradox we learn that there are sets which contain and do not contain themselves simultaneously. However:

That is only a problem in a two-valued logic!

Let us assume there are other truth values besides ‘true’ and ‘false’. In particular, that there is at least one truth value equal to its own negation. Negation is then still an involution on the set of truth values, as we are used to, but now it has a fixed point. This fixed point is something new, like the square root of -1. In my opinion, that is what we learn from the paradox: the logic of mathematics has to be multi-valued.
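This is not as exotic as it may sound: Kleene’s strong three-valued logic has exactly such a third value, a fixed point of negation. A small sketch (the numeric encoding of the truth values is my own illustrative choice):

```python
# A minimal three-valued logic in the spirit of the post: besides
# True (T) and False (F) there is a third value U that is a fixed
# point of negation.  With this encoding, negation, conjunction, and
# disjunction match Kleene's strong three-valued logic.
T, U, F = 1.0, 0.5, 0.0

def neg(a):
    return 1.0 - a          # an involution: neg(neg(a)) == a

def conj(a, b):
    return min(a, b)

def disj(a, b):
    return max(a, b)

# U is the promised fixed point: a truth value equal to its own negation.
print(neg(U))               # 0.5, i.e. U again

# Russell's condition "R in R iff not (R in R)" has no solution among
# T and F, but the fixed point U satisfies it.
for v in (T, U, F):
    print(v, v == neg(v))
```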

Unfortunately this innocent-looking assumption/observation leads us far away from anything we are used to knowing about sets. Circularity starts to surface everywhere, and instead of calling it a ‘bug’ we are forced to call it a ‘feature’ (if we want to proceed along this path).

I will elaborate on this in further posts here, albeit in small steps, because the consequences are really mind-boggling.