A further glance at ‘Polymath and the Origin of Life’

January 8, 2010

Polymath and the origin of life has finished its second month. Remember: Tim Gowers plans to set up a polymath project to explain abiogenesis, using cellular automata or similar devices to model the emergence of life. Right at the beginning of his proposal he posed a couple of questions about what properties these machines or models should have and what exactly should constitute the scope of the project. I quote:

Question 1: Should one design some kind of rudimentary virtual chemistry that would make complicated “molecules” possible in principle?

The alternative is to have some very simple physical rule and hope that the chemistry emerges from it (which would be more like the Game of Life approach).

If the emergence of life does not depend on the details of the underlying chemistry we could choose a ‘simple’ model and proceed. However, that seems to be circular. We do not know enough examples of ‘life’ to know what exactly constitutes a viable approximation to chemistry. We might get lost in arbitrariness.

The other approach uses the one known example of ‘life’ and its ‘fundamental’ laws. Approximations to it might still result in the emergence of some sort of chemistry and then ‘life’.

If I had to choose, I would take the second approach. Even if we do not succeed in generating life, finding suitable approximations to Schrödinger’s equation which result in toy chemistries would already be a respectable finding.

Question 2: How large and how complicated should we expect “organisms” to be?

If everything turns out to work we might be able to describe “organisms” in a different frame, and size may thus not play an important role.

Added later: I haven’t quite made clear that one aim of such a project would be to come up with theoretical arguments. That is, it would be very nice if one could do more than have a discussion, based on intelligent guesswork, about how to design a simulation, followed (if we were lucky and found collaborators who were good at programming) by attempts to implement the designs, followed by refinements of the designs, etc. Even that could be pretty good, but some kind of theoretical (but probably not rigorous) argument that gave one good reason to expect certain models to work well would be better still. Getting the right balance between theory and experiment could be challenging. The reason I am in favour of theory is that I feel that that is where mathematicians have more chance of making a genuinely new contribution to knowledge.

When I was a teen some twenty-five or thirty years ago I was very impressed by the genetic model in Gödel, Escher, Bach by Douglas Hofstadter. I took my Apple IIe computer and coded a version. (The specification left some elbow room for interpretation, to say the least.) The microprocessor was a MOS Technology 6502, an 8-bit chip running at 1 MHz. A month later it became clear: I could not generate anything even remotely similar to ‘life’. I guess nobody could have. Today I am writing this blog entry on a dual-core Intel P8400 laptop running at 2.26 GHz, and I am not trying to code that genetic model again. Why?

It is not only the computing power that distinguishes me from my earlier version. I also no longer believe that ’emergence’ should be treated as a phenomenon which can be reached in a finite number of steps. I rather think that some sort of ‘limit’ should be involved, as in the definitions of the first infinite ordinal number, of velocity or of temperature. If that is the case, then the use of computers is limited until the ‘correct’ approximations are known, and the question of the ‘size’ of organisms is also answered: they might be huge.

I have distilled a couple of items which I think have to be addressed in one way or another to make the project a ‘success’ (whatever that means).

  • Find a suitable definition or concept of life. This definition has to be fairly robust and still open to interpretation. Something like: life is emergent and evolves. A crystal emerges but does not evolve; car designs evolve but do not emerge. If we find something that both emerges and evolves, we are done.
  • Currently we do not know what emergence and evolution are. Therefore we should collect examples of emergent behavior from all branches of science (this is true polymath, and emergence seems to appear in all concepts proposed so far) and describe these examples in a way accessible to all participants.
  • Use taxonomy or whatever other scientific method to extract the ‘abstract’ information from these examples.
  • Single out, or even develop, a mathematical theory related to emergence, in the way calculus relates to mechanics.
  • Explain evolution and its circularity within this framework.
  • Make it happen! This is the more practical part of modelling an emergent and evolving phenomenon.

These items do not have a natural order. So far, most work has been done on developing foundations for the practical part (the last item). Gowers gave a list of 7-8 desirable properties and discussed momentum and energy conservation.

Let me just note that energy conservation seems problematic. While fundamental physical laws exhibit time-translation symmetry, it is not obvious whether and how the same holds for, e.g., evolutionary adaptation. What does that mean? The following could happen: if we switch from the description of the system on a fundamental level (with energy conservation) to a description on the ‘life’ level by, say, some ‘limit’ procedure, we might get emergent laws that depend on time. Such an effect might be necessary or even desirable to explain concepts like adaptation, learning and free will. Energy conservation (a.k.a. time-shift symmetry) might play the same negligible role for ‘life’ as quantum tunneling does for cannon balls.

In the project, the emphasis so far seemed to be on understanding how to code the problem. However, definitions were also given, toy chemistries were proposed, examples of emergent behavior were collected, and so on. My items do not seem to be too far off, and if that is true there seems to be much work to be done in 2010.


Mechanics and Markets

November 25, 2009

When we talk about markets we often use terms like equilibrium or even market force. We choose this terminology for a reason. The analogy to the well-established theories of mechanics and quantum mechanics is intended, and the pictures we have in mind are a pendulum or a simple spring. Their restoring forces seem to model the market forces, and therefore we frequently observe arguments very similar to:

if prices increase, then demand decreases, and vice versa; finally, because of some process still to be described, the market settles down in an equilibrium (called the Walrasian price equilibrium).
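To make the quoted picture concrete, here is a minimal sketch of the spring-like adjustment story: the price moves in proportion to excess demand and settles at a Walrasian equilibrium. The linear demand and supply curves and the adjustment speed are purely illustrative assumptions; nothing in the sketch argues that such a force actually exists.

```python
# A toy sketch of the spring-like picture quoted above: the price adjusts
# in proportion to excess demand and relaxes towards the Walrasian price
# equilibrium. The linear curves and the speed are illustrative assumptions.
def excess_demand(price):
    demand = 10.0 - price        # assumed linear demand curve
    supply = 2.0 * price         # assumed linear supply curve
    return demand - supply

price, speed = 1.0, 0.1
for _ in range(100):
    price += speed * excess_demand(price)   # the hypothesised "restoring force"
print(round(price, 4))                      # -> 3.3333, where demand equals supply
```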

As a start, that sounds convincing. There remains just one big question: is that a good picture? Or, even more to the point:

Are there any justifications for the existence of market forces?

Rather than answering this question (regular readers know my standpoint anyway) I would like to justify why it is actually reasonable and should be asked and answered. In physics the question is answered in the affirmative; in economics the situation is blurry, to say the least. I continue by comparing mechanics with economics in catchwords, thereby pointing out similarities, but also discrepancies, and, in a way, recalling ‘the story so far’.

Basic notions

Let me start with two of the fundamental notions in mechanics, namely position and momentum. In earlier posts we have identified their counterparts in economics as price and demand.

Symmetries

In mechanics the intuition is that momentum is invariant under translation of position. In economics we need demand invariance under price-scaling.

Commutation relations

These symmetries lead to commutation relations of the form {[A,B]=\text{id}} in quantum mechanics and {[A,B]=A} in economics (cf. here). This difference is essential and has a huge impact, albeit not immediately.

Bounded representations

Both commutation relations imply that the symmetry groups do not have representations on a finite-dimensional vector space (cf. here).

Unbounded representations

While there are no bounded representations, we get unbounded representations on the Hilbert space {L^2(\mathbb{R}^n)} of square integrable functions. Momentum and demand operators are differential operators, whereas position and price are (different) multiplication operators (cf. here).
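To see the structural difference between the two commutation relations at work, here is a small symbolic check. The concrete operators below are illustrative assumptions in the spirit of the description above (position as multiplication by {x}, momentum as differentiation; price as multiplication by {e^x}, demand as a multiple of differentiation); signs and factors of {i} are suppressed.

```python
# Symbolic check: [position, momentum] is a multiple of the identity,
# while [price, demand] is a multiple of the price operator itself.
# The chosen representations are assumptions made for illustration only.
import sympy as sp

x, mu = sp.symbols('x mu')
f = sp.Function('f')(x)

def commutator(A, B, g):
    return sp.simplify(A(B(g)) - B(A(g)))

# quantum-mechanical pair: position X, momentum P
X = lambda g: x * g
P = lambda g: sp.diff(g, x)
print(commutator(X, P, f))                                  # -> -f(x): a multiple of the identity

# market pair: price Q, demand D
Q = lambda g: sp.exp(x) * g
D = lambda g: mu * sp.diff(g, x)
print(sp.simplify(commutator(Q, D, f) / (sp.exp(x) * f)))   # -> -mu: [Q, D] is a multiple of Q
```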

Uncertainty principle

The uncertainty principle of quantum mechanics is well known. I have not written about it on this blog so far, but in economics the commutation relations imply inequalities which can also be interpreted as a kind of uncertainty principle. I shall come back to this later.

Time evolution

As described in Scientific Laws, to get the time evolution in quantum mechanics one chooses an action, uses the Legendre transform to obtain the energy, derives the canonical equations and essentially plugs in the above representation to obtain Schrödinger’s equation governing the time evolution of a quantum system. That surely sounds more complicated than it actually is.

Why can’t we just do the same for markets and obtain market equations governing their time evolution? Well, there are a couple of technical difficulties. The most prominent is probably that the Legendre transform of a market action is not invariant under time translation. Hence, in markets there is no conservation of energy. This fact alone makes the use of a term like market force a little obscure. What is meant by force if there is no energy, or at least no energy conservation?

That essentially is the programme for the rest of the year. I shall spell out the maths behind the uncertainty principle for markets and then delve into the technical details of obtaining a time evolution for markets.

Stay tuned …


Commutation Relations in Markets

September 8, 2009

To derive commutation relations in microeconomics we first have to reach sure ground. What is a minimal set of assumptions that lets us derive something interesting, yet is still comprehensive enough to describe something meaningful?

In a market, it is definitely safe to assume that we have {n} goods for some number {1\leq n \in \mathbb{N}}. These goods are traded, and therefore we need to talk about prices and demand. Call {p_i} the price of good {i} and {d_i} the demand for good {i}, where {1\leq i \leq n}. What else do we need?

Sure, we need a lot more, but not now! As we have seen in Scientific Laws, all we need at this point is a symmetry between price and demand. The key to this symmetry is found in any basic textbook, e.g. Microeconomic Theory by A. Mas-Colell, M.D. Whinston and J.R. Green, and is called invariance of demand under price-scaling. What is meant by that? Let me give you an example. When continental Europe introduced the Euro, many nations swapped their national currency for the new one. In Germany, 1 Euro was worth 1.95583 Deutsche Mark. All prices, wages, debts and so on were scaled by {\frac{1}{1.95583}}. The day after, no increase in demand for fridges, cars, credit and so on was observed. That was no surprise for economists: where should a change in demand come from? A redefinition of the currency is not enough to generate demand. This is generally believed and is a pillar of the following argumentation.
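The same invariance can be illustrated numerically. The following toy check assumes a (hypothetical) Cobb-Douglas demand system {d_i(p,w)=a_i w/p_i}; re-denominating all prices and wealth by the conversion factor leaves the demanded quantities unchanged.

```python
# Toy check of demand invariance under price-scaling, assuming a
# (hypothetical) Cobb-Douglas demand system d_i(p, w) = a_i * w / p_i.
def demand(prices, wealth, shares):
    return [a * wealth / p for a, p in zip(shares, prices)]

prices = [2.0, 5.0, 10.0]    # prices in Deutsche Mark (illustrative numbers)
wealth = 1000.0              # wealth in Deutsche Mark (illustrative number)
shares = [0.5, 0.3, 0.2]     # expenditure shares (illustrative numbers)

scale = 1 / 1.95583          # DM -> Euro conversion factor
print(demand(prices, wealth, shares))                               # [250.0, 60.0, 20.0]
print(demand([scale * p for p in prices], scale * wealth, shares))  # same demands (up to rounding)
```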

Just for the sake of completeness, let me emphasize that the price-scalings introduced above form a group. Whenever we scale by a factor {\alpha\in\mathbb{R}_{>0}} and then by a factor {\beta\in\mathbb{R}_{>0}}, we obtain a scaling by the factor {\alpha\beta}. Scaling by 1 is the neutral element, and for each scale factor {\alpha\in\mathbb{R}_{>0}} we can go back by scaling with {\frac{1}{\alpha}}.

As mathematicians, we often represent abstract groups (like the price-scalings above) as linear operators acting on some vector space. To that purpose, we take the state of the market to be given by a non-zero vector {\xi} in a Hilbert space {X} with inner product denoted by {\left\langle \cdot | \cdot \right\rangle}. For the moment you can think of {X} as a finite-dimensional Hilbert space {\mathbb{R}^n} or {\mathbb{C}^n}. On the other hand, it is always good to be suspicious, and fixing the dimension to be finite might be premature. Observables are self-adjoint operators on this Hilbert space and satisfy the following axioms:

  • (MA1) The price {p_i} of good {i} is a positive observable on {X} for all goods {1\leq i\leq n}.
  • (MA2) The demand {d_i} of good {i} is an observable on {X} for all goods {1\leq i\leq n}.
    A positive observable {a} on {X} is an observable with {\langle a\xi | \xi \rangle>0} for all {0\neq\xi} in the domain {D(a)} of {a}.

    By a famous result of E. Noether, symmetries and invariants are closely tied together. What are the market invariants of the asymmetric market under the price-scaling symmetry? To see this, let {\left(U_i(\alpha)\right)_{0 < \alpha\in \mathbb{R}}} be a strongly continuous family of unitary operators on {X} such that

    \displaystyle  	U_i^{-1}(\alpha)p_i U_i(\alpha)=\alpha p_i.

    The family {U_i(\cdot)} satisfies the following properties for all {\alpha>0} and {\beta>0}:

    • {U_i(1)= \textnormal{id}_X}
    • {U_i(\alpha)U_i(\beta)=U_i(\alpha\beta)=U_i(\beta)U_i(\alpha)}
    • {U_i^{-1}(\alpha) = U_i\left(\frac{1}{\alpha}\right)}

    Define {T_i(t):=U_i(e^t)} and observe

    • {T_i(0)= \textnormal{id}_X}
    • {T_i(t)T_i(s)=T_i(t+s)=T_i(s)T_i(t)}
    • {T_i^{-1}(t) = T_i(-t)}

    This shows that {T_i} is a strongly continuous one-parameter group of unitary operators acting on {X}. Thus Stone’s theorem ensures the existence of a skew-adjoint generator {A_i}. Set {\alpha = e^t}; with {U_i(\alpha)=T_i(\ln \alpha)} it follows that

    \displaystyle \begin{array}{rcl} p_i & = & \frac{d}{d\alpha}\left(U_i^{-1}(\alpha)p_i U_i(\alpha)\right) \\ & = & \frac{d}{d\alpha}\left(T_i(-\ln \alpha)p_i T_i(\ln \alpha)\right) \\ & = & - \frac{1}{\alpha} T_i(-\ln \alpha) A_i p_i T_i(\ln \alpha) + \frac{1}{\alpha} T_i(-\ln \alpha)p_i A_i T_i(\ln \alpha). \end{array}

    Evaluation at {\alpha=1} yields

    \displaystyle  	 	\left[p_i, A_i\right] = p_i. \ \ \ \ \ (1)

    Since a generator commutes with the strongly continuous group it generates, it is easily seen that {\beta_i A_i + \gamma_i\textnormal{id}_X} also commutes with {U_i(\alpha)} for any {\beta_i,\gamma_i\in\mathbb{C}}. Hence {\beta_i A_i + \gamma_i\textnormal{id}_X} represents a market invariant under price-scaling.

    Now we derive an economic interpretation of {A_i}. We know already that {\beta_i A_i + \gamma_i\textnormal{id}_X} represents a market invariant under price-scaling for any {\beta_i,\gamma_i\in\mathbb{C}}. Since {A_i} is skew-adjoint and {\beta_i A_i + \gamma_i\textnormal{id}_X} needs to be an observable, we get {\beta_i = i \mu_i} and {\gamma_i = \omega_i} for some {\mu_i, \omega_i \in\mathbb{R}}. Furthermore, since scaling one price does not influence the scaling of the others (i.e., {\left[p_i, U_j(\alpha)\right]=0} for {i\neq j}), we can use (1) and obtain

    \displaystyle  	\left[p_i, i \mu_j A_j + \omega_j \textnormal{id}_X\right] = i \mu_i p_i \delta_{i,j}.

    The operator {i \mu_i A_i + \omega_i \textnormal{id}_X} is an observable and is invariant under price-scaling. Economic intuition therefore leads us to identify this operator with the demand (or the excess demand) for good {i} whenever {\mu_i\neq 0}. The real parameter {\omega_i} is identified as the endowment. The other real parameter, {\mu_i}, represents a new feature: intuitively, it measures the difference between first selling and then buying a good versus first buying and then selling it.
    The observations in the last paragraph yield the final axioms; a small symbolic sanity check of the whole derivation is sketched right after them.

  • (MA3) The endowment {\omega_i} of good {i} is a real number {\omega_i \in\mathbb{R}} for all goods {1\leq i\leq n}.
  • (MA4) Prices {p_i} and demands {d_j} interact according to

    \displaystyle  			\left[p_i, d_j\right]=i \mu_i p_i \delta_{i,j}

    for a fixed real {\mu_i\in\mathbb{R}}.
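Here is the small symbolic sanity check announced above. It assumes a purely illustrative concrete representation on {L^2(\mathbb{R})}: the price acts as multiplication by {e^x} (a positive operator, as required by (MA1)) and price-scaling acts as a shift of the variable; the generator of {T_i(t)=U_i(e^t)} is then {-\frac{d}{dx}}. With these assumptions the scaling relation, the commutation relation (1) and the axiom (MA4) can be verified mechanically.

```python
# Symbolic sanity check of the derivation above, for one good, using an
# assumed concrete representation on L^2(R):
#   price:         (p f)(x)    = exp(x) * f(x)
#   price-scaling: (U(a) f)(x) = f(x - ln a),  with inverse U(1/a)
#   generator of T(t) = U(e^t):  A = -d/dx
import sympy as sp

x, alpha = sp.symbols('x alpha', positive=True)
mu, omega = sp.symbols('mu omega', real=True)
f = sp.Function('f')

price = lambda g: sp.exp(x) * g                  # positive multiplication operator (MA1)
U     = lambda a, g: g.subs(x, x - sp.log(a))    # unitary price-scaling by the factor a
Uinv  = lambda a, g: g.subs(x, x + sp.log(a))    # its inverse, U(1/a)
A     = lambda g: -sp.diff(g, x)                 # skew-adjoint generator of T(t) = U(e^t)

# scaling symmetry: U^{-1}(alpha) p U(alpha) f = alpha * (p f)
lhs = Uinv(alpha, price(U(alpha, f(x))))
print(sp.simplify(lhs - alpha * price(f(x))))    # -> 0

# commutation relation (1): [p, A] f = p f
comm = price(A(f(x))) - A(price(f(x)))
print(sp.simplify(comm - price(f(x))))           # -> 0

# axiom (MA4) for i = j: [p, d] f = i*mu*(p f) with d = i*mu*A + omega*id
d = lambda g: sp.I * mu * A(g) + omega * g
comm2 = price(d(f(x))) - d(price(f(x)))
print(sp.simplify(comm2 - sp.I * mu * price(f(x))))   # -> 0
```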
    There are still a lot of things to say, e.g. on how measurements are made, on the dimension of the Hilbert space {X}, on representations of demand {d_i} and price {p_i} as operators, and on a comparison with the commutation relations of quantum mechanics. Stay tuned …


Scientific Laws

September 2, 2009

    As I have told you earlier, my guest is very sceptical about our scientific achievements. What follows are the notes I took when he gave me a short summary of what he considers ‘our strategy’.

    In the modern understanding of science, the fundamental laws seem to be consequences of various symmetries of quantities like time, space or similar objects. To make this idea more precise, scientists often use mathematical arguments, choosing some set {X} as state space encoding all necessary information about the system under consideration. The system is then thought to evolve in time along a differentiable path {t\mapsto x(t)=(x_1(t),\dots,x_n(t))} with components {x_i(t)\in X} for all {t\in\mathbb{R}} and {1\leq i \leq n\in\mathbb{N}}. Quite frequently there is a so-called Lagrange function {L} on the domain { X^n \times X^n \times \mathbb{R} } and a constraint function {W} on the same domain. The path {x(\cdot)} is required to minimize or maximize the integral

    \displaystyle  \int_0^T L\left(x(s),\dot{x}(s),s\right)ds

    under the constraint

    \displaystyle  W\left(x(s),\dot{x}(s),s\right)=0.

    (Under some technical assumptions) a path does exactly that, if it satisfies the Euler-Lagrange equations

    \displaystyle \frac{d}{dt}\frac{\partial L}{\partial \dot{x}_i}-\frac{\partial L}{\partial x_i}=\lambda \frac{\partial W}{\partial \dot{x}_i}

    for some function {\lambda} depending on {X^n \times X^n \times \mathbb{R}}.

    Define {y_i:=\frac{\partial L}{\partial \dot{x_i}}} and observe that (under suitable assumptions) this transformation is invertible, i.e. the {\dot{x}_i} can be expressed as functions of {x_i, y_i} and {t}. Next, define the Hamilton operator

    \displaystyle  H(x,y,t) = \sum_{i=1}^n \dot{x}_i(x,y,t) y_i - L(x,\dot{x}(x,y,t),t)

    as the Legendre transform of {L}. The Legendre transformation is (under some mild technical assumptions) invertible.
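As a concrete instance of these two definitions, here is a short symbolic computation assuming (purely for illustration) the one-dimensional harmonic-oscillator Lagrangian {L=\frac{1}{2}(\dot{x}^2-x^2)}: define {y}, invert the definition and form the Legendre transform.

```python
# Worked instance of the Legendre transform, assuming the illustrative
# Lagrangian L = (xdot^2 - x^2)/2 of a one-dimensional harmonic oscillator.
import sympy as sp

x, xdot, y = sp.symbols('x xdot y')

L = (xdot**2 - x**2) / 2
y_def = sp.diff(L, xdot)                          # y := dL/dxdot  (here: y = xdot)
xdot_of_y = sp.solve(sp.Eq(y, y_def), xdot)[0]    # invert: xdot as a function of y

H = xdot_of_y * y - L.subs(xdot, xdot_of_y)       # Legendre transform of L
print(sp.simplify(H))                             # -> x**2/2 + y**2/2
```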

    Now, (under less mild assumptions, namely holonomic constraints) two things happen. The canonical equations

    \displaystyle \frac{d x_i}{d t} = - \frac{\partial H}{\partial y_i} \left(=[x_i, H]\right), \frac{d y_i}{d t} = \frac{\partial H}{\partial x_i}\left(=[y_i, H]\right),\frac{d H}{dt} = -\frac{\partial L}{\partial t}

    are equivalent to the Euler-Lagrange equations. Here {[\cdot,\cdot]} denotes the commutator bracket {[a,b]:= ab-ba}. Furthermore, if {L} does not explicitly depend on time, then {H} is constant along solutions. That is the aforementioned symmetry: {H}, the energy, is invariant under time translations.
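The conservation statement can be checked symbolically for the toy Hamiltonian obtained above; the ‘driven’ variant with an explicit time dependence is an added assumption, included only to show how the conservation law disappears, a situation reminiscent of the market case discussed in Mechanics and Markets.

```python
# Symbolic check: along solutions of the canonical equations as stated above
# (dx/dt = -dH/dy, dy/dt = dH/dx), H is constant if it has no explicit time
# dependence; adding one (here t*x, an illustrative assumption) destroys this.
import sympy as sp

x, y, t = sp.symbols('x y t')

def dH_dt_along_flow(H):
    xdot = -sp.diff(H, y)                 # canonical equations, convention as above
    ydot = sp.diff(H, x)
    # total time derivative of H along such a solution
    return sp.simplify(sp.diff(H, x) * xdot + sp.diff(H, y) * ydot + sp.diff(H, t))

H_autonomous = (y**2 + x**2) / 2          # the oscillator energy from the sketch above
H_driven     = (y**2 + x**2) / 2 + t * x  # assumed explicit time dependence

print(dH_dt_along_flow(H_autonomous))     # -> 0: energy is conserved
print(dH_dt_along_flow(H_driven))         # -> x: no conservation law
```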

    Given all that, the solution of the minimisation or maximisation problem can then be given (either in the Heisenberg picture) as

    \displaystyle  x(t) = e^{t H} x(0) e^{-t H}, y(t) = e^{t H} y(0) e^{-t H}

    or (in the Schrödinger picture, which in this case is equivalent) as an equation on the state space

    \displaystyle  u(t)= e^{t H}u(0).

    This description is equivalent (under mild technical assumptions) to the following initial value problem:

    \displaystyle  \dot{u}(t)=H u(t), u(0) = u_0\in X.

    where the operator {H} is the ‘law’. More technically, the law is the generator of a strongly continuous (semi-)group of (in this case linear and unitary) operators acting on (the Hilbert space) {X}. As an example of this process he mentioned the Schrödinger equation governing quantum mechanical processes.

    His conclusion was that the frequently appearing ‘technical assumptions’ in the above derivation make it highly unlikely for laws to exist even for systems with, as he calls it, no emergent properties. ‘If that were true’, I thought, ‘then … bye-bye theory of everything!’ He explained further that under no reasonable circumstances is it possible to extrapolate these laws to the emergent situation. I am not sure whether I completely understand what he means by that, but his summary of how we find scientific laws is, in my opinion, way too simple. It can’t be true, and I told him so.

    With just a couple of ink strokes he derived the commutation relations for exchange markets from microeconomic theory. That left me speechless, since I had always thought that there cannot be ‘market laws’. Markets are unpredictable in principle! They are, aren’t they?