Utility and Time – Statement of the Problem

January 27, 2010

Ultimately our goal is to get some description of price evolution derived from first (economic) principles. In earlier posts (1, 2, 3) I have shown what can be deduced from ‘demand invariance under price-scaling’. As described there, we still assume {n} goods being traded in a market; hence there are prices {p_i} and demands {d_i} for {1\leq i\leq n} attributed to these goods. That was the setting so far, and now we are going to take the first steps into time.

We assume that good {i} is consumed over time and describe consumption {c_i(\cdot)} as a positive real function. Consumption of good {i} from time {a} to time {b} is measured by {\int_a^b c_i(s)ds}. We call the participants in the market agents. An agent attributes to each consumption vector {c} a utility {u(c)}. Technically, {u} is a positive, increasing and concave function. In all our examples {u} and {c} will be sufficiently differentiable. Utility is increasing since more consumption is considered better, and it is concave since we assume ‘diminishing marginal utility’. The latter does not always hold in economic situations. However, most introductory examples are concave, and as a start this seems safe.
I assume ‘time impatience’, that is, consumption now is better than consumption in the future. That assumption is not undisputed, but, as a model for the finite life span of the agents, it too seems safe for such an introductory text. Overall utility from time {a} to time {b} is measured by {\int_a^b r^s u(c(s)) ds} for some discount rate {0<r<1}.
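As a minimal numerical illustration of time impatience (my own throwaway sketch, assuming {u(c)=\log c} and a piecewise constant consumption stream): two streams with the same total consumption, but the front-loaded one yields the higher discounted utility.

```python
import math

def discounted_utility(c, r=0.95, T=10.0, steps=10_000):
    """Riemann-sum approximation of \\int_0^T r^s u(c(s)) ds with u = log."""
    h = T / steps
    return sum(r ** (i * h) * math.log(c(i * h)) * h for i in range(steps))

# Two streams with the same total consumption over [0, 10] ...
early = lambda s: 2.0 if s < 5 else 1.0   # consume more now
late = lambda s: 1.0 if s < 5 else 2.0    # consume more later

# ... but impatience (r < 1) weights early consumption more heavily.
assert discounted_utility(early) > discounted_utility(late)
```

Since {r^s} is decreasing, the weight on the first half of the interval is larger, which is all the inequality uses.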

Agents have an attached wealth level {w(\cdot)}; that means {\sum_{i=1}^n p_i(t) c_i(t) = w(t)} holds for all times {t}. They consume according to their prescribed wealth and they consume according to their demand ({c_i=d_i}). The last assumption closes the gap between {p} and {c}. We assume that demand is given as {d_i=\dot{p}_i} and thus we obtain in summary {c_i = \dot{p}_i}.

Now we are in a position to state the problem: agents in a market maximize utility

\displaystyle  \int_0^T r^s u(c(s)) ds

according to the constraint

\displaystyle  \sum_{i=1}^n p_i(t) c_i(t) - w(t) = 0

for a given time horizon {0<T\leq\infty}, discount rate {0<r<1}, wealth function {w} and utility function {u}.

Voilà, we end up with a constrained Euler-Lagrange equation. But beware! There are a couple of traps that jam any intuition we might bring from mechanics or similar theories with a conserved energy (understood as the Legendre transform of the Lagrangian). I will certainly elaborate on this in one of the next entries.
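One concrete source of trouble can already be sketched here (a standard multiplier computation, stated without the care it deserves): with {c=\dot{p}} the problem has the time-dependent Lagrangian

\displaystyle  L(t,p,\dot{p}) = r^t u(\dot{p}) + \lambda(t)\Big(\sum_{i=1}^n p_i \dot{p}_i - w(t)\Big)

for a multiplier {\lambda(\cdot)}, and the Euler-Lagrange equations {\frac{d}{dt}\frac{\partial L}{\partial \dot{p}_i} = \frac{\partial L}{\partial p_i}} reduce to

\displaystyle  \frac{d}{dt}\Big(r^t \frac{\partial u}{\partial c_i}(\dot{p})\Big) = -\dot{\lambda}(t)\, p_i(t).

Note that {L} depends explicitly on {t} through the discount factor {r^t}, so the Legendre transform of {L} will in general not be conserved.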

Problem of the Month – January 2010 (10-12 years)

January 16, 2010

The following problem (link in German), January 2010, is part of this year's mathematics competition of Baden-Württemberg for pupils at the age of 10-12.

  • Prove that you cannot get 50 as the sum of exactly 5 numbers from the set {5,9,13,17,19}.
  • Prove that there are exactly 2 ways to get 50 as the sum of exactly 5 numbers from the set {6,11,13,15,17}.

A nice problem with a nice solution, albeit maybe a little hard at that age.
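The claims are easy to check by brute force (a throwaway script of mine, assuming that numbers may be repeated; the competition of course asks for a proof, e.g. a parity argument for the first set, since five odd numbers always sum to an odd number):

```python
from itertools import combinations_with_replacement

def ways(values, count, target):
    """All multisets of `count` numbers from `values` summing to `target`."""
    return [c for c in combinations_with_replacement(values, count)
            if sum(c) == target]

print(ways((5, 9, 13, 17, 19), 5, 50))   # no solutions: 5 odd numbers sum to an odd number
print(ways((6, 11, 13, 15, 17), 5, 50))  # exactly the two solutions
```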


Climbing Levels of Complexity II

January 11, 2010

If emergence is not a real, existing phenomenon, but rather a description from a different perspective (a coordinate transformation, a state space transform, or something similar, as indicated in my blog post Climbing Levels of Complexity), then it might be invertible.

What does that mean? As an example: starting with quantum theory and statistical mechanics, and under technical assumptions (e.g. space being completely filled with matter), we can get Hamiltonians describing phases such as crystals. That might be the best understood example of emergence so far. On the other hand, we (considered as human beings) live in a world of solids, liquids and gases. Nevertheless we were able to derive the underlying quantum mechanical laws. From our description in terms of emergent properties we can go back to the ‘fundamental’ equations.

Let’s brainstorm!

Evolution as a notion is not easy to grasp (for mathematicians). Could it be, just as a thought, that this is what evolution means: if we describe a system in terms of emergent laws such that one can get back (some of) the fundamental laws, then the system is called evolving?

That is bold, I know, but evolutionary adaptation might be exactly that: understanding the environment (which unfortunately evolves itself and thus creates troublesome circularities) in terms of the population, and, as a gift, it halves the work we have according to my last post. If ‘life’ can be defined as emergent and evolving, and if we have the transform governing emergent systems, then we are done. In the cases where the transform is invertible we have ‘created’ life; in the other cases, not.

A truly random thought as long as we do not have the transform …


A further glance at ‘Polymath and the Origin of Life’

January 8, 2010

Polymath and the origin of life has finished its second month. Remember, Tim Gowers plans to set up a polymath project to explain abiogenesis. The project should use cellular automata or similar devices to explain the emergence of life. Right at the beginning of his proposal he has posed a couple of questions on what properties these machines or models should have and what exactly should constitute the scope of the project. I quote:

Question 1: Should one design some kind of rudimentary virtual chemistry that would make complicated “molecules” possible in principle?

The alternative is to have some very simple physical rule and hope that the chemistry emerges from it (which would be more like the Game of Life approach).

If the emergence of life does not depend on the details of the underlying chemistry, we could choose a ‘simple’ model and proceed. However, that seems to be circular: we do not know enough examples of ‘life’ to know what exactly constitutes a viable approximation to chemistry. We might get lost in arbitrariness.

The other approach uses the one known example of ‘life’ and its ‘fundamental’ laws. Approximations to it might still result in the emergence of some sort of chemistry and then ‘life’.

If I had to choose, I would take the second approach. Even if we do not succeed in generating life, finding suitable approximations to Schrödinger’s equation which result in toy chemistries seems to be already a respectable finding.
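For reference, the ‘simple physical rule’ alternative mentioned in Question 1 really is just a few lines of code. Here is a minimal sketch of one step of Conway's Game of Life (my own illustration, not anything from the proposal):

```python
from collections import Counter

def life_step(live):
    """Advance a set of (x, y) live cells by one Game of Life generation."""
    # Count the live neighbours of every cell adjacent to a live cell.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is alive next step with 3 neighbours, or 2 if already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker" oscillates with period 2: very simple rules, structured behaviour.
blinker = {(0, 0), (1, 0), (2, 0)}
```

The open question of the project is, of course, whether anything deserving the name ‘chemistry’ can emerge from rules of this simplicity.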

Question 2: How large and how complicated should we expect “organisms” to be?

If everything turns out to be working, we might be able to describe “organisms” in a different frame, and size may thus not play an important role.

Added later: I haven’t quite made clear that one aim of such a project would be to come up with theoretical arguments. That is, it would be very nice if one could do more than have a discussion, based on intelligent guesswork, about how to design a simulation, followed (if we were lucky and found collaborators who were good at programming) by attempts to implement the designs, followed by refinements of the designs, etc. Even that could be pretty good, but some kind of theoretical (but probably not rigorous) argument that gave one good reason to expect certain models to work well would be better still. Getting the right balance between theory and experiment could be challenging. The reason I am in favour of theory is that I feel that that is where mathematicians have more chance of making a genuinely new contribution to knowledge.

When I was a teen, some twenty-five or thirty years ago, I was very impressed by the genetic model in Gödel, Escher, Bach by Douglas Hofstadter. I took my Apple IIe computer and coded a version. (The specification left some elbow room for interpretation, to say the least.) The microprocessor was a MOS Technology 6502, an 8-bit chip running at 1 MHz. A month later it became clear: I could not generate anything even remotely similar to ‘life’. I guess nobody could have. Today I am writing this blog entry on a dual-core Intel P8400 laptop running at 2.26 GHz, and I am not trying to code that genetic model again. Why?

It is not only the computing power that distinguishes me from my earlier version. I also no longer believe that ‘emergence’ should be treated as a phenomenon which can be reached in a finite number of steps. I rather think that some sort of ‘limit’ should be involved, as in the definitions of the first infinite ordinal number, velocity or temperature. If that is the case, then the use of computers is limited until the ‘correct’ approximations are known, and the question of the ‘size’ of organisms is also answered: they might be huge.

I have distilled a couple of items which I think have to be addressed in one way or another to make the project a ‘success’ (whatever that means).

  • Find a suitable definition or concept of life. This definition has to be fairly robust and still open to interpretation. Something like: life is emergent and evolves. A crystal emerges but does not evolve; car designs evolve but do not emerge. If we find something that emerges and evolves, we are done.
  • Currently we do not know what emergence and evolution are. Therefore, collect examples of emergent behavior in all branches of science (this is truly polymath, and emergence seems to appear in all concepts proposed so far). Describe these examples in a way accessible to all participants.
  • Use taxonomy or whatever other scientific method to extract the ‘abstract’ information from these examples.
  • Single out or even develop a mathematical theory related to emergence, as calculus is related to mechanics.
  • Explain evolution and its circularity within this framework.
  • Make it happen! This is the more practical part of modelling an emergent and evolving phenomenon.

These items do not have a natural order. So far, most work has been done on developing foundations for the practical part (the last item). Gowers gave a list of 7-8 desirable properties and discussed momentum and energy conservation.

Let me just note that energy conservation seems problematic. While fundamental physical laws exhibit time translation symmetry, it is not obvious whether and how the same holds for, e.g., evolutionary adaptation. What does that mean? The following could happen: if we switch from the description of the system on a fundamental level (with energy conservation) to the description of the system on the ‘life’ level, say by some ‘limit’ procedure, we might get emergent laws that depend on time. Such an effect might be necessary or even desirable to explain concepts like adaptation, learning and free will. Energy conservation (aka time translation symmetry) might play the same negligible role for ‘life’ as quantum tunneling does for cannon balls.

In the project, the emphasis so far seemed to be on understanding how one has to code the problem. However, definitions were also given, toy chemistries were proposed, examples of emergent behavior were collected, and so on. My items do not seem to be too far off, and if that is true, there seems to be much work to be done in 2010.


Program

January 5, 2010

Currently I cannot keep my biweekly rhythm; the end of the year means too much work for me. However, I can see the light at the end of the tunnel and plan to continue soon.