S. Abramsky and J. Zvesper have uploaded a preprint. Its section 4, ‘The Lawvere Fixpoint Lemma’, contains a ‘positive’ version of Cantor’s theorem.
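From memory, and not quoting the paper, the statement is roughly this: in a cartesian closed category, if \(e \colon A \to B^A\) is point-surjective, then every \(f \colon B \to B\) has a fixed point. The one-line argument: define \(g \colon A \to B\) by
\[ g(a) = f\big(e(a)(a)\big). \]
Point-surjectivity gives some \(a_0\) with \(e(a_0) = g\), hence
\[ g(a_0) = f\big(e(a_0)(a_0)\big) = f\big(g(a_0)\big), \]
so \(g(a_0)\) is a fixed point of \(f\). Cantor’s theorem is the contrapositive: negation on the two truth values has no fixed point, so there can be no surjection \(A \to 2^A\). That is the ‘positive’ reading: instead of deriving a contradiction, one constructs a fixed point.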
When you ask a mountaineer why he is doing what he is doing, a possible answer is: “Because it’s there.” Are mathematicians any different? Our motivation for doing mathematics also stems from the fact that there are still significant mathematical mountains. For example, the holy grail of mathematics, the Riemann Hypothesis, is considered hard and interesting, and a solution is worth a million dollars. Most likely, new methods and ideas will have to be developed, and they will almost certainly contain surprises for the experts. Several fields in mathematics would take a gigantic leap forward.
However, the way we do mathematics, or the way we think about mathematics, will not change. Whatever the solution turns out to be, we do not expect it to be as staggering as quantum theory was for physics or as breathtaking as the theory of evolution was for biology. This is certainly good news if one thinks of mathematics as a dogmatic science. On the other hand, there might be interesting insights if we just bend our preconceptions about the axiomatic approach, or about circularity, a little. If successful, we might change our understanding of what mathematics actually is and how it should be done.
Are there widely accepted scientific problems for which we so far have no mathematical description or solution? Problems so hard that maybe they will change the way we do mathematics? Problems that give us the impression that we only know what mathematics was in the past and that we shall never know what mathematics will look like in the future?
Without claim of completeness:
I. Evolution
In biology, evolution captures how populations of reproducing individuals and their environment change over time, in terms of properties of the individuals. Well-known concepts such as natural selection and “survival of the fittest” are justified by a plethora of observations, and the theory is therefore a cornerstone of modern science. After 150 years of research there is still no comprehensive mathematical treatment of evolution. Why is this so hard? I have no answer to this, although two observations come to mind. Twentieth-century mathematics is usually presented as a typed theory and thus favors populations over individuals. From that viewpoint, evolution seems to consist only of special cases rather than to contain some abstract mathematical content. Additionally, there are obvious circularities in the notions involved: population/environment and survival/fitness. The biological content seems to lie in these circles rather than in some bottom-up mathematical framework of axioms and theorems.
II. Markets
In economics, markets are places where agents satisfy their demand for goods by trading them. For more than 100 years, markets have been objects of scientific study. So far, we do not have a time evolution for prices and demands, maybe even stated in terms of actions taken by agents and preferably extending existing results in microeconomics. What is the difficulty? Maybe the following observation sheds some light on the situation. It seems that correctly predicted prices lead to arbitrage, which then changes the behavior and demand of the agents. That circle seems to encode the basic essentials of economics. Theories with assumptions on either side, prices or demand, seem to assume the problem away and are then only loosely connected to observations. This is reminiscent of games with incomplete information: making assumptions about the opponents’ strategies greatly simplifies the search for one’s own optimal strategy, but justifying these assumptions is the hard part, with not much mathematics involved so far.
III. Cosmology
In physics, cosmology studies the dynamics and the evolution of the universe as a whole. By this definition, an observer is part of the universe, and thus the ‘laws’ governing this universe put restrictions on what ‘observation’ actually is. On the other hand, the observer develops models describing the universe, and thus himself as part of this universe. This circular observer/universe dependence is related to anthropic principles. The problem consists in developing a correct argument to refute models essentially just by applying the observational fact of the existence of the observer. This is hard, since the ‘logic’ an observer can use to reason about the universe is also part of the universe and might change if the universe is changed. As long as we do not know what ‘observation’ means, it is not enough to exclude possible models just by the non-existence of a certain type of observer (e.g. a carbon-based life form), since there could be other observers experiencing their observations as, say, we do ours, and thus different models could still be ‘isomorphic as observed universes’ (whatever that means).
IV. Foundation of Physics
In physics, the outcomes of conducted experiments and of developed theories are compared according to established rules. If the comparison meets certain standards, the theory is called physical and is said to describe reality. The outcome of an experiment is usually described in terms of fundamental units of measurement. Different communities use different units; a widely accepted standard is the SI. Within the SI, the unit of mass is the kilogram, which is defined by reference to a certain cylinder of platinum-iridium alloy stored at a certain place in France. This procedure puts severe limits on the accuracy of measurements (as of 2010, about one part in 10^8). The ultimate goal therefore is to free the SI from its dependence on this artifact by developing a practical realization of the kilogram that can be reproduced in different laboratories by following a written specification. The units of measure in such a practical realization would have their magnitudes precisely defined and expressed in terms of fundamental physical constants. Why has nobody done that so far? The problem seems to be that one has to describe the foundations of physics in physical terms. That circularity seems hard to overcome. Moreover, the above argument is valid not only for mass but for any other physical quantity, and one would end up with a huge network of interdependent ‘written specifications’.
V. Life, Intelligence and Consciousness
We know that there is life, that there is intelligence and that there is consciousness. Mathematics has little more to say about them. What is the problem? We might not have enough data (different intelligent life forms) to extract an abstract theory. However, we might also face a semantic problem. In 20th-century mathematics we know of two ways to introduce notions. First, by description, like ‘set’ or ‘element’. These notions are given by their relation to other described notions in a huge semantic circle; no further justification for them is given. Second, by definition, like ‘function’ or ‘number’. Defined notions come with a substitution process and can, at least in principle, be eliminated from the theory. The notions in the title have so far withstood all attempts at definition. It seems that these notions ‘emerge’ by some sort of ‘limit’ procedure from other notions. Of course, this is only speculation, and a reasonable definition of ‘life’, ‘intelligence’ and ‘consciousness’ might be found any time soon. If not, however, we might take that as an indication that mathematics still has potential rather than limits.
I have not yet read all of Nic Weaver’s comment on the liar paradox, but the beginning is interesting enough to mention it here.
Edit: If I understand him correctly, he claims that the law of excluded middle is not applicable to what he calls ‘heuristic concepts’, which are (and here I interpret freely) concepts with some immanent circularity.
If emergence is not a real, existing phenomenon, but rather a description from a different perspective, or a coordinate transformation, or a state-space transform, or something similar, as indicated in my blog post Climbing Levels of Complexity, then it might be invertible.
What does that mean? As an example, starting from quantum theory and statistical mechanics, and under technical assumptions such as space being completely filled with matter, we can get Hamiltonians describing phases such as crystals. That might be the best understood example of emergence so far. On the other hand, we (as human beings) live in a world of solids, liquids and gases. Nevertheless, we were able to derive the underlying quantum mechanical laws. From our description in terms of emergent properties we can go back to the ‘fundamental’ equations.
Evolution as a notion is not easy to grasp (for mathematicians). Could it be, just as a thought, that this is what evolution means: if we can describe a system in terms of emergent laws in such a way that one can get back (some of) the fundamental laws, then the system is called evolving?
That is bold, I know, but evolutionary adaptation might be exactly that: understanding the environment (which unfortunately evolves itself and thus creates troublesome circularities) in terms of the population, and, as a gift, it halves the work we have according to my last post. If ‘life’ can be defined as emergent and evolving, and if we have the transform governing emergent systems, then we are done. In the cases where the transform is invertible we have ‘created’ life; in the other cases, not.
A truly random thought, as long as we do not have the transform …
Polymath and the origin of life has finished its second month. Remember, Tim Gowers plans to set up a polymath project to explain abiogenesis. The project should use cellular automata or similar devices to explain the emergence of life. Right at the beginning of his proposal he posed a couple of questions about what properties these machines or models should have and what exactly should constitute the scope of the project. I quote:
Question 1: Should one design some kind of rudimentary virtual chemistry that would make complicated “molecules” possible in principle?
The alternative is to have some very simple physical rule and hope that the chemistry emerges from it (which would be more like the Game of Life approach).
If the emergence of life does not depend on the details of the underlying chemistry, we could choose a ‘simple’ model and proceed. However, that seems to be circular: we do not know enough examples of ‘life’ to know what exactly constitutes a viable approximation to chemistry. We might get lost in arbitrariness.
The other approach uses the one known example of ‘life’ and its ‘fundamental’ laws. Approximations to it might still result in the emergence of some sort of chemistry and then of ‘life’.
If I had to choose, I would take the second approach. Even if we do not succeed in generating life, finding suitable approximations to Schrödinger’s equation which result in toy chemistries would already seem a respectable finding.
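For concreteness, here is a minimal sketch of the ‘simple rule’ end of the spectrum mentioned in Question 1, Conway’s Game of Life. This is my own toy code, not anything from the polymath thread; the set-of-cells representation, the glider seed and the number of steps are arbitrary choices made purely for illustration:

from collections import Counter

def step(live):
    """One generation of Conway's Game of Life; `live` is a set of (x, y) cells."""
    # For every cell adjacent to a live cell, count how many live neighbours it has.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbours, survival on 2 or 3 (the B3/S23 rule).
    return {cell for cell, n in counts.items() if n == 3 or (n == 2 and cell in live)}

# A glider, the smallest pattern that propagates across the plane.
cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):              # after 4 generations the glider reappears shifted by (1, 1)
    cells = step(cells)
print(sorted(cells))            # the same shape, one step further along the diagonal

The point of the sketch is only that an extremely simple rule already produces persistent, moving ‘structures’; whether anything chemistry-like can emerge from rules of this kind is exactly what is in question.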
Question 2: How large and how complicated should we expect “organisms” to be?
If everything turns out to work, we might be able to describe “organisms” in a different frame, and size may therefore not play an important role.
Added later: I haven’t quite made clear that one aim of such a project would be to come up with theoretical arguments. That is, it would be very nice if one could do more than have a discussion, based on intelligent guesswork, about how to design a simulation, followed (if we were lucky and found collaborators who were good at programming) by attempts to implement the designs, followed by refinements of the designs, etc. Even that could be pretty good, but some kind of theoretical (but probably not rigorous) argument that gave one good reason to expect certain models to work well would be better still. Getting the right balance between theory and experiment could be challenging. The reason I am in favour of theory is that I feel that that is where mathematicians have more chance of making a genuinely new contribution to knowledge.
When I was a teen, some twenty-five or thirty years ago, I was very impressed by the genetic model in Gödel, Escher, Bach by Douglas Hofstadter. I took my Apple IIe computer and coded a version. (The specification left some elbow room for interpretation, to say the least.) The microprocessor was a MOS Technology 6502, 8-bit, running at 1 MHz. A month later it became clear: I could not generate anything even remotely similar to ‘life’. I guess nobody could have. Today I am writing this blog entry on a dual-core Intel P8400 laptop running at 2.26 GHz, and I am not trying to code that genetic model again. Why?
It is not only the computing power that distinguishes me from my earlier self. I also no longer believe that ’emergence’ should be treated as a phenomenon which can be reached in a finite number of steps. I rather think that some sort of ‘limit’ should be involved, as in the definitions of the first infinite ordinal number, of velocity or of temperature. If that is the case, then the usefulness of computers is limited until the ‘correct’ approximations are known, and the question about the ‘size’ of organisms is also answered: they might be huge.
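Just to spell out the three ‘limit’ definitions alluded to above (textbook versions, nothing deeper):
\[ \omega = \sup\{0, 1, 2, \dots\}, \qquad v = \lim_{\Delta t \to 0} \frac{\Delta x}{\Delta t}, \qquad \frac{1}{T} = \frac{\partial S}{\partial E} \quad (\text{meaningful in the limit } N \to \infty). \]
In each case the new notion is not present at any finite stage; it only appears in the limit, and that is the kind of behaviour I would expect from ‘emergence’ as well.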
I have distilled a couple of items which I think have to be addressed in one way or another to make the project a ‘success’ (whatever that means).
These items do not have a natural order. So far, most of the work has been done on developing foundations for the practical part (the last item). Gowers gave a list of 7-8 desirable properties and discussed momentum and energy conservation.
Let me just note that energy conservation seems problematic. While fundamental physical laws exhibit time-translation symmetry, it is not obvious whether and how the same holds for, e.g., evolutionary adaptation. What does that mean? The following could happen: if we switch from the description of the system on a fundamental level (with energy conservation) to the description of the system on the ‘life’ level, by say some ‘limit’ procedure, we might get emergent laws that depend on time. Such an effect might be necessary or even desirable to explain concepts like adaptation, learning and free will. Energy conservation (aka time-translation symmetry) might play the same negligible role for ‘life’ as quantum tunneling does for cannonballs.
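To make the connection between the two formulations explicit (a standard fact I am adding for context, not something from the polymath thread): for a Hamiltonian system one has, along any trajectory,
\[ \frac{dH}{dt} = \frac{\partial H}{\partial t}, \]
so energy is conserved exactly when the Hamiltonian has no explicit time dependence. An emergent, coarse-grained \(H_{\mathrm{eff}}(t)\) that picks up explicit time dependence through the ‘limit’ procedure would therefore, in general, no longer conserve energy, even though the underlying fundamental \(H\) does.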
In the project, the emphasis so far seems to have been on understanding how one has to code the problem. However, definitions were also given, toy chemistries were proposed, examples of emergent behavior were given, and so on. My items do not seem to be too far off, and if that is true there is much work to be done in 2010.
Considering, for example, group theory or complex analysis, one quickly realizes that successful mathematical theories have plenty of examples. Therefore, if one takes a step into uncharted territory, it is never bad to start the enterprise with an example.
In my last post on the ‘origin of life’ polymath project I observed that people expect models explaining the emergence of life to have ‘levels of complexity’. Of course, I was very vague about what ‘levels of complexity’ actually are. Everybody is vague about that.
The goal of this post is to remember and discuss a well-known example of emergence and to shed some light on the problem.
At the beginning of the last century, physicists recognized classical mechanics as emergent. For centuries, Newton’s laws formed a solid foundation for science, and then, essentially all of a sudden, everything became different. Classical mechanics was realized to be a ‘limit’ of quantum mechanics. This is commonly known as the correspondence principle, and its formulation is remarkably vague. In what sense does this limit have to be taken?
Ehrenfest’s theorem tells us that if we consider means of observables, we get back Newton’s laws quite easily. Classical momentum (position) is interpreted as the mean of the momentum (position) observable of quantum mechanics. Since the mean is just some (well-defined) limit, everything seems fine; the relations are sketched at the end of this post. However, that is not the problem. Let me just ask:
What is it that has classical momentum?
Is there a cannonball (and not just some quantum state)? I would certainly say so if one were flying towards me, and I would not start to calculate the probability that it is tunneling through me. In a way, the cannonball has become independent of the underlying quantum laws and now just obeys the new classical ‘in the mean’ laws.
For me this is the essence of climbing ‘levels of complexity’. We do not just get new laws which are, suitably interpreted, means of fundamental laws. We also get new objects or states governed by the new laws and no longer by the fundamental laws. There is no tunneling of cannonballs; however, I will get shot to pieces, something quantum theory does not cover adequately. Quantum theory simply is not supposed to tell me how to stop the bleeding.
Forget about my clouded comments. Is there something we can learn from this example? I think so. The lesson is that there is at least one important example of ‘climbing the levels of complexity’ in which the state space changes with the level. Sure, a reductionist could argue that in principle we can describe the cannonball by a quantum state. For all practical purposes, however, the state space changes. This observation is not sufficiently appreciated.
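As a footnote to the Ehrenfest remark above, here are the relations I have in mind, in their standard textbook form for a single particle of mass \(m\) in a potential \(V\):
\[ \frac{d}{dt}\langle \hat{x}\rangle = \frac{1}{m}\langle \hat{p}\rangle, \qquad \frac{d}{dt}\langle \hat{p}\rangle = -\Big\langle \frac{\partial V}{\partial x}(\hat{x})\Big\rangle. \]
These only turn into Newton’s equation for the means once \(\langle V'(\hat{x})\rangle \approx V'(\langle \hat{x}\rangle)\), i.e. for states that are narrow compared to the scale on which the force varies; a cannonball is such a state, a single electron usually is not.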
It is in the nature of polymath, which is, simply put, ‘more of the same’, that progress on such a fundamental question is slow. Organizers and participants should let it simmer at low heat. After all, it is not just a math Olympiad problem with a solution known to exist.
What have we got so far (according to my frail understanding)?
A first observation is that the players seem to be interested in implementation and representation rather than in the underlying symmetries or rules of the problem. They have in mind a cellular automaton, like a sand-pile model or Conway’s Game of Life. This is interesting insofar as the best mathematicians tell us first to learn about the problem, then to state the laws governing it and only then, as a last step, to search for representations as known objects. Is it obvious that the problem has a solution among Turing machines? Not to me, at least. Why should we then restrict ourselves to Turing machines? One reason might be that they are well understood. A second might be that if you believe in the Church-Turing thesis, Turing machines are all you need.
This summarizes the first observation: The players apparently presuppose the Church-Turing thesis (albeit without mentioning it).
But, what else could they do instead of looking for Turing machines?
They could go for the laws, and they did. It was rapidly recognized that Gowers’s item 5, or even the one before it, plays a central role.
4. It should have a tendency to produce identifiable macroscopic structures.
What does that mean? In his comment Tim Gowers explains:
My impression is that there are several levels of complexity and that each level provides a sort of primeval soup for the next level up, if that makes sense.
Anyhow, a sort of answer is that from any model I would look for the potential to produce objects that could serve as the basic building blocks for a higher-level model.
This last sentence seems to be a recurring motif among the players. There seems to be a consensus that such levels should be part of a model. If you now start with some bottom level, how do you proceed from one level to the next? That is indeed a problem.
Some believe, for example, that if you put electrons and other elementary particles together in the right combination you will end up with a human; like a watchmaker making a watch. The watch obeys the same laws as its parts, and in this spirit a human obeys the same laws as, say, an electron. That line of thinking almost inevitably leads to the ‘free will vs. determinism‘ controversy.
Others believe that there is no evidence for the above to hold. There is essentially only one known process to ‘produce’ humans: the natural one. This process is radically different from ‘putting pieces together’, and it is not obvious that the laws governing the pieces hold for the whole. There is, let’s call it, a ‘barrier’ separating the level of humans from the level of electrons. This is hard to imagine for a reductionist.
If we take the existence of ‘levels of complexity’ as a given, then the decisive question currently seems to be:
I am very excited to see what answers they come up with …
The n-Category Café is a place for creative and inspiring discussions. Admittedly, I do not understand enough category theory to contribute, and therefore I was really happy when, in the recent discussion about Feferman set theory, I saw the following two quotes.
At the end of his remark, John Baez states:
I don’t know any good way to deal with Russell’s paradox, but I believe there is one. I believe someday we’ll find it. But I don’t think we’ll find it by trying to ‘weasel out’ of the problem. Somehow we need to think a new way — a clearer way, that makes the paradox just disappear.
Tom Leinster answers:
I’ve pretty much entirely moved over to a point of view in which it doesn’t make sense to ask of any two sets A and B the question ‘is A∈B?’ For example, I don’t think it makes sense to ask whether ∅∈ℚ. And once you’ve adopted this point of view, the ‘paradox’ dissolves.
These two statements are crystal clear. John Baez is not satisfied with the state of the discussion of the paradox, and Tom Leinster answers this by disallowing certain questions. His idea, in a way, exemplifies how we usually deal with this problem: we disallow things.
What if we could go the other way, as with the introduction of the complex numbers? Instead of disallowing the square root of a negative number, we take it and see how far we get.
In our situation this means we allow the question ‘is A∈B?’ for all objects under consideration and then rapidly run into Russell’s paradox (and some others). From this paradox we learn that there are sets which simultaneously do and do not contain themselves. However:
That is only a problem in a two-valued logic!
Let us assume there are truth values other than ‘true’ and ‘false’; in particular, that there is at least one truth value equal to its own negation. In this sense, negation remains an involution on the set of truth values, as we are used to, but now negation has a fixed point. This fixed point is something new, like the square root of -1. In my opinion, that is what we learn from the paradox: the logic of mathematics has to be many-valued.
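One concrete way to realize such a fixed point, offered only as an illustration (this is essentially the negation table of Kleene’s strong three-valued logic, not a claim about what the ‘right’ logic is): add a third truth value \(u\) with
\[ \neg t = f, \qquad \neg f = t, \qquad \neg u = u. \]
Negation is still an involution, but now it has the fixed point \(u\). The defining condition of the Russell set, \(R \in R \Leftrightarrow \neg(R \in R)\), has no solution over \(\{t, f\}\), but it is solved by assigning \(R \in R\) the value \(u\), since \(u = \neg u\).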
Unfortunately, this innocent-looking assumption/observation leads us far away from anything we are used to knowing about sets. Circularity starts to surface everywhere, and instead of calling it a ‘bug’ we are forced to call it a ‘feature’ (if we want to proceed along this way).
I will elaborate on this in further posts here, albeit in small steps, because the consequences are really mind-boggling.
Hold your breath …
Tim Gowers proposes a polymath project to answer the question about the origin of life scientifically (mathematically).
… and please start to breathe again.
That is incredibly ambitious and fits perfectly into the scope of this blog. Go there and contribute!
I am way too pessimistic to do it myself. Is the axiomatic ideology we follow able to lead to insights concerning this question? I really doubt it. So far, mathematicians have been remarkably unsuccessful in even defining notions like ‘life’ and ‘intelligence’. We were so unsuccessful that it might be allowed to ask whether a definition is actually possible.
Mathematicians introduce notions by description (like set and element) or by definition (like function and number). ‘Life’, like many other notions, does not fit into this approach. It is so hard to grasp the meaning of ‘life’ by definitions and descriptions that one is led to the idea that there are probably other ways to introduce notions in a precise sense.
Indeed, you can think of processes for introducing notions which are neither definitions nor descriptions. Unfortunately, once you start to do that, you have to give up a lot of what makes us feel comfortable with contemporary mathematics. That sounds vague, but I have a feeling that I will catch up on writing this stuff here in the blog, maybe even years before Tim finishes the above project.
… just kidding …
The plan now is to elaborate a bit further on scientific laws and then to explain how to ’emerge’ new notions from old ones.
When I asked my guest for his opinion on this little booklet edited by John Brockman, he actually started to smile and answered something like: ‘These smart scientists have a lot of dangerous ideas, and have you noticed that all of their ideas are dangerous for other people? Scientists surely must be very caring.’ By then I was already used to his interesting approach to the concept of humor and decided to ignore the last remark. Instead I wanted to know whether he had his own dangerous idea, preferably one dangerous for science.
He told me that the single most dangerous idea for science is:
There is no proof.
The danger of this idea, according to him, does not stem from the fact that it might be true. The problem is that you, as a scientist, cannot argue scientifically against it.
I was not able to follow. The Pythagorean theorem is proved! There are dozens of proofs, some of them hundreds or even thousands of years old. I have checked a few myself, and all experts agree on the truth of this theorem. After all, this is not the classification of the finite simple groups, and even that is settled. At least I hope so. How can one seriously think that there is no proof?
In a deliberately patient-sounding voice he explained again that the possible truth of the idea is not the problem, but rather our wrong understanding of what science actually is. Even now, as I am writing down this post, I have no idea what he was talking about. My face must have expressed my ignorance, and he began a monologue on proofs. In essence he claims that, to deserve its name,
a proof has to prove that it is a proof.
Otherwise, it is obviously not a proof, but only some consensus among the participating players. You can call it peer review if you like, but don’t call it a proof.
I am completely lost. No mathematician has ever proved that his proof is indeed a proof. That makes no sense! Or does it? And if it does, then it is surely impossible! What do you think?