Fibonacci Numbers Everywhere?

September 19, 2011

Let a_n be a sequence with values in \mathbb{N} and let

\alpha(n):=\min\{m\in\mathbb{N}: n|a_m\}

be the first index m such that n divides a_m, whenever such an index exists (otherwise \alpha(n) stays undefined). Next define a sequence by the following recurrence:



a_{x+2}:= 1+\sum_{n:\alpha(n)\leq x} \varphi(n) \left\lfloor\frac{x}{\alpha(n)}\right\rfloor

for 0\leq x\in\mathbb{N}, together with the initial value a_1:=1. Here, \varphi denotes Euler’s totient function. Note that the sum extends over all n with \alpha(n)\leq x; for these n the value \alpha(n) is defined, and the sum is finite.

What is a_n? As it turns out, and as you might have guessed from the title, a_n is the well-known Fibonacci sequence.

Computational example. Assume we knew that the a_n were the Fibonacci numbers for 1\leq n\leq 5. Then a_7 = 1+\sum_{n\in\{1, 2, 3, 5\}}\varphi(n)\left\lfloor\frac{5}{\alpha(n)}\right\rfloor, where the sum is extended over all divisors of a_1, \ldots , a_5 (i.e. the divisors of 1, 1, 2, 3, 5). Thus

a_7 = 1+1 \left\lfloor\frac{5}{1}\right\rfloor+1 \left\lfloor\frac{5}{3}\right\rfloor+2\left\lfloor\frac{5}{4}\right\rfloor+4 \left\lfloor\frac{5}{5}\right\rfloor = 13.

How does one prove that the a_n are indeed the Fibonacci numbers? Actually, there is a nice and elementary double-counting proof, and even after scribbling it down I must confess that, for me, the Fibonacci numbers here somehow pop out of nowhere.
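
The recurrence is easy to check numerically. Here is a short Python sketch (the seed value a_1 = 1 is an assumption the recurrence itself does not supply, in line with the computational example above):

```python
def phi(n):
    """Euler's totient function via trial division."""
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

def sequence(count):
    """Generate a_1, ..., a_count from the recurrence
    a_{x+2} = 1 + sum over n with alpha(n) <= x of phi(n) * floor(x / alpha(n)),
    where alpha(n) is the first index m with n | a_m and a_1 = 1 is the seed."""
    a = [1]
    alpha = {}
    while len(a) < count:
        # record alpha(n) for every divisor n of the terms known so far
        for idx, value in enumerate(a, start=1):
            for n in range(1, value + 1):
                if value % n == 0:
                    alpha.setdefault(n, idx)
        x = len(a) - 1  # the next term is a_{x+2}
        a.append(1 + sum(phi(n) * (x // m) for n, m in alpha.items() if m <= x))
    return a

print(sequence(10))  # → [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```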

Upcoming Courses

July 28, 2011

Starting in October I will give Mathematics for Engineers II, which is basically Ordinary Differential Equations and Real Analysis (of functions of several variables). The course will be in German. Since the material is standard I won’t bother translating it and bringing it here.

The other new course will be Metamathematics. I have not yet decided on the exact content.

Treating Markets Mechanically – An Example

April 27, 2011

The aim of this post is to provide, for further reference, the transition from time-independence to time-dependence within a simple economic model.

For that purpose we consider a single consumer-worker. This agent obeys a time constraint on labour L and free time F

L + F = 1.

We introduce a utility function U as

U = C^\alpha F^{1-\alpha}

for a given 0< \alpha<1. There is a budget constraint W given as price p times consumption C equals wage rate w times labour L:

p C = w L.

The agent now maximizes U subject to p C + w F = w. Let us solve that. The Lagrange conditions yield

-\frac{\partial U}{\partial C} = \lambda \frac{\partial W}{\partial C}


-\frac{\partial U}{\partial F} = \lambda \frac{\partial W}{\partial F}

with constraint

W = p C + w F -w = 0.


Dividing the first equation by the second eliminates \lambda and gives

\frac{p}{w}=\frac{\alpha C^{\alpha-1} F^{1-\alpha}}{(1-\alpha) C^\alpha F^{-\alpha}}=\frac{\alpha F}{(1-\alpha) C}.

Solving that for C and plugging it into the budget constraint yields

\frac{\alpha}{(1-\alpha)}w F+w F -w =0.

Solving this for F and again using the budget constraint shows that

F = 1-\alpha \quad\textnormal{ and }\quad C = \alpha\frac{w}{p}

solves the maximization problem. So far there is no time evolution. To introduce such dynamics we mimic mechanics and set C=C(p,\dot{p}): demand is a function of price and its derivative. To economists the \dot{p} comes out of nowhere, especially since it is not at all obvious how to define the derivative of a price evolution. For now it has to suffice that eventually we shall understand the derivative in a distributional sense; until then we treat it as a formal parameter.
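
The static solution can be sanity-checked by brute force. A Python sketch (the parameter values are arbitrary test inputs, not part of the model):

```python
def static_optimum(p, w, alpha, steps=100_000):
    """Maximize U = C**alpha * F**(1 - alpha) subject to p*C + w*F = w
    by scanning F in (0, 1); the constraint gives C = w*(1 - F)/p."""
    best_F, best_U = None, -1.0
    for k in range(1, steps):
        F = k / steps
        C = w * (1 - F) / p
        U = C**alpha * F**(1 - alpha)
        if U > best_U:
            best_F, best_U = F, U
    return best_F, w * (1 - best_F) / p

F, C = static_optimum(p=2.0, w=3.0, alpha=0.4)
# F is close to 1 - alpha = 0.6 and C is close to alpha*w/p = 0.6
```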

The time-dependent utility function for the consumer-worker is

U = C^\alpha F^{1-\alpha}r^t

for a discount rate 0<r\leq1. The agent now maximizes \int_0^T U d t under the constraint p C + w F = w.

We make the following assumption due to S. Smale (for excess demand):

\dot{p}=C \textnormal{ and }\dot{w}=L.

Euler-Lagrange equations yield

\frac{d}{d t}\frac{\partial U}{\partial C} = \lambda \frac{\partial W}{\partial C}


\frac{d}{d t}\frac{\partial U}{\partial L} = \lambda \frac{\partial W}{\partial L}

with constraint

W = p C - w L = 0.

For the Lagrange multipliers we get

\lambda =-p^{-1}\alpha r^t C^{\alpha-2}F^{-\alpha}((\alpha-1)(C\dot{F}-F\dot{C})-C F \log r)


\lambda =w^{-1}(\alpha-1) r^t C^{\alpha-1}F^{-1-\alpha}(\alpha(C\dot{F}-F\dot{C})-C F \log r).

Equating, plugging in the constraint and dividing by C F yields

\frac{\dot{C}}{C}-\frac{\dot{F}}{F}= \frac{(\alpha w - p C)\log r}{\alpha(1-\alpha)w}.

First we discuss the case r=1. Then

\frac{\dot{C}}{C}-\frac{\dot{F}}{F}=0

and thus (consider \frac{d}{d t}\ln C - \frac{d}{d t}\ln F) there is a positive constant K such that C = K F; because of the budget constraint we get

F = \frac{w}{p K + w}, C = \frac{w K }{p K + w}.

The constant K(p,w,\alpha) is unique and maximizes \int_0^T C^\alpha F^{1-\alpha} ds = \int_0^T \frac{w K^\alpha}{p K + w} ds. In equilibrium we have p = p^* and w = w^*. Maximizing over K yields \frac{d}{d K}\frac{w^* K^\alpha}{p^* K + w^*}= 0 and thus K=\frac{\alpha w^*}{(1-\alpha) p^*}. Now

F = 1-\alpha


C = \alpha \frac{w^*}{p^*}.
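
The closed form for K can be checked with a simple grid search (a sketch; the test values for p^*, w^* and \alpha are arbitrary):

```python
def best_K(p, w, alpha, K_max=5.0, steps=100_000):
    """Scan K > 0 to maximize w*K**alpha / (p*K + w)."""
    best_K_val, best_val = None, -1.0
    for i in range(1, steps):
        K = K_max * i / steps
        value = w * K**alpha / (p * K + w)
        if value > best_val:
            best_K_val, best_val = K, value
    return best_K_val

K = best_K(p=1.0, w=1.0, alpha=0.5)
# close to the closed form alpha*w/((1 - alpha)*p) = 1.0
```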

The case r<1. In equilibrium, \dot{C}=\dot{F}=0, and we immediately obtain C=\alpha \frac{w^*}{p^*}. Plugging this into the budget constraint yields F=1-\alpha.

Interestingly enough, we get an equilibrium equal to the solution of the time-independent model. How justified is S. Smale’s assumption C=\dot{p}? Economists often use linear demand theory and set C=T-p. The two approaches seem incompatible, and both have a drawback: when you rescale prices (e.g. by introducing a new currency) demand should stay the same, but this fails in either setting. One needs currency-dependent constants that scale accordingly to fix it. One possibility to avoid this is C=\frac{\dot{p}}{p}. As usual, more options do not improve clarity, and calculating the whole model in the general case, i.e. C=C(p,\dot{p}), is not totally conclusive either. For a solution of the Euler-Lagrange equations one obtains, under moderate assumptions on the partial derivatives, that

\alpha (1-\alpha)w\frac{\partial C}{\partial \dot{p}}\left(\frac{\dot{C}}{C}-\frac{\dot{F}}{F}\right)  =  (\alpha w - p C)\left(\frac{\partial C}{\partial \dot{p}}\log r + \frac{d}{d t}\frac{\partial C}{\partial \dot{p}}-\frac{\partial C}{\partial p}\right).

Linear demand has \frac{\partial C}{\partial \dot{p}}=0 and thus \alpha w - p C=0. The budget constraint then implies that F=1-\alpha is constant. We can thus safely exclude linear demand from our considerations. The above equation cannot distinguish between Smale’s assumption and C=\frac{\dot{p}}{p}. However, hidden in the technical assumptions there seems to be some advantage to Smale’s approach. It remains to clarify the price-scaling issue.

Polymath5 – A somewhat surprising observation

April 5, 2011

In this post, Kevin O’Bryant considered “least nonzero digit”-type examples \lambda_{m, S}. Encouraged by his remark that there are only very few possibilities, i.e. feasible sets S, for each modulus m, I did a complete enumeration up to modulus 200. The goal was (and still is) to find a slowly growing example, possibly beating \mu_{3}, the current record holder (please read my last post for more information).

Much to my surprise there are certain “magic” numbers with comparatively many feasible sets S. What follows is a table (modulus m: number of sets S) for those m with more than 4 sets:

(39: 24)
(87: 560)
(95: 3,360)
(111: 5,280)
(119: 42,772)
(135: 5,940)
(145: 11,440)
(159: 64,064)
(183: 13,728)

All other m < 200 have 4 or fewer feasible sets.

For example, there is no set S for m=105. This can be seen as follows: logarithmic discrepancy of \lambda_{105} implies \sum_{i=1}^{d-1} \lambda_{105}(i)=0 for d|105. Since 3,5,7|105, these conditions together with complete multiplicativity force \lambda_{105}(2)=\lambda_{105}(3)=\lambda_{105}(5)=-1. Now 45^2\equiv 30 \mod 105, so 30 is a quadratic residue mod 105 and hence \lambda_{105}(30)=1. But complete multiplicativity gives \lambda_{105}(30)=\lambda_{105}(2)\lambda_{105}(3)\lambda_{105}(5)=-1, a contradiction.
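
The deduction can be traced in a few lines of Python (a sketch of the argument above, not the enumeration code used for the search):

```python
def lambda_105_obstruction():
    """Deduce the lambda_{105} values forced by sum_{i=1}^{d-1} lam(i) = 0
    for d | 105 and complete multiplicativity, then exhibit the clash."""
    lam = {1: 1}
    lam[2] = -lam[1]                                        # d = 3
    lam[4] = lam[2] * lam[2]
    lam[3] = -(lam[1] + lam[2] + lam[4])                    # d = 5
    lam[6] = lam[2] * lam[3]
    lam[5] = -(lam[1] + lam[2] + lam[3] + lam[4] + lam[6])  # d = 7
    assert 45 * 45 % 105 == 30       # 30 is a quadratic residue mod 105
    return lam[2] * lam[3] * lam[5]  # complete multiplicativity: lambda(30)

print(lambda_105_obstruction())  # → -1, but a quadratic residue needs +1
```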

What makes these numbers “special”? Why is there this huge jump from 4 to “many”? Is this just one more “random” number-theoretic artifact, or can we understand it by considering how the m-examples are constructed from the d|m-examples?

I do not know the answers, but let me close with something I do know. In my last post I gave a proof that any \lambda_{m,S} that grows slower than \mu_3 must have m>729. Subsequent computer searches have increased that bound to 1000.

Polymath5 – A collection of results concerning the completely multiplicative case

March 23, 2011

In this post I have collected results of the fifth Polymath project on the Erdös discrepancy problem. With hundreds and maybe even thousands of posts, I have decided to restrict this to the (maybe) simplest non-trivial special case, namely completely multiplicative functions with values in {\{\pm 1\}}. That has some justification, since there are results reducing the problem to the completely multiplicative case (albeit with values in {S_1}). Furthermore, my focus is on elementary examples related to the problem and how far one can get with textbook methods. In that sense this can be seen as a beginner’s guide, with the quoted results being nothing more than exercises. I have arranged the results within the following sections:

  • Statement of the problem
  • Examples related to the problem
  • Necessary conditions for bounded discrepancy
  • Discrepancy results

What fascinates me about the problem is that it can be stated without many technicalities and that a solution seems to be within reach due to recent results in number theory.

Discrepancy Problem. Find or prove the non-existence of a function {f:\mathbb{N}\rightarrow\{\pm 1\}} with the following properties:

  • {f} is completely multiplicative, i.e. {f(m n)=f(m) f(n) } for all {m,n\in\mathbb{N}}.
  • {f} has bounded partial sums (bounded discrepancy), i.e. there is a {C>0} such that {\left|\sum_{n\leq x}f(n)\right|\leq C} for all {x\in\mathbb{R}}.

Let me fix some notation before I present examples related to the above question. In the following {f} denotes a completely multiplicative function with values in {\{\pm 1\}}, {p,q} denote primes and {s=\sigma + i t} is a complex number with real part {\sigma} and imaginary part {t}.

Examples related to the problem.

  • {f(n)=1} for all {n\in\mathbb{N}} is completely multiplicative and has unbounded discrepancy {\sum_{n\leq x}1=\lfloor x\rfloor}.
  • The Liouville function {\lambda}, defined by {\lambda(p):=-1} for all primes {p}.
  • Dirichlet characters of modulus {k} are completely multiplicative with values in {\{0\}\cup\{e^{2 \pi i n / k}:1\leq n \leq k\}}. Since Dirichlet characters assume the value {0} they are strictly speaking not in the focus of this post. However, they can be used to construct {\{\pm 1\}}-valued examples (cf. the next example) and they play a central role in the discussions.
  • The functions \displaystyle \mu_p(q):=\left\{ {(\frac{q}{p}) \textnormal{ if } q\not= p} \atop -1 {\textnormal{ if } q=p}\right. and \displaystyle \lambda_p(q):=\left\{ {(\frac{q}{p}) \textnormal{ if } q\not= p} \atop 1 {\textnormal{ if } q=p}\right. with {(\frac{q}{p})} being the quadratic Dirichlet character mod {p} (Legendre symbol) and {p} an odd prime.
  • To generalize the last example fix {m \in \mathbb{N}} and a set {S\subseteq \{1,\ldots, m-1\}}. Define {\lambda_{m,S}(n):=1} if the last non-zero digit of {n} in its {m}-ary representation is in {S}; in all other cases {\lambda_{m,S}(n):=-1}. Immediately from the definition we find \displaystyle \lambda_{m,S}(n+l)=\lambda_{m,S}(n) if {\textnormal{ord}_m l>\textnormal{ord}_m n}. To not obtain unbounded discrepancy it is necessary that {\sum_{i=1}^{m/d-1}\lambda_{m,S}(i)=0} for every divisor {d} of {m} with {d<m}. Moreover, we find {\sum_{i=1}^{n}\lambda_{m,S}(i)=\sum_{j=0}^l \sum_{i=1}^{d_j}\lambda_{m,S}(i)} (with {d_0,\ldots,d_l} the digits of {n} in its {m}-ary representation), which implies logarithmic growth. Since {\lambda_{m,S}} is completely multiplicative we have {\lambda_{m,S}(n)=1 } if n is a quadratic residue mod {m}, implying {\lambda_{p,S}=\lambda_p} for {p} an odd prime. We construct some {\lambda_{m,S}} for small composite {m}:
    • For {m=9} the quadratic residues give {\{1,4,7\} \subseteq S} and thus, depending on whether we include {3}, either {\lambda_{9,\{1,3,4,7\}}=\lambda_3} or {\lambda_{9,\{1,4,6,7\}}=\mu_3}.
    • For {m=15} we get {\{1,4,6,9,10\} \subseteq S}. Since {\sum_{i=1}^4 \lambda_{15,S}(i)=0} for all {S} we cannot include {2} and {3}. Because {10} is included this forces {\lambda_{15,S}(5)=-1} for all {S}. Now using complete multiplicativity and {\sum_{i=1}^{14} \lambda_{15,S}(i)=0} we get the four cases: {\lambda_{15,\{1,4,6,9,10,11,14\}}}, {\lambda_{15,\{1,4,6,7,9,10,11\}}}, {\lambda_{15,\{1,4,6,9,10,13,14\}}} and {\lambda_{15,\{1,4,6,7,9,10,13\}}}.
    • For {m=21} we have that {9^2\equiv 18 \mod 21}. Thus {1=\lambda_{21,S}(18)=\lambda_{21,S}(2)}. Since {3|21} we have {\sum_{i=1}^2\lambda_{21,S}(i)=0}, a contradiction.
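
The last-non-zero-digit construction is straightforward to implement. A Python sketch that checks the zero-sum conditions and the logarithmic growth for one of the four m=15 cases (the bound 8 is my own crude estimate from the four-digit argument in the comment):

```python
def lam(n, m, S):
    """lambda_{m,S}(n): +1 iff the last non-zero digit of n in base m lies in S."""
    while n % m == 0:
        n //= m
    return 1 if n % m in S else -1

def max_partial_sum(m, S, limit):
    """max_{1 <= n <= limit} |sum_{i <= n} lambda_{m,S}(i)|."""
    total, worst = 0, 0
    for n in range(1, limit + 1):
        total += lam(n, m, S)
        worst = max(worst, abs(total))
    return worst

m, S = 15, {1, 4, 6, 9, 10, 11, 14}
# zero-sum conditions: sum_{i=1}^{e-1} lambda_{m,S}(i) = 0 for divisors e > 1 of m
for e in (3, 5, 15):
    assert sum(lam(i, m, S) for i in range(1, e)) == 0
# logarithmic growth: n <= 15**3 has at most 4 base-15 digits, and each digit
# block contributes at most max_{n<15} |prefix sum| = 2, so discrepancy <= 8
assert max_partial_sum(m, S, 15**3) <= 8
```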

Necessary conditions for bounded discrepancy.

  • If {f} has bounded partial sums, then the Dirichlet series {\sum_{n=1}^\infty \frac{f(n)}{n^s}} converges for {\sigma>0}. Proof idea: Integration by parts.
  • If {f} has bounded partial sums, then {\sum_{p\leq x} \frac{f(p)}{p}=A+O\left(\frac{1}{\log{x}}\right)}. Proof idea: One first establishes {\sum_{p\leq x} \frac{f(p)\log{p}}{p}=O(1)} as in the proof of Dirichlet’s theorem on primes in arithmetic progressions. This estimate is then used in the textbook proof of the growth of {\sum_{p\leq x} \frac{1}{p}}.
  • If {f} has bounded partial sums, then for infinitely many primes {p} one has {f(p)=1} and for infinitely many primes {q} one has {f(q)=-1}. Proof idea: Otherwise one would get a contradiction to the standard result on the growth of {\sum_{p\leq x} \frac{1}{p}}.
  • With not so elementary methods one can prove results ‘in the spirit of’: Let {f:\mathbb{N}\rightarrow S_1} have bounded partial sums and let {\chi} be a non-principal Dirichlet character. Then {\sum_{p}\frac{|1-f(p)\overline{\chi(p)}|}{p}=\infty}. Here more serious number theory kicks in, and this is probably where most are currently heading.

Discrepancy results.

  • Computer experiments (no link found) show that there are {f} with {\left|\sum_{n\leq x}f(n)\right|\leq 2} for all {x\leq 246}, and that no such sequence exists for {x=247}. There is no traditional proof of this, not even for {x=\infty}.
  • Liouville’s function does not have bounded partial sums, since its Dirichlet series equals {\frac{\zeta(2 s)}{\zeta(s)}} and thus has a singularity at {s=\frac{1}{2}}. Alternatively, one can proceed as in the proof of Dirichlet’s theorem on primes in arithmetic progressions and apply Dirichlet’s hyperbola method to {A(x):= \sum_{n\leq x}\frac{1*\lambda(n)}{\sqrt{n}}}, ultimately showing that under the assumption of bounded partial sums {A(x)} grows like {\sqrt{x}}. This is a contradiction, since {A(x)} grows logarithmically. Showing that the discrepancy of the Liouville function is {O(n^{1/2+\varepsilon})} for all {\varepsilon>0} is equivalent to the Riemann hypothesis.
  • For primitive Dirichlet characters {\chi} mod {k} one has {\max_{n\leq x}\left|\sum_{i\leq n}\chi(i) \right|\geq \frac{\sqrt{k}}{2\pi}}. Proof idea: Apply partial summation to the associated Gauss sum {G(1,\chi):=\sum_{1\leq m\leq k} \chi(m) e^{2 \pi i m /k}} and use {|G(1,\chi)|^2=k}.
  • We have {\mu_3(n) = \lambda_{9,\{1,4,6,7\}}(n)} and  \displaystyle \left|\sum_{i=1}^n\mu_3(i)\right|\leq\log_9(8n+1) with equality holding only for {n=\frac{9^m-1}{8}} and {m\in\mathbb{N}}.
  • {\mu_3} is optimal in the following sense: \displaystyle \limsup_{x\rightarrow\infty}\frac{\left|\sum_{n\leq x}\mu_p(n)\right|}{\log{x}}\geq \frac{1}{2 \log{3}} with equality holding only for {p=3}. Proof idea: One constructs a sequence {x_i} such that {\lim_{x_i\rightarrow\infty}\frac{\left|\sum_{n\leq x_i}\mu_p(n)\right|}{\log{x_i}}\geq \frac{M_p-m_p}{2\log{p}}} with {M_p:=\max_{1\leq n< p}\sum_{1\leq i\leq n} (\frac{i}{p})} and {m_p:=\min_{1\leq n< p}\sum_{1\leq i\leq n} (\frac{i}{p})}. With the lower bound for the discrepancy of primitive Dirichlet characters one can eliminate all but finitely many cases. These can then be checked by computer. A similar argument shows that the sums of {\lambda_p} grow faster than {\mu_3}.
  • Computer experiments so far did not find a {\lambda_{m, S}} that grows slower than {\mu_3}. For a given {\lambda_{m,S}} let {M_{m,S}:=\max_{1\leq n< m}\left|\sum_{1\leq i\leq n} \lambda_{m,S}(i)\right|}. Then {m=39} with {S=} \displaystyle \{1,3,4,9,10,12,13,14,16,19,22,25,27,29,30,31,34,35,36\} is the largest {m} such that there is an {S} with {M_{m,S}=2}. Proof idea: Do a computer search among all completely multiplicative discrepancy 2 functions with domain {\{1,\ldots,m-1\}} ({m} must be less than 247). A {\lambda_{m, S}} beating {\mu_3} thus has at least {M_{m,S}=3} and thus {\frac{1}{\log 9}>\frac{3}{\log m}} implying {m>729}.
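
The m=39 record can be verified directly. A small Python sketch (note that for {1\leq n<m} the last non-zero digit of n in base m is n itself, so only membership in S matters):

```python
S = {1, 3, 4, 9, 10, 12, 13, 14, 16, 19, 22, 25, 27, 29, 30, 31, 34, 35, 36}

def M(m, S):
    """M_{m,S} = max_{1 <= n < m} |sum_{i <= n} lambda_{m,S}(i)|; for i < m
    the value lambda_{m,S}(i) is +1 exactly when i lies in S."""
    total, worst = 0, 0
    for i in range(1, m):
        total += 1 if i in S else -1
        worst = max(worst, abs(total))
    return worst

print(M(39, S))  # → 2
```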

EDP and some first steps in number theory

February 12, 2011

Last year I spent some time thinking about the Erdös Discrepancy Problem (EDP). My interest was sparked by Tim Gowers’s Polymath project and two comments of Terence Tao (1, 2) on the special case of completely multiplicative functions. Let me state this version of EDP.

Problem. Prove or disprove the existence of a function {f:\mathbb{N}\rightarrow\{\pm 1\}} with the following properties:

  • {f} is completely multiplicative, i.e. {f(m n)=f(m) f(n) } for all {m,n\in\mathbb{N}}.
  • {f} has bounded partial sums, i.e. there is a {C>0} such that {\left|\sum_{n\leq x}f(n)\right|\leq C} for all {x\in\mathbb{R}}.

In his comments Terence Tao describes how one can use ideas of Dirichlet’s proof (of the theorem on primes in arithmetic progressions) to get, first, an elementary proof of the unboundedness of the summatory Liouville function and, second, a necessary condition for bounded discrepancy using properties of {\sum_{p\leq x}f(p)}. While the ‘positivity’ part is straightforward, I had some trouble fully digesting the complex analysis part at the end of his second comment (bounded discrepancy implies {\sum_{p\leq x} f(p)=o\left(x/ \log x\right)}). However, all this can be achieved with textbook arguments, and I took it as an opportunity to learn some (analytic) number theory. In what follows I elaborate some ideas on the analytic number theory approach and (in an upcoming post) on the positivity approach (Dirichlet’s idea).

Analytic Method. To show where we can apply complex analysis let me first repeat a standard argument. Define { \psi_f(x):=\sum_{n\leq  x}f(n)\Lambda(n) } with {\Lambda} being the von Mangoldt function. Plugging in the definition yields

\displaystyle \psi_f(x)=\sum_{p\leq x}\sum_{\alpha\geq 1 \atop p^\alpha\leq x} f(p^\alpha)\log p=\sum_{p\leq x}\log p\sum_{\alpha=1}^{\left\lfloor\log_p x\right\rfloor} f(p)^\alpha.

Separating primes with {f(p)=1} from primes with {f(p)=-1} we find (using {\{x\}:=x-\lfloor  x\rfloor})

\displaystyle   \begin{array}{rcl}  \psi_f(x) & = & \sum_{p\leq x\atop  f(p)=1}\log p\left\lfloor\log_p x\right\rfloor - \sum_{p\leq x,  \left\lfloor\log_p x\right\rfloor \textnormal{ odd}\atop f(p)=-1}\log  p\\ & = & \log x \sum_{p\leq x\atop f(p)=1}1 - \sum_{p\leq  x\atop f(p)=1}\log p\left\{\log_p x\right\} \\ & & \hspace{1cm}-  \sum_{p\leq x, \left\lfloor\log_p x\right\rfloor \textnormal{ odd}\atop  f(p)=-1}\log p. \end{array}

Setting {f(n)=1} for all {n} we find that {\psi(x)= \pi(x)\log  x-\sum_{p\leq x}\log p\left\{\log_p x\right\}} and since {\pi(x)\frac{\log x}{x}=\frac{\psi(x)}{x}+o(1)} we find for all {f}

\displaystyle  \begin{array}{rcl}  0\leq\sum_{2<p\leq x\atop  f(p)=1}\log p\left\{\log_p x\right\} & \leq & \sum_{2<p\leq  x}\log p\left\{\log_p x\right\}\\ & = & -\log 2\left\{\log_2  x\right\}+o(x). \end{array}

Defining {\pi_{f,1}(x):=\sum_{p\leq x\atop f(p)=1}1} and {\vartheta_{f,-1}(x):=-\sum_{p\leq x\atop f(p)=-1}\log p} we arrive at

\displaystyle \psi_f(x)=\pi_{f,1}(x)\log x +\sum_{1\leq i \textnormal{ odd}} \left(\vartheta_{f,-1}(x^\frac{1}{i})- \vartheta_{f,-1}(x^\frac{1}{i+1})\right)+o(x).

The above argument indicates how to proceed. Using analytic number theory similar to the proof of the prime number theorem (PNT) one gets information on {\psi_f(x)} for general {f}. It might even be possible to prove that {f} having bounded partial sums implies {\psi_f(x)=o(x)}, although I have not yet checked the details. If true, we would get a nice analog to the PNT in the (still maybe empty) case of bounded partial sums.
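
Although nothing deep follows from it, \psi_f is easy to compute for moderate x. A Python sketch (the sieve bound and the two example choices of f are my own, for illustration):

```python
from math import log

def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [p for p, is_prime in enumerate(sieve) if is_prime]

def psi_f(x, f):
    """psi_f(x) = sum over prime powers p^a <= x of f(p)^a * log p,
    where f is given by its values on the primes."""
    total = 0.0
    for p in primes_up_to(x):
        a, power = 1, p
        while power <= x:
            total += f(p) ** a * log(p)
            a += 1
            power *= p
    return total

# f = 1 recovers the Chebyshev function: psi(10) = log 2520
print(psi_f(10, lambda p: 1))
# the Liouville function has f(p) = -1 for all primes p
print(psi_f(1000, lambda p: -1))
```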

Conjecture. If {f} has bounded partial sums, then

\displaystyle  \pi_{f,1}(x)\frac{\log x}{x} =-\frac{1}{x}\sum_{1\leq i  \textnormal{ odd}} \left(\vartheta_{f,-1}(x^\frac{1}{i})-  \vartheta_{f,-1}(x^\frac{1}{i+1})\right)+o(1).


January 13, 2011

Next week I am halfway through … and still alive.

I like to work

October 1, 2010

Teaching eight mathematics courses (quasi-)simultaneously is just a matter of organization. At least that is what I am currently trying to prove. Meanwhile I am learning a lot about analytic number theory and Erdös’s discrepancy conjecture.

Fürstenberg’s proof of the infinitude of primes revisited

August 6, 2010

Warning to teaching staff: summing infinite series of positive (>0) integers is difficult, but possible. For example, \sum_{n=0}^\infty n \cdot n!=-1 is not a mistake and students must get full marks. Let me show you why.

A couple of months ago I was hoping to ‘cheat’ a proof of the Erdös discrepancy conjecture by using a variant of an idea from Fürstenberg’s proof of the infinitude of primes. Remember, Fürstenberg considered the integers with a new topology, whose open sets are generated by the arithmetic progressions \{a+n b: n\in\mathbb{Z}\} for a\in\mathbb{Z} and b>0.

This topology is metrizable. There are a couple of hand-waving arguments for what such a metric could look like. However, as far as I am aware, there is so far no neat description in the literature. A couple of days ago, R. Lovas and I. Mezö published a fairly straightforward proof that d(n,m)=\|n-m\| with \|n\|:=\frac{1}{\max\left\{k\in\mathbb{N}_{>0}:1|n,2|n,\ldots,k|n\right\}} induces the above topology.

Since \|n!\|\leq\frac{1}{n} the sequence (n!)_{n\in\mathbb{N}} converges to 0 in this topology. The partial sums of \sum_{n=0}^\infty n \cdot n! telescope, \sum_{k=0}^{n-1} k \cdot k!=n!-1, and since \|(n!-1)-(-1)\|=\|n!\|\rightarrow 0 we get \sum_{n=0}^\infty n \cdot n!=-1.

R. Lovas and I. Mezö have collected more such observations in their note. What they did not mention explicitly, but what I consider interesting, is that with the above metric the integers become an ultrametric space. Without loss of generality we assume \|m\|\leq\|n\|. Then 1,2,\ldots,\frac{1}{\|n\|} are all divisors of m and n, and thus they are divisors of m+n. Therefore \|m+n\|\leq \|n\|=\max\{\|m\|,\|n\|\}. The strong triangle inequality now follows: d(m,n)=\|m-l+l-n\|\leq \max\{d(m,l),d(l,n)\}.
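
These claims are easy to test numerically. A Python sketch of the norm, with the convention \|0\|=0 (an assumption on my part; the formula defines \|n\| only for n\neq 0):

```python
from math import factorial
import itertools

def norm(n):
    """||n|| = 1/max{k >= 1 : each of 1, ..., k divides n}; ||0|| = 0."""
    if n == 0:
        return 0.0
    n = abs(n)
    k = 1
    while n % (k + 1) == 0:
        k += 1
    return 1.0 / k

# n! -> 0 in this metric: ||n!|| <= 1/n
assert all(norm(factorial(n)) <= 1.0 / n for n in range(1, 12))

# telescoping partial sums: sum_{k=0}^{n-1} k*k! = n! - 1, so the
# series sum n*n! converges to -1
for n in (5, 8, 11):
    partial = sum(k * factorial(k) for k in range(n))
    assert partial == factorial(n) - 1
    assert norm(partial - (-1)) == norm(factorial(n))

# the strong triangle inequality on a sample of integers
for a, b in itertools.product(range(-30, 31), repeat=2):
    assert norm(a + b) <= max(norm(a), norm(b)) + 1e-12
```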

Science and Responsibility – Love Parade 2010

August 2, 2010

Last week several hundred thousand young people from all over the world gathered in Duisburg (Germany) to celebrate this year’s Love Parade. 21 of them are now dead. Their families and the public are mourning. What remain are questions: why could this happen, and who is responsible?

Since this is a blog about science, I am not so much interested in the mayor, who refuses to step down since he wants to ‘clear things up’. I am also not interested in the organizer, who claimed 1,400,000 participants and, maybe after realizing that this would get him into trouble with his insurance, counted again to find just 250,000. Police and fire services also played some role, which is not my concern here.

I am interested in the security concept, which was certified by a professor from a nearby university. No names here, since this is an ongoing case and it is hard to get the facts. What is certain is that the organizer planned to get more than 250,000 people through a 40-meter-wide tunnel to the location. This tunnel was also planned to be the only exit from the location. At first glance that sounds crazy and … at second glance it still sounds crazy. How could someone certify such a security concept? Our scientist in question has a Ph.D. in theoretical physics, and his wiki entry contains some name dropping in the form of Nobel-Prize-winning collaborators. If he signs such a concept, then maybe because of some ‘deep’ insights stemming from his research on ‘transport and traffic’. However, this is buried in proprietary journals to which I have no access (as an unaffiliated random guy).

That leaves me with the publicly available information: a TV interview right after the catastrophe, with the event still going on. My rough translation (from memory) of the decisive passage: ‘… the behaviour of panicking people is hard to model …’, followed by some disgusting shifting of responsibility. In a further interview our scientist denies any responsibility at all, since he was not allowed to see the whole concept. This assertion is shockingly revealing.

My conclusion: As long as results are not publicly available, as long as concepts are signed without being fully seen, and as long as no responsibility is taken for mistakes, it is very hard to differentiate between a charlatan and a scientist.

Because of the omnipresence of this case in all media, science has lost a lot of reputation in Germany.