A while ago I was working through the $13$ times table for some boring reason, and I was in the kind of mood to find it really quite vexing that the first digits don’t go $1,2,3,4$. Furthermore, $400 \div 13 \approx 31$, so it takes a long time before you see a 4 at all, and that seemed really unfair.
I was being pretty unreasonable in my expectations of basic arithmetic, but I wasn’t completely brain-dead: I smelled an integer sequence! How about
$a(n)$ = least $k$ such that $k \times n$ starts with a $4$.
That’s not particularly interesting, and someone who comes across this sequence in the OEIS might think “why $4$?” So, I did a bit more thinking and came up with this:
$a(n)$ = least $k$ such that $\{ \text{first digit of } j \times n, \, 0 \leq j \leq k \} = \{ 0,1,2, \dots 9 \}$
I wrote a bit of Python, and in a few minutes I had some numbers:
$n$:    1  2  3  4  5  6  7  8  9  10 11 12 13
$a(n)$: 9  45 27 23 18 15 13 12 9  9  9  42 62
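My original script is long gone, but a minimal sketch of the kind of Python that produces these numbers might look like this (the function name is my own):

```python
def a(n):
    """Least k such that the first digits of 0*n, 1*n, ..., k*n
    between them cover all of 0-9 (the j=0 term contributes the 0)."""
    seen = set()
    k = 0
    while True:
        seen.add(int(str(k * n)[0]))  # first digit of the k-th multiple
        if seen == set(range(10)):
            return k
        k += 1

print([a(n) for n in range(1, 14)])
# [9, 45, 27, 23, 18, 15, 13, 12, 9, 9, 9, 42, 62]
```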
And hey, $13$ is a record-setter. I’m really beginning to dislike this number. Anyway, I searched the OEIS for my sequence and it wasn’t there, so I submitted it and it was duly accepted as A249067.
Along the way, OEIS editor Charles R Greathouse IV added this intriguing conjecture:
Conjecture: $a(n) \leq N$ for all $n$. Perhaps $N$ can be taken as $81$.
Why $81$? Maybe look at the graph produced automatically by the OEIS:
The record of $81$ is reached at $a(112)$. And at $a(1112)$. And $a(11112)$. That’s because these numbers are very slightly bigger than $\frac{1}{9} \times 10^m$: nine times $11\ldots12$ is just bigger than $99\ldots9$, i.e. it’s a number starting with a $1$, so it takes nine times nine steps down the times table before you see a multiple with $9$ as its first digit.
This pattern repeats at every power of $10$, and in fact every pattern in this sequence repeats (more or less) at every power of 10: this animated plot of the sequence with different horizontal scales shows that it’s self-similar:
(The fuzziness in the bigger plots is because each plot just takes a sample of points, and interpolates between them)
So the conjecture looks true, and this is my sequence, so I should prove it.
It isn’t surprising that this thing repeats when you multiply by $10$: we’re only looking at the first digit, and obviously the first digit of $n$ is the same as the first digit of $10n$. That doesn’t suffice as a proof of Charles Greathouse’s conjecture though: numbers which don’t end in a $0$ might do something unhelpful.
Fortunately, the day after I thought this sequence up was MathsJam night. I decided I’d set the Charles Grey pub’s brightest minds on the problem. I had a few ideas but I’m not particularly quick at putting thoughts together.
Ji proposed an application of the pigeonhole principle: if you look at the first two digits of the numbers you see in $n$’s times table, you can write out everything you might see in a $9 \times 10$ grid:
The $n$ times table will dance around this grid until all nine rows have been visited. The longest it can put this off is by visiting all 80 cells outside the final unvisited row first. If the process doesn’t visit the same cell twice before it hits every row, then the latest it can put off visiting the last row is the 81st iteration. So we need to show that you can’t visit the same spot twice before visiting each row once.
Unfortunately, that’s not true. The $12$ times table visits the ’12’ cell at $12 \times 1 = 12$ and again at $12 \times 10 = 120$, before all possible first digits have been seen.
So, we need another explanation.
Katie Steckles and the Manchester MathsJam crowd came up with an alternative explanation: if you can prove that $\left\lceil \frac{1}{9} \cdot 10^m \right\rceil$ (that is, $112$, $1112$, $\ldots$) takes 81 steps for all $m \geq 3$, then that’s the maximum: any $m$-digit number bigger than that will reach $9 \times 10^m$ in at most as many steps, and will certainly have seen all the other initial digits before then, while any $m$-digit number smaller than $\left\lceil \frac{1}{9} \cdot 10^m \right\rceil$ visits every first digit within its first 9 multiples.
There’s some evidence for this: the $m+3$-digit numbers that take 81 steps seem to be the ones between $11\ldots112$ and $112499\ldots99$.
I don’t know if there’s a clever way of showing that $\left\lceil \frac{1}{9} \cdot 10^m \right\rceil$ takes 81 steps, but would it convince you if I said that $112$ takes that long, and adding more $1$s in the middle can’t make it any worse? Anyway, that’s good enough for me.
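For what it’s worth, the claim is easy to check numerically for the first few members of the family, with a throwaway helper of my own (not part of the original post):

```python
def steps_to_see_all_digits(n):
    # least k such that the first digits of 0*n, ..., k*n cover 0-9
    seen, k = set(), 0
    while seen != set(range(10)):
        seen.add(int(str(k * n)[0]))
        k += 1
    return k - 1

# 112, 1112, 11112, 111112 all take the conjectured maximum of 81 steps
print([steps_to_see_all_digits(int("1" * m + "12")) for m in range(1, 5)])
# [81, 81, 81, 81]
```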
I think I can now answer my question: exactly how bad is the $13$ times table? Let’s compute the record-setters for A249067: the numbers that take longer than any smaller number to see every possible leading digit:
$n$:    1  2  13  112
$a(n)$: 9  45 62  81
$13$ is a record-setter in the sequence, which means it’s pretty bad, but it’s not the worst: we’ve shown above that $112$ takes the longest possible number of steps to see every digit. And the number $2$ comes under scrutiny for taking way longer than its neighbours. So really, $13$ is just unlucky to find itself in such company.
Especially round entry numbers are set aside for particularly nice sequences to mark the passing of major milestones in the encyclopedia’s size; this time, we have four nice sequences starting at A300000. These were sequences that were originally submitted with indexes in the high 200,000s but were bumped up to get the attention associated with passing this milestone.
Here they are:
A300000: The sum of the first $n$ terms of the sequence is the concatenation of the first $n$ digits of the sequence, with $a(1) = 1$.
The number formed by concatenating the first three digits in the sequence is $110 = 1 + 10 + 99$. This has a Golomb sequence vibe about it, though it’s a bit more straightforward to generate.
This sequence was submitted by Eric Angelini, a Belgian TV producer who has added countless sequences to the OEIS, usually generated like this by picking a constraint and working out what the sequence would need to look like in order to obey it.
A300001: Side length of the smallest equilateral triangle that can be dissected into $n$ equilateral triangles with integer sides, or $0$ if no such triangle exists.
I’m amazed this one wasn’t already in! Seems like exactly the kind of thing that would appear in something like Dudeney’s Amusements. There’s an associated paper on the arXiv, by Ales Drapal and Carlo Hamalainen, which notes that some of the earliest work on triangle dissections was done by Bill Tutte, of Bletchley Park fame.
The entry page contains some fab plaintext-art drawings of solutions for a few different $n$.
A300002: Lexicographically earliest sequence of positive integers such that no $k+2$ points fall on any polynomial of degree $k$.
The definition of this one is a bit opaque if you’re not in the right frame of mind, but it’s really neat. If you plot the sequence, as the OEIS can automatically do for you, you get this:
Or, if you want to do this in your head, think of the set of points $(n, a(n))$.
Now, if you pick any polynomial of degree $k$, there’s no subset of $k+2$ of the points on the scatter plot that lie on that polynomial. It’s a ‘duck-and-dive’ sequence – it always picks the smallest number that won’t be on any of the $2^{n-1}$ polynomials defined by the sequence leading up to $a(n)$.
The OEIS entry contains a conjecture that this sequence is a permutation of the natural numbers. It’s easily shown that it contains no duplicates – otherwise, if the number $m$ is repeated, there’d be two elements lying on the line $y=m$, a degree-0 polynomial. What’s not obvious is that every number will eventually turn up. It’d be pretty wild if some numbers never did – and that’d form a new sequence, too!
A300003: Triangle read by rows: $T(n, k) =$ number of permutations that are $k$ “block reversals” away from $12\ldots n$, for $n \geq 0$, and (for $n \gt 0$) $0 \leq k \leq n-1$.
I don’t like “triangle read by rows” entries, purely because the OEIS’s web interface doesn’t make them easy to read. It’s debatable whether sequences generated by two parameters are even ‘sequences’, but that’s not a fight worth having, because there are some truly fab bits of maths hiding in the OEIS’s triangles.
The Oval Track puzzle. You can reverse four elements at a time. There’s also a cyclic permutation move, which A300003 doesn’t allow.
This one looks at what you can do by starting with the list of numbers $1,2, \ldots, n$, and repeatedly picking a block of adjacent numbers and reversing their order. It’s like a generalised version of the Oval Track puzzle.
Inspired by the BBC’s Sport Relief fundraising campaign, I’ve decided to set myself a vaguely mathematical running challenge. My current routine does involve a little running, but nothing serious, so I’ve given myself a bar to aim for that’s both vaguely achievable, and completely irrational.
I’ll aim to run π kilometres (or as close as I can get, with the measuring instruments I have access to) each day during the month of March. This will either be on the treadmill at my gym – in which case I’ll try to get a photo of the ‘total distance’ readout once I’ve finished – or out in the real world, for which I’ll use some kind of running GPS logging device, to provide proof I’ve done it each day. Some days I’ll run on my own, and others I’ll be accompanied by friends/relatives, who’ll be either running as well or just making supportive noises. At the end of the month, I’ll post an update documenting my progress/success/failure.
Serious request: if you know of anywhere in the UK I can reasonably get to where there’s an established circle that’s exactly 1km in diameter, I can try to come and run round the circumference of it. Drop me an email if so.
If you’d like to support my ridiculous plan, you can follow my progress and donate on my fundraising page, or encourage others to do so by visiting pikm.run (I paid £4 for the URL, so now I have to do it). Sport Relief is the even-numbered-years-counterpart of Comic Relief, which together raise money for thousands of projects all over the UK and in the developing world, to help the vulnerable and those in need.
The approximate geometric mean $\mathrm{(AGM)}$ is a nice approximation of the geometric mean $\mathrm{(GM)}$, but it has some quirks as we will see. After a discussion at the MathsJam gathering, I was intrigued to find out how good an approximation it is.
To get a better understanding, we first have to look again at its definition. For $A=a\cdot 10^x$ and $B=b \cdot 10^y$, we set
\[
\mathrm{AGM}(A,B) := \mathrm{AM}(a,b)\cdot 10^{\mathrm{AM}(x,y)} = \frac{a+b}{2}\cdot 10^{\frac{x+y}{2}},
\]
where $\mathrm{AM}$ stands for the arithmetic mean. This also makes sense when $a$ and $b$ are not just integers between 1 and 10, but any real numbers. Note that we won’t consider negative $A$ and $B$ (i.e. negative $a$ and $b$), as the geometric mean runs into issues if we do so. The values of $x$ and $y$ may be negative, though. The $\mathrm{AGM}$ looks like a mix between the $\mathrm{AM}$ and the $\mathrm{GM}$, so what can possibly go wrong?
Same mean, different numbers
In contrast to the $\mathrm{AM}$ and the $\mathrm{GM}$, the $\mathrm{AGM}$ depends on the number base (10 in this case) and the presentation of $A$ and $B$.
If we write $A=(10a) \cdot 10^{(x-1)}$, we get a different value for $\mathrm{AGM}(A,B)$. This looks rather unfortunate, but it will turn out to be helpful. To ease notation we will assume in the following that $a\geq b$ unless otherwise stated. This can be done without loss of generality as $\mathrm{AGM}(A,B)=\mathrm{AGM}(B,A)$.
Peter Rowlett proved in his post that $\mathrm{GM}\leq \mathrm{AGM}$. The question is, how far can the $\mathrm{AGM}$ exceed the $\mathrm{GM}$? In other words, what’s the supremum of the ratio $R=\mathrm{AGM}/\mathrm{GM}$?
Using the notation of $A$ and $B$ as above we get
\[
\begin{align*} R=\frac{\mathrm{AGM}(A,B)}{\mathrm{GM}(A,B)}= \frac{\mathrm{AM}(a,b)}{\mathrm{GM}(a,b)} = \frac{1}{2}\cdot (\sqrt{a/b}+\sqrt{b/a}).\end{align*}
\]
So, the ratio $R$ doesn’t depend on $x$ and $y$ but only on $a$ and $b$. That’s convenient. Taking $a$ and $b$ in the interval $[1,10)$, as is usual, we can look at the plot of $R(a,b)$.
As long as we are in the blue part of the graph, $\mathrm{AGM}$ looks to be a sensible approximation of the $\mathrm{GM}$. So let’s look at the bad combinations of $a$ and $b$.
The worst case happens when $a$ and $b$ are maximally far apart: The supremum of $R(a,b)$ is its limit for $a \rightarrow 10$ and $b=1$. So in general, $1\leq R \lt 5.5/\sqrt{10} \approx 1.74$.
This supremum doesn’t look too bad at first, but unfortunately, the result can be unusable in extreme cases. For example, if $A=999=9.99\cdot 10^2$ and $B=1000=1 \cdot 10^3$, we have $\mathrm{GM}(A,B)\approx \mathrm{AM}(A,B)=999.5$ and $\mathrm{AGM}(A,B)\approx 1738$ – not only is the $\mathrm{AM}$ a better approximation of the $\mathrm{GM}$ than the $\mathrm{AGM}$ in this instance, the $\mathrm{AGM}$ is bigger than both of the numbers $A$ and $B$ of which it is supposed to give some kind of mean!
Let’s analyse this a bit deeper. The ratio $R$ only depends on the ratio $r=a/b$. In closed form we can write $R(r)=\frac{1}{2}\left(\sqrt{r}+\frac{1}{\sqrt{r}}\right)$, and we are left to study this function on the interval $[1,10]$. Its maximum is $R(10)$, but smaller $r$ give better results. And as we will see, we don’t have to put up with $r=10$.
Here, the flexibility of the definition of the $\mathrm{AGM}$ comes into play. Due to the choice of a suitable presentation of the numbers we can guarantee that $r$ isn’t too big. If we have $r\leq \sqrt{10}$ which is equivalent to $\sqrt{10}b\geq a \geq b$ we calculate $\mathrm{AGM}(A,B)$ as above. If $a>\sqrt{10}b$, we change the presentation of the number:
\[
\begin{align*}B=b \cdot 10^y = (10b)\cdot 10^{y-1}=:b’ \cdot 10^{y-1} \end{align*}
\]
and continue from there.
So, let’s redefine the $\mathrm{AGM}$ for $10>a\geq b\geq 1$ like this:
\[
\mathrm{AGM}(A,B) :=
\begin{cases}
\mathrm{AM}(a,b)\cdot 10^{\frac{x+y}{2}} & \text{if } a\leq \sqrt{10}\,b,\\
\mathrm{AM}(10b,a)\cdot 10^{\frac{x+y-1}{2}} & \text{if } a> \sqrt{10}\,b.
\end{cases}
\]
Note that in the second case we have $\sqrt{10}a>10b>a$, so that the roles of the pair $(a,b)$ are taken over by the pair $(10b,a)$. Setting $r=10b/a$ in the second case, we have in both cases $1\leq r\leq \sqrt{10}$, so we only have to study $R(r)$ on the interval $[1,\sqrt{10}]$, which will turn out to be rather benign.
Note also that this new $\mathrm{AGM}$ can still be calculated without a calculator when using the approximation $\sqrt{10}\approx 3$, as Colin Beveridge suggested in Peter’s post.
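Pulling this together, here is a sketch of the adjusted $\mathrm{AGM}$ in Python (my own code, not from the post; it assumes positive inputs):

```python
import math

def agm(A, B):
    """Approximate geometric mean, with the presentation adjusted so the
    ratio of the leading parts never exceeds sqrt(10)."""
    def split(N):
        # write N = n * 10**e with 1 <= n < 10
        e = math.floor(math.log10(N))
        return N / 10**e, e
    a, x = split(A)
    b, y = split(B)
    if a < b:                      # ensure a >= b, without loss of generality
        a, x, b, y = b, y, a, x
    if a > math.sqrt(10) * b:      # rewrite b * 10**y as (10b) * 10**(y-1)
        b, y = 10 * b, y - 1
    return (a + b) / 2 * 10 ** ((x + y) / 2)

print(round(agm(2, 400), 6))     # 30.0  (the GM is ~28.3)
print(round(agm(999, 1000), 6))  # 999.5 (the bad case from above, now fixed)
```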
In the example above with $A=999$ and $B=1000$ we write $B=10\cdot 10^2$ and find with this new definition of the $\mathrm{AGM}$:
\[
\mathrm{AGM}(999,1000)=\frac{9.99+10}{2}\cdot 10^{\frac{2+2}{2}}=999.5.
\]
This coincides with the arithmetic mean of the two numbers and is really close to the geometric mean. This is looking promising.
If we define the $\mathrm{AGM}$ of two numbers $A$ and $B$ in the way explained above, we get the following two inequalities:
\[
\begin{align*} (I) \quad & \mathrm{GM}(A,B)\leq \mathrm{AGM}(A,B) \leq \mathrm{GM}(A,B) \cdot 1.2 \\
(II) \quad & \mathrm{GM}(A,B) \leq \mathrm{AGM}(A,B) \leq \mathrm{AM}(A,B) \end{align*}
\]Both inequalities together mean that not a lot can go wrong when using the $\mathrm{AGM}$ with the appropriate presentation of the numbers: The $\mathrm{AGM}$ is bigger than the $\mathrm{GM}$, but exceeds it by maximally 20%, and it is always smaller than the $\mathrm{AM}$.
As a consequence, the $\mathrm{AGM}$ will always be between $A$ and $B$. So it is indeed a “mean” of some kind.
A proof of these two inequalities
(I) We only have to find the maximum of $R=\mathrm{AGM}/\mathrm{GM}$. Due to the discussion above we can assume that $\sqrt{10}b\geq a \geq b$, but $a$ can now be bigger than 10. The latter is not a problem though.
The maximum of $R=\mathrm{AGM}(A,B)/\mathrm{GM}(A,B)=\mathrm{AM}(a,b)/\mathrm{GM}(a,b)$ is attained when $a$ and $b$ are maximally apart, i.e. $r=\sqrt{10}$, so
\[
\max\left(\frac{\mathrm{AGM}(A,B)}{\mathrm{GM}(A,B)}\right)=\frac12 \cdot (10^{1/4}+{10^{-1/4}}) \approx 1.2.
\]
(II) We will show that $\mathrm{AGM}(A,B)/\mathrm{AM}(A,B) \leq 1$. Let’s drop the assumption that $a\geq b$. Instead we assume, again without loss of generality, that $x\geq y$, so that we can set $z:=x-y \geq 0$. For the ratio $r=a/b$ we have $\sqrt{10} \geq r \geq 1/\sqrt{10}$. If $r$ fell outside this interval, we would have had to change the presentation of one of the numbers before calculating the $\mathrm{AGM}$. Dividing the numerator and denominator of the left-hand side by $B$ we get:
\[
\frac{\mathrm{AGM}(A,B)}{\mathrm{AM}(A,B)}=\frac{(1+r) \cdot 10^{z/2}} {1+r \cdot 10^z}.
\]
So we look for an upper bound of the function $f_z(r):=\frac{(1+r) \cdot 10^{z/2}} {1+r \cdot 10^z}$ when varying $z$ and $r$, and want to show that this upper bound is less than or equal to $1$. Note that we only have to check integer $z\geq 0$ (the result is actually false if we allow arbitrary real $z$).
For $z=0$, we have $f_0(r)=1$ for any $r$ and hence $\mathrm{AGM}=\mathrm{AM}$. For a fixed $z \geq 1$ we can differentiate $f_z(r)$ with respect to $r$ and find that the slope is always negative. Hence for fixed $z$, the function $f_z(r)$ attains its maximum when $r$ is smallest, i.e. $r=1/\sqrt{10}$, so we are left to show that
\[
f_z\!\left(10^{-1/2}\right)=\frac{\left(1+10^{-1/2}\right)\cdot 10^{z/2}}{1+10^{z-1/2}}\leq 1.
\]
For $z=1$ we have equality again and $\mathrm{AGM} = \mathrm{AM}$. For $z\geq 2$ we can write $z=2+z'$ with $z'$ an integer $\geq 0$. Comparing the numerator with the denominator, we get the following chain of inequalities:
\[
\left(1+10^{-1/2}\right)\cdot 10^{\frac{2+z'}{2}} = 10^{1+\frac{z'}{2}}+10^{\frac{1}{2}+\frac{z'}{2}} \leq 10^{1+z'}+10^{\frac{1}{2}+z'} \leq 10^{\frac{3}{2}+z'} \lt 1+10^{\frac{3}{2}+z'},
\]
where the middle step uses $10+\sqrt{10}\leq 10^{3/2}$; so $f_z(10^{-1/2})\leq 1$ in this case too.
In summary, modifying the definition of the $\mathrm{AGM}$ to assure that the ratio of the “leading characters” is as close to 1 as possible, makes sure that the $\mathrm{AGM}$ works well, even in the bad cases.
I gave a talk on Fermi problems and a method for approaching them using the approximate geometric mean at the Maths Jam gathering in 2017. This post is a write up of that talk with some extras added in from useful discussion afterwards.
Enrico Fermi apparently had a knack for making rough estimates with very little data. Fermi problems are problems which ask for estimations for which very little data is available. Some standard Fermi problems:
How many piano tuners are there in New York City?
How many hairs are there on a bear?
How many miles does a person walk in a lifetime?
How many people in the world are talking on their mobile phones right now?
Hopefully you get the idea. These are problems for which little data is available, but for which intelligent guesses can be made. I have used problems of this type with students as an exercise in estimation and making assumptions. Inspired by a tweet from Alison Kiddle, I have set these up as a comparison of which is bigger from two unknowable things. Are there more cats in Sheffield or train carriages passing through Sheffield station every day? That sort of thing.
The point of these is not to look up information or make wild guesses, but instead to come up with a back-of-the-envelope, ‘wrong, but useful‘, orders of magnitude estimate. Some ‘rules’, if you want to play with these the way I would:
don’t look up information;
don’t make precise calculations using calculator or computer;
be imprecise — there are 400 days in a year, people are 2m tall, etc.;
round numbers where possible and calculate in your head.
One approach is to estimate by bounding – come up with numbers that are definitely too small and too large, and then use an estimate that is an average of these. But which average?
Say I think some quantity is bigger than 2 but smaller than 400. The arithmetic mean would be $\mathrm{AM}(2,400)=\frac{2+400}{2}=201$. The geometric mean would be $ \mathrm{GM}(2,400)=\sqrt{2\times 400} = 28.28\!\ldots$.
Which is a better estimate? The arithmetic mean is half the upper bound, but 100 times the lower bound. On this basis, for an ‘order of magnitude’-type estimate, you might agree that the geometric mean is a better average to use here. Following my Maths Jam talk, Rob Low said that the geometric mean makes more sense for an order of magnitude estimate, since it corresponds to the arithmetic mean of logs. To see this, consider \[
\begin{align*}
\log(\mathrm{GM}(A, B)) &= \log(\sqrt{AB}) \\
&= \log((AB)^{\frac{1}{2}}) \\
&= \frac{1}{2}\log(AB) \\
&= \frac{1}{2}(\log(A) + \log(B)) = \mathrm{AM}(\log(A), \log(B)) \text{.}
\end{align*}
\]
So, geometric mean it is. However, taking a square root is not usually easy in your head, and we want to avoid making precise calculations by calculator or computer. Enter the approximate geometric mean.
Approximate Geometric Mean
For the approximate geometric mean, take $2=2 \times 10^0$ and $400=4 \times 10^2$, then the AGM of $2$ and $400$ is: \[ \begin{align*}
\frac{2+4}{2} \times 10^{\frac{0+2}{2}} &= 3 \times 10^1\\
&= 30 \approx 28.28\!\ldots = \sqrt{2\times 400} = \mathrm{GM}(2,400) \text{.}
\end{align*} \]
Why does this work? Let $A=a \times 10^x$ and $B=b \times 10^y$. Then \[
\begin{align*}
\mathrm{GM}(A,B)=\sqrt{AB}&=\sqrt{ab \times 10^{x+y}}\\
&=\sqrt{ab} \times 10^{\frac{x+y}{2}} \text{,}
\end{align*}
\]
and \[\mathrm{AGM}(A,B) = \frac{a+b}{2} \times 10^{\frac{x+y}{2}}\text{.}\]
Setting aside the $10^{\frac{x+y}{2}}$ term, which appears in both averages, is it obvious that, for single digit numbers $>0$, \[\mathrm{GM}(a,b)=\sqrt{ab} \approx \frac{a+b}{2}=\mathrm{AM}(a,b) \text{?} \]
There is a standard result that says \[ \begin{align*}
0 \le (x-y)^2 &= x^2 - 2xy + y^2\\
&= x^2 + 2xy + y^2 – 4xy\\
&= (x+y)^2 – 4xy \text{.}
\end{align*} \]
with equality iff $x=y$. Rearranging gives $(x+y)^2 \geq 4xy$, i.e. $\sqrt{xy}\leq\frac{x+y}{2}$, so $\mathrm{GM}(a,b)\le\mathrm{AM}(a,b)$ – but are they necessarily close?
By exhaustion, it is straightforward to show (for single-digit integers, given the rule to round numbers where possible) that the largest error occurs when $a=1$ and $b=9$. Then \[ \sqrt{1 \times 9} = 3 \ne 5 = \frac{1+9}{2} \] and the error is $2$, which, relative to the biggest number $9$, might be seen as quite significant.
I’d say you are not likely to use this method if the numbers are of the same order of magnitude, because the idea is to come up with fairly wild approximations and if they were quite close it might be sensible to think of them as not really different. Then the error is going to be at least one order of magnitude smaller than the upper bound, i.e. $10^\frac{x+y}{2} \ll 10^y$. For example, if your numbers were $1$ and $900$ (as a pretty bad case), then: \[ \mathrm{GM}(1,900)=\sqrt{900}=30 \ne 50=\mathrm{AGM}(1,900) \] and a difference of $20$ on a top value of $900$ is not as significant as a difference of $2$ was on a top value of $9$.
So I suppose I would argue that this makes the error relatively insignificant. However, this thinking left me somewhat unsatisfied. I felt there ought to be a nicer way to demonstrate why the approximate geometric mean works as an approximation for the geometric mean. Following my talk at Maths Jam, Philipp Reinhard has been thinking about this, and he will share his thoughts in a post here in a few days.
One edge case
I didn’t have time to fit into my talk what I would recommend if the two numbers differed by an odd number of orders of magnitude. For example, $\mathrm{AGM}(1,1000)$ generates another square root in $1 \times 10^{\frac{3}{2}}$ – precisely what we were trying to avoid! What I have recommended to students is to simply rewrite one of the numbers so that the difference in exponents is even. For example, writing $1=1 \times 10^0$ and $1000 = 10 \times 10^2$ gives \[\mathrm{AGM}(1,1000)=5.5 \times 10^{1} \text{.}\]
Following Maths Jam, the esteemed Colin Beveridge made the sensible suggestion of just treating $10^{\frac{1}{2}}$ as $3$, making \[
\begin{align*}
&\mathrm{AGM}(1,1000)\\
&= 1 \times 10^{\frac{3}{2}}\\
&\approx 1 \times 3^3 = 27\text{.}
\end{align*}
\]
This increases our problems, though, because we have the potential to deal with larger differences (hence larger errors) than when dealing with single-digit numbers. Actually, it was wondering why this increased error happens that got me thinking seriously on this topic in the first place. I’ll stop now to let Philipp share what he has been thinking on this.
On 31st January 2008, I gave my first lecture. I was passing my PhD supervisor in the corridor and he said “there might be some teaching going if you fancy it, go and talk to Mike”. And that, as innocuous as it sounds, was the spark that lit the flame. I strongly disliked public speaking, having hardly done it (not having had much chance to practice in my education to date – I may have only given one talk in front of people to that point, as part of the assessment of my MSc dissertation), but I recognised that this was something I needed to get over. I had just started working for the IMA, where my job was to travel the country giving talks to undergraduate audiences, and I realised that signing up to a regular lecture slot would get me some much-needed experience. I enjoyed teaching so much that I have pursued it since.
I just noticed that last Wednesday was ten years since that lecture. It was basic maths for forensic science students. I was given a booklet of notes and told to either use it or write my own (I used it), had a short chat about how the module might work with another lecturer, and there I was in front of the students. That was spring in the academic year 2007/8 and this is the 21st teaching semester since then. This one is the 15th semester during which I have taught — the last 12 in a row, during which I got a full-time contract and ended ten years of part-time working.
I have this awful feeling this might lead people to imagine I’m one of the people who knows what they are doing.
P.S. The other thing that I started when I started working for the IMA was blogging – yesterday marks ten years since my first post. So this post represents the start of my second ten years of blogging.
The next issue of the Carnival of Mathematics, rounding up blog posts from the month of January, and compiled by Rachel, is now online at The Math Citadel.
The Carnival rounds up maths blog posts from all over the internet, including some from our own Aperiodical. See our Carnival of Mathematics page for more information.
If you pay attention to United States politics you have probably noticed that mathematics is currently enjoying a rare moment of relevance. You probably also know this is not happening because all of a sudden politicians have decided that mathematics is clearly the coolest thing in the world (even though it clearly is), but because gerrymandering has become one of the major issues du jour.
For those of you lucky enough not to know what gerrymandering is, let me give you a quick précis. Named after Elbridge Gerry – it should be pronounced like Gary and not Jerry – and a congressional district which slightly resembled a salamander he signed into law as the governor of Massachusetts, gerrymandering has come to be the blanket term for the redrawing of political districts in the United States in a way that provides political gain for the party conducting the redrawing. This is primarily done through either packing, drawing a district so all of your opponents’ votes are concentrated in a small number of districts and therefore can not meaningfully affect others, or cracking, splitting up the opponents’ votes among many different districts so they have less influence on any of them. This has generally been considered to be totally legitimate, and smart, political maneuvering in the US and upheld as legal in the courts, unless it can be proven the gerrymandering was done based on partisanship and not on race.
The reason gerrymandering is such a hot topic is because the courts might just be changing their views regarding partisan gerrymandering, and a big factor behind this is mathematics. There was an argument in front of the supreme court, in the case of Gill v. Whitford late last year about partisan gerrymandering in my home state of Wisconsin, which had mathematics as a central pillar in the arguments against the current district lines. Even more recently the Pennsylvania Supreme Court threw out their current districts and demanded they be redrawn, and while I am not sure if mathematics played a large role in them getting thrown out it certainly will when they are redrawn.
As gerrymandering is enjoying its moment in the sun, it is only fair the mathematician playing the biggest role in changing how it is all being thought about is called Moon. Moon Duchin is an Associate Professor at Tufts University and the creator of the Metric Geometry and Gerrymandering Group which, through a series of conferences, is applying cutting-edge mathematics to the redistricting problem, training mathematicians to be expert witnesses on gerrymandering for court proceedings, and providing teachers with lesson plans and guidance on how to implement them (there is one more conference in California coming up in March and a big workshop happening in August if you want to get involved).
The really big news though is, as of January 26 Duchin is working as a consultant for Governor Wolf in Pennsylvania, with the job of helping to make sure their redrawn congressional district map is fair. I have had the joy of talking to Moon for my podcast Relatively Prime about her work with the MGGG, and watched her give a talk about gerrymandering at the 2018 Joint Mathematics Meeting. I do not think I have ever seen a more enraptured audience at a mathematics conference: there were a lot of people in the room, and each and every one of them was paying attention. I can not think of a better person, from a mathematical ability perspective as well as a public engagement one, to be the face of this for mathematics.
It is too bad that it has taken something as awful as gerrymandering to get mathematics a seat at the table in US political discourse. Even though I have spent a huge amount of my life trying to convince people that mathematics is something we should all care about, I would happily have no one talk about it if it meant we had no gerrymandering. That said, I am glad we have mathematicians like Moon Duchin who are willing to take this battle on in front of not only the mathematical community but an ever increasing portion of the politically engaged public, not to mention Governor Wolf and lawyers like those in Gill v. Whitford who are willing to reach past their comfort zone and let mathematics play a central role in their work. There is not going to be a clean, perfect solution to all of this, but hopefully, with mathematicians like Moon involved, it will end up a lot better than where it is now.
Did you read Cédric Villani’s Birth of a Theorem? Did you have the same reaction as me, that all of the mentions of the technical details were incredibly impressive, doubtless meaningful to those in the know, but ultimately unenlightening?
Writing about maths, especially deep technical maths, so that a reader can follow along with it is hard – the Venn diagram of the set of people who can write clearly and the set of people who understand the maths, two relatively small sets, has a yet smaller intersection.
Vicky Neale sits squarely inside it, and Closing The Gap has gone straight into my top ten “books to give to interested students”.
Here’s a clever way to structure a maths book (I have taken copious notes): follow the development of a difficult idea or discovery chronologically, but intersperse the action with background that puts the discovery in context. That’s not a new structure – but it’s tricky to pull off: you have to keep the difficult idea from getting too difficult, and keep the background at a level where an interested reader can follow along and either say “yes, that’s plausible” or better “wait, let me get a pen!”. This is where Closing The Gap excels.
Neale takes as the difficult idea the Twin Primes Conjecture, and specifically the work that followed from Yitang Zhang’s lightning-bolt discovery in 2013 that infinitely many pairs of primes are separated by at most 70,000,000 (which sounds like a lot… but is very small compared to “no upper limit”) – especially the Polymath projects and the work of James Maynard in reducing the bound to either 600 (unconditionally) or 12 (if the Elliott-Halberstam conjecture is true – a bound later reduced to 6 by Polymath8b).
The Elliott-Halberstam conjecture? What’s that? Neale takes the time to explain, by way of a mathematical pencil, the flavour of the conjecture, without getting bogged down in the technical details; she tells us enough that the story makes sense, and enough that we could go and find out more if we wanted.
Because of Neale’s position in the Venn diagram, she can pull off this kind of thing, making maths accessible without losing accuracy – she’s meticulous about saying “there’s more to this” when there’s more to something.
This attention to detail is possibly overdone in places – I found myself rolling my eyes from time to time about in-text reminders that I met Terry Tao in a previous chapter, or that we’d hear more about such-and-such in a future one, which I suppose is an upshot of deciding to do without footnotes. This is literally my only mild criticism of the book; I’m even in thrall to the quality of the paper it’s printed on.
Closing The Gap communicates the excitement, frustration and interconnectedness of top-tier mathematical research, including the relatively new approaches pioneered by Tim Gowers (among others) with the Polymath project. The book’s introduction starts with an extended analogy comparing mathematics to climbing (we know a MathsJam talk about that!) – how something impossible gradually becomes possible, then difficult, then accessible to novices with the help of a guide. Neale sets herself up as this guide, and succeeds brilliantly.
In the account’s usual citationless factoid style, the Elves state that you’re more likely to be crushed by a meteor than to win the jackpot on the lottery.
The replies to this tweet were mainly along the lines of this one from my internet acquaintance Chris Mingay:
Should we not be getting almost weekly stories of people being crushed by a meteor then ?
Yeah, why don’t we hear about people being squished by interplanetary rocks all the time? I’d tune in to that!
A couple of other helpful sorts have provided some extra data as context for this fact:
I asked on Twitter if any turbonerds keep a record of every jackpot ever, and of course they do: Peter Rowlett and Tim Stirrup both provided me with a link to Richard K. Lloyd’s comprehensive table, which reckons there have been 4749 winners, of which 3220 became millionaires.
4750 people have ever won the lottery (for a definition of ‘won’ that might not be the one we want, but it gives us an order of magnitude)
According to their website, 4,750 people have become millionaires since 1994 from UK Lotto wins, so how many have been crushed?
(Hey, QI like to do it to their guests, so why can’t I?)
The statement sounds wildly incorrect on first inspection, so I reckon we’re not talking about the same kinds of odds.
It must be the case that:
someone has worked out the odds of being killed by a meteor,
someone has worked out the odds of winning the lottery, and
someone has compared those two numbers.
I assume at least the first two someones were not QI Elves, and I reckon the third one probably wasn’t either. So, where did QI get their fact?
A search for “meteor lottery odds” got me this story on independent.co.uk published five days before QI’s tweet, so that’s probably their source. That links to “Review Journal”, a generically authoritative-sounding title, which turns out to be the Las Vegas Review Journal, who in 2015 published an article by someone affiliated with gobankingrates.com titled “20 things more likely to happen to you than winning the lottery”. That cheery listicle cites a 2008 article by Phil Plait on his Bad Astronomy blog where he cites Alan Harris’ answer to the Fermi question of working out your odds of being killed by a meteor, directly or indirectly. The “crushed” phrasing, which is a stronger statement than the one Harris looked at, seems to originate with the Las Vegas Review Journal. Maddeningly, Plait doesn’t give a citation for Alan Harris’s calculation and I can’t find a better source on Google, so the search stops here.
After all of that chasing, I’ve got a kind-of reputable source for the “1 in 700,000” odds of being killed by a meteor presented in the Independent article. That’s much better odds than the 1 in 45,057,474 chance of winning the lottery claimed by operators Camelot. We hear about people winning the lottery fairly often, so why isn’t “meteor squish” a journalistic cliché like “bus plunge”?
Well, the meteor figure is your lifetime odds of being killed, and the lottery figure is your odds of winning each time you play. That’s it – they’re measured in different units, effectively. Plait’s Bad Astronomy piece contained a good explanation of what the odds meant, but that got lost when the headline figure was spread in factoid form.
So we can’t compare the two numbers as stated – that’d be like me saying I’m taller than you are old. What can we do to get numbers for meteors and lotteries that we can compare?
One option is to assume both take place once – a meteor hits Earth, and you play the lottery. We know the odds of winning the lottery in one attempt, and one of Harris’s assumptions in his model was that an asteroid impact would kill everyone – so your probability of being killed is 1. No contest – you’re way more likely to be killed by an asteroid that hits Earth than for the lottery ticket you just bought to be a winner.
A more reasonable approach might be to look at your odds within a certain period of time. We’ve already got a figure of around 1 in 700,000 for being killed by a meteor in a 70-year lifespan, so we just need to get the corresponding figure – what are your odds of winning the National Lottery at least once in your lifetime?
Clearly, it depends on how often you play. My personal odds are zero – I’ve never so much as bought a scratchcard. Conversely, if you buy enough tickets, you can guarantee a win, a tactic executed to great success by Voltaire and, later, by some MIT students. Those strategies both relied on oversights in the rules of their respective lotteries to make them profitable, but if you’ve got a fortune to spare you could buy a National Lottery ticket corresponding to each combination of six balls and guarantee that exactly one of them will win.
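(As a sanity check, not part of the original argument: the “one ticket per combination” figure is easy to verify, since the UK Lotto draws six balls from 59, so the number of distinct tickets – and hence the jackpot odds for a single ticket – is $\binom{59}{6}$.)

```python
from math import comb

# Number of ways to choose 6 balls from 59: the count of distinct
# Lotto tickets, and the "1 in N" jackpot odds for a single ticket.
tickets = comb(59, 6)
print(tickets)  # 45057474 - matching Camelot's 1 in 45,057,474
```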
For the sake of getting a reasonable number, let’s say you buy one ticket for each draw. There are two draws each week, so 104 draws each year. So your odds of winning the lottery at least once in 70 years are

$1 - \left(1 - \frac{1}{45{,}057{,}474}\right)^{70 \times 104} \approx \frac{1}{6190}$
At this point I wanted to use the fact that you can only play the lottery once you’re 16, and the life expectancy in the UK is 81.2 years, but I’ll stick with 70 years of playing so we can compare with the meteor number.
That’s a way, way shorter set of odds than the 1 in 700,000 meteor figure. So you’re vastly more likely to win the lottery in your lifetime than you are to be killed by a world-ending meteor – over 100 times more likely, in fact.
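The whole comparison fits in a few lines of Python – a sketch assuming, as above, Camelot’s per-ticket jackpot odds, two draws a week for 70 years, and the 1 in 700,000 lifetime meteor figure:

```python
# Per-draw jackpot odds vs the lifetime meteor figure,
# assuming one ticket per draw for 70 years.
p_draw = 1 / 45_057_474                  # jackpot odds for a single ticket
draws = 70 * 104                         # two draws a week for 70 years
p_lottery = 1 - (1 - p_draw) ** draws    # at least one win in a lifetime
p_meteor = 1 / 700_000                   # lifetime odds of death by meteor

print(f"lottery: 1 in {1 / p_lottery:,.0f}")    # roughly 1 in 6,200
print(f"ratio:   {p_lottery / p_meteor:.0f}x")  # a bit over 100x
```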
And if a meteor did kill everyone, you’d be unlikely to read about it in the news the next day.