My current research interests can be broken up into five overlapping themes: (1) Collective Wisdom, (2) Epistemology, (3) Explanation, (4) Philosophy of Probability, and (5) Biosecurity Intelligence and Forecasting. (You can find my CV here.)

That last theme is a weird one for a philosopher to be researching, but it happens to provide a concrete motivation for a lot of my research in the first four themes, which can sometimes get pretty abstract and seemingly disconnected from anything useful or interesting. I do my best to keep my philosophical navel-gazing in check by remembering to focus on practical problems and scientific research. I find this methodology extremely useful: it tends to prevent me from going down rabbit holes, and it helps me maintain perspective when I do decide to go down one.

The general problem of biosecurity intelligence and forecasting is to work out when and where the next disease (or pest) outbreaks will occur. This is an incredibly important problem to solve, since disease outbreaks can devastate ecosystems, our food supplies (animal and plant agriculture, and aquaculture), and the global human population. There are all sorts of reasons to take this problem seriously, but here is just one: infectious diseases have probably claimed more human lives than all wars put together.

A lot goes into forecasting a disease outbreak, and there is a growing number of web apps devoted to collecting and organising evidence from various sources to help us do this — see e.g., Google Flu Trends, HealthMap, and ProMED-mail. A few years ago, I developed one of these apps in collaboration with the Australian Department of Agriculture and the Australian Centre of Excellence for Biosecurity Risk Analysis. We built it because we needed something to help us forecast aquatic animal disease outbreaks, and the existing systems at the time focused mainly on human and zoonotic diseases. The app was a success in a number of ways, and it eventually became the International Biosecurity Intelligence System (IBIS), which has a broadened focus that includes plant pest and disease outbreaks as well as terrestrial animal disease outbreaks.

One thing that is common to all of these biosecurity forecasting apps is that they collect and then combine the knowledge of a diverse set of humans from all around the world. Given that collective knowledge, we then somehow have to make a prediction about what will happen next. But different people will make different predictions, and yet we often have to act as one (e.g., through our governments). This means that different predictions have to be combined in some way to form a single, collective prediction. And it would be nice if we could do this in a way that maximised the accuracy of that collective prediction.

This quickly gets us into some areas of philosophy that are known as social epistemology (roughly: the theory of collective knowledge and reasoning) and formal epistemology (roughly: mathematical theories of knowledge and reasoning). The problem of biosecurity forecasting, put more abstractly, is that we have to (1) elicit some predictions from a bunch of humans, often in the form of probabilities, and then we have to (2) combine those predictions in some, hopefully not-too-stupid, way. Regarding (1), a major problem is that humans don’t seem to be very good at probability, but another major problem is that probability theory doesn’t seem to be very good at capturing human uncertainty. So I’ve been getting very interested in non-standard models of uncertainty (such as sets of probabilities) and how they reflect human uncertainty (which has caused me to start dabbling a bit in the philosophy of mind). You can read more about this in the Epistemology section below. Regarding (2), there are lots of ways of combining predictions and it is tricky to work out which ways systematically do better than others. It’s also tricky to work out what “better” means in this context. You can read more about this in the Collective Wisdom section below.
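To make (2) a bit more concrete, here is a minimal sketch of the simplest standard combination rule, the linear opinion pool, together with one common way of scoring accuracy, the Brier score. All the numbers (the forecasts, the weights, the outcome) are invented for illustration:

```python
# A linear opinion pool: combine expert probabilities by (weighted)
# averaging. Forecasts and weights below are invented for illustration.
forecasts = [0.9, 0.6, 0.7]      # each expert's P(outbreak occurs)
weights = [1 / 3, 1 / 3, 1 / 3]  # equal weights: the simplest choice

pooled = sum(w * p for w, p in zip(weights, forecasts))

def brier(p, outcome):
    """Squared-error accuracy score for a probability: lower is better."""
    return (p - outcome) ** 2

outcome = 1  # suppose the outbreak did occur
print(f"pooled forecast: {pooled:.2f}")
print(f"expert Brier scores: {[round(brier(p, outcome), 3) for p in forecasts]}")
print(f"pooled Brier score:  {brier(pooled, outcome):.3f}")
```

One small illustration of why “better” is tricky: because the Brier score is convex, the linear pool is guaranteed to score at least as well as the *average* expert, but nothing guarantees it beats the *best* expert — and other scoring rules and pooling rules can disagree about which combination is best.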

Putting these tricky issues aside, one striking thing about combining the judgements of a diverse bunch of humans is that you often don't have to do anything fancy in the combining in order to get surprisingly accurate collective judgements. This is sometimes known as the Wisdom of Crowds effect. If you get some people to guess the number of jelly beans in a jar and average their guesses, you’ll probably find that the average guess is way more accurate than most of the individual guesses. That’s kind of surprising, since none of the people doing the guessing has to be particularly good at guessing numbers of jelly beans in jars — in fact, they can all be terrible at it! I think this is really interesting, and I find myself thinking about this topic a lot (with the kind support of the Humboldt Foundation). My interest in this topic stems partly from the fact that it is incredibly useful in this age of the Internet (see e.g., CrowdMed), but also because the Wisdom of Crowds effect seems to have the robust-feature-of-reality glow to it.
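The jelly-bean case is easy to simulate. In the sketch below every guesser is individually terrible (their guesses are the true count plus large independent noise; all numbers are invented), yet the average guess comes out far more accurate than the typical individual:

```python
import random

random.seed(42)

TRUE_COUNT = 500  # actual number of jelly beans in the jar (invented)

# Each guesser is individually terrible: the true count plus large,
# independent noise.
guesses = [TRUE_COUNT + random.gauss(0, 150) for _ in range(1000)]

crowd_guess = sum(guesses) / len(guesses)

# Compare the crowd's error with the typical individual error.
crowd_error = abs(crowd_guess - TRUE_COUNT)
mean_individual_error = sum(abs(g - TRUE_COUNT) for g in guesses) / len(guesses)

print(f"crowd error: {crowd_error:.1f}")
print(f"typical individual error: {mean_individual_error:.1f}")
```

The trick, of course, is that the noise here is independent and unbiased, so the errors tend to cancel in the average; how far real crowds satisfy those assumptions is exactly where things get philosophically interesting.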

The what? (Welcome to one of my rabbit holes!) There are some things that happen in this world with a strange and interesting kind of robustness. Last week I found a quarter on the street. Well, that’s not very strange or interesting, and it seems that it could easily not have happened. But now consider this: Last semester I found that my students’ grades formed a bell curve. Actually, this happens most semesters. Now, that’s a bit strange and interesting, and it would definitely be robust if I had a policy of grading on a curve. But I don’t have that policy. Indeed, given that I don’t have that policy, it seems very strange and interesting that my students’ grades keep forming a bell curve. Let’s call this the Bell Curve effect. The Bell Curve effect seems to be a strangely robust phenomenon. It gets even stranger once we see that all sorts of other things in the world tend to form a bell curve (at least approximately): the heights of humans, the sizes of snowflakes, the weights of loaves of bread, and so on — the list seems endless. Why do bell curves keep popping up all over the place?

The mathematician Henri Poincaré is said to have said of the Bell Curve effect: “everyone believes in it: experimentalists believing it is a mathematical theorem, and mathematicians believing it is an empirical fact”. I don’t think we really understand the effect yet. Many people will tell you that the Central Limit Theorem explains the Bell Curve effect, but I think the real story is more complicated, more nuanced, and more interesting than that. You can read more about that in the Explanation section below, but up here my purpose is just to note that the Bell Curve effect and the Wisdom of Crowds effect are similar in some interesting ways (indeed, I’ve heard some people say that the Bell Curve effect explains the Wisdom of Crowds effect). They both seem to be strangely robust phenomena while also being completely contingent (that’s part of the glow) — all sorts of things don’t form a bell curve (e.g., household wealth), and we’re all familiar with the Madness of Crowds.
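Whatever the full explanatory story turns out to be, the part of the phenomenon that the Central Limit Theorem does capture is easy to see in a simulation: start from something decidedly non-bell-shaped (a single die roll, whose distribution is flat) and sum many independent copies. The parameters below are invented for illustration:

```python
import random

random.seed(0)

# A single die roll has a flat distribution: nothing bell-shaped about it.
def die():
    return random.randint(1, 6)

# The Central Limit Theorem: sums of many independent copies are
# approximately normally distributed.
sums = [sum(die() for _ in range(50)) for _ in range(20000)]

mean = sum(sums) / len(sums)
var = sum((s - mean) ** 2 for s in sums) / len(sums)
std = var ** 0.5

# For a normal distribution, about 68% of values lie within one
# standard deviation of the mean.
within_one_std = sum(abs(s - mean) <= std for s in sums) / len(sums)
print(f"mean={mean:.1f}, std={std:.1f}, within 1 std: {within_one_std:.0%}")
```

Note what the theorem does *not* give you for free: the independence and finite-variance assumptions, and any account of why so many real-world quantities happen to satisfy them.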

Another, closely related effect that has the mysterious glow is that ice cubes always melt and farts always eventually dissipate (thankfully) — or, in other words: the entropy of macroscopic systems almost always increases. The standard explanation for why this effect happens is probabilistic, but that explanation creates all sorts of problems for our understanding of the concept of probability. That gets me interested in issues to do with the philosophical foundations of probability, and you can read more about that in the Philosophy of Probability section below.
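The flavour of the standard probabilistic story can be captured in a toy simulation, a simple Ehrenfest-style model with invented parameters: start all the gas molecules on one side of a box (a very special, low-entropy state) and let them shuffle randomly, and the system almost surely drifts towards the even split, simply because there are vastly more ways to be evenly spread than to be bunched up:

```python
import random

random.seed(1)

# Toy Ehrenfest model: N gas molecules; at each step a random molecule
# hops to the other half of the box.
N, steps = 1000, 5000
left = N  # start with all molecules on the left: a low-entropy state

history = [left]
for _ in range(steps):
    # A uniformly random molecule is on the left with probability left/N.
    if random.randrange(N) < left:
        left -= 1
    else:
        left += 1
    history.append(left)

print(f"start: {history[0]} on the left; end: {history[-1]} on the left")
```

The philosophical trouble starts here: the underlying dynamics is deterministic and time-reversible, so what exactly do these probabilities describe, and why are we entitled to them? (The model above smuggles the probabilities in by hand, via the random hops.)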

Our world contains many more effects that have the glow: honeybees tend to produce honeycombs made out of hexagons, three species of North American cicada spend either 13 or 17 years (both prime numbers) underground before emerging, the asteroids in the belt between Mars and Jupiter neatly organise themselves to form the Kirkwood gaps (thanks to orbital resonances with Jupiter), and if you throw a bunch of sticks up in the air and take a photo of them, you’ll find most of them are closer to the horizontal plane than the vertical. These glowy examples have been used by various philosophers (including myself) to argue that there can be mathematical explanations of physical effects. For example, the reason why the cicadas have those weird prime-numbered life cycles is that it is a good idea for them — evolutionarily speaking — to avoid the life cycles of predators and other organisms that compete for resources, and the mathematically optimal way to do that is to have a prime-numbered life cycle. (The explanation is a bit more complicated than that, but that’s the gist of it.)
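The gist of the cicada argument can be seen with a small calculation. If a predator (or competitor) has a periodic life cycle of length p, then a cicada with cycle length c co-emerges with it every lcm(c, p) years, so the cicada wants cycle lengths with large least common multiples against the short cycles around it. The predator cycle lengths below (2 through 9 years) are invented for illustration; the real ecology is messier:

```python
from math import lcm

# Hypothetical predators/competitors with short periodic life cycles.
predator_cycles = range(2, 10)

# For each candidate cicada cycle, the lcm with a predator's cycle is
# the number of years between co-emergences, so bigger is better.
avg_gap = {
    cycle: sum(lcm(cycle, p) for p in predator_cycles) / len(predator_cycles)
    for cycle in range(12, 19)
}
for cycle, gap in avg_gap.items():
    print(f"cycle {cycle}: average years between co-emergences = {gap:.1f}")
```

Running this, the primes 13 and 17 come out well ahead of the composite cycle lengths around them (12, 14, 15, 16, 18), because a prime cycle shares no factors with any shorter cycle.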

One reason why these explanations are super interesting — besides their effects having the glow — is that mathematical objects are abstract (can you point to the number 2?) and it’s not clear how something abstract can explain something physical. Usually when we explain something, it’s in terms of physical stuff interacting with other physical stuff. For example, the reason why (white) Australians tend to have more freckles than (white) people elsewhere in the world is partly because of the harsh Australian sun. That’s just some physical stuff (certain kinds of photons) bumping into some other physical stuff (certain kinds of Australians) and not bumping into other physical stuff (certain kinds of non-Australians). And that’s how explanations usually go. But the numbers 13 and 17 don’t seem like the kinds of things that can bump into cicadas. So there is a bit of a puzzle about what the explanatory connection is between mathematical stuff and physical stuff. I think the solution to the puzzle is connected with the explanations of the other effects that have the glow — e.g., the Wisdom of Crowds effect and the Bell Curve effect. You can read about some of this in the Explanation section below.

So, there you have it: we went from biosecurity forecasting to the wisdom of crowds to mathematical explanations of physical effects that have a robust-feature-of-reality glow to them! I did say that my focus on practical problems and scientific research only tends to keep me from going down rabbit holes. But at the end of the day, I’m a philosopher, and there’s nothing better than going down a good rabbit hole.

Collective Wisdom

Epistemology


Explanation

Philosophy of Probability

Biosecurity Intelligence and Forecasting