Field of Science

Does mathematics carry human biases?

This week, a conversation flared up on Twitter over whether mathematics can carry human biases, and what such a possibility could even mean.

The spark was a statement by the Committee on Minority Participation in Mathematics of the Mathematical Association of America (MAA), responding to actions the Trump administration has taken to disparage and de-fund the academic discipline of Critical Race Theory. The committee's statement pointed out that the attack on Critical Race Theory has a potentially chilling effect on all academic disciplines, including mathematics:

As mathematicians, we notice patterns - this is something we are all trained to do. We bring these Executive actions to our community’s attention for several reasons: we see the pattern of science being ignored and the pattern of violence against our colleagues that give voice to race and racism. We need to fight against these patterns. As educators, we also recognize the threatening pattern of banning education and withdrawing education funding to suppress conversations on race and racism, extending from elementary to postsecondary institutions to the workplace and research spheres.

The MAA tweeted out this statement, highlighting the following quote:

The resulting conversation appears to have focused in particular on the idea that "mathematics is created by humans and therefore inherently carries human biases", largely disregarding the rest of the committee's statement. One biologist in particular was so provoked by this statement that she felt it should be disqualifying for the whole field:

First off, let me say clearly: Dr. Heying's tweet is reprehensible.  No one should be dictating who does or does not have business in math, let alone someone from outside the field.  She also seems completely ignorant of the centuries-old debate on whether mathematics is discovered or invented (most mathematicians feel it's some combination of both). And while I do not know if her comment was intended to be racist, the fact that she is saying the Committee on Minority Participation in Mathematics has "no business in math" is absolutely racist in its effect. She should apologize immediately, but instead she is doubling down.

Leaving aside Dr. Heying's offensive remark, the statement itself raises some interesting questions.  What could it mean for mathematics to "carry human biases"? I think part of the issue here is that the word "mathematics" could be understood in several different ways:

  1. Mathematics as a collection of relationships (discovered or not) among numbers and other mathematical objects,
  2. Mathematics as the human body of knowledge regarding these relationships,
  3. Mathematics as a discipline and profession devoted to understanding and describing these relationships

For an example of mathematics in the first sense, let's take the theorem that there are infinitely many primes among the natural numbers. This is one of the most famous results in elementary number theory, with a number of beautiful proofs dating back to Euclid in ancient Greece. Within the universe of math, such a statement is not contestable. This is the point--and the beauty--of proofs in mathematics: they reveal truths that are universal, regardless of who discovers or uses them.
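Euclid's argument is concrete enough to run on a computer: take any finite list of primes, multiply them together, and add one; the result must have a prime factor that is not on the list. Here is a minimal Python sketch of that construction (the function names are mine, chosen for illustration):

```python
def smallest_prime_factor(n):
    """Return the smallest prime factor of n >= 2, by trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime

def prime_outside(primes):
    """Euclid's construction: produce a prime not in the given finite list."""
    product_plus_one = 1
    for p in primes:
        product_plus_one *= p
    product_plus_one += 1
    # Any prime factor of (product + 1) leaves remainder 1 when divided
    # by each prime in the list, so it cannot appear in the list.
    return smallest_prime_factor(product_plus_one)

print(prime_outside([2, 3, 5]))  # 31, since 2*3*5 + 1 = 31 is itself prime
```

No matter which finite list you start with, the construction hands you a prime you were missing, so no finite list can contain them all.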

Many of those responding to the committee's statement assumed that they were using "mathematics" in this first sense, as if theorems like the infinitude of primes could carry human bias. But I see this as an exceedingly ungenerous interpretation, with no support in the rest of their statement. Indeed, the people leaping to this interpretation seem to be all too eager to paint the committee's statement in the worst possible light, as if any statement calling for greater diversity and inclusion in mathematics is automatically considered suspect.

If "mathematics" is understood in the third sense, as a discipline and profession, then absolutely it can carry human bias. Ronald Fisher, who pioneered the study of statistics, was a notorious racist and eugenicist, and he was not alone in these views. Moreover, until recent decades, women and minority groups were systematically excluded from studying and practicing higher mathematics. Because of this systematic exclusion, most of the "great figures" of Western mathematics are white men, and this perception that "math is for white men" becomes self-reinforcing. This is not merely a historical legacy: nonwhite mathematicians continue to face bias and isolation, and in some cases harassment.

What about the second sense, mathematics as a human body of knowledge? Could this carry bias? Here I think the question is much more nuanced, but the example of negative numbers is instructive. They first appeared in the Han Dynasty of ancient China (202 BC - 220 AD). It has been suggested that the idea of duality in Chinese philosophy made negative numbers more intuitive to Chinese mathematicians. Indian mathematicians in the 7th century AD were using negative numbers to represent debts.  Yet in Western mathematics, negative numbers were dismissed as absurd and nonsensical until the advent of calculus in the 17th and 18th centuries.

I like the example of negatives, because it shows that what gets accepted as legitimate mathematics is indeed a social construct. Cultural biases can come into play when determining which ideas gain legitimacy, even in the abstract world of pure mathematics.  Relationships among numbers are not biased, but our process of understanding and discovering these relationships may be. And I agree with the committee's statement that understanding how human biases influence our thought--even within the ivory tower of mathematics--is key to achieving greater inclusion and equity for all people.

How do we mourn human civilization?

2019 was a lot of things. But for what I want to say here, 2019 was the year that I realized we might not save ourselves.

Just on its face, 2019 was a terrible year if you care about climate change. Arctic permafrost may have reached a tipping point. Antarctic ice melted at record pace. The Amazon burned. Meanwhile, carbon emissions continued to rise, and COP-25, the major UN forum for international climate policy, ended with essentially no progress.

But for me personally, 2019 was the year I allowed myself to consider that we might not work it out.  Not only will we not stop the first effects of climate change, we might not even stop any of them.  Faced with an existential threat to our entire civilization, we might just drive ourselves right off the fucking cliff.

Surely we will do something to stop it. Consciously or not, this thought had always been in the back of my head when thinking about climate change. Yes, the science looks bleak, the politics look intractable, and some level of crisis is probably unavoidable. But surely, at some point, human civilization will come together, face the danger ahead, and do something to stop it.

This year, I allowed myself to pluck this voice from the back of my head, hold it to the light, and examine it. Will we do something to stop it?

Well, what does our track record show? Climate change was identified as a severe global threat by NASA scientist James Hansen in his 1988 congressional testimony. Since then, we've had 31 years of scientific research, policy debates, and international agreements. Every international scientific and policy-making body recognizes climate change as an urgent and existential threat. And yet emissions have continued to rise, essentially without pause.


I'm an optimist at heart. I always try to look at things in the best possible light. But at this point, it's starting to look like, if we were going to save ourselves, we would have done it by now.

Surely we will stop it. We might not stop it. What if we don't stop it?

What happens if we don't take drastic action? Here is where I think that the scientific and journalistic institutions have failed to properly communicate the danger. Because the headline numbers—3 or 4 degrees Celsius, 2 meter sea-level rise by 2100—might not sound that bad at first. Why, exactly, are these numbers so scary?

First of all, with a 4°C temperature rise, 74% of the Earth's population would experience deadly heat waves every year. Multi-breadbasket failures are possible, leading to mass famine. As much as 5% of the world's population could be flooded every year by 2100. These and other catastrophes could lead to as many as a billion climate refugees by 2050.

What would this level of disruption mean for human civilization? With one tenth of the world's population displaced, can nations still maintain their borders or their identities? Can governments survive if they can't provide food or freshwater to their people? When "natural disasters" turn into commonplace occurrences, will the collective fiction known as "money" retain its value?

Questions like these defy quantitative predictions, but based on these and other considerations, researchers have described increases of 5°C or more as posing "existential threats to the majority of the population".  And while it is probably still possible to avoid this level of warming, doing so would require unprecedented economic transitions and global cooperation—and our track record so far does not give much reason for optimism.

We might not stop it.

2019 is the year I started to mourn. The year I let myself consider that the civilization we have right now might be—likely will be—the best we will ever get. That our current society—for all its wonders and flaws—could be revealed as a fossil-fueled mirage that collapses before we ever build something better to replace it. That, even if homo sapiens survives as a species, what we know of as human civilization could go up in smoke, fire, and water.

Of course, the destruction will not be spread evenly, nor fairly. The countries most vulnerable to climate change, such as Bangladesh and Haiti, are among those least responsible for creating it. Still, there is reason to doubt that the political and economic systems of the West will survive extreme climate change. Already, mass migration from the Middle East and Central America (driven in part by climate change) has fueled the rise of the Far Right in Europe, Brexit in the UK, and the election of Donald Trump in the US. Currently, the US is holding thousands of these migrants in concentration camps, forcibly separated from their families. What will happen when migrants swell to 10% of the world's population, compounded with greatly increased fires, flooding, hurricanes, epidemics, and food shortages? How much strain, exactly, can our political and economic institutions take?

What had you pictured for yourself and your loved ones in 2050? I had hoped to be rounding out my career as a mathematician, with a satisfying record of scientific accomplishment and well-taught students behind me. I had hoped to be watching my son thrive in the world with at least some of the advantages that had helped me succeed. But now I'm letting myself ask, what if my college, the university system, the country, the entire economy, are gone by then? What if all we leave the next generation is a command to survive, survive at all costs?

I am not telling you to despair. Despair saps the will to act, and there is too much work to be done.  The difference between 2°C vs 3°C, or between 3°C vs 4°C, is so great that we must be out in the streets causing disruption, fighting for our futures and our lives. We must also join with each other to become resilient, to form networks of preparedness, to help the most vulnerable, and to strategize how we will adapt to whatever change will come. I am not telling you that we cannot make a difference. I believe we can and we will, and I invite you to join me, and help me, in this struggle.

But I also invite you to mourn. We can't truly grasp the urgency for action unless we emotionally grapple with the consequences of inaction. What, in human civilization, will you miss most? What will you wish we had fought harder to preserve? What imagined future will you be most heartbroken to discard?

I wish you a joyous 2020, but also a mournful one. We must be clear-eyed about what we will lose, if we are to fight to preserve what we can.

Andi's Factor Game

Two weeks ago, my friend Andi messaged me about a mathematical game she had invented.  She was so excited to share it.  She had coded up a "proof of concept" version in html, and had come up with a mathematical proof about its winning strategies.  She was enthusiastic about its potential to make math fun even for non-math people, and full of ideas for next steps.

Then two days ago, I learned that Andi died.  It seems that this game is one of the last things she put into the world.  Although I didn't know her as well as I might have, her excitement about sharing this game seems to typify the passion and determination with which she approached all her projects.  Andi was an uncompromising advocate for social justice with a poetic eye and a keen sense of humor. Also, she was a transgender woman; I say this because visibility matters and because I believe she would not want this aspect of her identity to be erased.

The best way I can personally think of to honor Andi's memory is to share her final game with the world.  Like any well-designed game, it is easy to play but difficult to master. The rules are deceptively simple:

  1. A large whole number, called the Magic Number, is specified and known to both players (it could be randomly generated by computer, for example). All factors of the Magic Number are listed out, including 1 and the number itself.
  2. Two players take turns choosing factors of the Magic Number. Every time one player chooses a factor, that factor and all multiples of it are crossed out. Once a factor has been crossed out, neither player can choose it.
  3. Whoever chooses 1 loses. In other words, the goal is to eliminate the factors in such a way that the other player is forced to choose 1.

For example, let's say the Magic Number is 12.  The factors of 12 are 1, 2, 3, 4, 6, and 12.  These are all the numbers that can be chosen.

Say player 1 chooses 12 itself.  Then 12 is eliminated, so the "board" looks like this:

1 2 3 4 6 ~~12~~

Now player 2 chooses 3.  So 3 and all multiples of 3 are crossed out:

1 2 ~~3~~ 4 ~~6~~ ~~12~~

Next player 1 chooses 2.  So 2 and all multiples of 2 are crossed out:

1 ~~2~~ ~~3~~ ~~4~~ ~~6~~ ~~12~~

Only the number 1 is left. Player 2 is forced to choose 1, so Player 1 wins.
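For readers who like to tinker, the example above can be replayed in a few lines of Python (this is a sketch of my own, not Andi's code):

```python
def factors(n):
    """All factors of n, including 1 and n itself."""
    return [d for d in range(1, n + 1) if n % d == 0]

def choose(board, move):
    """Cross out the chosen factor and all of its multiples."""
    return [f for f in board if f % move != 0]

board = factors(12)
print(board)              # [1, 2, 3, 4, 6, 12]
board = choose(board, 12)  # Player 1 picks 12
board = choose(board, 3)   # Player 2 picks 3, crossing out 3 and 6
board = choose(board, 2)   # Player 1 picks 2, crossing out 2 and 4
print(board)              # [1] -- Player 2 must choose 1 and loses
```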

To visualize what's happening in this game, it helps to draw a diagram like this:

Every time a player picks a number, that number and all numbers downstream of it are eliminated. (Here "downstream" refers to the direction the arrows are pointing, which is visually upwards.)  So, if 2 is picked, that eliminates 2, 4, 6, and 12.

To mathematicians, a diagram like this is called a lattice. The game-play for a given Magic Number is determined by the structure of the lattice, which in turn is determined by the Magic Number's prime factorization, as you can see in this lattice for 120.
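If you want to draw one of these lattices yourself, the arrows can be generated mechanically: an arrow runs from one divisor to another exactly when their ratio is prime. A small Python sketch of my own (not part of Andi's code):

```python
def factors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def lattice_edges(n):
    """Arrows a -> b of the divisor lattice of n: b is a multiple
    of a, and b / a is prime (so b sits directly above a)."""
    def is_prime(k):
        return k > 1 and all(k % d for d in range(2, int(k**0.5) + 1))
    divs = factors(n)
    return [(a, b) for a in divs for b in divs
            if b % a == 0 and is_prime(b // a)]

print(lattice_edges(12))
# [(1, 2), (1, 3), (2, 4), (2, 6), (3, 6), (4, 12), (6, 12)]
```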

But enough theory, go ahead and play!  Here's a link to the "proof of concept" version that Andi coded up.  You play against the computer, who goes first.  To try again with a different Magic Number, click "New Game". You can put in whatever Magic Number you choose, or have the computer randomly pick one.

Did you win? No, you didn't. But don't feel bad: Andi proved that, with optimal play, Player 1 will always win the game.

It's a proof by contradiction.  Assume, for the sake of contradiction, that for some particular Magic Number, Player 2 has a winning strategy. In other words, Player 2 has a winning response to any first move that Player 1 might make.  In particular, if Player 1 chooses the Magic Number itself, Player 2 must be able to choose some other number—call it n—which puts them in a winning position.  But then Player 1 could have chosen n as their first move, which would have put Player 1 in this same winning position.  This contradicts our assumption that Player 2 has a winning response to any first move of Player 1.  Therefore, by contradiction, Player 1 must win if both sides play perfectly.

The interesting thing about this proof is that it's non-constructive. It says that there exists a winning strategy for Player 1, but gives no indication of what this winning strategy might be!

Andi designed her code to search through all possible game outcomes for a winning one. While this guarantees that the computer always wins, it doesn't give much insight into how one ought to play, or why certain strategies might work better than others.
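A search of this kind can be sketched as a memoized win/lose recursion over board states; the following is my own reconstruction of the idea, not Andi's actual implementation:

```python
from functools import lru_cache

def factors(n):
    """All factors of n, as a tuple (hashable, so positions can be cached)."""
    return tuple(d for d in range(1, n + 1) if n % d == 0)

@lru_cache(maxsize=None)
def wins(board):
    """True if the player to move can force a win from this board.
    Choosing 1 loses, so a position is winning if some move other
    than 1 leaves the opponent in a losing position."""
    if board == (1,):
        return False  # the only move is 1, which loses
    return any(not wins(tuple(f for f in board if f % move != 0))
               for move in board if move != 1)

def winning_first_moves(n):
    """First moves that put Player 1 in a winning position."""
    board = factors(n)
    return [m for m in board if m != 1
            and not wins(tuple(f for f in board if f % m != 0))]

print(wins(factors(12)))        # True: Player 1 wins with best play
print(winning_first_moves(12))  # [12]
```

Interestingly, for Magic Number 12 the only winning first move turns out to be 12 itself, which mirrors the strategy-stealing argument: the first move that's always safe to analyze is taking the Magic Number.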

There are many interesting open questions here: Can the winning strategy be described concisely? Is there a polynomial-time algorithm to find the winning strategy for a given Magic Number? And can the game be generalized to other kinds of lattices?

Andi's final gift to the world is a good one.  Her code is available on GitHub; please use it and build on it if you are inspired. I hope she is remembered for this and for everything else she put out into the world.

I'll close with this mathematical meditation, which was one of Andi's last Facebook posts:

The set of rational numbers is continuous, in the sense that between any two distinct rational numbers, there exist more distinct rational numbers. If you only look at the rationals, you'll miss uncountably many reals. If you insist on defining reals in terms of rationals, you'll need to take rationals to their limits.

Rest in Power.

You can win the Electoral College with 22% of the vote

Donald Trump is poised to become the next US president, despite the fact that Hillary Clinton received over a million more votes than him (and counting). This would mark the second time in sixteen years, and either the fourth or fifth time in history (depending on how you count) that the Electoral College winner has lost the popular vote.

How is it possible to win the Electoral College but lose the popular vote?  The answer lies in a combination of two factors.  The first is the winner-take-all nature of the state contests. All states except for Maine and Nebraska deliver all their electors to the candidate with the plurality of votes.  This means that if you win by slim margins in a sufficient set of states, you can lose badly in all other states and still secure an Electoral College victory.

The second factor is the disproportionate representation of small states.  Each state has a number of electors equal to its total number of congresspeople (senators plus representatives).  The number of representatives is roughly proportional to population size, but adding in the two senators per state gives the smaller states more per-capita representation.  For example, Wyoming has approximately 7 electors per million eligible voters, while California has 2 per million.  So a Wyomingite has over three times the Electoral College representation of a Californian (calculations here).

So if you want to become president without winning the most votes, your strategy is to aim for narrow victories in a set of smaller states that add up to 270, while ceding the other states to your opponent.  This raises the question: what is the smallest popular vote percentage one could receive while still winning the presidency?

The answer—according to my best calculations—is 22%.  You could capture the Electoral College, and become President of the United States, with only 22% of the vote.

I got this number by starting with the states with the most electors per eligible voter (Wyoming, Vermont, Delaware, Alaska, ...).  For each of these, I gave 50.1% of the vote to "Team Red", and the remaining 49.9% to "Team Blue".  I continued down the list of states with the most electors per capita, giving 50.1% to Team Red, until the total electoral votes exceeded the 270 needed to win.  I then gave Team Blue 100% of the vote for all other states.  It turns out Team Red didn't need New Jersey, so I threw that over to Team Blue as well.  The result: Team Blue captures 77.7% of the popular vote, but Team Red wins the Electoral College vote 270 to 268.  You can check my math in this spreadsheet.  My answer agrees with a similar calculation done in 2011.
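To make the procedure concrete, here is the same greedy calculation in Python, run on a tiny hypothetical map.  The state names and figures below are invented for illustration only, not real data; with the real spreadsheet numbers, the same loop reproduces the 22.3% figure.

```python
# Greedy Electoral College calculation on a made-up six-state map.
states = [
    # (name, electoral votes, eligible voters in millions) -- hypothetical
    ("Small A", 4, 0.5),
    ("Small B", 4, 0.6),
    ("Small C", 4, 0.7),
    ("Small D", 4, 0.8),
    ("Small E", 4, 0.9),
    ("Big F", 18, 16.0),
]

needed = sum(ev for _, ev, _ in states) // 2 + 1  # electors for a majority

# Work through states in order of electors per eligible voter,
# most over-represented first.
states.sort(key=lambda s: s[1] / s[2], reverse=True)

red_electors, red_votes, total_votes = 0, 0.0, 0.0
for name, ev, voters in states:
    total_votes += voters
    if red_electors < needed:
        red_electors += ev          # Team Red wins this state 50.1% to 49.9%
        red_votes += 0.501 * voters
    # otherwise Team Blue takes 100% of this state's vote

print(f"Team Red: {red_electors} electors with "
      f"{100 * red_votes / total_votes:.1f}% of the popular vote")
```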

Figure 1: One can capture the Electoral College with only 22.3% of the vote, by receiving 50.1% of the vote in the red states above and 0% in the blue states.

It makes sense that the 22.3% figure is close to one quarter.  If all states were equal in both population and electoral votes, one could tie the Electoral College with slightly more than one quarter of the vote, by winning slightly more than half the vote in half the states, while losing the others completely (see below).  The fact that one can win the US electoral map with less than 25% is due to the disproportionate representation of small states.

Figure 2: A hypothetical electoral map of four states with equal populations and electoral votes.  Pie charts show the popular votes in each state.  One can tie the Electoral College with slightly more than 25% of the vote, by winning narrow majorities in two states and receiving no votes in the other two.
The above calculations assume that there are no third party candidates, and that voter turnout is the same in each state.  Dropping these assumptions can lead to even more lopsided possibilities.  For instance, with one third-party candidate, we only need to give Team Red 33.4% in the red states of Figure 1, while Team Blue and the third party each get 33.3%.  This leads to an Electoral College win for Team Red with 14.9% of the vote.  Alternatively, suppose that the turnout in the red states of Figure 1 is half that of the blue states.  Then Team Red wins with 14.3% of the vote.
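These variant percentages are easy to check with back-of-envelope arithmetic, assuming (per Figure 1) that the red states cast whatever share of the national vote makes 50.1% of them equal 22.3% overall:

```python
# Back-of-envelope check of the third-party and turnout scenarios above.
s = 0.223 / 0.501            # red states' share of the national vote

# Three-way race: Team Red takes 33.4% in the red states, 0% elsewhere.
three_way = 0.334 * s
print(f"{100 * three_way:.1f}%")     # 14.9%

# Same margins, but red-state turnout is half that of the blue states.
red_ballots = (s / 2) / (s / 2 + (1 - s))  # red states' share of ballots cast
half_turnout = 0.501 * red_ballots
print(f"{100 * half_turnout:.1f}%")  # 14.3%
```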

Of course, possible is not the same as likely.  It would be very unlikely, for instance, for a candidate to receive 50.1% of the vote in Oklahoma but 0% in Texas.  What does not seem unlikely, on the other hand, is that the Electoral College winner loses the popular vote.  This has happened in at least 4 out of 58 elections, or 6.9%, which is not that rare of an occurrence.  What we need to decide, as a country, is whether we support an electoral system that does not always align with the majority of votes.

Information and Structure in Complex Systems

Eight years ago, I had finished my first year of graduate school in math, and I was at a loss as to what to research.  My original focus, differential geometry, was a beautiful subject to learn about, but the open research questions were too abstract and technical to sustain my interest.  I wanted something more relevant to the real world, something I could talk to people about.

Looking for new ideas, I took a course in complex systems, run by the New England Complex Systems Institute.  The director, Yaneer Bar-Yam, had pioneered a new way of representing structure in systems.  I was fascinated by this idea but also puzzled. As a mathematician, I wanted to understand the basis of this idea.  What assumptions does it rely on?  How are its basic concepts defined?

My attempt to answer these questions turned into one of the longest and most demanding projects I've worked on.  After an eight-year collaboration with Yaneer and my friend Blake Stacey, we finally have a preliminary manuscript up on the web.  It is currently under review for publication.  And to my pleasant surprise, we got a nice write-up in ScienceNews.

So what is this project all about?  The idea is that we're using information theory (which I've written about previously) as a tool to represent and quantify the structure of a system.

Before I explain what any of this means, let's consider some motivating examples.  Here's a system (call it system A):
You wouldn't really call this a complex system.  It has only one component (a ball) that bounces around in a fairly simple way.  Since there's not much to see here, let's turn to system B:

Source: Wikimedia Commons
This system has many particles, which bounce around and bump into each other.  In one sense, this system is quite complex: it is very difficult to describe or predict its exact state at any given time.  But looking beyond the level of individual particles reveals a kind of simplicity: since the particles behave independently of each other, overall measures such as the average particle velocity or the rate of collisions are relatively stable.  In other words, the individual complexity "averages out", so that on the whole, the system behaves quite simply.

Contrast that to the behavior of system C:
Source: A Bird Ballet by Niels Castillon
This is a murmuration of starlings.  The starlings fly in a semi-coordinated, semi-independent way, creating intricate shapes and patterns that you would never observe in systems A and B.  This is a prototypical "complex system"—the kind that has intrigued researchers since the 70's. 

It is intuitively clear that systems A, B, and C have entirely different kinds of structure.  But it is surprisingly difficult to capture this intuition mathematically.   What is the essential mathematical property of system C that can allow us to differentiate it from A and B?

We try to answer this question using information theory.  Information theory was first invented by mathematician Claude Shannon in 1948 to address problems of long-distance communication (e.g. by telegraph) when some signals may be lost along the way.  Shannon's ideas are still used, for example, in the development of cell phone networks.  But they also have found applications in physics, computer science, statistics, and complex systems.

To explain the concept of information, let's look at a system consisting of a single blinking light:
This is one of the simplest systems you could possibly imagine.  In fact, we can quantify this simplicity. To describe the state of the system at any given time, you only have to answer one yes/no question: "Is the light on?"

The amount of information conveyed in one yes/no question is called one bit.  "Bit" is short for "binary digit", and is the same unit used to quantify computer memory.  In other words, the state of this light can be described in one binary digit, 0 for OFF and 1 for ON.
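This "one bit" figure is Shannon's entropy formula at work: a light that is ON with probability p requires H(p) = -p*log2(p) - (1-p)*log2(1-p) bits to describe, which is exactly one bit when ON and OFF are equally likely. A quick sketch:

```python
from math import log2

def entropy(p):
    """Shannon entropy (in bits) of a two-state system
    that is ON with probability p."""
    if p in (0.0, 1.0):
        return 0.0  # no uncertainty means no information is needed
    return -p * log2(p) - (1 - p) * log2(1 - p)

print(entropy(0.5))  # 1.0 -- one yes/no question
print(entropy(1.0))  # 0.0 -- the light is always on; nothing to ask
```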

Now let's add another light:
Let's say these lights are statistically independent.  This means that knowing the state of one doesn't tell you anything about the other.  In this case, to identify the state of the system requires two bits of information—that is, two yes/no questions, one for the first light and one for the second.  We can depict this situation with a diagram like this:

The circles are drawn separately, since information describing one of them tells us nothing about what the other is doing. We could say that each of these bits applies at "scale one", since each describes only a single light bulb. 

Here are two lights that behave in a completely different fashion:
Note that the two light bulbs are always either both on or both off.  Thus, even though there are two components, the system can still be described by a single bit of information—a single yes/no question.  The answer to this question (e.g. "are they on?") applies to both bulbs at once.  The "information diagram" for this system looks like two completely overlapping circles:
We could say that the one bit of information describing this system applies at "scale two", since it describes two light bulbs at once.

A more interesting case occurs between these two extremes:
It's hard to see it, but I've animated these bulbs to be in the same state 3/4 of the time, and the opposite state 1/4 of the time.  If I told you the state of the first bulb, you wouldn't completely know the state of the second, but you could make an educated guess.  Specifically, if I told you the first bulb is ON, you could guess that the second is ON too, and you'd be right 75% of the time.  So there is information overlap: Information about the first bulb gives partial information about the second.  In fact, we can use Shannon's formulas to actually calculate how much overlap there is: approximately 0.19 bits.  So if you know the state of the first bulb (1 bit), then you also know 0.19 bits about the second bulb—not enough to know its state with certainty, but enough to make a guess that is 75% accurate.  The overlapping information can be depicted like this:
As you can see, 0.19 bits of information apply to both light bulbs at once (scale two), while the remaining 0.81+0.81=1.62 bits apply only to a single bulb (scale one).
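You can verify the 0.19-bit overlap yourself with Shannon's formulas, using the joint distribution described above (same state 3/4 of the time, each bulb ON half the time overall):

```python
from math import log2

# Joint distribution of the two bulbs.
joint = {("ON", "ON"): 3/8, ("OFF", "OFF"): 3/8,
         ("ON", "OFF"): 1/8, ("OFF", "ON"): 1/8}

def H(dist):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

H_single = H({"ON": 1/2, "OFF": 1/2})  # 1 bit for each bulb alone
H_joint = H(joint)
overlap = H_single + H_single - H_joint  # the shared (mutual) information

print(f"overlap: {overlap:.2f} bits")                  # 0.19
print(f"unique per bulb: {H_single - overlap:.2f} bits")  # 0.81
```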

In principle, these "information diagrams" (we call them dependency diagrams) exist for any system.  Highly ordered systems, like system A above, have lots of overlapping, large-scale information.  Highly disordered systems like B have mostly small-scale, non-overlapping information.  The systems that are most interesting to complex-systems researchers, like the starlings in example C, have lots of partial overlaps, with information distributed over a wide range of scales. 

And that's the basic premise of our theory of structure.  The structure of a system is captured in the overlaps of information describing different components, and the way information is distributed across scales.  While we take these concepts quite a bit further in our paper, the central idea is right here in these blinking lights. 

Thanks for reading!

The time the cops pulled their guns on me

This post is not about science.

I'm writing this because the horrific news out of Ferguson, Missouri—the killing of an unarmed man and the subsequent assault on the populace and media—has been bringing back memories of an experience I had with the police ten years ago in Chicago.

I should be clear about why I'm choosing to share this. It's not because I think my own problems are particularly deserving of attention in comparison to the violence done to Michael Brown, Eric Garner, and other recent victims of police violence.  In fact, what I experienced was relatively tame in comparison.  But that's kind of the point. This incident instantly brought my white privilege into sharp focus, in a way that has stuck with me ever since.  Issues like racial profiling can be somewhat abstract for white people.  I hope my story can open a new entry point into these issues for those who rarely experience them directly.

After college, I joined Teach for America.  I was assigned to a high school on the West side of Chicago, where I taught math and coached the chess team.  The school and the surrounding neighborhood were nearly 100% black.  (Yep, Chicago is segregated.)  It was also a rough neighborhood in the sense that drug dealers and prostitutes operated openly within a block of the high school, and students talked about gang warfare the way those at other schools might gossip about the Homecoming dance.  I was not a great teacher in that environment, but I felt a strong bond with the students—especially with those on the chess team, who would squeeze into my tiny Civic every month or so to face off against other teams, often from much more affluent suburban schools.

One Saturday, we got back to the West side around 10pm, and I decided to give each of the team members a ride home.  After I dropped the last student off, I got back into the car to head home. But as I tried to start out, there was another car right next to me, blocking me into my parking space.  And the driver was looking at me.

I didn't know what they wanted.  Maybe they wanted my parking spot.  To try to get out of their way, I pulled forward a bit.  But they moved in parallel, blocking me in again.  We repeated this dance two or three times.  They motioned to me to roll down my window.  But seeing as I had no idea who they were, I thought this was probably a bad idea and kept my window shut.

Then the driver and passenger got out, walked in front of my car, pulled guns out, and pointed them at me.

As a child, I frequently had nightmares in which "bad guys" would shoot me with guns.  I started to feel like I had slid into one of those nightmares.  It didn't feel like reality—it felt like a dream that was happening to me.  I thought maybe I was mistakenly mixed up in a criminal conspiracy, and they were going to kidnap me or worse.

They shouted "PUT THE FUCKING CAR IN PARK!"  I complied.  Then one of them yanked open my car door and put his gun to my head (literally, it was touching my temple).  He shouted "TAKE YOUR FUCKING SEATBELT OFF", which I did as well as I could given how much I was shaking.  He then pulled me out, put me in handcuffs, and bent me over the trunk of their car.

It was at this point that I realized I was probably dealing with the police, rather than some criminal organization.  I told them I didn't know they were police.  One of them responded "Who else would be going the wrong way down a motherfucking one way street?"

Ummm,  I guess this chain of logic might have occurred to me if I wasn't scared shitless by the fact that strangers were blocking me in and pointing guns at me. 

The other one, who still had his gun to my head, said "We don't want to hurt you, we just want to know your source!" I had no idea what they were talking about.  I told them that I was a math teacher at the local high school.  His response was "Oh yeah?  Well how long have you been doing heroin?" They continued to interrogate me and searched my pockets as I told them about the chess team, the tournament, and the student I had just dropped off. 

After a minute or so, it became clear to them that I was not, in fact, a heroin user.  It was remarkable how quickly I shifted in their view from "junkie" to "white do-gooder".  Within sixty seconds, their tone of voice changed, they took me out of cuffs, and they started explaining why they had taken the approach that they did.

Their explanation went like this: The corner where I had dropped off this student was a well-known heroin spot.  White people are so rare in this neighborhood that those who are around after dark are usually there for the drugs.  Transactions often occur in the buyer's car, with the buyer driving the dealer around the block as the deal is made.  So I fit the profile of a heroin buyer.  When I failed to stop for them, they escalated by getting out and drawing guns.  When I continued to creep my car forward towards them (unintentionally, since I had no idea what I was doing at that point), they felt they had to escalate further by opening the door and putting a gun to my head.

It almost makes sense, except that they never identified themselves as cops.  They were in an unmarked car and never bothered to show me a badge.  Because they read me as a heroin junkie, they assumed I would be familiar with the routine of being pulled over by an unmarked car.  Just to emphasize the point: They were quicker to pull their guns on me than to show me any kind of police identification.

The next week, I told the chess team what happened during practice.  I'll never forget what one of them said to me next: "Mr. Allen, I'm sorry you had to go through that, but you know what that makes you?  A black man.  We go through that shit every day."  He then told me about a time the cops made him strip to his underwear and stand outside in the middle of winter for hours, cuffed to a police car, before they released him without charge.  All of my students had stories.  They all had stories of the cops treating them as if their time, their dignity, and even their lives were worthless.

I did end up filing a complaint with the Chicago Police Department, but I was unable to ID the officers.  I had (and still have) a clear mental picture of one of them, but none of the photos they showed me matched him.  So the case was dropped.

What do I take from this experience?  For one thing, some very real anxiety.  It still haunts me sometimes when I'm trying to sleep, and I was shaking when typing this out.  But I also try to accept it as an alternate-reality window into something I would never have otherwise experienced.  For a brief moment in time, the usual dynamics were reversed: I was profiled for being a white person in an all-black neighborhood.  Because of the color of my skin and the block I was on, the cops read me as a criminal and treated me like one.  But only for about a minute.  Once they realized I was not a junkie, my white privilege reasserted itself and suddenly they were there to serve rather than threaten me. 

As a white person with financial and educational privilege to boot, I can be reasonably certain that I will not experience such an incident again, unless I choose to return to a situation like urban teaching in which the usual rules become twisted.  But imagine (and I'm talking to white folks here) if you had no choice.  Imagine if you could never tell whether the cops—the people who are supposed to protect you—would arbitrarily read you as a criminal and decide to threaten your life before even explaining who they are or what they want.  Imagine how that might change your concept of safety, the way you present yourself outside, or even your plans for any given evening.  That is the reality that my chess team described to me.  It is the reality that underlies the headline-grabbing incidents like Michael Brown, Eric Garner, or Trayvon Martin.  It is the reality that millions of people live every day.

Brian Arthur's vision of Complexity Economics

My friend Daria Roithmayr alerted me to a working paper by Brian Arthur laying out a vision for a new approach to studying economics.  Brian Arthur is one of the pioneers of complex systems thought, and has devoted his life to understanding what really happens in our economy, and why this behavior is so different from what classical economics predicts.

Classical economics is a theory based on the concept of equilibrium.  Equilibrium, in economics, is a state in which everyone is doing the best thing they could possibly do, relative to what everyone else is doing.  And since everyone is doing the best possible thing, no one has incentive to change.  So everything stays the same.  Forever.

Okay, that doesn't sound much like our actual economy.  So why is the equilibrium concept so central to economics?  The answer is that equilibria can be calculated.  If you make certain simplifying assumptions about how economic actors behave, you can prove that exactly one equilibrium exists, and you can calculate exactly what every actor is doing in this equilibrium.  This allows economics to make predictions.
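To make the "equilibria can be calculated" point concrete, here is a minimal sketch using hypothetical linear supply and demand curves (the numbers are made up for illustration): the equilibrium price is simply the price at which the quantity demanded equals the quantity supplied, and with linear curves it can be solved for exactly.

```python
# Hypothetical linear curves: demand falls with price, supply rises with it.
def demand(price):
    return 100 - 2 * price  # quantity demanded at a given price

def supply(price):
    return 10 + 4 * price   # quantity supplied at a given price

# Equilibrium: demand(p) == supply(p), i.e. 100 - 2p = 10 + 4p, so p = 90/6.
equilibrium_price = (100 - 10) / (2 + 4)
print(equilibrium_price)          # 15.0
print(demand(equilibrium_price))  # 70.0, which equals supply at that price
```

Once that price is found, the model predicts that nothing changes: every buyer and seller is already doing the best they can, which is exactly the static picture Arthur objects to.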

These predictions are useful in explaining many broad phenomena—for example, the relationship between supply, demand, and price.  But they exclude any possibility of movement or change, and therefore exclude what is really interesting (and lucrative!) about the economy.  Arthur explains it this way:
We could similarly say that in an ocean under the undeniable force of gravity an approximately equilibrium sea level has first-order validity. And this is certainly true. But, as with markets, in the ocean the interesting things happen not at the equilibrium sea level which is seldom realized, they happen on the surface where ever-present disturbances cause further disturbances. That, after all, is where the boats are.
T-Pain understands the need for nonequilibrium theories.


The vision of economics that Arthur lays out is based not on equilibrium, but on computation:
A better way forward is to observe that in the economy, current circumstances form the conditions that will determine what comes next. The economy is a system whose elements are constantly updating their behavior based on the present situation. To state this in another way, formally, we can say that the economy is an ongoing computation—a vast, distributed, massively parallel, stochastic one. Viewed this way, the economy becomes a system that evolves procedurally in a series of events; it becomes algorithmic.
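A classic illustration of this "economy as ongoing computation" view is Arthur's own El Farol bar problem. The sketch below is my own toy version, with illustrative parameters I chose (100 agents, a comfort threshold of 60, three randomized forecasting rules per agent): each week every agent predicts attendance with whichever of its rules has been most accurate so far, and attends only if it expects the bar to be uncrowded. The agents' choices produce the very attendance figures they are trying to forecast, so the system keeps updating itself rather than settling into a fixed point.

```python
import random

random.seed(0)
N, THRESHOLD, WEEKS, RULES_PER_AGENT = 100, 60, 52, 3

def make_rule():
    """Return a simple randomized attendance predictor (illustrative choices)."""
    kind = random.randrange(3)
    if kind == 0:
        offset = random.uniform(-15, 15)
        return lambda h: h[-1] + offset                # last week plus a bias
    if kind == 1:
        k = random.randrange(2, 6)
        return lambda h: sum(h[-k:]) / len(h[-k:])     # mean of recent weeks
    w = random.uniform(0.0, 2.0)
    return lambda h: w * h[-1] + (1 - w) * h[-2]       # extrapolate a trend

agents = [{"rules": [make_rule() for _ in range(RULES_PER_AGENT)],
           "errors": [0.0] * RULES_PER_AGENT} for _ in range(N)]

history = [50.0, 50.0]  # arbitrary starting weeks
for week in range(WEEKS):
    attendance = 0
    for a in agents:
        best = min(range(RULES_PER_AGENT), key=lambda i: a["errors"][i])
        if a["rules"][best](history) < THRESHOLD:
            attendance += 1  # the agent expects room, so it goes
    for a in agents:         # score every rule against what actually happened
        for i, rule in enumerate(a["rules"]):
            a["errors"][i] += abs(rule(history) - attendance)
    history.append(float(attendance))

print([int(x) for x in history[2:]])  # attendance shifts as agents adapt
```

The point of the exercise is Arthur's: there is no equation to solve here, only a procedure to run. What the economy "does" is the trace of the computation itself.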
The part of this essay that was most challenging to me personally was where he discusses the limitations of mathematics:

...the reader may be wondering how the study of such computer-based worlds can qualify as economics, or what relationship this might have to doing theory. My answer is that theory does not consist of mathematics. Mathematics is a technique, a tool, albeit a sophisticated one. Theory is something different. Theory lies in the discovery, understanding, and explaining of phenomena present in the world. Mathematics facilitates this—enormously—but then so does computation. Naturally, there is a difference. Working with equations allows us to follow an argument step by step and reveals conditions a solution must adhere to, whereas computation does not. But computation—and this more than compensates—allows us to see phenomena that equilibrium mathematics does not. It allows us to rerun results under different conditions, exploring when structures appear and don’t appear, isolating underlying mechanisms, and simplifying again and again to extract the bones of a phenomenon. Computation in other words is an aid to thought, and it joins earlier aids in economics—algebra, calculus, statistics, topology, stochastic processes—each of which was resisted in its time.
He later explains the limitations of mathematics with an analogy to biology:
Even now, 150 years after Darwin’s Origin, no one has succeeded in reducing to an equation-based system the process by which novel species are created, form ecologies, and bring into being whole eras dominated by characteristic species. The reason is that the evolutionary process is based on mechanisms that work in steps and trigger each other, and it continually defines new categories—new species. Equations do well with changes in number or quantities within given categories, but poorly with the appearance of new categories themselves. Yet we must admit that evolution’s central mechanisms are deeply understood and form a coherent group of general propositions that match real world observations, so these understandings indeed constitute theory. Biology then is theoretical but not mathematical; it is process-based, not quantity-based. In a word it is procedural. By this token, a detailed economic theory of formation and change would also be procedural. It would seek to understand deeply the mechanisms that drive formation in the economy and not necessarily seek to reduce these to equations.
Or, as Stuart Kauffman asked me when I told him about my mathematical biology research, "Can any of your equations predict rabbits fucking?"