Field of Science

Information and Structure in Complex Systems

Eight years ago, I had finished my first year of graduate school in math, and I was at a loss as to what to research.  My original focus, differential geometry, was a beautiful subject to learn about, but the open research questions were too abstract and technical to sustain my interest.  I wanted something more relevant to the real world, something I could talk to people about.

Looking for new ideas, I took a course in complex systems, run by the New England Complex Systems Institute.  The director, Yaneer Bar-Yam, had pioneered a new way of representing structure in systems.  I was fascinated by this idea but also puzzled.  As a mathematician, I wanted to understand the basis of this idea.  What assumptions does it rely on?  How are its basic concepts defined? 

My attempt to answer these questions turned into one of the longest and most demanding projects I’ve worked on.  After an eight-year collaboration with Yaneer and my friend Blake Stacey, we finally have a preliminary manuscript up on the web.  It is currently under review for publication.  And to my pleasant surprise, we got a nice write-up in ScienceNews.

So what is this project all about?  The idea is that we're using information theory (which I've written about previously) as a tool to represent and quantify the structure of a system.

Before I explain what any of this means, let's consider some motivating examples.  Here's a system (call it system A):
You wouldn't really call this a complex system.  It has only one component (a ball) that bounces around in a fairly simple way.  Since there's not much to see here, let's take a look at system B:

Source: Wikimedia Commons
This system has many particles, which bounce around and bump into each other.  In one sense, this system is quite complex: it is very difficult to describe or predict its exact state at any given time.  But looking beyond the level of individual particles reveals a kind of simplicity: since the particles behave independently of each other, overall measures such as the average particle velocity or the rate of collisions are relatively stable.  In other words, the individual complexity "averages out", so that on the whole, the system behaves quite simply.

Contrast that to the behavior of system C:
Source: A Bird Ballet by Niels Castillon
This is a murmuration of starlings.  The starlings fly in a semi-coordinated, semi-independent way, creating intricate shapes and patterns that you would never observe in systems A and B.  This is a prototypical "complex system"—the kind that has intrigued researchers since the 1970s. 

It is intuitively clear that systems A, B, and C have entirely different kinds of structure.  But it is surprisingly difficult to capture this intuition mathematically.   What is the essential mathematical property of system C that can allow us to differentiate it from A and B?

We try to answer this question using information theory.  Information theory was invented by mathematician Claude Shannon in 1948 to address problems of long-distance communication (e.g. by telegraph) when some signals may be lost along the way.  Shannon's ideas are still used, for example, in the development of cell phone networks.  But they have also found applications in physics, computer science, statistics, and complex systems.

To explain the concept of information, let's look at a system consisting of a single blinking light:
This is one of the simplest systems you could possibly imagine.  In fact, we can quantify this simplicity. To describe the state of the system at any given time, you only have to answer one yes/no question: "Is the light on?"

The amount of information conveyed in one yes/no question is called one bit.  "Bit" is short for "binary digit", and is the same unit used to quantify computer memory.  In other words, the state of this light can be described in one binary digit, 0 for OFF and 1 for ON.

Now let's add another light:
Let's say these lights are statistically independent.  This means that knowing the state of one doesn't tell you anything about the other.  In this case, to identify the state of the system requires two bits of information—that is, two yes/no questions, one for the first light and one for the second.  We can depict this situation with a diagram like this:

The circles are drawn separately, since information describing one of them tells us nothing about what the other is doing. We could say that each of these bits applies at "scale one", since each describes only a single light bulb. 

Here are two lights that behave in a completely different fashion:
Note that the two light bulbs are always either both on or both off.  Thus, even though there are two components, the system can still be described by a single bit of information—a single yes/no question.  The answer to this question (e.g. "are they on?") applies to both bulbs at once.  The "information diagram" for this system looks like two completely overlapping circles:
We could say that the one bit of information describing this system applies at "scale two", since it describes two light bulbs at once.

A more interesting case occurs between these two extremes:
It's hard to see it, but I've animated these bulbs to be in the same state 3/4 of the time, and the opposite state 1/4 of the time.  If I told you the state of the first bulb, you wouldn't completely know the state of the second, but you could make an educated guess.  Specifically, if I told you the first bulb is ON, you could guess that the second is ON too, and you'd be right 75% of the time.  So there is information overlap: Information about the first bulb gives partial information about the second.  In fact, we can use Shannon's formulas to actually calculate how much overlap there is: approximately 0.19 bits.  So if you know the state of the first bulb (1 bit), then you also know 0.19 bits about the second bulb—not enough to know its state with certainty, but enough to make a guess that is 75% accurate.  The overlapping information can be depicted like this:
As you can see, 0.19 bits of information apply to both light bulbs at once (scale two), while the remaining 0.81+0.81=1.62 bits apply only to a single bulb (scale one).
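For the curious, the 0.19-bit overlap can be checked directly from Shannon's formulas.  Here's a short Python sketch (my illustration, not code from the paper) that computes it from the joint distribution of the two bulbs, assuming each bulb is ON half the time:

```python
from math import log2

# Joint distribution of the two bulbs, assuming each bulb is ON half the
# time: same state 3/4 of the time, opposite states 1/4 of the time.
joint = {(0, 0): 3/8, (1, 1): 3/8, (0, 1): 1/8, (1, 0): 1/8}

def entropy(probs):
    """Shannon entropy, in bits."""
    return -sum(p * log2(p) for p in probs if p > 0)

h1 = entropy([sum(p for (a, b), p in joint.items() if a == s) for s in (0, 1)])
h2 = entropy([sum(p for (a, b), p in joint.items() if b == s) for s in (0, 1)])
h12 = entropy(joint.values())

overlap = h1 + h2 - h12          # the mutual information of the two bulbs
print(round(overlap, 2))         # 0.19
```

Each bulb alone carries 1 bit (h1 and h2), the pair together carries about 1.81 bits, and the difference is the roughly 0.19 bits of shared, scale-two information.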

In principle, these "information diagrams" (we call them dependency diagrams) exist for any system.  Highly ordered systems, like system A above, have lots of overlapping, large-scale information.  Highly disordered systems like B have mostly small-scale, non-overlapping information.  The systems that are most interesting to complex-systems researchers, like the starlings in example C, have lots of partial overlaps, with information distributed over a wide range of scales. 

And that's the basic premise of our theory of structure.  The structure of a system is captured in the overlaps of information describing different components, and the way information is distributed across scales.  While we take these concepts quite a bit further in our paper, the central idea is right here in these blinking lights. 

Thanks for reading!

The time the cops pulled their guns on me

This post is not about science.

I'm writing this because the horrific news out of Ferguson, Missouri—the killing of an unarmed man and the subsequent assault on the populace and media—has been bringing back memories of an experience I had with the police ten years ago in Chicago.

I should be clear about why I'm choosing to share this. It's not because I think my own problems are particularly deserving of attention in comparison to the violence done to Michael Brown, Eric Garner, and other recent victims of police violence.  In fact, what I experienced was relatively tame in comparison.  But that's kind of the point. This incident instantly brought my white privilege into sharp focus, in a way that has stuck with me ever since.  Issues like racial profiling can be somewhat abstract for white people.  I hope my story can open a new entry point into these issues for those who rarely experience them directly.

After college, I joined Teach for America.  I was assigned to a high school on the West side of Chicago, where I taught math and coached the chess team.  The school and the surrounding neighborhood were nearly 100% black.  (Yep, Chicago is segregated.)  It was also a rough neighborhood in the sense that drug dealers and prostitutes operated openly within a block of the high school, and students talked about gang warfare the way those at other schools might gossip about the Homecoming dance.  I was not a great teacher in that environment, but I felt a strong bond with the students—especially with those on the chess team, who would squeeze into my tiny Civic every month or so to face off against other teams, often from much more affluent suburban schools.

One Saturday, we got back to the West side around 10pm, and I decided to give each of the team members a ride home.  After I dropped the last student off, I got back into the car to head home. But as I tried to start out, there was another car right next to me, blocking me into my parking space.  And the driver was looking at me.

I didn't know what they wanted.  Maybe they wanted my parking spot.  To try to get out of their way, I pulled forward a bit.  But they moved in parallel, blocking me in again.  We repeated this dance two or three times.  They motioned to me to roll down my window.  But seeing as I had no idea who they were, I thought this was probably a bad idea and kept my window shut.

Then the driver and passenger got out, walked in front of my car, pulled guns out, and pointed them at me.

As a child, I frequently had nightmares in which "bad guys" would shoot me with guns.  I started to feel like I had slid into one of those nightmares.  It didn't feel like reality—it felt like a dream that was happening to me.  I thought maybe I was mistakenly mixed up in a criminal conspiracy, and they were going to kidnap me or worse.

They shouted "PUT THE FUCKING CAR IN PARK!"  I complied.  Then one of them yanked open my car door and put his gun to my head (literally, it was touching my temple).  He shouted "TAKE YOUR FUCKING SEATBELT OFF", which I did as well as I could given how much I was shaking.  He then pulled me out, put me in handcuffs, and bent me over the trunk of their car.

It was at this point that I realized I was probably dealing with the police, rather than some criminal organization.  I told them I didn't know they were police.  One of them responded "Who else would be going the wrong way down a motherfucking one way street?"

Ummm,  I guess this chain of logic might have occurred to me if I wasn't scared shitless by the fact that strangers were blocking me in and pointing guns at me. 

The other one, who still had his gun to my head, said "We don't want to hurt you, we just want to know your source!"  I had no idea what they were talking about.  I told them that I was a math teacher at the local high school.  His response was "Oh yeah?  Well how long have you been doing heroin?"  They continued to interrogate me and searched my pockets as I told them about the chess team, the tournament, and the student I had just dropped off. 

After a minute or so, it became clear to them that I was not, in fact, a heroin user.  It was remarkable how quickly I shifted in their view from "junkie" to "white do-gooder".  Within sixty seconds, their tone of voice changed, they took me out of cuffs, and they started explaining why they had taken the approach that they did.

Their explanation went like this: The corner where I had dropped off this student was a well-known heroin point.  White people are so rare in this neighborhood that those who are around after dark are usually there for the drugs.  Transactions often occur in the buyer's car, with the buyer driving the dealer around the block as the deal is made.  So I fit the profile of a heroin buyer.  When I failed to stop for them, they escalated by getting out and drawing guns.  When I continued to creep my car forward towards them (unintentionally, since I had no idea what I was doing at that point), they felt they had to escalate further by opening the door and putting a gun to my head.

It almost makes sense, except that they never identified themselves as cops.  They were in an unmarked car and never bothered to show me a badge.  Because they read me as a heroin junkie, they assumed I would be familiar with the routine of being pulled over by an unmarked car.  Just to emphasize the point: They were quicker to pull their guns on me than to show me any kind of police identification.

The next week, I told the chess team what happened during practice.  I'll never forget what one of them said to me next: "Mr. Allen, I'm sorry you had to go through that, but you know what that makes you?  A black man.  We go through that shit every day."  He then told me about a time the cops made him strip to his underwear and stand outside in the middle of winter for hours, cuffed to a police car, before they released him without charge.  All of my students had stories.  They all had stories of the cops treating them as if their time, their dignity, and even their lives were worthless.

I did end up filing a complaint with the Chicago Police Department, but I was unable to ID the officers.  I had (and still have) a clear mental picture of one of them, but none of the photos they showed me matched him.  So the case was dropped.

What do I take from this experience?  For one thing, some very real anxiety.  It still haunts me sometimes when I'm trying to sleep, and I was shaking when typing this out.  But I also try to accept it as an alternate-reality window into something I would never have otherwise experienced.  For a brief moment in time, the usual dynamics were reversed: I was profiled for being a white person in an all-black neighborhood.  Because of the color of my skin and the block I was on, the cops read me as a criminal and treated me like one.  But only for about a minute.  Once they realized I was not a junkie, my white privilege reasserted itself and suddenly they were there to serve rather than threaten me. 

As a white person with financial and educational privilege to boot, I can be reasonably certain that I will not experience such an incident again, unless I choose to return to a situation like urban teaching in which the usual rules become twisted.  But imagine (and I'm talking to white folks here) if you had no choice.  Imagine if you could never tell whether the cops—the people who are supposed to protect you—would arbitrarily read you as a criminal and decide to threaten your life before even explaining who they are or what they want.  Imagine how that might change your concept of safety, the way you present yourself outside, or even your plans for any given evening.  That is the reality that my chess team described to me.  It is the reality that underlies the headline-grabbing incidents like Michael Brown, Eric Garner, or Trayvon Martin.  It is the reality that millions of people live every day.

Brian Arthur's vision of Complexity Economics

My friend Daria Roithmayr alerted me to a working paper by Brian Arthur laying out a vision for a new approach to studying economics.  Brian Arthur is one of the pioneers of complex-systems thought, and has devoted his life to understanding what really happens in our economy, and why this behavior is so different from what classical economics predicts.

Classical economics is a theory based on the concept of equilibrium.  Equilibrium, in economics, is a state in which everyone is doing the best thing they could possibly do, relative to what everyone else is doing.  And since everyone is doing the best possible thing, no one has incentive to change.  So everything stays the same.  Forever.

Okay, that doesn't sound much like our actual economy.  So why is the equilibrium concept so central to economics?  The answer is that equilibria can be calculated.  If you make certain simplifying assumptions about how economic actors behave, you can prove that exactly one equilibrium exists, and you can calculate exactly what every actor is doing in this equilibrium.  This allows economics to make predictions. 
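To make the equilibrium concept concrete, here's a small Python sketch of my own (not from Arthur's paper) that finds the unique equilibrium of the classic two-player Prisoner's Dilemma by checking every outcome for a profitable unilateral deviation:

```python
from itertools import product

# Prisoner's Dilemma payoffs: each player Cooperates or Defects.
# (row move, col move) -> (row payoff, col payoff)
payoffs = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}
other = {"C": "D", "D": "C"}

def is_equilibrium(r, c):
    # Equilibrium: neither player can do better by switching unilaterally.
    return (payoffs[(r, c)][0] >= payoffs[(other[r], c)][0] and
            payoffs[(r, c)][1] >= payoffs[(r, other[c])][1])

equilibria = [(r, c) for r, c in product("CD", repeat=2) if is_equilibrium(r, c)]
print(equilibria)   # [('D', 'D')] -- mutual defection, and nothing ever changes
```

Once the players reach ("D", "D"), neither has any incentive to move, which is exactly the frozen-forever quality described above.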

These predictions are useful in explaining many broad phenomena—for example, the relationship between supply, demand, and price.  But they exclude any possibility of movement or change, and therefore exclude what is really interesting (and lucrative!) about the economy.  Arthur explains it this way:
We could similarly say that in an ocean under the undeniable force of gravity an approximately equilibrium sea level has first-order validity. And this is certainly true. But, as with markets, in the ocean the interesting things happen not at the equilibrium sea level which is seldom realized, they happen on the surface where ever-present disturbances cause further disturbances. That, after all, is where the boats are.
T-Pain understands the need for nonequilibrium theories.


The vision of economics that Arthur lays out is based not on equilibrium, but on computation:
A better way forward is to observe that in the economy, current circumstances form the conditions that will determine what comes next. The economy is a system whose elements are constantly updating their behavior based on the present situation. To state this in another way, formally, we can say that the economy is an ongoing computation—a vast, distributed, massively parallel, stochastic one. Viewed this way, the economy becomes a system that evolves procedurally in a series of events; it becomes algorithmic.
The part of this essay that was most challenging to me personally was where he talks about the limitations of mathematics:

...the reader may be wondering how the study of such computer-based worlds can qualify as economics, or what relationship this might have to doing theory. My answer is that theory does not consist of mathematics. Mathematics is a technique, a tool, albeit a sophisticated one. Theory is something different. Theory lies in the discovery, understanding, and explaining of phenomena present in the world. Mathematics facilitates this—enormously—but then so does computation. Naturally, there is a difference. Working with equations allows us to follow an argument step by step and reveals conditions a solution must adhere to, whereas computation does not. But computation—and this more than compensates—allows us to see phenomena that equilibrium mathematics does not. It allows us to rerun results under different conditions, exploring when structures appear and don’t appear, isolating underlying mechanisms, and simplifying again and again to extract the bones of a phenomenon. Computation in other words is an aid to thought, and it joins earlier aids in economics—algebra, calculus, statistics, topology, stochastic processes—each of which was resisted in its time.
He later explains the limitations of mathematics with an analogy to biology:
Even now, 150 years after Darwin’s Origin, no one has succeeded in reducing to an equation-based system the process by which novel species are created, form ecologies, and bring into being whole eras dominated by characteristic species. The reason is that the evolutionary process is based on mechanisms that work in steps and trigger each other, and it continually defines new categories—new species. Equations do well with changes in number or quantities within given categories, but poorly with the appearance of new categories themselves. Yet we must admit that evolution’s central mechanisms are deeply understood and form a coherent group of general propositions that match real world observations, so these understandings indeed constitute theory. Biology then is theoretical but not mathematical; it is process-based, not quantity-based. In a word it is procedural. By this token, a detailed economic theory of formation and change would also be procedural. It would seek to understand deeply the mechanisms that drive formation in the economy and not necessarily seek to reduce these to equations.
Or, as Stuart Kauffman asked me when I told him about my mathematical biology research, "Can any of your equations predict rabbits fucking?"

How natural processes can create meaning

The project of science is largely about asking why things happen.  We seek causal explanations: Why do planets follow elliptical orbits? Why does water become solid in cold temperatures?


Historically, this project has been largely reductionist in its approach.  That is, scientists have generally taken the view that phenomena can be explained in terms of smaller components.  We can understand how molecules behave by looking at their atoms; we can understand how atoms behave by looking at subatomic particles, etc. This program has been extremely productive: we can explain why oceans have tides and why prisms make rainbows.  Because of this success, some people believe that science will eventually be able to explain everything this way.  They argue that, if we can just understand matter at its tiniest level—quarks or whatever else is smaller than them—explanations for everything else will follow as a matter of course.

A postulated interior of the Duck of Vaucanson (1738-1739) by an American observer.  SOURCE: Wikimedia Commons
I encounter this extreme view not so much in academic papers, but more so in casual conversations among people who want to ground their arguments in science.  It seems to be a common "move" to argue that some concept is meaningless or illusory, because it can ultimately be reduced to the level of atoms, genes, or some other constituent entity.  Jerry Coyne, for example, argues in a recent essay that free will does not exist, because our brains are composed of atoms that must obey the laws of physics.

I argue that this extreme reductionism does not make for convincing arguments, on two grounds.  (I should pause to say that the ideas here are heavily influenced by many other thinkers—Stuart Kauffman in particular.) The first is that understanding the behavior of the parts of a system doesn't necessarily imply an understanding of the behavior of the whole.  This is a result of chaos theory. It can be shown that most systems with many interacting parts are chaotic, meaning that even if one could measure the present behavior of each component to within arbitrary precision, this would not suffice to predict the system's behavior for more than a brief window of time.  Any initial inaccuracies in measurement rapidly compound until all predictive power is lost. (This is the famous "butterfly effect": the future can be changed by a flap of a butterfly's wings.)  Additionally, quantum effects add another source of indeterminacy to any physical system.  Thus it is impossible, for example, to predict the advent of mantis shrimp or David Bowie by starting from the Big Bang and applying the laws of physics.  These entities do not contradict the laws of physics, but they're not predicted by them either.  (Okay, maybe Bowie contradicts the laws of physics just a little bit.)

The laws of physics do not predict this hotness.
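The butterfly effect described above is easy to demonstrate numerically.  This Python sketch (my illustration, not from the essay) iterates the chaotic logistic map x → 4x(1−x) from two starting points a billionth apart:

```python
# Two runs of the chaotic logistic map x -> 4x(1-x), started a billionth
# apart: after ten steps they still agree closely; within fifty they
# disagree completely, and no finite measurement precision fixes this.
def iterate(x, steps):
    for _ in range(steps):
        x = 4 * x * (1 - x)
    return x

x0 = 0.3
early = abs(iterate(x0, 10) - iterate(x0 + 1e-9, 10))
late = max(abs(iterate(x0, n) - iterate(x0 + 1e-9, n)) for n in range(40, 60))
print(early, late)   # early is microscopic; late is of order one
```

The initial error roughly doubles at every step, so even a billionth of a unit of measurement error destroys all predictive power within a few dozen iterations.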

The second ground—and the idea I most want to explore here—is the following:

Natural processes create new reasons for things to happen.

The prime example of this is evolution.  Consider, for example, a bacterium swimming up a glucose gradient—perhaps the simplest goal-directed behavior in nature.  The bacterium senses more glucose on one of its sides than the other, and swims in the direction of more glucose.  What would we say is the reason for this behavior?  One could investigate the physics and chemistry of the bacterium and identify mechanisms that cause it to move this way.  But this does not explain the apparent agency in the bacterium's movement.  The more satisfying explanation appeals to evolution: it moves toward greater sugar concentrations because evolution has provided it this mechanism to find food in order to reproduce.

Simulation of bacteria undergoing biased random walk toward a food source.  SOURCE: http://www.mit.edu/~kardar/teaching/projects/chemotaxis%28AndreaSchmidt%29/finding_food.htm
Notice, however, that this explanation only makes sense on the level of the whole organism.  The carbon and other atoms that comprise this bacterium do not act as if they had any goal.  Only the bacterium as a whole appears to be goal-oriented.  Thus reductionism completely fails to explain the bacterium's behavior.  Evolution—a natural and spontaneous process—has created a new reason for something to happen. This reason applies to the whole organism, but not to its parts.
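For a flavor of how simple the underlying mechanism can be, here's a toy Python sketch of chemotaxis as a run-and-tumble biased random walk.  This is my own one-dimensional illustration with a made-up glucose gradient, not the simulation linked above:

```python
import random

# Run-and-tumble chemotaxis in one dimension: the cell keeps swimming while
# the glucose signal is rising, and tumbles to a random new heading (half
# the time) when the signal falls.
def glucose(x):
    return -abs(x - 100)        # hypothetical gradient peaking at x = 100

def run_and_tumble(steps=2000, seed=1):
    random.seed(seed)
    x, heading = 0.0, 1
    prev = glucose(x)
    for _ in range(steps):
        x += heading
        level = glucose(x)
        if level < prev and random.random() < 0.5:
            heading = random.choice((-1, 1))    # tumble
        prev = level
    return x

print(run_and_tumble())   # ends near the food source at x = 100
```

No component of this "cell" knows where the food is; the goal-directed climb toward the peak emerges from the simple keep-going-or-tumble rule.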

Once we accept that natural processes create new reasons for things to happen, many new questions arise.  For instance, do different kinds of evolutionary processes create different reasons?  Yes!  It turns out that evolution in spatially dispersed populations can select for cooperative behaviors that would be disfavored if all individuals were mixed together.  So the explanation "it behaves that way in order to help its neighbors" makes sense under some evolutionary conditions but not others.

We can also ask what other kinds of processes can create new causal explanations.  Humans, for instance, engage in many activities that do not seem to be directly related to survival or reproduction; I would argue that this is due to a complex process in which our genes co-evolved with our cultures.

This man wants a slippery butt, but the individual cells that comprise him do not much care how slippery his butt is.  SOURCE: Three Word Phrase by Ryan Pequin
In short, nature can be creative.  Not only can it create new objects and life forms, it can also create new meanings, in the sense of reasons for things to happen.  These new meanings arise via naturally occurring processes that are consistent with—but not predicted by—the laws of physics.  These processes can even generate new, higher-level processes, which then create additional new layers of meaning.  If we, as scientists and as humans, want to understand why things happen, we must first understand the multiple, distinct ways that meaning and causality can arise. 

What's the deal with inclusive fitness theory?

You may not be aware of it, but there is a battle afoot in the theory of evolution.  The fight is over inclusive fitness theory—an approach to studying the evolution of cooperation.  I, together with mathematical biologist Martin Nowak and naturalist E. O. Wilson, just published an article pointing out weaknesses in the theory, and suggesting that it might not tell us much about why cooperation actually evolves.  This is my attempt to explain the controversy—and our new paper—to those who may not know anything about it.

The essential question is, "Why do organisms sometimes help others at a cost to themselves?"  Such helping behaviors have been observed from microbes to insects to humans.  At first glance, these behaviors may appear to contradict natural selection, since the cost of helping reduces the chances that the behavior is passed on to offspring. 

Theorists have identified a number of different ways that costly helping can actually be favored by natural selection.  One way is if the help is primarily directed toward close relatives. These relatives have a good chance of sharing the "helping" gene, so that help increases the overall prevalence of this gene.  This mechanism is called kin selection.

Inclusive fitness theory is one way of representing the idea of kin selection.  Let's say you have some gene that makes you sacrifice your time and energy to help others.  This help affects fitness—the number of healthy offspring you produce.  ("Healthy" offspring are the ones that will eventually grow up and have offspring of their own.)  The first idea is to split fitness into the offspring that you produce on your own, and those which can be attributed to help from others:


The idea of inclusive fitness is to disregard the offspring that others help you produce, but instead count the ones that you help others produce:


To determine the overall effect on the helping gene, offspring that you help others produce must be weighted by the probability that they share the helping gene, which can be interpreted as your "relatedness" to them.  (For example, help you give to your siblings is weighted by one-half, equal to the probability that you inherited the same parental copy of the helping gene.)  Adding up these amounts of help times relatedness gives your inclusive fitness.  In some simplified models, it can be shown that natural selection favors organisms that have the highest inclusive fitness. 
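A hypothetical numerical example of this bookkeeping (my numbers, not from the paper) may help:

```python
# Hypothetical bookkeeping: offspring you produce on your own, plus the
# extra offspring you help relatives produce, each weighted by relatedness
# (the probability that the relative shares the helping gene).
own_offspring = 3
help_given = [(2, 1/2),    # 2 extra offspring for a sibling (relatedness 1/2)
              (1, 1/8)]    # 1 extra offspring for a cousin (relatedness 1/8)

inclusive_fitness = own_offspring + sum(n * r for n, r in help_given)
print(inclusive_fitness)   # 3 + 1.0 + 0.125 = 4.125
```

Note that this arithmetic only makes sense if the offspring counts can be cleanly attributed to one helper or another, which is exactly the assumption questioned below.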

At this point you may be asking "Wait, does it really make sense to divide offspring into those  produced on one's own versus those produced by help from others?"  This is exactly the problem!  Aside from the obvious point that no one reproduces without help in sexual species, nature is full of synergistic and nonlinear interactions, so that making clean divisions like this is impossible in most situations.  Thus the idea of inclusive fitness theory only works in simplified toy models of reality. 

Nowak and Wilson, together with mathematician Corina Tarnita, made this point forcefully in a 2010 Nature article.  In response, more than 100 authors signed a letter saying that inclusive fitness theory has no limitations, and is as general as natural selection itself.  (There were also heated blog posts and a talking bear video!)

What are we to make of this claim that inclusive fitness theory has no limitations at all?  This claim turns out to be based on the idea that, however complex the interactions are in nature, one can always use linear regression to split one's offspring into those attributable to oneself versus others.

Our new paper shows that this approach is not exactly wrong, but nonsensical.   To see why, let's consider a hypothetical helping trait (call it Trait X), and see if this approach can tell us whether and how this trait is selected for. 


Can this method predict whether Trait X will succeed in evolution?  No, because in order to even set up the regression, one must know in advance whether it succeeds or not.  The whole method is based on retrospectively analyzing known results of natural selection, and so it logically cannot predict anything new.

Ok, so if we must know in advance whether or not Trait X is favored, can this method at least help us understand why it succeeds or fails?  The answer is no again, at least not in general.  The reason is that the regression method looks for correlations between having type X as a partner and having high fitness.  If there is a positive correlation, this method says that trait X is "altruistic".  But as any statistics student knows, correlation does not imply causation.  In fact, it is easy to come up with examples where the regression method misidentifies the nature of a trait.

For example, suppose Trait X is actually a jealous trait—if you have it, it makes you want to find high-fitness individuals and attack them, reducing their fitness as well as your own.  A hypothetical example with numbers is illustrated here:

The greenish numbers are the fitnesses before the attack, while the red numbers indicate the results of the attack.  The individual with Trait X (indicated in red) found the highest-fitness individual (5, in this case) and attacked him, reducing each of their fitnesses by one.  But since the attacked individual still has fitness 4, there is a positive correlation between having Trait X as your partner and having high fitness.  So the regression method calls this "altruism" when it clearly is not.
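To see the misclassification in numbers, here is a Python sketch with a hypothetical four-individual population of my own construction (not the exact values from the figure).  The spiteful Trait X still ends up positively correlated with partner fitness:

```python
# Hypothetical population of partner pairs.  The Trait X individual attacks
# the fittest individual, at a cost of one fitness unit to each of them --
# a spiteful trait, not a helpful one.
# Each row: (fitness after the attack, 1 if this individual's partner has Trait X)
population = [
    (2, 0),   # the Trait X individual itself (fitness 3 before the attack)
    (4, 1),   # its victim (fitness 5 before the attack)
    (2, 0),   # bystander
    (3, 0),   # bystander
]

with_x = [f for f, px in population if px == 1]
without_x = [f for f, px in population if px == 0]

# The sign of this difference is what the partner-fitness regression reports.
effect = sum(with_x) / len(with_x) - sum(without_x) / len(without_x)
print(effect > 0)   # True: the method labels spiteful Trait X "altruistic"
```

The victim's fitness was reduced by the attack, yet because he started out fittest, partners of Trait X carriers still look better off than everyone else, and the regression reads harm as help.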

In short, the regression method generates a "just-so story", which is often wrong, for an outcome that is already known.  The fact that this method is trumpeted as "the very foundation of social-evolution theory" indicates a weird state of affairs in this corner of biology.  My reading is that many researchers fell in love with inclusive fitness theory (which admittedly can be elegant and intuitive when it works), and tried to stretch it to include all of natural selection.  Similar problems exist in economics, in that some researchers fall in love with the elegant mathematics of their theories and forget that they may not always apply to the real world.

I'm not proposing that we replace inclusive fitness theory with some other all-encompassing theory or framework.  Rather, I'm suggesting that the method of analysis be tailored to the problem at hand.  A variety of mechanisms can support the evolution of cooperation, and a variety of approaches are needed to understand them.  The only truly general theory in evolutionary biology is the theory of evolution itself. 

Allen B, Nowak MA, & Wilson EO (2013). Limitations of inclusive fitness. Proceedings of the National Academy of Sciences of the United States of America PMID: 24277847

Gardner A, West SA, & Wild G (2011). The genetical theory of kin selection. Journal of evolutionary biology, 24 (5), 1020-43 PMID: 21371156

Nowak MA, Tarnita CE, & Wilson EO (2010). The evolution of eusociality. Nature, 466 (7310), 1057-62 PMID: 20740005

On math and magic


I've been on a kick lately of re-reading my old favorite fantasy novels. I started with some of Lloyd Alexander's Prydain Chronicles, and am now going back through Ursula K. LeGuin's Earthsea Trilogy. I haven't touched these books—or anything in the fantasy genre—since my early teens, and it's been interesting to see how differently I relate to them now.

...from another former obsession
One moment in particular struck me. In LeGuin's A Wizard of Earthsea, there's a scene in which a young apprentice-mage sneaks a look at his master's dusty old spellbooks and becomes transfixed by the ancient runes inside. I realized that the visceral feeling evoked by this passage (and others like it throughout the fantasy genre) is exactly what I felt as a college freshman exploring the math section of my undergraduate science library. I would spend hours at a time browsing dusty old math books, the more arcane the better, trying to decipher their internal logic. Yes, I wanted to learn new math, but I was also hooked on the feeling of being lost in these mysterious tomes. Like the mage's spellbooks, these math books contained strange symbols describing deep and powerful truths, which could only be understood through long, deep study.

A sample from a recent article of mine. Doesn't math look cool?
Reflecting on these moments highlights how my relationship to mathematics has changed.  I was initially drawn to math because of its beauty, elegance, and mystery, and because it contained a kind of absolute truth.  But after teaching for three years and studying differential geometry for one, I found that abstract beauty and truth were no longer enough to sustain my excitement.  I wanted to discover and describe important patterns in the world, not just relationships between abstract constructs.  Metaphorically speaking, I wanted to work my magic in the world, not just study it for its own sake.  This led me to the study of complex systems and eventually evolutionary dynamics.  Mathematics has lost none of its beauty or mystery for me, but my focus now is on its connection to the world rather than its absolute, self-contained truths.

This parallels, in some ways, the differences I've noticed in the way I approach these fantasy novels now.  As a hyper-imaginative pre-teen, I wanted to lose myself in these fantasy worlds, to blur the lines in my mind between these worlds and my own.  Re-reading them now, I have no desire to escape into these worlds.  Rather I look for metaphors and themes connecting these worlds to mine. These books (and the genre as a whole) seem obsessed with the idea of power: discovering one's own power, learning about different sources of power, coming to grips with the dangers and limitations of power, avoiding the temptation to use power for evil.  As a researcher, a future professor, and simply an adult actor in this world, I have a certain measure of real-world power now that I lacked as a bookish pre-teen. In these books, I'm finding an opportunity to reflect on how to wield that power, and the responsibility that comes with it.

Perhaps the larger theme is this: I used to think I needed to escape from the world in order to be myself.  Now my goal is to connect to the world, as much as possible, while still being deeply, authentically, myself.

Can we find meaning in evolution?

I'm a mathematician who studies evolution. I'm also a person who thinks about how people can find meaning and purpose in their lives. And so, combining these, I've spent a fair bit of time thinking about what, if anything, evolution can tell us about the meaning and purpose of human life.

My friend Connor Wood recently wrote on this topic. Specifically, he probed the question of why, precisely, many conservative religious traditions find the idea of evolution so objectionable. His argument is encapsulated in this quote:
I strongly suspect that evolutionary theory makes people so uncomfortable, not because it disagrees with Genesis (lots of things contradict Genesis), but because it presents a vision of a natural world whose “values” are fundamentally opposed to those of our religious cultures.
By "values" (in quotes because evolution is an amoral process), Connor is referring to the often violent struggle to survive and reproduce one's genes, which includes such behavior as infanticide in some mammals and birds. While I agree with Connor's basic argument, I think it's not primarily the violence and struggle that offends some religious sensibilities (the Old Testament and many other religious texts are full of violence) but rather the inherent randomness and lack of ultimate purpose in the process.

Even though scientists generally don't intend it as such, evolution fills the role of a creation story. Like other creation stories, it explains where we came from and how we got here. But unlike other creation stories, it gives us few clues as to where we're going or what we're supposed to do. In fact, it tells us that we're the product of random events. If this randomness had gone differently, we might not be here at all. I think the randomness and lack of purpose implied by this story is why many people—including some who believe it as a scientific hypothesis—find the idea of evolution disturbing.
Where did all this come from??  What does it mean??

Interestingly, several thinkers have tried to turn this equation around, claiming that evolution can, in fact, satisfy our deepest psychological/spiritual needs. One of these is Stuart Kauffman, one of the biggest names in complex systems. Kauffman's latest book, Reinventing the Sacred, argues that evolution is such a creative and fundamentally unpredictable process that it can provide us with all the divine-like inspiration we need.

Unfortunately, Kauffman's idea doesn't quite get there for me. It's true that the variety of life is awe-inspiring, with more and more surprises the closer one looks. However, I think that just being awestruck by the beauty and creativity of nature is insufficient: it doesn't satisfy the questions of why we're here or what we should try to do with our lives.

Another approach is to focus on the potential of evolution to produce cooperation, creativity, and complexity. These aspects of evolution are highlighted in Supercooperators, the new book by my boss and mentor Martin Nowak. I think one of the reasons for the past few decades' surge of research into this side of evolution (the "snuggle for existence") is that it changes the story evolution tells about us, allowing us to understand how love, empathy, and compassion are also products of our evolutionary history.

But I don't find this to be of great philosophical comfort either. First, for every example of the evolution of cooperation, there's a complementary example of evolved selfishness and violence. Second, knowing that my feelings of love and empathy exist because they were successful traits in my ancestors doesn't make me feel better about them. In fact, it makes me feel worse. I want to think of these as fundamental to who I am, not some ploy to reproduce my genes. Every time I try to think about all my love and altruism as being a product of evolution, I become sad and want to stop thinking about it. Perhaps I'm just not thinking about it right, but I imagine others may have this difficulty too.

I made a handy (oversimplified) chart to summarize what I think evolution can and can't do for us in terms of filling philosophical/spiritual voids:
In short, my answer is that no, I don't think evolution can provide us with satisfying answers to many of our deepest questions.

Some atheists/materialists argue that the conversation should end here: There is no larger meaning or purpose to life, and any quest for such is a waste of time. But these questions are a real part of who I am, as real as love or anything else I feel. Doubtless, such searchings are products of evolution themselves. Yet to rationalize them away would be to deny a fundamental part of myself. Besides, if life truly has no purpose, then what would my time be better spent doing? Reproducing my genes? Why should I care about that either, if that's also just another artifact of evolution?

My approach is to grapple with these questions head on, knowing that there are no easy answers. Evolution—the most credible scientific theory as to how we got here—doesn't tell us where we're going or what to strive for. And yet it has implanted us with a deep need to plumb these questions. One could, I suppose, see this as a cruel joke that our evolutionary history has played on us. But I think these questions are as real and important as anything else we experience in life, and there is fulfillment and self-knowledge to be found in exploring them, even if we strongly suspect that satisfying answers will never be found.