Field of Science

Is Life Fractal?

I'm sure you all know what fractals look like, but a few pretty pictures never hurt anyone:



Isn't that cool? The key thing about fractals is that if you look at just a small part of one, it resembles the whole. For instance, the following picture was obtained by zooming in on the upper left tail of the previous one:



One of the original "big ideas" of complex systems is that fractal patterns seem to appear spontaneously in nature and in human society. Let's look at some examples:

Physical Systems: Pop quiz: is this picture a close-up of a rock you could hold in your hand, or a wide shot of a giant cliff face?



I don't know what the answer is. Without some point of reference it's very hard to determine the scale because rocks are fractal: small parts of them look like the whole.

Other examples in physical systems include turbulence (small patches of bumpy air look like large patches) and coastlines (think Norway). These two examples in particular inspired Benoit Mandelbrot to give fractals their name and begin their mathematical exploration.
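As a small illustration of how a simple rule can generate such intricate pictures, here's a sketch (mine, not Mandelbrot's) of the escape-time test behind the familiar images of the Mandelbrot set. The function name is just for illustration:

```python
# Escape-time test: a point c is in the Mandelbrot set if iterating
# z -> z*z + c (starting from z = 0) never escapes to infinity.
# In practice, we iterate a bounded number of times and check |z| <= 2.

def in_mandelbrot(c, max_iter=100):
    """True if c appears to stay bounded under z -> z*z + c."""
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:  # once |z| > 2, it's guaranteed to escape
            return False
    return True

print(in_mandelbrot(0))    # the origin stays at 0 forever: in the set
print(in_mandelbrot(1))    # 1 -> 2 -> 5 -> ... escapes quickly
print(in_mandelbrot(-1))   # -1 cycles between -1 and 0: in the set
```

Coloring each pixel by how fast its point escapes is what produces the psychedelic pictures above.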

Biological Systems: Here's an example you're probably familiar with:



And one you probably aren't:



The first was a fern; the second was a vegetable called chou Romanesco, which has to be the coolest vegetable I've ever seen.

In the case of these living systems, there's a simple reason why you see fractals: they are grown from cells following simple rules. The fern, for example, first grows a single stalk with leaves branching out. These leaves follow the same rule and grow their own leaves, and so on.
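This grow-and-branch process can be mimicked with a symbol-rewriting rule (a so-called L-system). Here's a minimal sketch; the particular rule set is a classic fern-like example, not a model of any real plant, and the names are mine:

```python
# L-system sketch: repeatedly rewrite every symbol according to `rules`.
# F = a stalk segment, X = a growth tip, [ ] = start/end a branch,
# + / - = turn left/right (if you were drawing the result).

def grow(axiom, rules, steps):
    """Apply the rewriting rules to the whole string, `steps` times."""
    s = axiom
    for _ in range(steps):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Each growth tip sprouts branches that themselves carry growth tips,
# which is exactly the "leaves follow the same rule" idea above.
rules = {"X": "F+[[X]-X]-F[-FX]+X", "F": "FF"}

print(len(grow("X", rules, 1)))  # 18 symbols after one step
print(len(grow("X", rules, 2)))  # 89 symbols: the plant grows fast
```

Because every tip obeys the same rule as the original stalk, each branch ends up looking like a miniature of the whole: self-similarity for free.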

Of course, the pattern doesn't exist forever. If you zoom in far enough, eventually you see leaves with no branches. This is an important feature of all real-world fractals: there is some minimum scale (e.g. the atomic scale or the cellular scale) at which the fractal pattern breaks down.

Social Systems: Some people like to extend this reasoning to the social realm, arguing that individuals form families, which form communities and corporations, which form cities, nations and so on. You can try to draw parallels between behavior at the nation or corporation level and behavior at the individual human level.

Personally, I'm a little dubious about this argument. My doubts stem partly from my personal observation that humans seem to act morally on an individual scale, but that corporations on the whole behave far worse than individuals. I think there's something fundamentally different about the centralized decision-making process of a human, and the more decentralized process of a corporation. But this is all my personal opinion. Feel free to debate me on it.

Is Global Complexity Worth It?

I had a wide-ranging lunch conversation with my friend Seth a week ago. We touched on many things, but kept circling back to the above question. More specifically, I was wondering whether our current level of global complexity could ever be sustainable, even with the best international governance and planning.

Let's define what we're talking about. In today's world, actions you take have consequences around the globe. For example, if you buy a computer, it likely was not made in a workshop down the road. The parts that go into your computer come from many different countries. These parts had to cross vast distances to come together, burning oil from other countries in the process. These parts were assembled in yet other countries, shipped several more times, and finally delivered to you. The money that you paid for the computer feeds back into all these various countries and processes, strengthening and perhaps changing them.

The effects of this global entanglement have been amazing. Without it, we wouldn't have computers, cellphones, airplanes, plastics, television, cars, or curry powder in the supermarket. None of these products can be made in any one local community; they all require cooperation on a continental, if not global, scale.

But I'm worried by globalization, on both a theoretical and practical level. It's clear that as humans, we aren't living within our means--I won't go into the details of that argument here. What concerns me now is whether the very structure of our global society may be preventing us from ever living within our means.

First, feedback loops are getting too complex. Suppose we lived in a simple, hundred-person community, and someone was stealing from his neighbors, dumping trash in the public square, or doing other undesirable things. These actions would become apparent to everyone in short order, and the community could punish the perpetrator in various ways: economically, socially, even physically.

In theory, we have a legal system now to provide these kinds of punishments. But the more complex our society becomes, the harder it is to identify those who are screwing things up. Furthermore, laws and enforcement vary wildly across countries. Multinational corporations can get away with dumping trash in the ocean, toppling Central American democracies, intentionally creating blackouts in California, or supporting sweatshops in China because a) the actions might be legal in whatever location they're operating out of, b) they can obscure their practices behind a wall of complexity that regulators can't penetrate, and c) the consumers usually have no idea what the company is doing and therefore can't exercise moral judgment in their purchases. It could be decades before any consequences (legal, economic, or environmental) catch up with the perpetrator. And decades is too long to be an effective deterrent.

Second, we are increasingly interdependent. Witness how the mortgage crisis spread throughout American economic sectors and is now spreading through the world. Infectious diseases like avian flu have the potential to go global due to the volume of international travel. Even our environmental problems have globalized--we worry about global warming now, whereas the environmental agenda in the past was more about local pollution issues.

I see this as a problem because it means we have only one chance to screw up. The inhabitants of Easter Island destroyed their ecosystem and suffered for it, but the damage was contained to the island. In our current connected world, one disaster could ruin things for all humanity.

Can we do anything about global complexity and interdependence? I've been thinking about ways we can promote some simplicity in our economy, like buying local food or supporting local independent retailers over mega-chains. I'm not advocating we go back to preindustrial tribal society, but a little extra simplicity seems like a good thing.

Tragedy of the Commons in Evolution

Based partly on the feedback from last column, I'd like to probe a bit deeper into the connection between altruism, evolution, and space. Not outer space, mind you, but space here on planet Earth.

We all know that in Darwinian evolution, the species that survive and reproduce best in their environment are the ones that persist and evolve. We know from observing nature that this system tends to produce sustainable ecosystems in which every species seems to play a useful role, even if they are also competing for survival. In particular, no level of the food chain eats so much of the level below as to cause it to go extinct.

Now suppose that in a grassland ecosystem, some animal species got really good at eating grass. So good, in fact, that it could devour an entire field, roots and all, in a season, and use all that energy to reproduce much faster than its competitors. It would seem that this species has an evolutionary advantage over its slower peers. Of course, this advantage would be very short-term; the grass couldn't grow back the next season, so all species, including this super-eater, would starve.

This situation might be called a Tragedy of the Commons, a phrase popularized by a 1968 Science article by Garrett Hardin. This phrase refers to a general situation where there is a shared resource everyone depends on. Without some check on everyone's behavior, some individuals may be tempted to take more than their share, and if this happens too often, the resource is depleted and everyone suffers. (The current depletion of the global edible fish population is one of many real-life examples.)

The question is, why hasn't this tragedy wiped out life on earth by this point? What's to stop a super-eater from spontaneously evolving somewhere, multiplying rapidly, spreading throughout the planet, and destroying all life everywhere?

Several studies (May and Nowak, Werfel and Bar-Yam, Austin et al.), each taking different approaches, point to a common answer: space. If a selfish overeater evolves somewhere, it will exhaust the resources around it, but then it will die off while other species in other ecosystems live sustainably. As long as there is sufficient space in the world, an overzealous species will cause its own destruction before it can spread very far. In this way, evolution on a sufficiently large planet actually favors organisms that live in harmony with their environment.
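Here's a toy simulation of that argument, with made-up numbers: each patch of grassland is isolated, restrained eaters take only what regrows, and greedy eaters take three times as much, roots and all. Nothing here is from the cited studies; it's just the logic of the argument in code:

```python
# Toy model: one isolated patch of grass with one population of eaters.
# Restrained eaters take little, so the grass regrows each season.
# Greedy eaters take triple, eventually stripping the patch bare.

def run_patch(greedy, seasons=50):
    """Return the eater population after `seasons` in one patch."""
    grass, pop = 100.0, 10.0
    for _ in range(seasons):
        eaten = min(grass, pop * (3.0 if greedy else 1.0))
        grass -= eaten
        if grass > 0:               # grass regrows only if some is left
            grass = min(100.0, grass * 1.5)
        pop = min(eaten, pop * 1.2)  # growth is limited by food eaten
    return pop

print(run_patch(greedy=False))  # restrained eaters persist
print(run_patch(greedy=True))   # greedy eaters boom, then starve to 0
```

The greedy population grows faster at first--exactly the short-term "advantage" described above--but strips its patch and crashes to zero, while the restrained population just keeps ticking along. Since the patches are isolated, the crash never spreads.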

Now, if there were some species that could not only suck its environment dry, but also move fast enough to outrace the devastation it was causing, we'd have a real problem on our hands. Fortunately, it seems this has never happened.

Or has it???

Altruistic and Selfish Bacteria

The Boston University Physics Department hosted a very interesting talk yesterday by Robert Austin of Princeton. Austin has been studying the social behavior of bacteria, in order to help understand the social dynamics of other organisms, including humans. He shared with us some intriguing results about selfish and altruistic individuals, and the social dynamics between the two.

Indeed, Austin and his collaborators found a single gene that controls bacteria "selfishness." If it's off, bacteria slow down their metabolism and reproduction rate when they sense their environment has been depleted of nutrients. This prevents them from completely destroying their living space. However, if this gene is turned on ("expressed" is the technical term) the bacteria go right on eating until nothing is left. They even develop the ability to feed off of other dead bacteria.

Interestingly, the gene is off by default when bacteria are found in the wild. But if you put them in a petri dish, mix them together, and cut off their food supply, you rather quickly (after only about 4 days!) see selfish mutants emerge. These mutants rapidly consume all the remaining food, including each other, and then starve.

This is an interesting conundrum. The petri dish situation seems pretty dire: first the cheaters win, and then everyone loses. This is another prisoner's dilemma situation: cheaters seem to have the advantage over the self-restraining altruists, but if everyone cheats then everyone is worse off.
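The dilemma structure can be spelled out numerically. These are the standard textbook payoffs, purely for illustration--they aren't measurements from the bacteria experiment:

```python
# Prisoner's dilemma payoffs: payoff[(my move, their move)] = my payoff.
payoff = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"):    0,
    ("defect",    "cooperate"): 5,
    ("defect",    "defect"):    1,
}

# Whatever the other player does, defecting pays more for me...
for theirs in ("cooperate", "defect"):
    assert payoff[("defect", theirs)] > payoff[("cooperate", theirs)]

# ...yet mutual defection leaves everyone worse off than mutual cooperation.
assert payoff[("defect", "defect")] < payoff[("cooperate", "cooperate")]
print("defection dominates, but universal defection is worse for all")
```

That's the petri dish in miniature: each bacterium "does better" by switching on the selfish gene, and then the whole dish starves.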

On the other hand, bacteria in the wild exercise restraint, so there must be something different going on in the wild than in the petri dish.

Intrigued, Austin and his colleagues set up a different experiment. They designed an artificial landscape containing many different chambers in which the bacteria could isolate themselves. Food sources were spread unevenly through the landscape. They also found a way to "manufacture" the selfish bacteria by fiddling with their DNA, and they dyed them a different color from the altruists to discern the interactions between the two.

In this situation, the altruists and the cheaters managed to coexist by segregating themselves. The altruists gathered in dense clumps (and lived in harmony?) while the cheaters spread out sparsely (they don't even like each other!) around the altruists, occasionally gobbling up a dead one. Somehow, the altruists are able to segregate themselves in such a way that the cheaters can't steal their food; a marked contrast to the first experiments in which the bacteria were continually mixed together. Here's what this segregation looks like within two of the "chambers":



The chamber on the left, which is nutrient-poor, contains mainly cheaters waiting for others to die. The nutrient-rich chamber on the right contains "patches" of altruists and cheaters, never fully mixed. You can't see it from the picture, but the green altruists are very densely clumped and the red cheaters are spread apart from each other.

The possible life lesson here is that altruists can exist in a society with cheaters if the altruists can segregate themselves to form (utopian?) communities. If there is forced mixing between the two groups then, unfortunately, it all ends in tragedy.

A very similar lesson can be found in the work of Werfel and Bar-Yam, but that's a story for another time.

Information, Part Deux

First, a note of personal triumph: I have a paper up on the arXiv! For those who don't know, the arXiv is a way for researchers to distribute their work in a way which is free for all users, but also official, so that no one can scoop you once you've posted to the site. In the paper, I argue that a new and more general mathematics of information is needed, and I present an axiomatic framework for this mathematics using the language of category theory.

For those unfamiliar with such highfalutin language, it's really not as complicated as it sounds. I'll probably do a post soon explaining the content of the paper in layperson's terms. But first, and based partly on the feedback to my last post, I think it's important to say more on what information is and why I, as a complex systems theorist, am interested in it.

I'm currently thinking that information comes in three flavors, or more specifically, three broad situations where the concept comes in handy.

  • Statistical information: Some things in life appear to be random. Really, this means that there's information we don't have about what's going to happen. It turns out there's a formula to quantify the uncertainty of an event--how much we don't know. This enables us to make statements like "event A is twice as uncertain as event B", and, more powerfully, statements like "knowing the outcome of event C will give us half of the necessary information to predict event B." The second statement uses the concept of mutual information: the amount of information that something tells you about something else. Mutual information can be understood as quantifying the statistical relationship between two uncertain events, and forms the basis of a general theory of complex systems proposed by Bar-Yam.


  • Physical Information: If the "uncertain event" you're interested in is the position and velocity of particles in a system, then calculating the statistical uncertainty will give you what physicists call the entropy of the system. Entropy has all the properties of statistical information, but also satisfies physical laws like the second law of thermodynamics (the entropy of a closed system does not decrease).


  • Communication Information: Now suppose that the "uncertain event" is a message you'll receive from someone. In this case, quantifying the uncertainty results in communication information (which is also called entropy, and there's a funny reason* why.) Communication information differs from statistical information in that, for communication, the information comes in the form of a message, which is independent of the physical system used to convey it.


The neat thing about these flavors of information is that they are all described by the same mathematics. The first time I learned that the same formula could be used in all these situations, it blew my mind.
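That shared formula is Shannon's entropy, H = -sum of p log2 p over the possible outcomes. Here's a small sketch of it, along with the mutual information mentioned in the first bullet; the distributions are toy examples of my own choosing:

```python
import math

def entropy(probs):
    """Uncertainty of an event, in bits: H = -sum p * log2(p)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin carries 1 bit of uncertainty; a biased coin carries less.
print(entropy([0.5, 0.5]))   # 1.0 bit
print(entropy([0.9, 0.1]))   # about 0.47 bits

def mutual_information(joint):
    """I(X;Y) = H(X) + H(Y) - H(X,Y), where joint[x][y] = P(X=x, Y=y)."""
    px = [sum(row) for row in joint]           # marginal of X
    py = [sum(col) for col in zip(*joint)]     # marginal of Y
    hxy = entropy([p for row in joint for p in row])
    return entropy(px) + entropy(py) - hxy

# Two perfectly correlated coins: knowing one gives the full 1 bit.
print(mutual_information([[0.5, 0.0], [0.0, 0.5]]))   # 1.0
# Two independent coins: knowing one tells you nothing.
print(mutual_information([[0.25, 0.25], [0.25, 0.25]]))  # 0.0
```

Whether the probabilities describe dice rolls, particle positions, or messages on a wire, the formula is the same--which is exactly the mind-blowing part.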

One might think that this concept is already amazingly broad; why is a "more general mathematics of information" needed? The answer is that people were so inspired by the concept of information that they've applied it to fields as diverse as linguistics, psychology, anthropology, art, and music. However, the traditional mathematics of information doesn't really support these nontraditional applications. To use the standard formula, you need to know the probability of each possible outcome of an event; but "probability" doesn't really make sense when talking about art, for example. So a big part of my research project is trying to understand the behavior of information when the basic formula does not apply.

*Having just invented the mathematical concept of communication information, Claude Shannon was unsure of what to call it. Von Neumann, the famous mathematician and physicist, told him "You should call it entropy, for two reasons. In the first place your uncertainty function has been used in statistical mechanics under that name, so it already has a name. In the second place, and more important, no one really knows what entropy really is, so in a debate you will always have the advantage."