
Phase Transitions

One of the biggest projects of complex systems research is to find "universal" phenomena: patterns that manifest themselves in similar ways across physical, social, and biological systems. One phenomenon that appears regularly throughout complex systems is phase transitions: those instances when a slight change in the rules causes a massive change in a system's behavior. These changes only seem to happen when the system is at certain "critical" points. Understanding when these phase transitions occur, and what happens when they do, will go a long way toward increasing our understanding of systems behavior in general.

To illustrate the many manifestations of this idea, let's look at some examples:


  • Physics: Water boils at 212 degrees Fahrenheit. This fact is so commonplace that it's easy to forget how fundamentally surprising it is. Temperature is basically a measure of how "jittery" the molecules in a substance are. Most of the time, if you increase water's temperature by a degree or two, you make the individual molecules buzz around faster but the liquid itself ("the system") retains all of its basic properties. But at the magical point of 212 degrees, a slight change in jitteriness radically changes the system's behavior. At the critical point, the slight change is just enough to overcome certain forces holding the molecules together, and off they go.


  • Computer Science: Say you give a computer a randomly selected problem from a certain class of problems (like finding the shortest route between two points on a road map), and see how long the computer takes to solve it. Of course, there are many ways of "randomly" choosing a problem, so let's say you have some parameters that control how likely certain problems are relative to others. For the most part, a small change in the parameters won't change the complexity of the problem much, but at some critical values, a small change can make the problem much simpler or much more difficult. (For a technical exposition, see here.) Papadimitriou claimed that, in some mathematical sense, these are the same kind of phase transitions as in solids and liquids, but I don't know the details on that claim.


  • Mathematics: There are several mathematical phenomena that behave like phase transitions, but I'll focus on bifurcations. A dynamical system in mathematics is a system that evolves from one state to another via some rule. Change the rule a little and you'll change the system's behavior, usually not by much, but sometimes by a whole lot. For instance, the system might shift from being in equilibrium to alternating between two states. Change the rule a bit more and it could start cycling through four states, then eight. Another small change could land you in chaos, in which predicting the future behavior of the system is next to impossible. (See the code sketch just after this list.)


  • Ecology: Okay, enough with theory; let's look at some situations where phase transitions matter in a huge way. Ecosystems are adaptive, meaning that they can absorb a certain amount of change while maintaining their basic state. However, Folke et al. have extensively documented what they call "regime shifts" in ecosystems--changes from one stable state to another, very different stable state (think rainforest to desert). Often these shifts appear to be triggered by human behavior. Folke et al. also review ways to increase ecosystem resilience (i.e. make ecosystems less susceptible to regime shifts) by, for example, promoting and maintaining biodiversity.


  • Economics: Well, for starters, there was the Great Phase Transition of 1929, or the current phase transition triggered by sub-prime lending. In both events, small crashes cascaded into much larger ones because of underlying problems in the market: in the first case, people buying stocks with borrowed money; in the second, investments in risky mortgages that only made sense while interest rates were low. These underlying problems created a situation where a single "spark" could bring the whole market down.


  • Social Sciences: The idea of a "tipping point" actually belonged to sociological theory before Malcolm Gladwell popularized it. It refers to any process that, upon gaining critical momentum, cascades dramatically. The term was first coined to describe white flight: once a critical number of nonwhites moved into a neighborhood, all the whites would head for the 'burbs. It has since been used to describe all manner of trends and fads, as well as contagious diseases. Trends that never attract enough initial supporters die out quickly, but beyond a certain point, they're unstoppable. "Facebook" unstoppable.
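
To make the mathematics bullet concrete, here is a minimal Python sketch (my own illustration, not part of the original post) of the logistic map, a textbook dynamical system. Nudging the parameter r past certain critical values shifts the long-run behavior from a fixed point to a 2-cycle, then a 4-cycle, and eventually chaos:

```python
# Minimal sketch of period-doubling in the logistic map x -> r*x*(1-x).
# The values of r below are standard illustrative choices.

def long_run_states(r, x0=0.5, burn_in=1000, sample=64):
    """Iterate the map past a burn-in period, then collect the states visited."""
    x = x0
    for _ in range(burn_in):
        x = r * x * (1 - x)
    states = set()
    for _ in range(sample):
        x = r * x * (1 - x)
        states.add(round(x, 4))
    return sorted(states)

for r in (2.8, 3.2, 3.5, 3.9):
    states = long_run_states(r)
    print(f"r = {r}: {len(states)} distinct long-run state(s), e.g. {states[:4]}")
```

The jump from one state to two, to four, to dozens happens at sharply defined values of r: the same "slight change in the rules, massive change in behavior" pattern described above.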


Given their importance and ubiquity, understanding the how, why, and when of phase transitions is a crucial project. The good news is that they're not totally unpredictable--there are certain signs that tell you when a phase transition may be approaching. However, this discussion must wait for another time.

Where did my gills go, again?

This has nothing to do with complex systems theory, but it's so cool I had to share it. I found out on Wired Science today that, according to anatomist Neil Shubin's book Your Inner Fish, hiccups are a leftover evolutionary impulse from our time as amphibians. Essentially, when we hiccup, our brain is trying to get us to breathe through our gills, rather than our lungs. As the Guardian explains:

Spasms in our diaphragms, hiccups are triggered by electric signals generated in the brain stem. Amphibian brain stems emit similar signals, which control the regular motion of their gills. Our brain stems, inherited from amphibian ancestors, still spurt out odd signals producing hiccups that are, according to Shubin, essentially the same phenomenon as gill breathing.


Kevin Costner may have been a visionary after all.

Christos Papadimitriou

Christos Papadimitriou, one of the world's foremost computational theorists, gave a talk Thursday at MIT entitled "The Algorithmic Lens: How the Computational Perspective is Changing the Sciences." Through a series of eight "vignettes" in math, physics, biology and economics, he showed how ideas from computer science have influenced thinking in all other sciences. I don't know if he explicitly aligns himself with the complex systems movement, but the ideas he presented were very much in line with complex systems thinking, and gave me a lot to ponder.

When mathematicians and physicists look at a problem, the main questions they ask are "Is there a solution?" and "How do we find it?" If there is even a theoretical procedure for finding the answer, the mathematicians and physicists are usually satisfied. What computer scientists bring to the table is another question: "How complex is the solution procedure?" Computer scientists ask this question because they know of many problems which can be solved in theory, but which even the fastest computer in the world couldn't solve before the end of the universe. Computational complexity started as a practical concern of deciding which problems can be solved in reasonable amounts of time, but it was soon recognized as an interesting theoretical problem as well. Papadimitriou's thesis is that the importance of this question has now spread beyond computer science to all of the natural and social sciences.
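
A quick back-of-envelope calculation (my numbers, not Papadimitriou's) shows why "solvable in principle" is not the same as "solvable in practice." Assume an absurdly fast hypothetical machine doing 10^18 operations per second:

```python
# Why "solvable in theory" can still be hopeless in practice.
# The machine speed and problem sizes are illustrative assumptions.
OPS_PER_SECOND = 10**18          # a generously fast hypothetical computer
SECONDS_PER_YEAR = 3.15e7
AGE_OF_UNIVERSE_YEARS = 1.4e10   # roughly

for n in (50, 100, 200):
    poly_years = n**3 / OPS_PER_SECOND / SECONDS_PER_YEAR   # a polynomial-time procedure
    expo_years = 2**n / OPS_PER_SECOND / SECONDS_PER_YEAR   # a brute-force exponential procedure
    print(f"n = {n}: n^3 steps ~ {poly_years:.1e} years, "
          f"2^n steps ~ {expo_years:.1e} years "
          f"(universe age ~ {AGE_OF_UNIVERSE_YEARS:.0e} years)")
```

By n = 200 the exponential procedure already needs vastly longer than the age of the universe, even on this imaginary machine, while the polynomial one finishes in a blink.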

In this post I'll focus on two of his vignettes. My next post will focus on a third.

The first is from economics. It is a central tenet of economic theory that a market will always "find" its equilibrium: that magical point where supply, demand, and price are perfectly aligned. However, Deng and Huang, among others, have shown that finding such an equilibrium is computationally intractable: there is no known efficient (polynomial-time) procedure for it, meaning that even very powerful computers can't be expected to find equilibria in large markets.

Now, the market itself is performing a computation as its various players try to sort out optimal prices and production levels. If it were true that the market could always find its equilibrium, this would constitute a "proof" that the problem can be solved relatively easily: you could just write a computer program that emulates what the market does.

But if an efficient solution procedure is out of reach, then markets cannot always be finding their equilibria. And this is actually obvious from watching how markets really behave: they go up and down, they crash, they generally do strange things. So this tenet of economic theory must be due for a serious revision.

A second interesting vignette was ostensibly about the brain, though it has much wider implications. As we make decisions, we can often feel different parts of our brain working against each other. Part of us wants something and part of us wants something else. Papadimitriou asked, "Could this possibly be the most efficient way to make decisions?" It doesn't seem particularly efficient. And if it isn't, why has our brain, over millions of years of evolution, developed such an inefficient process?

A recent paper by Livnat and Pippenger casts the problem this way: can an optimal decision-making system ever contain agents with conflicting priorities? The answer is no in general, but yes if the system has limited computational power, i.e. limited resources for dealing with complexity. Which, of course, is true of every real-world decision-making system, including brains.

Their research implies not only that it makes sense for our brain to seemingly conflict with itself, but also that, if you are assembling a decision-making team, it can actually make sense to include people who disagree with each other. A belated lesson for Mr. Bush, perhaps?

Next time: phase changes!

On Capitalism

Love it or hate it, there's no denying that capitalism is the dominant economic system on the globe. In countries that established capitalism on their own terms (i.e. not the ones where capitalism was imposed by intervention from other countries), productivity and average (material) quality of life have grown consistently, far outperforming countries with other economic systems. What is behind this success? How can a system which is (in some sense) based on inequality provide better for its citizens than systems which explicitly try to provide for everyone?

Let's take a closer look at how capitalism works. Ideally, people provide goods or services which are useful to society, and if other people value these goods or services, they give money in exchange for them. Money is a powerful incentive, so people have a motivation to provide things that other people want. If any particular need of society is not being taken care of, there is an especially high incentive for some entrepreneur to come along and start providing it. Eventually, people evolve different strategies for providing goods and services: they organize themselves in different ways and experiment with new products and methods of distribution. In theory (there's that word again!), the people and organizations that succeed are those that most effectively provide what other people want.

Broadly speaking, this is exactly how a complex system should be run. The incentives are in place for people to do things that are good for other people. Those who "run" the system (i.e. the government) do not dictate exactly what we should do, but instead make sure the incentives work correctly in encouraging us to be useful. Creativity and experimentation are allowed, even encouraged. The final result is unpredictable, but the market generally succeeds in delivering the things most of us need.

Of course, there are many, many problems with capitalism. Too many for me to list exhaustively, though I will highlight a few major ones:

  • Inequality: In any incentive-based system, some people will get more of the incentive and some will get less. This is just how incentives work. Under capitalism, however, money is tied to our basic ability to survive. If we want to avoid people starving to death, or living homeless, just because they are bad capitalists, the best solution is a strong social safety net that provides basic necessities for everyone.


  • Unfair competition: If you have a good idea and a good way of delivering it, you should be able to make money from it under capitalism. Unfortunately, established corporations have ways of squelching efforts by upstarts. These unfair practices should be illegal, but combating them requires a strong, independent government, and for this we probably need publicly financed elections.


  • Harming the public: This is a broad category, including things like deceiving your customers out of money (think credit card companies) and using up resources that should belong to all of us, like the environment. Again, a strong, independent government is necessary, though citizen activists also have an important role in protesting these abuses.


  • Money "becomes" morality: I've been thinking a lot about this point lately. It seems that in any incentive-based system, people have a tendency to internalize the incentives to the point where they bleed into their notions of right and wrong. For capitalism, this means that some people seem to think anything that makes them money must be "right"--a point that mundane turbulence made earlier. This is a serious issue for capitalism because it feeds into all the other negatives above ("The love of money is the root of all evil.")

I think the overall message is that capitalism is very good at providing for the needs and wants of individuals, because the incentive (money) comes from individuals. It is much worse when it comes to providing for our collective needs (e.g. a clean environment). We need to think creatively about how to make capitalism and democracy work for our needs as a whole, and not just as individuals.

On Communism

Communism was always a mystery to me. Why was it that all the countries supposedly founded on the egalitarian ideals of Marx ended up as repressive police states? Was it just a historical accident, or is there a deeper reason?


In this post I will argue that the failure of communism was the inevitable result of a failure to manage complexity. I expect this thesis to be somewhat controversial--chime in if you have an opinion. Also, I am not a history expert, so please forgive and correct any errors I make. As in many other areas, my thinking on this issue owes a large debt to Yaneer Bar-Yam.

Let's start with Marx's principle: "from each according to his ability, to each according to his need." According to this principle, everyone performs the tasks they are good at, and the goods and services produced are redistributed to those who need them. If this process runs smoothly, the needs of the entire society are taken care of.

However, as we all know from personal experience, it's complex enough to figure out what one person's abilities and needs are. Imagine trying to discern the needs and abilities of an entire country, and how best to match the needs and abilities with each other. To do this in a way which takes the idiosyncrasies of each individual into account would be a massive complexity overload; it would take practically every individual in the country just to do the planning, with no one left to do the actual work.
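
A crude back-of-envelope sketch (my own illustration, not part of the original argument): even before a planner gathers any information about individual needs and abilities, the bare number of ways to assign n workers to n distinct jobs grows factorially.

```python
import math

# How many one-to-one assignments of n workers to n distinct jobs exist,
# before we even start learning anything about needs and abilities?
# The count is n!, which grows explosively.
for n in (10, 100, 1000):
    digits = len(str(math.factorial(n)))
    print(f"{n} workers and {n} jobs: n! is a number with about {digits} digits")
```

No real planner would enumerate assignments one by one, of course; the point is simply that the space of possible matchings explodes far faster than the population does.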

So how did the USSR and other communist societies deal with this problem? Recall from last time that the only way to control a system more complex than yourself is to coercively reduce the system's complexity. This is precisely what happened in communist countries: they turned into permanent police states. In order for the leaders to control the economies they were trying to plan, the populace had to be forced into conformity and regimentation, i.e. lower complexity. People were forced into occupations that were not the best match for their talents, and governments made the simplifying assumption that everyone's needs were pretty much the same. It was the only way the organizational problem could be solved.

These simplifications worked, for a time. Eventually, in the USSR, people grew tired of the coercion and the economy stagnated. Gorbachev sought to reinvigorate the nation by allowing some economic and political freedoms, not realizing that the lack of freedom was precisely what made the organizational system possible. No longer able to control the recomplexified system, the government fell.

So could communism ever work? Not, in my view, on the scale of a whole country. The principle of need and ability could be applied to smaller groups, where the organizational challenges are less severe. We see this, for example, in cooperative communities such as the kibbutzim of Israel (though even these are suffering from complexity management challenges.) In these smaller communist societies, you miss out on the efficiency provided by economies of scale, and there is no opportunity for highly specialized professions such as neurosurgeon or theoretical physicist. But the upside is the possibility of a society where everyone's needs are taken care of. Not such a bad deal.

Join us next time when we ask, "Does capitalism do any better?"

How Complex is a Human?

We humans are an egotistical bunch. We'd like to think that we are capable of anything, that our minds are infinite, and that there is no limit to our potential ingenuity.

Nevertheless we are, by any measure, creatures of finite complexity. Our bodies and our brains contain a finite number of cells, we live a finite amount of time, and at any given time there are a finite number of things we are physically and mentally capable of doing.

Since our complexity is finite, we should be able to quantify it somehow. As we discussed last time, there are different ways of quantifying complexity. We could look at ourselves as a system of cells and chemicals, and ask how many pages it would take to write a complete description of how a human is composed of these parts. Alternatively, we could look at ourselves as active beings and ask how many potential actions we could take at any given instant. Or how many actions we actually take (on average) over the course of a lifetime. Yaneer Bar-Yam has suggested this example: You could record a digital movie of a person from birth until death, store this movie in a digital file, and calculate the size of this file in gigabytes. I don't know if any of these computations has ever been tried, but each would give you an approximate number which quantifies just how finite we are.
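
Here is a rough version of Bar-Yam's movie thought experiment in code; the lifespan, resolution, frame rate, and compression ratio are my own assumptions, chosen only to get an order of magnitude:

```python
# Order-of-magnitude estimate for the "movie of a whole life" thought experiment.
# All parameters below are illustrative assumptions.
YEARS = 80
SECONDS = YEARS * 365.25 * 24 * 3600
FRAME_RATE = 30                 # frames per second
PIXELS_PER_FRAME = 1920 * 1080  # one HD frame
BYTES_PER_PIXEL = 3             # uncompressed 24-bit color
COMPRESSION_RATIO = 100         # assume video compresses roughly 100:1

raw_bytes = SECONDS * FRAME_RATE * PIXELS_PER_FRAME * BYTES_PER_PIXEL
compressed_bytes = raw_bytes / COMPRESSION_RATIO
print(f"Raw: ~{raw_bytes / 1e15:.0f} petabytes; "
      f"compressed: ~{compressed_bytes / 1e15:.1f} petabytes")
```

A few petabytes is an enormous file, but it is very much a finite one.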

Scary, huh? At least I find it so.

This may seem like just an interesting exercise, but it has important consequences due to the following fundamental rule:
You can't control a system that is more complex than yourself.
The reason for this is simple: If a system is more complex than you, it has more possible actions than you have potential responses. So it will eventually present you with a situation for which you have no response.
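
Here is a deliberately cartoonish toy model of that argument (my own sketch, essentially a stripped-down version of Ashby's idea of requisite variety): a system that can throw more kinds of disturbances than you have responses will, sooner or later, throw one you cannot answer.

```python
import random

# Toy model: the system produces one of num_disturbances distinct disturbances
# each round; the controller has num_responses distinct responses, each of which
# neutralizes exactly one disturbance. Any disturbance outside the controller's
# repertoire simply has no answer.
random.seed(0)

def fraction_unanswerable(num_disturbances, num_responses, rounds=10_000):
    misses = sum(
        1 for _ in range(rounds)
        if random.randrange(num_disturbances) >= num_responses
    )
    return misses / rounds

for disturbances, responses in [(10, 10), (20, 10), (100, 10)]:
    frac = fraction_unanswerable(disturbances, responses)
    print(f"{disturbances} possible disturbances vs {responses} responses: "
          f"~{frac:.0%} of rounds have no answer")
```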

To illustrate this rule, suppose you are trying to manage a group of people; say, a family, business, class, or club. You might wish to control the actions of all of them, to make sure they don't act against your wishes. However, the group is more complex than you are, because there are more of them than there are of you. The only way to control them completely would be to reduce the complexity of the group; for example, you could chain them to a wall and thereby limit their potential actions.

If you wish to organize a group without such restrictive measures, your best option is to put incentives and disincentives in place to promote the actions you wish. Then step back and let the group evolve as a system. If you designed your incentives correctly, the group should evolve into a system with the properties you desire. If not, the incentives should be changed. But no matter how you set up the system, you will not be in control of it. The group and its members will be making their own decisions, and different groups will evolve differently under the same set of incentives. This is the nature of the game.

Tune in next week, when we find that this discussion has massively political implications!

Quantifying Complexity

Complexity matters. This will hopefully become evident through the course of our discussion, but for now let's accept the principle that, in a great many situations, the extent to which something is complicated can be hugely important.

For a mathematician or scientist, a natural step after identifying something important is to attempt to quantify it, in the hopes of determining some of its properties. We humans actually have a decent intuitive sense of different quantities of complexity. For example, we could all agree that Mozart's 40th Symphony is more complex than Twinkle Twinkle Little Star, or that solving a crossword puzzle is more complex than tying a shoe. Other comparisons are less clear: Is a horse a more complex animal than a lion? Is China's economy more complex than India's?

Complexity researchers have identified several different ways that complexity can be quantified. These measures roughly fall into three categories:
  • Variety - the complexity of an object can be quantified in terms of the number of actions it can take or the number of states in which it can exist.
  • Descriptive complexity - the complexity of an object can be quantified in terms of the length of the shortest complete description of that object.
  • Algorithmic complexity - the complexity of a process can be quantified in terms of the number of steps or amount of time required to complete that process.
These three categories can be linked mathematically, which supports the idea that they are three expressions of the same concept rather than three different concepts. However, none of these can be unambiguously applied to the real world. For example, the number of actions or states of an object can be difficult to quantify. How many different actions can a human take? Descriptive complexity notions are dependent on the language used to describe something, and on what counts as a "complete" description. Similarly, algorithmic complexity notions depend on how a process is broken into tasks. This is not to say that the above quantification schemes are useless; just that care should be used in applying them and the values they give should be seen as approximate.
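
One crude but hands-on proxy for descriptive complexity is the length of a compressed description. This is only a stand-in (my own illustration), and the numbers depend on which compressor you choose, which is exactly the language-dependence caveat above:

```python
import random
import zlib

# Crude proxy for descriptive complexity: how short can a general-purpose
# compressor make the description? A highly regular string compresses far more
# than a patternless one, even though both have the same raw length.
random.seed(0)
n = 10_000
regular = b"ab" * (n // 2)                                    # obvious pattern
patternless = bytes(random.randrange(256) for _ in range(n))  # no pattern

for name, data in [("regular", regular), ("patternless", patternless)]:
    compressed_size = len(zlib.compress(data, 9))
    print(f"{name}: {len(data):,} bytes raw -> {compressed_size:,} bytes compressed")
```

By this proxy the regular string is far simpler than the patternless one, but swap in a different compressor and the exact numbers shift, which is why such measures should be treated as approximate.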

This approximateness is a problem for many scientists, who are used to dealing with exact quantities. How can we apply our analytical tools to a quantity which can never be precisely measured?

The way forward, in my opinion, is as follows. We (complex systems researchers) will investigate abstract models in which complexity can be mathematically quantified. The goal of investigating such models will be to discover laws of complexity which may carry over to the real world. At the same time, these laws must be checked against real-world experiment and observation. Because of the semi-fuzzy nature of complexity, the laws we discover will likely not be quantitative (e.g. F = ma or E = mc^2), but qualitative (e.g. "energy is conserved").

In the near future, we will investigate two examples of such laws: Occam's Razor (the simplest explanation is the likeliest) and Ashby's Law (an organism must be as complex as its environment.) In the meantime, can anyone think of other qualitative laws of complexity/complex systems?

Ideas for future posts

I imagine this will become a regular feature of the blog. The purpose of a post like this is twofold. First, it lets me record all the random ideas I don't want to forget. Second, I'm hoping it may stimulate some pre-discussion of some of these topics. Drop a comment if anything intrigues you!

The finite mind and the infinite universe
The resilience of life and the fragility of humanity
A systems definition of life?
Thermodynamics and the creation of complexity
The possibilities and limitations of complex systems research
Big-picture and little-picture thinking in science
Managing a complex system
Sustainable development in poor countries
Prisoner's Dilemma and its implications
Quantum physics: What does it mean?
My teaching experience in Chicago: specifically, the nature of rules
Good complexity vs. bad complexity
The beer distribution game
Definitions of complexity
Definitions of complex systems
The role of information in complex systems research
Universal behaviors: specifically, universal responses to stress
Chaos, unpredictability, and the butterfly effect
The impossibility of colonizing other planets
Outsourcing and the fall of the Ottoman empire
What can I do? I am only one person!
Cooperation and competition
Youtube, craftster, and the evolution of creativity
Nontraditional forms of collective action
Comparative economic and political systems

Well, that seems pretty good for a starting list. More to follow!

How Can We Change Behavior?

This one's been bugging me recently. Our current lifestyle is unsustainable (by "our" I mean the collective inhabitants of this planet). Scientists know it, I know it, and I think most people I know would agree. We can't sustain our current way of life for more than about fifty years before environmental catastrophes force us into a very bad place.

Now, I believe people are inherently good. If a deity visited someone and said "you can personally save the entire planet, but you'll have to make some major lifestyle changes," I think anyone I know would say yes. Despite our differences, we all want the best for all of us.

However, it's not that simple. Even if anyone would sacrifice to individually save the world, that doesn't mean everyone will sacrifice to collectively save the world. Why not? Here are a few reasons:

  1. We don't know what to do. We know we should consume less, but does this mean we should bury ourselves and fertilize the soil? What does a sustainable lifestyle look like? We don't really know (though some have thought about it!)

  2. Culture. It's impossible not to at least partially assimilate the attitudes and lifestyles of those around you. Our imaginations are limited, so we're most likely to act similarly to those around us.

  3. Disconnect between action and consequence. This is a big one, in my opinion. Suppose that every time we drove a car, ate meat, or left a light switch on, we could see the environment deteriorate around us. Or conversely, suppose every time we recycled or biked to work, we could feel the earth gain vitality. If either of these things were true, I'm sure our problems would be gone in no time. But because environmental consequences occur over such large timespans and spatial scales, it's hard to see the effect of anything we do. Positive and negative feedback can change behavior, but only if that feedback is immediate and visible. It's much harder to worry about an incremental change fifty years from now.

  4. Many other reasons: hopelessness, inertia, not wanting to admit the scale of the problem, not wanting to clean a mess someone else created, etc.

I highlight the first three reasons because they are inherently complex systems issues. Because the physical and social issues around global warming are so complex, the best course of action is not clear (though some actions are clearly better than others.) Because we live in a complex system, our actions are influenced by those around us. And also because the physics of the environment is complex, the consequences of our actions are far removed from the actual action.

So what's to be done? Standard complex systems theory tells us that a complex system cannot be coerced. You cannot force everyone to live sustainably unless you first convert the world into a totalitarian state, which is a terrible solution. To change the behavior of a complex system you must somehow change the rules of the game, creating incentives and disincentives that promote responsible action. This is essentially the theory behind the Kyoto Protocol: don't force nations to change, but give them incentives to live cleaner.

Unfortunately, the carrots and sticks provided by the Kyoto Protocol apply only at the national level. Collective rewards and punishments are not very effective at changing behavior; feedback works best on the individual scale. Eventually we must find a way to trickle these carrots and sticks down to the state, local, and ultimately individual level, so that people are tangibly rewarded for living sustainably and tangibly penalized for not.

Is this possible? It may not be, but I think it's our only hope.

The What and the Why

So what are complex systems, and why are they worth studying? These two questions could fill books, and I will be returning often to both of them, but it only makes sense to start this blog off with a preliminary stab at some answers.

What are complex systems? They are systems which have so many parts and variables that traditional scientific models fail to describe them. They are found across the physical, biological, and social sciences. The salient features that distinguish complex systems are:
  • Many small parts that interact to create large-scale behavior. These "parts" may be molecules, cells, people, animals, air particles, or grains of sand.
  • A sufficient number of parts so that even if the individual interactions between them were perfectly understood (as they are in some physics situations), it would still be computationally impossible to precisely predict the large-scale behavior.
  • Despite the impossibility of exact predictions, these systems still exhibit characteristic behaviors which make them amenable to mathematical analysis. These behaviors may include self-similarity (intermediate-scale behavior mimics large-scale behavior, e.g. local governments resemble national governments) and tradeoffs or fluctuations between large-scale cooperation and individual or small-scale action.

So why study these systems? First of all, we study them because they shape our world. I think it's no exaggeration to say that the future of humanity depends on our understanding of how complex systems work (for example, responding to global warming will require a deep understanding of the many systems that interact to create greenhouse gases, and how these systems can be changed.)

The second reason we study these systems is that the work has not already been done. Broadly speaking, the problems of simple systems in science have already been solved. Given two particles, molecules, or cells, a physicist, chemist, or biologist could pretty much tell you how they will interact. Small-scale interactions are well understood in all areas of the physical sciences (the social sciences are a different matter, but we can discuss this later). On the other hand, complex systems are not only poorly understood, they have historically been ignored in many areas of science. The reason for this is that exact predictions are the stock-in-trade of physical science. To scientifically tackle problems where exact prediction is impossible requires a paradigm shift. Scientists must learn to ask different kinds of questions and expect different kinds of answers. This paradigm shift started sometime in the 1980s (more on complex systems history in future posts), and it is still occurring today. Because the field is so young, many fundamental questions are still open. This makes it a great field for the scientifically adventurous.

The third argument for complex systems study is that it is fascinating. Inherently interdisciplinary, complex systems research brings together scholars from across the hard and soft sciences. Because the focus is on big-picture questions, an open mind can be just as valuable an asset as libraries of technical knowledge. The culture of complex systems research is such that no questions are off limits, and investigating problems outside of your field of expertise is encouraged rather than shunned. So you get to work on fascinating problems, and no one tells you not to!

That's my summary of what I do and why. Future posts will focus more on specific systems or conceptual issues. In the meantime, please comment!