Why do attempted murderers get less jail time than murderers? Why is a drunken driver who hits a tree punished less than one who hits a person? If our laws are set up to correct behavior, why do they punish according to the outcome of an action rather than the intention?
Questions like these were raised in a talk I heard today by Fiery Cushman who, in addition to having an awesome name, has done some fascinating research on this subject.
It turns out the phenomenon of punishing by outcomes rather than intentions is reflected not just in our legal system, but in our individual behavior. In a variety of experiments (see his publications page for details), Cushman found that people's decisions to reward and punish, even among children as young as five, are based on the result of someone's actions rather than on what the person was thinking.
Interestingly, when asked if they want to be friends with a person, or whether that person is "good," intentionality becomes much more important. Thus, if you throw a rock at me and miss, I'll think you're a jerk, but I won't chase you down the way I would if you'd hit me. (Hah, I'm actually a wimp. I'd go home and cry. But you get the point.)
This talk being part of an evolutionary dynamics class, Cushman turned to the question of how this punishment instinct might have evolved, and why it evolved so differently from moral judgments.
The answers to these questions are still cloudy, but they may have to do with our interactions with the natural (non-human) environment. Consequences in the natural world are based on outcomes: If you climb a dangerous cliff but don't fall, you aren't punished, even though it was still a bad idea. So from these non-social interactions, we're "used" to being punished based on outcomes; in evolutionary terms, we've adapted to it. And according to some of Cushman's experiments, we learn better from outcome-based punishment, because it's what we expect. So punishment evolved to fit our already-established learning patterns. I think. If you're having trouble following this, it's tricky stuff. I can barely follow it myself.
The Evolution of Cooperation
First of all, a personal triumph: I've had my first academic paper accepted! "A New Phylogenetic Diversity Measure Generalizing the Shannon Index with Application to Phyllostomid Bats" is tentatively accepted for publication at the American Naturalist, a venerable biology journal. Whooo!
But on to our main topic: It's one of evolution's oldest riddles. If evolution is a brutal battle for survival, in which only the fittest survive, why do we see so much cooperation in nature? Why, in extreme cases, do some animals sacrifice themselves to help others of their species? In the competition between individuals, genes, and species, what kind of advantage does this altruistic behavior confer?
This question is quite deep and has generated an array of possible answers, whose implications go beyond evolutionary biology. I'll outline the history of how this question has been explored, and offer something of a synthesis to conclude.
- Reciprocation: It pays to help someone else if that person will help you in return. This fact is incontrovertible, and helps explain many of the interactions we see in nature, like monkeys grooming each other. However, reciprocation does not explain the acts of extreme altruism sometimes seen in nature, such as cellular slime moulds that sacrifice themselves to help others find food. So it can't be the whole story: some actions really are selfless.
- Group selection: This is the idea that Darwinian evolution acts on groups of organisms as well as on individuals. If the members of a group cooperate well together, then the group as a whole may survive, while other less cooperative groups die off. This idea fell out of favor in the 60's as mathematical analysis showed group selection is generally a much weaker evolutionary force than individual selection. New models, however, show that group selection can be important in some circumstances.
- Kin discrimination: This view holds that the real unit of Darwinian selection is not organisms or groups but genes. Since genes are the material that is passed on through generations, the genes that help themselves out will survive the best. So if your gene "sees" that another individual has the same gene, your gene will "want" to help that person out in order to further its own interests. Of course, genes can't really see each other. But your genes can tell you to help out your relatives, who are likely to have the same genes as you. This is the kin discrimination theory: our genes tell us to help our immediate family members, and thereby further their own gene-centric interests. Preferential behavior toward relatives is commonly observed in animals, and one study even found closely-related plants helping each other out.
- Repeated interactions: Axelrod's tournaments of Prisoner's Dilemma games show that, while it may be beneficial to act selfishly in the short run, more cooperative strategies are better if you know you'll be interacting with someone repeatedly. The best strategies for repeated interactions are those which reward others who cooperate with you and punish those who don't (see the sketch after this list).
- Spatial structure: Cooperators do best if they're surrounded by other cooperators. One way this can happen is in ecosystems where offspring are born close to their parents and don't move much. In this case, the children of cooperators stay and cooperate with their relatives, while the children of selfish bastards hang out with their selfish bastard relatives and are miserable. Thus, systems with a strong spatial structure and little movement tend to favor cooperators. The system breaks down if the selfish bastards can move fast enough to find the cooperators and exploit them. Robert Austin found that spatial separation could help "altruistic" bacteria coexist with their "selfish" brethren.
- Punishment: Evolutionary biologists have also explored the idea that punishment can help enforce cooperative behavior. Punishment can be "vigilante-style", where any individual who sees someone else acting unethically can hurt them, or there can be some kind of agreed-upon authority whose job it is to punish misbehavers. Whether and how punishment works in nature still seems to be up for debate.
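To make the repeated-interactions idea concrete, here is a minimal sketch of Axelrod-style iterated Prisoner's Dilemma play. The payoff numbers and the two strategies are the standard textbook choices, not anything taken from Axelrod's actual tournaments:

```python
# Iterated Prisoner's Dilemma: tit-for-tat vs. always-defect (standard payoffs assumed).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    # cooperate on the first move, then copy the opponent's last move
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_b), strategy_b(hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, always_defect))  # (99, 104): the defector edges out this pairing...
print(play(tit_for_tat, tit_for_tat))    # (300, 300): ...but a pair of cooperators does far better
```

Defectors beat the cooperators they happen to meet, but pairs of cooperators rack up far more points overall, which is exactly why keeping interactions repeated (or keeping the defectors out) matters.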
Bottom line is, it doesn’t pay to be a nice guy in a world of assholes. But if you can find other nice people to interact with, and some mechanism for keeping the assholes out of your little nice-people club, then you’re on to something.
Each of the proposed mechanisms for cooperation has interesting implications for human society. I’ll highlight just one of them for now: spatial structure. When humans first evolved, long-distance travel was difficult, and so different societies could develop independently with their own norms of cooperation or selfishness. But now we can travel across the world in a day, so the spatial separation is gone. Any thoughts on the implications of this change for the stability of human cooperation?
Further reading
HIV evolves inside the body
This post is also taken from the excellent Evolutionary Dynamics course taught by Martin Nowak at Harvard.
The progression of HIV in the human body was a mystery for a long time. It sits in your body for years, not doing much, then suddenly it takes over your immune system and BAM!---you have AIDS. (The actual sound it makes when it reaches this point is unclear.) Pictorially, the process looks like this:
The red line represents the amount of disease in your body. When HIV is first contracted, the amount of virus shoots up dramatically, but then decreases sharply as the immune system responds. The virus load then stays at a small level, increasing only gradually, until the mysterious trigger happens and it shoots up again, this time impervious to immune responses. The upper blue line represents the amount of CD4 cells in your body, which are the immune cells that HIV attacks. (The slide is stolen from Martin's lecture.)
The question, then, is what is HIV doing in the long "asymptomatic" phase, and how does whatever it's doing enable it to suddenly explode after such a long time?
Nowak gave a surprising answer in a 1999 paper: it is evolving.
The idea is that when a person is first infected, they have only one strain of the disease in them (i.e. whatever strain they got from whoever transmitted it to them). The immune system can handle this: it makes an antibody designed to attack that strain, and beats it back down to a minuscule level. It can't kill it completely, though, because HIV can hide in healthy immune cells.
Now, while HIV is hiding in plain sight, it's also reproducing and mutating at a very high rate. Once it's mutated enough, the antibodies can't recognize it, so different antibodies must be produced to contain it. This process continues and the disease becomes more and more diverse within you.
But there's a limit on the number of different campaigns your immune system can wage at once. Nowak found a mathematical diversity threshold--i.e. a critical number of strains of the virus, beyond which the immune system can't deal with all of them at once (though any one of them at a time would be fine.) And then, BAM!
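Here is a toy simulation of the threshold idea. It's my own illustration with invented numbers, not Nowak's actual equations: assume the immune system can hold down at most a fixed number of strains at once, new escape mutants appear at random, and any strain beyond that capacity grows unchecked.

```python
import random

random.seed(0)
MAX_RESPONSES = 5       # hypothetical immune capacity (number of strains it can suppress)
MUTATION_RATE = 0.02    # chance per time step that a new escape strain appears
GROWTH = 1.5            # per-step growth factor of an uncontrolled strain

strains = [1.0]         # viral load per strain; infection starts with a single strain
for t in range(1000):
    if random.random() < MUTATION_RATE:
        strains.append(1.0)              # a mutant the current antibodies can't recognize
    for i in range(len(strains)):
        if i < MAX_RESPONSES:
            strains[i] = 1.0             # controlled strains are held at a low level
        else:
            strains[i] *= GROWTH         # past the diversity threshold: unchecked growth
    if sum(strains) > 1e6:
        print(f"step {t}: {len(strains)} strains, total load {sum(strains):.2e} -- BAM")
        break
```

For a long stretch nothing much seems to happen; the explosion only comes once enough distinct strains have accumulated.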
I found this fascinating because, while we all know the power of evolution to produce remarkable organisms, we don't usually think of this process happening within our own bodies. Also, this hints at the difficulty of finding a cure for HIV, since it is specifically designed to mutate its way out of trouble.
Evolution of Irregular Verbs
I'm currently taking an amazing class offered by the Program for Evolutionary Dynamics (PED) at Harvard. The goal of the course (and the research program) is to study evolution---of animals, diseases, languages, and other entities---with full mathematical rigor. Today's class included a presentation by one of the PED researchers on the evolution of irregular verbs, based on an article that appeared in Nature in 2007.
Anyone who has ever studied a foreign language will remember with a sense of frustration that all the screwy irregular verbs were precisely the verbs like "to be" or "to go" that get used more often than any others. Obscure, rarely used verbs tend to conjugate in regular patterns.
These researchers found that early on in the English language, many verbs that are now regular, such as "help" or "walk", were once irregular ("I halp my friend study for his quiz yesterday.") As time went on, these verbs regularized (their conjugations evolved to the regular form) one by one, except for those very common verbs like "to be" and "to go" that remain highly irregular (how do you get "went" from "go"?)
Moreover, the speed at which these verbs regularized is directly related to the frequency of their usage. This relationship can be expressed in a remarkably simple mathematical law: the speed at which a verb regularizes is inversely proportional to the square root of its frequency. In other words, if verb A is used 100 times as much as verb B, verb B will regularize 10 times as fast.
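As a back-of-the-envelope illustration of that law (the usage frequencies below are invented for the example, not actual corpus counts):

```python
from math import sqrt

def relative_half_life(uses_per_million):
    # regularization rate ~ 1/sqrt(frequency), so the "half-life" of an
    # irregular form scales like sqrt(frequency)
    return sqrt(uses_per_million)

for verb, freq in [("be", 40000), ("go", 5000), ("help", 100), ("wed", 1)]:
    print(f"{verb:>5}: irregular form persists ~{relative_half_life(freq):.0f}x longer "
          "than a once-per-million-words verb")
```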
The simplicity of this law suggests that there must be some kind of fundamental explanation---a simple model of language use that predicts this law mathematically. No such explanation has been found to date, but you can bet I'll be looking for one!
Evolution and Interdependence
So President Bush has finally taken a complex systems view of the economy:
Stupidity aside, he's entirely correct: our economy is highly interdependent. We discussed this situation last post, now I'd like to give some perspective on how interdependence comes to be.
Our economy, like life, is an evolutionary system, featuring competition, innovation, and adaptation to internal and external challenges. And I think some of the difficulty in understanding the current financial crisis comes from a misconception about evolution.
We usually think of (biological) evolution as a species-level process: each species makes its own incremental improvements in search of competitive advantage. But this is too simple a picture. Species do not evolve in isolation; they co-evolve in concert with all they interact with: plants, animals, microbes, and even minerals. In this co-evolutionary process, species develop relationships with each other, sometimes competitive, but often symbiotic or mutually beneficial in some way.
In the long run, co-evolution seems to produce increasing interdependence. Consider that all life started out as single-celled organisms, and that the co-evolution of these organisms led to multicellularity, a form of interdependence so advanced that the component cells can no longer live on their own. On a larger scale, multicellular organisms co-evolved to form ecosystems. While not as interdependent as a multicellular organism, an ecosystem still has the property that if you remove enough vital components, the whole system fails.
An interesting thing happens now. As interdependence grows, so does the scale at which evolution occurs. Life started with cells competing against cells, grew into organisms competing with organisms, and now, in a sense, we also have ecosystems competing with ecosystems. The rainforest, for example, is competing with the desert in Africa. If the rainforest fails, so do all the species that live there.
A similar process happens with economies. They begin with small, relatively self-sufficient businesses. These businesses develop relationships with each other, co-evolve, and grow webs of interdependence. In the US, the webs have become so complex that an obscure industry known as mortgage-backed securities has sunk our entire economy.
So here too, evolution has "scaled up." It's no longer just companies competing against companies, it's also our whole nation's economy competing against those of other nations, and indeed the whole world's economy competing against, well, itself.
I don't think interdependence can be avoided, but it certainly needs to be understood. When people speak of the "free hand of the market" correcting our economy's mistakes, they're thinking of individual companies competing independently, and failing to grasp the reality that, to some extent, our economy lives or dies as a whole.
Too Important to Fail?
The federal government is set to take over mortgage companies Fannie Mae and Freddie Mac. Earlier this summer, the government rescued the investment bank Bear Stearns. In each case it was decided that, even though the companies were in trouble of their own making, the damage caused by their failure would be too great for the economy to bear.
Strictly speaking, this isn't how our economy is supposed to work. It's supposed to be survival of the fittest: the companies that make the best decisions survive, and others fail. In this way good practices are rewarded, better business models evolve, and society progresses.
The problem is that, as part of this evolutionary process, the US economy has become increasingly interdependent. Companies need each other to survive, so that if a big one goes down it could take others with it. In the cases of Fannie Mae, Freddie Mac, and Bear Stearns, it was deemed that the failure of these companies would take out entire sectors of the economy, and as a country we couldn't let that happen.
I won't argue the merits of these decisions, but I'm interested in what they say about our economy. We're accustomed to thinking of our economy in terms of a system of competing animals. If one dies, others arise to take its place. But it may turn out our economy is more like another system: the human body, wherein if one part fails, the system suffers as a whole.
If this is true, then the whole of economic theory is based on an incorrect assumption. We may have some fundamental rethinking to do about how our economy works and why.
The paradox of order and randomness
Consider the following two images:
First view each image as is, and then click on them to see larger versions. Ignore for a moment the different sizes, and the copyright notice in the second picture (hope I'm not breaking any laws!) What's going on in these pictures?
The first is a randomly generated image, in which a computer essentially flipped a coin to decide the color (black or white) of each pixel. The second is composed of alternating black and white pixels in a checkered pattern (click on the image to see this clearly.)
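If you want to generate the two kinds of image yourself, here is one way to do it (assuming Python with numpy; the originals were of course made some other way):

```python
import numpy as np

rng = np.random.default_rng(0)
size = 256

random_img = rng.integers(0, 2, size=(size, size))   # a coin flip for every pixel
x, y = np.indices((size, size))
checker_img = (x + y) % 2                             # strictly alternating pixels

# Zoomed out, both images average to the same uniform grey:
print(random_img.mean(), checker_img.mean())          # both very close to 0.5
```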
At this resolution, the first picture still has some texture to it. But zoom out a bit more and it would reduce to a uniform grey, just like the second.
This highlights something of a paradox in complex systems theory: complete randomness is actually pretty boring. Sure, it's unpredictable, but because it has no structure, there's not much else you can say about it. And if you squint at it, it all averages out to grey. Contrast this to the following fractal image:
Now this picture has a lot of interesting structure to describe, like most complex systems.
Why is this a paradox? Because according to the definitions of complexity we discussed some months ago, a completely random system is more complex than anything else! Any order or structure in a system makes it easier to describe, thereby reducing its complexity according to conventional definitions. So the fractal is actually less complex than the random image.
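One quick way to see the "random means hard to describe" point is to use a compressor as a crude stand-in for description length (a rough illustration, not a formal Kolmogorov complexity calculation):

```python
import os
import zlib

random_bytes = os.urandom(10_000)   # 10,000 bytes of noise
patterned = b"01" * 5_000           # 10,000 bytes of checkerboard-like order

print(len(zlib.compress(random_bytes)))  # essentially incompressible: about 10,000 bytes
print(len(zlib.compress(patterned)))     # collapses to a few dozen bytes
```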
Complex systems researchers have recognized this problem for a long time, but there's no consensus on how to resolve it. Some have suggested adopting a different definition of complexity that behaves something like this:
That is, complexity is greatest somewhere between total order and complete randomness. But this is unsatisfying; complexity is not a mere mixture between order and randomness, but a delicate balance combining features of the two.
Of course, I have my own opinion as to how this paradox should be resolved. But that's a tale for another time.
Free Will, Randomness, and Determinism
Astronomer, inventor, and old friend of the family/distant relative Bob Doyle has begun a project to address old philosophical problems using information theory.
One such problem, as he explained to me at my aunt's 75th birthday last weekend, is free will versus determinism. Philosophers have been arguing for eternity whether free will exists, and if it does, where it comes from. Disconcertingly, free will seems incompatible with the major theories of physics. In Newtonian physics, all future states of the universe are completely determined by its present state, so no choices can ever be made. In quantum physics, events happen randomly according to precise mathematical rules, so the only "choices" are simply rolls of God's dice. Neither of these theories seems to allow for any human or animal agency in changing world events.
Bob's idea is that the combination of Newtonian determinism and quantum randomness can explain more than either theory separately. Randomness generates new information and ideas in our brains, giving us novel options to choose from. But our brain is deterministic enough to sort through these ideas and choose the ones that are consistent with our character and past experience. In other words, randomness provides the "free" aspect of free will, and determinism provides the "will."
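Here is a toy illustration of that two-stage picture, purely my own sketch rather than anything from Bob's work: randomness proposes candidate ideas, and a deterministic evaluation picks the one most consistent with an established character.

```python
import random

random.seed(42)

character = {"kindness": 0.9, "curiosity": 0.7, "risk": 0.2}   # made-up traits

def random_idea():
    # the "free" part: a novel option generated at random
    return {trait: random.random() for trait in character}

def consistency(idea):
    # the "will" part: a deterministic score of how well the idea fits who you are
    return -sum((idea[t] - character[t]) ** 2 for t in character)

ideas = [random_idea() for _ in range(10)]   # randomness generates the options
choice = max(ideas, key=consistency)         # determinism does the choosing
print(choice)
```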
I don't think this theory is complete, because there's no real explanation of what the choice-making process looks like. But it seems beyond dispute that both random and deterministic forces play a role in what we call "human creativity." Currently, Bob is scouring the history of philosophy for all that's been said on the free will question, and how information theory and physics could connect to this. The blog of his efforts is now a proud member of the plektix blogroll.
A Mathematician's Apology
First, the apology: I started a summer job last week and it's taking up a huge amount of time. So posts will be infrequent until my job ends in August.
But since it's on my mind, I'd like to share a bit about this job. I'm assisting in the PROMYS for Teachers program. The goal of the program is to give math teachers an experience similar to the way mathematicians do math.
We mathematicians approach math differently from the way anyone else does. Most people learn math by watching a teacher explain a concept and demonstrate some examples. Students then apply these concepts to some practice problems, and that's pretty much it.
Mathematicians are in the business of discovering new mathematics, not reproducing what is known. To do this, we do what all scientists do: we experiment. Except that for us, experimentation involves only a pencil and paper (computers are sometimes used, but not as much as you might think.) We take numbers or shapes or other mathematical objects and play around with them. This can be a frustrating and fruitless process, but eventually we hope to discover something interesting about the way these objects work.
The next step after discovery is to describe our discovery. This is harder than it sounds, because mathematical language is very precise. One's first few attempts to describe a discovery are often wrong in some way; perhaps an important qualifier has been left out. Sometimes we have to invent new language to describe a discovery.
Finally, we try to justify our discovery by proving it from first principles or from other established theorems. This process can range from easy (a few minutes of thinking) to moderately hard (a few weeks) to epic (a few centuries.) The most famous mathematical theorems come from discoveries that are simple to describe but surprisingly difficult to prove.
In the PROMYS for Teachers program, we try to give teachers a taste of this experience. We give them numerical problems that hint at deep mathematical patterns. We then ask them to describe these patterns precisely and prove them if possible. This is often frustrating for them, since they haven't been shown how to do the problems or proofs beforehand. But by making their own discoveries, they take ownership of the mathematics, and when the process works it is tremendously exciting.
This program is modelled on the Ross program at Ohio State University, which I attended as a high school student. That program basically made me into a mathematician, so it is very rewarding to me to be able to share this experience with teachers.
WSJ: Teach for America "proves" that teachers don't need pay
I know it's my own fault for even opening the Wall Street Journal to the editorial page. But something I found there last weekend irked me more than their usual "liberals are naive idiots" fare.
An editorial entitled "Amazing Teacher Facts" argued, using the example of Teach for America, that teachers don’t need to be paid any more than they currently are. If these bright young college grads are lining up to teach in inner-city schools at standard salaries, and doing a good job of it, then clearly money isn’t the issue in hiring quality teachers. The culprit must instead be the bureaucracy that requires teachers to take “education” courses (their quotes) to enter the profession the normal way.
This pinched my nerve because I did Teach for America, teaching for two years at Austin High School in Chicago. I was lost my first year and barely competent my second, but in a school with a large number of burnout teachers, this made me a valued member of the faculty.
So yes, TFA teachers do make a positive contribution to their schools. Some of them even become outstanding teachers. This despite being paid a salary that, while livable for 20somethings with no families to support, is far less than these Ivy League grads could be making on Wall Street.
But the WSJ editorial completely fails to ask the question of why, precisely, these Harvard and Yale types are flocking to teach in inner-city LA and rural Louisiana. In my opinion this is due to a phenomenal feat of marketing on the part of TFA. They managed to convince college seniors that teaching is A) a noble cause (which it always has been) and B) an attractive career move (which it never has been in the past.) Paradoxically, by admitting such a small percentage of applicants, TFA has made teaching an elite profession, at least when it is done through TFA. I can’t tell you how many conversations I’ve had that went:
“I’m a high school teacher”
“Oh.”
“...through Teach for America.”
“Oooooooooooooooooohhh!”
What the example of Teach for America proves is precisely what the Wall Street Journal was unwilling to admit: that to recruit quality teachers, you need to raise the status of the teaching profession. Our society usually equates status with money, so the most direct way to get qualified teachers is to pay them what they're worth (six figures, at least!) TFA is bringing new respect to the teaching profession, but it will never be able to fill our massive teacher shortage while simultaneously maintaining its elite identity. Fixing public education will require a societal consensus that teaching is one of our most important professions, and that teachers need to be paid accordingly.
The Wire
I've been working my way through The Wire for the past semester or so. For those who don't know, The Wire is a TV drama exploring the drug trade in Baltimore and its intersection with all the different systems that function in the city. The first season centers on a drug organization and the police unit investigating them, and the series telescopes outward from there, adding the docks, city hall, the education system, and the print media to its focus in subsequent seasons. The creator, a former cop and public school teacher in Baltimore, has a deep understanding of how all these systems interact with each other, and in particular, how the organizational dynamics of a system can impede that system's objectives. Watching the series should be worth graduate credit in both sociology and complex systems theory. (In fact, one academic journal has issued a call for papers on the series. Deadline is September!)
There are many different jumping-off points I could use from the series, but I'll focus today on a recurring pattern: Drug sellers run a highly complex organization. They switch stash-houses frequently, speak in code, and never let the top guys get anywhere near the actual drugs. Some within the police department realize this, and set up sophisticated surveillance operations to gather information about the drug sellers. But every now and then one of the "top brass" in the police department gets wind of this operation, and wonders why so much time and money are being spent to investigate a bunch of "thugs." They order a boatload of units down to the drug area to start locking people up.
Needless to say, this works about as well as attacking a swarm of gnats with a sledgehammer. They catch a couple low-level dealers, but ruin all the intelligence they had on anyone higher up. So the investigation must start all over again.
In theoretical terms, the mistake here is attempting a blunt, simple solution to a nimble, complex problem. When you look for it, you can see this mistake in many places, from our pre-Petraeus anti-insurgency strategy in Iraq, to our federal education policy that mandates standardized tests. To truly solve a complex problem requires an approach as subtle and multifaceted as the problem itself.
Sub-Prime Mortgage Crisis Part II: Lessons for Complex Systems
Last time, we talked about what went wrong in the US mortgage market, based on the explanation given by NPR and This American Life. What does this debacle tell us in general about how complex systems can go wrong?
The main problem, in a theoretical sense, is that a feedback loop got too long and complex.
A feedback loop is the process by which an action leads to a consequence for the actor. Let's look at the old mortgage system:
Under this system, if the bank made a bad loan, they'd lose their money. So there was a very direct link between action and consequence. Banks have been dealing with this feedback loop for centuries and have gotten pretty good at making only loans that will get repaid.
But in the early 2000's, the system was replaced by this:
There's still a feedback loop here, but it's longer and more complex. Long, complex feedback loops are dangerous because they can fool people into thinking they're making good decisions, when really the consequences of their bad decisions just haven't caught up with them yet. The investors kept pouring money into the broken system because those consequences hadn't arrived, and they were too far removed from the homeowners to see what terrible shape they were in.
We moved essentially from
bad action ---> bad consequence
to
REALLY bad action --- (long time delay) ---> REALLY bad consequence
It's unlikely that investors will make this same mistake again, because they understand much better now how the mortgage market works. But the general mistake of stretching out a feedback loop, and assuming that you're doing well just because nothing's gone wrong so far, will probably be repeated many, many times.
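As a rough illustration of how a stretched-out feedback loop hides bad decisions, here is a toy lender simulation (the numbers are entirely made up, not anything from the radio piece): the lender only sees a loan go bad after a two-year delay, so it keeps loosening its standards right up until the old loans finally blow up.

```python
DELAY = 24                 # months before a bad loan visibly defaults
standards = 1.0            # 1.0 = strict underwriting, 0.0 = NINA territory
loans = []                 # (month issued, probability of eventual default)

for month in range(60):
    loans.append((month, 1.0 - standards))       # weaker standards -> riskier loans
    observed = sum(p for issued, p in loans if month - issued >= DELAY)
    if observed < 1.0:
        standards = max(0.0, standards - 0.05)   # "nothing's gone wrong, lend more!"
    else:
        standards = min(1.0, standards + 0.25)   # the bad decisions finally land
    if month % 12 == 0:
        print(f"month {month:2d}: standards={standards:.2f}, observed defaults={observed:.1f}")
```

For the first two years the lender looks brilliant; the consequences only show up long after the decisions that caused them.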
Sub-Prime Mortgage Crisis-Explained!
Recently, my favorite radio show teamed up with NPR news to do an in-depth collaboration on exactly what went wrong with the US sub-prime mortgage crisis. It turns out to be a perfect example of how a complex system can go wrong. So I thought I'd give a summary of what they found, and discuss how it relates to what we know about complex systems in general.
The whole thing started with what our radio hosts call "the global pool of money." In the early 2000's, there ended up being a whole lot of people around the globe with lots of money to invest. The amount of money looking to be invested had doubled in the past xxx years, due in part to growing economies in other countries.
The holders of this wealth needed somewhere to invest it, to keep it safe and growing. A large subset of them wanted safe investments, where the return on their money would be moderate but reliable. So they and their brokers looked around for safe investments to make.
While this was happening, Alan Greenspan was trying to help the US economy out of the post-internet-bubble slump. He did this by setting interest rates extremely low: around 1%. This meant that US Treasury bonds, one of the safest investments historically, would be getting extremely low returns for a long time. So the pool of money had to look elsewhere.
The lack of traditional safe investment options meant that the brokers had to get creative. So they looked around and they saw this:
All over the country, retail banks (the kind of banks you and I use) were loaning money to homeowners, who were repaying the money with interest. These were safe investments on the banks' part because historically, very few homeowners default on their mortgages. The brokers wanted to get in on this action, but mortgages are too small and detailed to get involved with on an individual level. So they set up a system like this:
The retail banks would lend money to homeowners, and then sell these mortgages to investment banks. The investment banks would buy tons of these mortgages and organize them into "bundles" of hundreds at a time. These bundles would be sold to Wall Street firms, who would create "mortgage-backed securities" out of the bundles, and sell shares in these securities to the global pool of money.
This system worked fine for a while. But by 2003 or so, virtually every credit-worthy individual with a home had already taken a mortgage. There were no more mortgages to be bought. But the global pool of money had seen how effective these mortgage-backed securities were, and they demanded more. This sent an echoing voice all the way down the chain saying "GIVE US MORE MORTGAGES!"
To fill this incredible demand, the retail banks started relaxing the standards for who they loaned to. The radio show tells the fascinating story of how every week, one requirement after another was dropped, until they reached rock bottom: the NINA loan. NINA stands for "No Income, No Asset." It means you can get a loan without even claiming to have a job or any money in the bank whatsoever. In the words of one former mortgage banker, "All you needed was a credit score and a pulse."
In the old system, no bank would ever think of giving a loan without verifying the borrower's income and assets. This is because the bank had an interest in seeing that it got its money back. But under the new system, the banks would just sell the mortgage up the chain and wash their hands of it. If the borrower defaulted two months later, it would be someone else's problem.
Still, you would think that someone would realize that an investment system built on no income, no asset loans was bound to fail. And indeed, many people did realize it. But the money kept flowing in from the global pool, and everyone in the chain was getting rich in the process. Saying "no" to the system seemed like ignoring a pot of gold right in front of your face.
Two additional factors prevented reason from prevailing. First, the computer models used by the investment banks and Wall Street firms were telling them that everything was going fine. No one made the connection that the models were using data from pre-2003, when loans were made on the basis of actual assets. Second, housing prices in the US were going up. If a borrower defaulted, the bank would own the house, which, as long as prices kept rising, would be worth more than the bank had originally loaned.
Of course, housing prices didn't keep going up. And the Wall Street firms noticed at some point that some of the mortgages they were investing in were defaulting on the very first payment. So they stopped buying these bundled mortgages. At that point, the middlemen in the system (the retail and investment banks) were left holding mortgages that no one up the chain wanted, and that would almost certainly be defaulted on from the bottom of the chain. And they went bankrupt en masse.
That's enough writing for today. Next time we'll use this crisis as a case study for some general complex systems principles.
Pirates are even cooler than we thought!
So this is mostly an "I saw this and thought it was cool" kind of post: an article in Sunday's Boston Globe describes the research of Peter Leeson and Marcus Rediker, who claim that pirates were practicing democracy aboard their ships in the 1600s, well before America or Europe ever got around to it.
Before each voyage, pirates voted on a captain and a quartermaster, whose main job was to be a check on the captain's power. Either officer could be "recalled" at any time. Ground rules were laid out in a written charter. They also had primitive forms of trial and workmen's compensation.
The researchers differ on the motivation for this democracy. Leeson sees it as a necessary organizational system for a cadre of criminals who had to work together without killing each other. Rediker sees it as a political reaction to the despotic organization of commercial ships, wherein captains held absolute power and floggings were routine and often deadly. Pirates, according to Rediker, tried to create a utopian alternative.
Inasmuch as there is a single motivation for anything, I'm inclined to agree with Leeson's point of view. The success of a pirate ship depends on the ability of its members to work together. There is a natural check on any one pirate's power in that any other pirate could pretty easily kill him in his sleep. Unlike the case of commercial ships, pirate society is not tied to any larger land-based social structures.
The question then becomes: what is the best way to maintain organization in a small, self-contained society where no individual can dominate the others through force? I think the best, and perhaps only, workable answer in the long term is democracy, or something like it.
Life's Universal Scaling Law
It ain't easy being green. Biology has long suffered under the label "soft science," a term used (often disparagingly) to contrast it with the "hard sciences" of physics and chemistry, whose laws come with the certainty of mathematics. But this picture is not altogether accurate. Biological processes are more complex than physical ones, which makes simple mathematical formulas harder to come by, yet some mathematical rules hold with a remarkable degree of consistency.
One famous example is the relationship of an animal's mass to its metabolism (the rate at which it expends energy). This relationship is expressed in the simple formula
R = R₀M^(3/4),
where R is the metabolic rate, R₀ is a constant, and M is the mass of the organism.
Separate laws exist for mammals, birds, unicellular organisms, and even living structures like mitochondria within cells. The values of R₀ are different for each law, but the mysterious 3/4 exponent stays the same.
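To get a feel for what that exponent means, here is a quick numerical sketch in Python. The masses are rough round figures and R₀ is set to 1 in arbitrary units, so only the ratios between animals are meaningful:

# Kleiber-style scaling: R = R0 * M**0.75, with R0 = 1 in arbitrary units.
# The masses are rough round numbers; only the ratios matter here.

animals = {"mouse": 0.02, "human": 70.0, "elephant": 5000.0}  # mass in kg

R0 = 1.0
for name, mass in animals.items():
    rate = R0 * mass ** 0.75   # whole-body metabolic rate
    per_kg = rate / mass       # metabolic rate per kilogram of tissue
    print(f"{name:9s} total rate ~ {rate:8.1f}   per kg ~ {per_kg:6.3f}")

Because the whole-body rate grows as M^(3/4), the per-kilogram rate falls as M^(-1/4): gram for gram, an elephant's tissue burns energy far more slowly than a mouse's.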
These laws have been observed since the 1930s, but the reason for the 3/4 exponent remained a mystery until recently. The discovery by Geoff West et al. of a mechanism underlying this law was a major triumph for the complex systems movement: a universal law of life explained by complex systems principles.
Specifically, West showed that the 3/4 exponent comes from the way a living thing distributes its resources. If the cells in an animal acted like independent beings, each gathering and consuming its own food, the metabolic rate would be a simple multiple of the mass, that is
R = R₀M
with no exponent. But the cells of an animal aren't independent. They work together to collect, process, and consume energy. To do this they need networks (such as blood vessels) to move resources around. West and his collaborators showed that the 3/4 exponent is determined by the requirements that the network a) reach every part of the animal's body, and b) waste as little energy as possible.
Extending this approach, they were able to explain other scaling laws like the relationship between heart rate and mass. Currently, West is investigating scaling laws in large-scale living communities, such as forests and cities.
I haven't talked much about network theory (a topic for another time perhaps) but West's work suggests the great potential of this complex systems subfield to explain some of life's mysteries.
Is Life Fractal?
I'm sure you all know what fractals look like, but a few pretty pictures never hurt anyone:
Isn't that cool? The key thing about a fractal is that if you look at just a small part of it, it resembles the whole thing. For instance, the following picture was obtained by zooming in on the upper left tail of the previous one:
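If you want to generate a self-similar pattern yourself, here is a tiny Python sketch. It is not the set pictured above, just a Sierpinski-triangle pattern whose top-left quarter has exactly the same structure as the whole:

# Print a Sierpinski-triangle pattern: cell (row, col) is filled when row & col == 0.
N = 32

for row in range(N):
    print("".join("*" if (row & col) == 0 else " " for col in range(N)))

# Self-similarity check: the top-left 16x16 block of the 32x32 pattern is
# identical to the whole pattern rendered with N = 16.

Zooming in on that corner gives you back the same picture, which is the defining trick of every fractal.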
One of the original "big ideas" of complex systems is that fractal patterns seem to appear spontaneously in nature and in human society. Let's look at some examples:
Physical Systems: Pop quiz: is this picture a close-up of a rock you could hold in your hand, or a wide shot of a giant cliff face?
I don't know what the answer is. Without some point of reference it's very hard to determine the scale because rocks are fractal: small parts of them look like the whole.
Other examples in physical systems include turbulence (small patches of bumpy air look like large patches) and coastlines (think Norway). These two examples in particular inspired Benoit Mandelbrot to give fractals their name and begin their mathematical exploration.
Biological Systems: Here's an example you're probably familiar with:
And one you probably aren't:
The first was a fern; the second was a vegetable called a chou Romanesco, which has to be the coolest vegetable I've ever seen.
In the case of these living systems, there's a simple reason why you see fractals: they are grown from cells following simple rules. The fern, for example, first grows a single stalk with leaves branching out. These leaves follow the same rule and grow their own leaves, and so on.
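You can mimic this kind of rule-following growth on a computer with an iterated function system. The Python sketch below is the classic "Barnsley fern" construction (a standard textbook example, not the particular plant pictured above): each step applies one of four simple affine rules, and the resulting cloud of points looks like a fern whose every leaflet is a shrunken copy of the whole.

import random

# Barnsley fern: repeatedly apply one of four affine maps, chosen at random.
# The coefficients and probabilities are the standard published values.
def step(x, y):
    r = random.random()
    if r < 0.01:
        return 0.0, 0.16 * y                                      # stem
    elif r < 0.86:
        return 0.85 * x + 0.04 * y, -0.04 * x + 0.85 * y + 1.6    # main frond
    elif r < 0.93:
        return 0.20 * x - 0.26 * y, 0.23 * x + 0.22 * y + 1.6     # left leaflets
    else:
        return -0.15 * x + 0.28 * y, 0.26 * x + 0.24 * y + 0.44   # right leaflets

x, y = 0.0, 0.0
points = []
for _ in range(50_000):
    x, y = step(x, y)
    points.append((x, y))

# Render as coarse ASCII art.
W, H = 60, 30
grid = [[" "] * W for _ in range(H)]
for px, py in points:
    col = int((px + 3) / 6 * (W - 1))   # x roughly spans [-3, 3]
    row = int((1 - py / 10) * (H - 1))  # y roughly spans [0, 10]
    if 0 <= row < H and 0 <= col < W:
        grid[row][col] = "*"
print("\n".join("".join(r) for r in grid))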
Of course, the pattern doesn't exist forever. If you zoom in far enough, eventually you see leaves with no branches. This is an important feature of all real-world fractals: there is some minimum scale (e.g. the atomic scale or the cellular scale) at which the fractal pattern breaks down.
Social Systems: Some people like to extend this reasoning to the social realm, arguing that individuals form families, which form communities and corporations, which form cities, nations, and so on. You can try to draw parallels between behavior at the nation or corporation level and behavior at the individual level.
Personally, I'm a little dubious about this argument. My doubts stem partly from my observation that humans seem to act morally on an individual scale, but that corporations on the whole behave far worse than individuals. I think there's something fundamentally different about the centralized decision-making process of a human and the more decentralized process of a corporation. But this is all my personal opinion. Feel free to debate me on it.
Is Global Complexity Worth It?
I had a wide-ranging lunch conversation with my friend Seth a week ago. We touched on many things, but kept circling back to the above question. More specifically, I was wondering whether our current level of global complexity could ever be sustainable, even with the best international governance and planning.
Let's define what we're talking about. In today's world, actions you take have consequences around the globe. For example, if you buy a computer, it likely was not made in a workshop down the road. The parts that go into your computer come from many different countries. These parts had to cross vast distances to come together, burning oil from other countries in the process. These parts were assembled in yet other countries, shipped several more times, and finally delivered to you. The money that you paid for the computer feeds back into all these various countries and processes, strengthening and perhaps changing them.
The effects of this global entanglement have been amazing. Without it, we wouldn't have computers, cellphones, airplanes, plastics, television, cars, or curry powder in the supermarket. None of these products can be made in any one local community; they all require cooperation on a continental, if not global, scale.
But I'm worried by globalization, on both a theoretical and a practical level. It's clear that as humans, we aren't living within our means--I won't go into the details of that argument here. What concerns me now is whether the very structure of our global society may be preventing us from ever living within our means.
First, feedback loops are getting too complex. Suppose we lived in a simple, hundred-person community, and someone was stealing from his neighbors, dumping trash in the public square, or doing other undesirable things. These actions would become apparent to everyone in short order, and the community could punish the perpetrator in various ways: economically, socially, even physically.
In theory, we have a legal system now to provide these kinds of punishments. But the more complex our society becomes, the harder it is to identify those who are screwing things up. Furthermore, laws and enforcement vary wildly across countries. Multinational corporations can get away with dumping trash in the ocean, toppling Central American democracies, intentionally creating blackouts in California, or supporting sweatshops in China because a) the actions might be legal in whatever location they're operating out of, b) they can obscure their practices behind a wall of complexity that regulators can't penetrate, and c) the consumers usually have no idea what the company is doing and therefore can't exercise moral judgment in their purchases. It could be decades before any consequences (legal, economic, or environmental) catch up with the perpetrator. And decades is too long to be an effective deterrent.
Second, we are increasingly interdependent. Witness how the mortgage crisis spread throughout American economic sectors and is now spreading through the world. Infectious diseases like avian flu have the potential to go global due to the volume of international travel. Even our environmental problems have globalized--we worry about global warming now, whereas the environmental agenda in the past was more about local pollution issues.
I see this as a problem because it means we have only one chance to screw up. The inhabitants of Easter Island destroyed their ecosystem and suffered for it, but the damage was contained to the island. In our current connected world, one disaster could ruin things for all humanity.
Can we do anything about global complexity and interdependence? I've been thinking about ways we can promote some simplicity in our economy, like buying local food or supporting local independent retailers over mega-chains. I'm not advocating we go back to preindustrial tribal society, but a little extra simplicity seems like a good thing.
Tragedy of the Commons in Evolution
Based partly on the feedback from the last column, I'd like to probe a bit deeper into the connection between altruism, evolution, and space. Not outer space, mind you, but space here on planet Earth.
We all know that in Darwinian evolution, the species that survive and reproduce best in their environment are the ones that persist and evolve. We know from observing nature that this process tends to produce sustainable ecosystems in which every species seems to play a useful role, even as they compete for survival. In particular, no level of the food chain eats so much of the level below as to drive it extinct.
Now suppose that in a grassland ecosystem, some animal species got really good at eating grass. So good, in fact, that it could devour an entire field, roots and all, in a season, and use all that energy to reproduce much faster than its competitors. It would seem that this species has an evolutionary advantage over its slower peers. Of course, this advantage would be very short-term: the grass couldn't grow back the next season, so all species, including this super-eater, would starve.
This situation might be called a Tragedy of the Commons, a phrase popularized by a 1968 Science article by Garrett Hardin. It refers to a general situation where there is a shared resource everyone depends on. Without some check on everyone's behavior, some individuals may be tempted to take more than their share, and if this happens too often, the resource is depleted and everyone suffers. (The ongoing depletion of the world's edible fish populations is one of many real-life examples.)
The question is, why hasn't this tragedy wiped out life on earth by this point? What's to stop a super-eater from spontaneously evolving somewhere, multiplying rapidly, spreading throughout the planet, and destroying all life everywhere?
Several studies (May and Nowak, Werfel and Bar-Yam, Austin et al.), each taking a different approach, point to a common answer: space. If a selfish overeater evolves somewhere, it will exhaust the resources around it, but then it will die off while other species in other ecosystems live sustainably. As long as there is sufficient space in the world, an overzealous species will cause its own destruction before it can spread very far. In this way, evolution on a sufficiently large planet actually favors organisms that live in harmony with their environment.
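Here is a toy version of that argument in Python (my own sketch, much simpler than any of the cited models): a patch grazed with restraint settles into a steady state, while a patch grazed "roots and all" collapses and takes its grazers down with it. Space enters the story because each patch collapses on its own, so a greedy lineage destroys its home before it can export the problem.

# Toy grazing model (illustrative only, not the cited studies' models).
# One patch of grass, one lineage of grazers; time ticks forward in seasons.

def simulate(greedy, seasons=40):
    grass, grazers = 1.0, 0.5                  # arbitrary units
    for _ in range(seasons):
        demand = 0.6 * grazers
        # Restrained grazers always leave 0.3 units standing (the "roots");
        # greedy grazers will eat everything in front of them.
        available = grass if greedy else max(0.0, grass - 0.3)
        eaten = min(demand, available)
        grass -= eaten
        grazers = min(2.0 * eaten, 2.0)        # next generation scales with food
        if grass > 0:
            grass = min(1.0, grass + 0.25)     # regrowth, unless stripped bare
        if grazers < 0.01:
            break
    return round(grass, 2), round(grazers, 2)

print("restrained patch (grass, grazers):", simulate(greedy=False))
print("greedy patch     (grass, grazers):", simulate(greedy=True))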
Now, if there was some species that could not only suck its environment dry, but also move fast enough to outrace the devastation it was causing, we'd have a real problem on our hands. Fortunately, it seems this has never happened.
Or has it???
Altruistic and Selfish Bacteria
The Boston University Physics Department hosted a very interesting talk yesterday by Robert Austin of Princeton. Austin has been studying the social behavior of bacteria, in order to help understand the social dynamics of other organisms, including humans. He shared with us some intriguing results about selfish and altruistic individuals, and the social dynamics between the two.
Indeed, Austin and his collaborators found a single gene that controls bacterial "selfishness." If it's off, bacteria slow down their metabolism and reproduction rate when they sense their environment has been depleted of nutrients. This prevents them from completely destroying their living space. However, if this gene is turned on ("expressed" is the technical term), the bacteria go right on eating until nothing is left. They even develop the ability to feed off of other dead bacteria.
Interestingly, the gene is off by default in bacteria found in the wild. But if you put them in a petri dish, mix them together, and cut off their food supply, you rather quickly (after only about four days!) see selfish mutants emerge. These mutants rapidly consume all the remaining food, including each other, and then starve.
This is an interesting conundrum. The petri dish situation seems pretty dire: first the cheaters win, and then everyone loses. This is another prisoner's dilemma situation: cheaters seem to have the advantage over the self-restraining altruists, but if everyone cheats then everyone is worse off.
On the other hand, bacteria in the wild exercise restraint, so there must be something different going on in the wild than in the petri dish.
Intrigued, Austin and his colleagues set up a different experiment. They designed an artificial landscape containing many different chambers in which the bacteria could isolate themselves. Food sources were spread unevenly through the landscape. They also found a way to "manufacture" the selfish bacteria by fiddling with their DNA, and they dyed them a different color from the altruists to discern the interactions between the two.
In this situation, the altruists and the cheaters managed to coexist by segregating themselves. The altruists gathered in dense clumps (and lived in harmony?) while the cheaters spread out sparsely (they don't even like each other!) around the altruists, occasionally gobbling up a dead one. Somehow, the altruists are able to segregate themselves in such a way that the cheaters can't steal their food; a marked contrast to the first experiment, in which the bacteria were continually mixed together. Here's what this segregation looks like within two of the "chambers":
The chamber on the left, which is nutrient-poor, contains mainly cheaters waiting for others to die. The nutrient-rich chamber on the right contains "patches" of altruists and cheaters, never fully mixed. You can't see it from the picture, but the green altruists are very densely clumped and the red cheaters are spread apart from each other.
The possible life lesson here is that altruists can exist in a society with cheaters if the altruists can segregate themselves to form (utopian?) communities. If there is forced mixing between the two groups then, unfortunately, it all ends in tragedy.
A very similar lesson can be found in the work of Werfel and Bar-Yam, but that's a story for another time.
Information, Part Deux
First, a note of personal triumph: I have a paper up on the arXiv! For those who don't know, the arXiv is a way for researchers to distribute their work in a form that is free for all users, but also official, so that no one can scoop you once you've posted to the site. In the paper, I argue that a new and more general mathematics of information is needed, and I present an axiomatic framework for this mathematics using the language of category theory.
For those unfamiliar with such highfalutin language, it's really not as complicated as it sounds. I'll probably do a post soon explaining the content of the paper in layperson's terms. But first, and based partly on the feedback to my last post, I think it's important to say more on what information is and why I, as a complex systems theorist, am interested in it.
I'm currently thinking that information comes in three flavors, or more specifically, three broad situations where the concept comes in handy.
- Statistical Information: Some things in life appear to be random. Really, this means that there's information we don't have about what's going to happen. It turns out there's a formula to quantify the uncertainty of an event---how much we don't know. This enables us to make statements like "event A is twice as uncertain as event B", and, more powerfully, statements like "knowing the outcome of event C will give us half of the necessary information to predict event B." The second statement uses the concept of mutual information: the amount of information that something tells you about something else. Mutual information can be understood as quantifying the statistical relationship between two uncertain events, and it forms the basis of a general theory of complex systems proposed by Bar-Yam. (A quick numerical sketch follows this list.)
- Physical Information: If the "uncertain event" you're interested in is the position and velocity of the particles in a system, then calculating the statistical uncertainty gives you what physicists call the entropy of the system. Entropy has all the properties of statistical information, but it also satisfies physical laws like the second law of thermodynamics (the entropy of a closed system does not decrease).
- Communication Information: Now suppose that the "uncertain event" is a message you'll receive from someone. In this case, quantifying the uncertainty results in communication information (which is also called entropy, and there's a funny reason* why.) Communication information differs from statistical information in that, for communication, the information comes in the form of a message, which is independent of the physical system used to convey it.
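To make the first flavor concrete, here is a small Python sketch that computes the standard uncertainty (Shannon entropy) of a couple of events and the mutual information between two of them. The weather-and-umbrella probabilities are invented purely for illustration:

from math import log2

def entropy(dist):
    """Shannon entropy in bits: H = -sum of p * log2(p) over the outcomes."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

coin = {"heads": 0.5, "tails": 0.5}
die = {face: 1 / 6 for face in range(1, 7)}
print("coin uncertainty:", entropy(coin), "bits")   # 1.0 bit
print("die uncertainty: ", entropy(die), "bits")    # about 2.58 bits

# Mutual information from an invented joint distribution of weather and umbrellas.
joint = {("rain", "umbrella"): 0.35, ("rain", "no umbrella"): 0.05,
         ("dry", "umbrella"): 0.10, ("dry", "no umbrella"): 0.50}
weather = {w: sum(p for (w2, _), p in joint.items() if w2 == w) for w in ("rain", "dry")}
umbrella = {u: sum(p for (_, u2), p in joint.items() if u2 == u) for u in ("umbrella", "no umbrella")}

# I(W;U) = H(W) + H(U) - H(W,U): how much seeing the umbrella tells you about the sky.
mi = entropy(weather) + entropy(umbrella) - entropy(joint)
print("mutual information:", round(mi, 3), "bits")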
The neat thing about these flavors of information is that they are all described by the same mathematics. The first time I learned that the same formula could be used in all these situations, it blew my mind.
One might think this concept is already amazingly broad; why is a "more general mathematics of information" needed? The answer is that people were so inspired by the concept of information that they've applied it to fields as diverse as linguistics, psychology, anthropology, art, and music. However, the traditional mathematics of information doesn't really support these nontraditional applications. To use the standard formula, you need to know the probability of each possible outcome of an event; but "probability" doesn't really make sense when talking about art, for example. So a big part of my research project is trying to understand the behavior of information when the basic formula does not apply.
*Having just invented the mathematical concept of communication information, Claude Shannon was unsure of what to call it. Von Neumann, the famous mathematician and physicist, told him "You should call it entropy, for two reasons. In the first place your uncertainty function has been used in statistical mechanics under that name, so it already has a name. In the second place, and more important, no one really knows what entropy really is, so in a debate you will always have the advantage."
Information
Uif nbq jt opu uif ufssjupsz.
Were you able to tell what I was trying to communicate there? Let me express it differently:
Click here for the sentence I've been trying to convey, and its significance. The point is that the same information can be expressed in different ways, and understood (with some work) even if you don't know the communication system (i.e. code) I'm using.
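For the curious, here is a tiny Python sketch of the kind of re-encoding going on above: one message (a stand-in, not the sentence linked above) rendered as a letter-shift cipher and as raw bits, with the shift undone at the end. The information survives every change of costume.

# The same message in different coded forms; the message itself is a stand-in.
message = "information is independent of its form"

def shift(text, k):
    """Caesar-style letter shift over the lowercase alphabet."""
    out = []
    for ch in text:
        if ch.isalpha():
            out.append(chr((ord(ch) - ord("a") + k) % 26 + ord("a")))
        else:
            out.append(ch)
    return "".join(out)

shifted = shift(message, 1)
bits = " ".join(format(ord(ch), "08b") for ch in message)

print("shifted by one:", shifted)
print("as raw bits:   ", bits[:53], "...")
print("decoded again: ", shift(shifted, -1))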
I started thinking recently about how one would define the concept of information. I don't have a definition yet, but I think one of its crucial properties is that information is independent of the system used to communicate it. The same information can be expressed in a variety of languages, codes, or pictures, and received through lights on a screen, ink arranged on paper, or compressed waves of air. To reach us, information might travel in the form of electrical pulses (as were used in telegraph machines), radio waves, light in fiber-optic channels, or smoke signals. This independence of physical form distinguishes information from other physical quantities. Energy, for example, can also come in many forms; but you would react very differently to coming into contact with light energy versus heat energy or kinetic energy.
It takes a certain level of intelligence to conceive of information independently of its form. We humans understand that ink on paper can refer to the same thing as sound waves, whereas to a bacterium these are completely different physical phenomena. It would be interesting to investigate which animals can understand different physical processes as conveying the same message.
One might be tempted to conclude that information exists only in the minds of intelligent beings, with no independent physical meaning. But this is not true: information appears in the laws of physics. The second law of thermodynamics, for example, says that closed systems become less predictable with time, and that more information is therefore required to describe them.
So information is a physical quantity, but exists independently of its many possible forms. This falls far short of a definition, but it may help explain the uniqueness of information among the quantities considered in science.
How much can we know?
Scientific progress is often viewed as an inexorable march toward increasing knowledge. We'll never know everything about the universe, but we've gotten used to the idea that we keep knowing ever more, at an ever-increasing rate.
However, as we discussed some time ago, human beings are creatures of finite complexity. There is only a finite amount we can do, and, more relevant to the present discussion, there is only a finite amount we can know. It's very likely that the human brain holds less pure information than the average hard drive. So while we humans as a collective might be able to increase our knowledge indefinitely, our knowledge as individuals has a definite limit.
What does this limit mean for the study and practice of science? For one thing, it limits the knowledge that a single scientist can apply to a particular problem. A researcher studying a virus can't apply all of science, or all of molecular biology, or all of virology to his study. Even just the scientific knowledge about this particular virus might be too much to fit into this researcher's brain. As scientists, we attack our problems using whatever knowledge we've gained from coursework, reading, and conversations with others--a tiny fraction of the wealth of potentially relevant knowledge out there.
Worse, as the frontier of knowledge keeps expanding, the amount of background knowledge needed to comprehend a single patch of this frontier increases steadily. I started my math career in differential geometry/topology: a beautiful subject, but one that requires years of graduate coursework to understand current research questions even on a superficial level. Since we have finite brainpower, no individual can maintain this kind of expertise in more than a few subjects. So we become specialists, unable to discuss our research with anyone outside our narrowly defined field. Before I switched to complex systems, I was continually frustrated by the isolation that came with specialized research. And I hear this same frustration from many of the other math/science grad students I talk to.
The danger is that science will keep branching into smaller, more arcane, and more isolated sub-subdisciplines. This would make interdisciplinary research increasingly difficult, and the prospect of a science career ever more daunting and unappealing for students. And it would not get us any closer to solving some of our biggest problems in science, which lie not at the fringes of some highly specialized discipline, but in the synthesis of results from all branches of science.
What is needed is a sustained push for big-picture thinking. Whereas small-picture science focuses on the complex and the narrowly defined, big-picture science seeks the broad and the simple. It combines the many complex discoveries made by small-picture scientists and distills them into ideas that can fit in a single human's head.
Here's a useful example, stolen from the website eigenfactor.org and based on this paper:
The above is a diagram of a yeast protein interaction network. It represents the cumulative work of many scientists who investigated whether and how certain proteins interact with each other. A remarkable achievement, certainly.
But the sheer volume of information makes this diagram useless to anyone but a specialist, and probably not very helpful for the specialists either. Trying to draw conclusions from a diagram like this would be like trying to navigate cross country using a map that shows every side street and alley in the US. It's just too much information for one brain to handle.
The authors go on to describe an algorithm that can transform complex networks like this:
into simplified ones like this:
that represent simple, understandable relationships.
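For a rough sense of how such a simplification can work, here is a generic sketch in Python using the networkx library and an off-the-shelf community-detection routine. To be clear, this is an illustrative stand-in, not the algorithm from the paper linked above: it simply finds densely connected clusters and redraws the network with one node per cluster.

# Generic network coarse-graining: detect communities, then contract each one
# to a single node. Illustrative only; not the linked paper's algorithm.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.karate_club_graph()  # a small, well-studied social network
communities = list(greedy_modularity_communities(G))
membership = {node: i for i, comm in enumerate(communities) for node in comm}

# One node per community; edge weights count the links running between communities.
simple = nx.Graph()
simple.add_nodes_from(range(len(communities)))
for u, v in G.edges():
    a, b = membership[u], membership[v]
    if a != b:
        w = simple.get_edge_data(a, b, {"weight": 0})["weight"]
        simple.add_edge(a, b, weight=w + 1)

print(f"original:   {G.number_of_nodes()} nodes, {G.number_of_edges()} edges")
print(f"simplified: {simple.number_of_nodes()} nodes, {simple.number_of_edges()} edges")
for a, b, data in simple.edges(data=True):
    print(f"community {a} -- community {b}: {data['weight']} links")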
I don't mean to belittle the work done by small-picture scientists; without them the big picture thinkers would have nothing to talk about. But I think the scientific establishment is so structured around the small-picturists that big picture thinking often gets squeezed out, which only impedes our understanding of science in general.
Causality
My friend Seth asks, via gchat away message, "How is it that there are phenomena which are independent of all but a very small set of conditions?" In essence, he is asking about the concept of causality, and how it is that this concept can even make sense.
One of the first abstract ideas we are asked to understand as kids is the idea that some actions cause others. It fell because I dropped it. She's crying because I kicked her. But as we grow older, we see that causality is rarely so simple. Most events depend on a great many past events, none of which can be identified as a single cause. Yet we still use the language of causality ("Today's earnings report caused stocks to close lower") and, at least sometimes, this usage seems appropriate. So how can we tell when causality makes sense and when it doesn't?
In my conversations with Seth on this idea, I was reminded of a principle from special relativity: causality can't travel faster than light. For example, nothing you do today can affect events on Alpha Centauri tomorrow, because not even light can get from here to there that quickly.
This leads to the idea of a "causal cone" in spacetime. Consider the following picture:
The blue "cone" coming out of point A shows all the points in spacetime that light could possibly reach from point A. So points B and C can be affected by something that happens at point A, but Point D cannot because, like Alpha Centauri tomorrow, it is too far away in space and not far enough in the future.
In most everyday situations, causality travels at a speed much slower than light. The specific speed depends on the medium through which causality is travelling. For example, if an underwater earthquake causes a tidal wave, causality travels at the speed by which waves move through water. A rumor travels at the speed it takes people to hear the rumor and repeat it. The point is that, in all cases, causality moves at a finite speed. You can't affect something that is too close to the present (timewise) and too far away in a spatial sense or an information-sharing network sense (i.e. too many degrees removed from you.)
Now consider a situation where we have three potentially causal events:
Suppose we know that events D, E, and F could only have been caused by A, B, or C. Clearly D was caused by B, since A and C are too far away to have influenced D. E, on the other hand, could have been caused by A, B, or both, and F could have been caused by any combination of the three.
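Here is the same reasoning as a small Python sketch: give each event a time and a position, pick a finite propagation speed, and an event can only have been caused by events whose cone reaches it. The coordinates below are made up to mirror the picture.

# Causal-cone check: "cause" can influence "effect" only if a signal moving at
# the given speed can cover the distance in the time available.
# The coordinates are invented to mirror the figure described above.
events = {  # name: (time, position), in arbitrary units
    "A": (0.0, 0.0),
    "B": (0.0, 5.0),
    "C": (0.0, 10.0),
    "D": (1.0, 5.5),
    "E": (3.0, 2.5),
    "F": (6.0, 5.0),
}
speed = 1.0  # how fast influence propagates in this medium

def can_influence(cause, effect):
    t1, x1 = events[cause]
    t2, x2 = events[effect]
    return t2 > t1 and abs(x2 - x1) <= speed * (t2 - t1)

for effect in ("D", "E", "F"):
    causes = [c for c in ("A", "B", "C") if can_influence(c, effect)]
    print(effect, "could have been caused by:", causes)

Run it and you get the same verdicts as above: D only by B, E by A or B, and F by any of the three.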
In real life, there are millions of events happening all the time, all of which have the potential to cause or influence other events. In the immediate aftermath of an event, the causal cone is undiluted by other cones (as in event B above). But as we get further away (spatially and temporally) from the event, other cones intersect and complicate the effects caused by the original event. This leads to our conclusion:
The statement "A causes B" is most likely to make sense if B happens immediately following and in close proximity to A. Otherwise, there are too many other effects that could dilute the influence of A.
Visually, this would probably look like a "causal flame" coming out of event A, representing the points in spacetime over which A has the most direct influence.
In short, you could reasonably say that dropping the urn caused it to break. But you'd have a much harder time arguing that this event caused your relationship to break up two years later.
The Prisoner's Dilemma
You and an acquaintance are charged (and rightfully so!) as co-conspirators in a train robbery. You are being interviewed separately by the police. You can either rat your buddy out or keep silent and do more time. Your acquaintance has the same choices.
If one of you rats and the other remains silent, the one who cooperated with the police gets off free and the other serves 10 years. If you both keep silent, they can only convict on a lesser charge (for lack of evidence), so you each do a year. If you both rat on each other, you each do five years.
Both you and your acquaintance know this information. Assuming neither of you cares what happens to the other, and there are no recriminations in the outside world (we'll revisit both of these assumptions later), what is likely to happen?
Under the assumptions we've made, neither of you has any incentive to help the other. No matter what the other guy does, you get a better result by ratting on him. You both come to the same conclusion, so you both do five years.
This game is one of the most famous examples in game theory. It presents something of a dilemma: By each choosing the option that serves them best, the two "players" in the game end up with a result (5 years each) that is worse than if they had each kept silent. Choosing the best option individually leaves them worse off as a whole.
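Here's the same reasoning spelled out as a tiny Python sketch, using the jail terms from the story above. It just tabulates the years you serve and shows that ratting is better no matter what your acquaintance does:

```python
# Years you serve, indexed by (your choice, their choice); lower is better.
years = {
    ("silent", "silent"): 1,
    ("silent", "rat"):   10,
    ("rat",    "silent"): 0,
    ("rat",    "rat"):    5,
}

# Whatever the other prisoner does, ratting leaves you with fewer years:
for their_choice in ("silent", "rat"):
    rat_years = years[("rat", their_choice)]
    silent_years = years[("silent", their_choice)]
    print(f"If the other prisoner plays '{their_choice}': "
          f"ratting costs you {rat_years} years, silence costs you {silent_years}.")
```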
The game is traditionally phrased in terms of prisoners, but it applies pretty well to any situation when people have an opportunity to screw someone else over for their own benefit. If it truly is better in each situation to screw the other person, then everyone will end up screwing everyone else (in a bad way), and everyone will be worse off.
I think of this game when I drive up my street after a snowstorm, looking for a parking spot. People on my block (and all over Boston, from what I've seen) have the perverse idea that if they dig their car out of the snow, they "own" the spot they parked it in until the snow melts. They mark their spots with chairs or traffic cones. I've thought about doing the same. On the one hand, I think it's ridiculous for people to "claim" spots, just because they happened to park there before the storm. On the other hand, if everyone else does it and I don't, I can't park anywhere. If everyone else has chosen the selfish option, why shouldn't I? Classic Prisoner's Dilemma.
So does this bode ill for humankind? Is this a game-theoretic "proof" that we're all going to stab each other in the back? To answer these questions, let's look back at the assumptions we made.
First, we assumed that you don't care what happens to the other person. If you do care, you'd be much more likely to keep silent, which would end with a better result for both of you. A little selflessness helps everyone.
We also assumed that there were no consequences to your actions beyond what was spelled out in the game. A lot of ways you can screw others for your own benefit, such as breaking into your neighbor's house, are illegal. Laws can't deal with all prisoner's dilemma situations, but they can eliminate some of the worst ones.
There is a third, hidden assumption that we made: we assumed the game would only be played once. If the game is played over and over many times, is it possible for more cooperative strategies to emerge as successful? This question was addressed by Robert Axelrod in The Evolution of Cooperation, who found that, while selfishness is the best short-term strategy, cooperative strategies will win in the long run if the game is played enough times. More specifically, he identified four hallmarks of successful strategies (here I quote Wikipedia; a small simulation after the list shows how they play out):
- Nice: The most important condition is that the strategy must be "nice", that is, it will not defect before its opponent does. Almost all of the top-scoring strategies were nice; therefore a purely selfish strategy will not "cheat" on its opponent, for purely utilitarian reasons first.
- Retaliating: However, Axelrod contended, the successful strategy must not be a blind optimist. It must sometimes retaliate. An example of a non-retaliating strategy is Always Cooperate. This is a very bad choice, as "nasty" strategies will ruthlessly exploit such softies.
- Forgiving: Another quality of successful strategies is that they must be forgiving. Though they will retaliate, they will once again fall back to cooperating if the opponent does not continue to defect. This stops long runs of revenge and counter-revenge, maximizing points.
- Non-envious: The last quality is being non-envious, that is not striving to score more than the opponent (impossible for a ‘nice’ strategy, i.e., a 'nice' strategy can never score more than the opponent).
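Here is the small simulation promised above: a tit-for-tat player (nice, retaliating, forgiving) facing the two extreme strategies over many rounds. The payoff numbers follow a standard textbook convention (higher is better), not Axelrod's exact tournament setup, so treat this as an illustration rather than a reproduction of his results.

```python
# Payoffs per round: both cooperate = 3 each, both defect = 1 each,
# lone defector = 5, lone cooperator = 0. Higher is better.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):       # nice, retaliating, forgiving
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def always_cooperate(opponent_history):
    return "C"

def play(strategy_a, strategy_b, rounds=100):
    score_a = score_b = 0
    hist_a, hist_b = [], []              # each player's past moves
    for _ in range(rounds):
        a, b = strategy_a(hist_b), strategy_b(hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, always_defect))      # (99, 104): loses only the first round
print(play(tit_for_tat, always_cooperate))   # (300, 300): full cooperation
print(play(always_defect, always_cooperate)) # (500, 0): softies get exploited
```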
Are these principles to live by? Perhaps. Axelrod and others think the success of these kinds of strategies may help explain the evolution of altruistic behavior in animals. At any rate, it seems to suggest that nice guys can get ahead, if they're willing to be mean at the right moments.
The language of autism
An autistic woman demonstrates, then explains, her personal language and way of communicating. It blew my mind.
Here's a New York Times article on her. I don't have too much to add other than that nature didn't make any bad brains. Just way different ones.
Phase Transitions
One of the biggest projects of complex systems research is to find "universal" phenomena: patterns that manifest themselves in similar ways across physical, social, and biological systems. One phenomenon that appears regularly throughout complex systems is the phase transition: an instance when a slight change in the rules causes a massive change in a system's behavior. These changes only seem to happen when the system is at certain "critical" points. Understanding when these phase changes occur, and what happens when they do, will go a long way toward increasing our understanding of systems behavior in general.
To illustrate the many manifestations of this idea, let's look at some examples:
- Physics: Water boils at 212 degrees Fahrenheit. This fact is so commonplace that it's easy to forget how fundamentally surprising it is. Temperature is basically a measure of how "jittery" the molecules in a substance are. Most of the time, if you increase water's temperature by a degree or two, you make the individual molecules buzz around faster but the liquid itself ("the system") retains all of its basic properties. But at the magical point of 212 degrees, a slight change in jitteriness radically changes the system's behavior. At the critical point, the slight change is just enough to overcome certain forces holding the molecules together, and off they go.
- Computer Science: Say you give a computer a randomly selected problem from a certain class of problems (like finding the shortest route between two points on a road map), and see how long the computer takes to solve it. Of course, there are many ways of "randomly" choosing a problem, so let's say you have some parameters which tell you how likely some problems are versus others. For the most part, a small change in the parameters won't change the complexity of the problem much, but at some critical values, a small change can make the problem much simpler or much more difficult. (For a technical exposition see here.) Papadimitriou claimed that, in some mathematical sense, these are the same kind of phase transitions as in solids and liquids, but I don't know the details on that claim.
- Mathematics: There are several mathematical phenomena that behave like phase transitions, but I'll focus on bifurcations. A dynamical system in mathematics is a system that evolves from one state to another via some rule. Change the rule a little and you'll change the system's behavior, usually not by much, but sometimes by a whole lot. For instance, the system might shift from being in equilibrium to alternating between two states. Change the rules a bit more and it could start cycling through four states, then eight. Another small change could land you in chaos, in which predicting the future behavior of the system is next to impossible. (A small sketch after this list shows this period-doubling in action.)
- Ecology: Okay, enough with theory; let's look at some situations where phase transitions matter in a huge way. Ecosystems are adaptive, meaning that they can absorb a certain amount of change while maintaining their basic state. However, Folke et al. have extensively documented what they call "regime shifts" in ecosystems--changes in ecosystems from one stable state to another, very different stable state (think rainforest to desert). Often these shifts appear triggered by human behavior. Folke et al. also review ways to increase ecosystem resilience (i.e., make them less susceptible to regime shifts) by, for example, promoting and maintaining biodiversity.
- Economics: Well, for starters, there was the Great Phase Transition of 1929, or the current phase transition triggered by sub-prime lending. In both events, small crashes cascaded into much larger ones because of underlying problems in the market: in the first case, people buying stocks with borrowed money; in the second, investment in risky mortgages that only make sense when interest rates are low. These underlying problems created a situation where a single "spark" could bring the whole market down.
- Social Sciences: The idea of a "tipping point" actually belonged to sociological theory before Malcolm Gladwell popularized it. It refers to any process that, upon gaining critical momentum, cascades dramatically. The term was first coined to describe white flight: once a critical number of nonwhites moved into a neighborhood, all the whites would head for the 'burbs. It has since been used to describe all manner of trends and fads, as well as contagious diseases. Trends that never catch enough initial supporters will die out quickly, but beyond a certain point, they're unstoppable. "Facebook" unstoppable.
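And here is the period-doubling sketch promised in the Mathematics item. The logistic map is a standard example of a dynamical system that bifurcates as its rule is nudged; it isn't necessarily the system the original bullet had in mind, just a convenient one to play with:

```python
def long_run_states(r, x=0.2, warmup=1000, keep=8):
    """Iterate the logistic map x -> r*x*(1-x) past its transient,
    then list the distinct states it settles into."""
    for _ in range(warmup):
        x = r * x * (1 - x)
    seen = []
    for _ in range(keep):
        x = r * x * (1 - x)
        seen.append(round(x, 4))
    return sorted(set(seen))

print(long_run_states(2.8))   # one fixed point (equilibrium)
print(long_run_states(3.2))   # alternates between two values
print(long_run_states(3.5))   # cycles through four values
print(long_run_states(3.9))   # chaos: no short repeating pattern
```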
Given their importance and ubiquity, understanding the how, why, and when of phase transitions is a crucial project. The good news is that they're not totally unpredictable--there are certain signs that tell you when a phase transition may be approaching. However, this discussion must wait for another time.
Where did my gills go, again?
This has nothing to do with complex systems theory, but it's so cool I had to share it. I found out on Wired Science today that, according to anatomist Neil Shubin's book Your Inner Fish, hiccups are a leftover evolutionary impulse from our time as amphibians. Essentially, when we hiccup, our brain is trying to get us to breathe through our gills, rather than our lungs. As the Guardian explains:
Spasms in our diaphragms, hiccups are triggered by electric signals generated in the brain stem. Amphibian brain stems emit similar signals, which control the regular motion of their gills. Our brain stems, inherited from amphibian ancestors, still spurt out odd signals producing hiccups that are, according to Shubin, essentially the same phenomenon as gill breathing.
Kevin Costner may have been a visionary after all.
Christos Papadimitriou
Christos Papadimitriou, one of the world's foremost computational theorists, gave a talk Thursday at MIT entitled "The Algorithmic Lens: How the Computational Perspective is Changing the Sciences." Through a series of eight "vignettes" in math, physics, biology and economics, he showed how ideas from computer science have influenced thinking in all other sciences. I don't know if he explicitly aligns himself with the complex systems movement, but the ideas he presented were very much in line with complex systems thinking, and gave me a lot to ponder.
When mathematicians and physicists look at a problem, the main questions they ask are "Is there a solution?" and "How do we find it?" If there is even a theoretical procedure for finding the answer, the mathematicians and physicists are usually satisfied. What computer scientists bring to the table is another question: "How complex is the solution procedure?" Computer scientists ask this question because they know of many problems which can be solved in theory, but which even the fastest computer in the world couldn't solve before the end of the universe. Computational complexity started as a practical concern of deciding which problems can be solved in reasonable amounts of time, but it was soon recognized as an interesting theoretical problem as well. Papadimitriou's thesis is that the importance of this question has now spread beyond computer science to all of the natural and social sciences.
In this post I'll focus on two of his vignettes. My next post will focus on a third.
The first is from economics. It is a central tenet of economic theory that a market will always "find" its equilibrium: that magical point where supply, demand, and price are perfectly aligned. However, Deng and Huang, among others, have shown that the problem of finding such an equilibrium is not polynomially bounded, meaning that even very powerful computers can't be expected to find equilibria in large markets in any reasonable amount of time.
Now, the market itself is performing a computation as its various players try to sort out optimum prices and production levels. If it were true that the market could always find its equilibrium, this would constitute a "proof" that the problem can be solved relatively easily. In fact, you could just write a computer program to emulate what the market does.
But since an easy solution is theoretically impossible, this must mean that markets don't always find their equilibria. And this fact is actually obvious from looking at how markets really behave: they go up and down, they crash, they generally do strange things. So this tenet of economic theory must be due for a serious revision.
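As a toy illustration of "the market performing a computation," here is a sketch of a single price being nudged toward the point where a made-up supply curve meets a made-up demand curve. The curves and step size are my own inventions; the point of the vignette is that for large, realistic markets no comparably simple procedure can be guaranteed to work:

```python
def demand(price):
    return max(0.0, 100 - 2 * price)   # made-up linear demand curve

def supply(price):
    return 3 * price                   # made-up linear supply curve

price = 1.0
for _ in range(200):
    excess = demand(price) - supply(price)
    price += 0.01 * excess             # nudge the price toward balance

print(round(price, 2))                 # ~20.0, where 100 - 2p = 3p
```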
A second interesting vignette was ostensibly about the brain, though it has much wider implications. As we make decisions, we can often feel different parts of our brain working against each other. Part of us wants something and part of us wants something else. Papadimitriou asked, "Could this possibly be the most efficient way to make decisions?" It doesn't seem particularly efficient. And if it isn't, why has our brain, over millions of years of evolution, developed such an inefficient process?
A recent paper by Livnat and Pippenger cast the problem this way: Can an optimal decision-making system ever contain agents with conflicting priorities? The answer is no in general, but yes if the system has limited computational power, i.e., limited resources for dealing with complexity. Which of course is true for every real-world decision-making system, including brains.
Their research implies not only that it makes sense for our brain to seemingly conflict with itself, but also that, if you are assembling a decision-making team, it actually makes sense to include people who disagree with each other. A belated lesson for Mr. Bush, perhaps?
Next time: phase changes!