
Chapter Twenty: The Butterfly Sleeps
from Out of Control, by Kevin Kelly, ©1994. Used by permission.



Some ideas are reeled into our minds wrapped up in facts; and some ideas burst upon us naked, without the slightest evidence that they could be true but with all the conviction that they are. Ideas of the latter sort are the more difficult to displace.    (4Q)

The idea of antichaos (order for free) came in a vision of the unverifiable sort.    (4R)

The idea was dealt to Stuart Kauffman, an undergraduate medical student at Dartmouth College, some thirty years ago. As Kauffman remembers it, he was standing in front of a bookstore window daydreaming about the design of a chromosome. Kauffman was a sturdy guy with curly hair, an easy smile, and no time to read. As he stared in the window, he imagined a book, a book with his name in the author's slot, a book that he would write in the future.    (4S)

In his vision the pages of the book were filled with a web of arrows connecting other arrows, weaving in and out of a living tangle. It was the icon of the Net. But the mess was not without order. The tangle sparked mysterious, almost cabalistic, "currents of meanings" along the threads. Kauffman discerned an image emerging out of the links in a "subterranean way," just as recognition of a face springs from the crazy disjointed surfaces in a cubist painting.    (4T)

As a medical student studying cell development, Kauffman saw the intertwined lines in his fantasy as the interconnections between genes. Out of that random mess, Kauffman suddenly felt sure, would come inadvertent order: the architecture of an organism. Out of chaos would come order for no reason: order for free. The complexity of points and arrows seemed to be generating a spontaneous order. To Kauffman the depiction was intimately familiar; it felt like home. His task would be to explain and prove it. "I don't know why this question, this ill-lit path," he says, but it has become a "deeply felt, deeply held image."    (4U)

Kauffman pursued his vision by taking up academic research in cell development. As many other developmental biologists had, he studied Drosophila, the famous fruit fly, as it progressed from fertilized egg to adult. How did the original lone egg cell of any creature manage to divide and specialize first into two, then four, then eight new kinds of cells? In a mammal the original egg cell would propagate an intestinal cell line, a brain cell line, a hair cell line; yet each substantially specialized line of cells presumably ran the same operating software. After a relatively few generations of division, one cell type could split into all the variety and bulk of an elephant or oak. A human embryo egg needed to divide only 50 times to produce the trillions of cells that form a baby.    (4V)

What invisible hand controlled the fate of each cell, as it traveled along a career path forking 50 times, guiding it from general egg to hundreds of kinds of specialized cells? Since each cell was supposedly driven by identical genes (or were they actually different?), how could cells possibly become different? What controlled the genes?    (4W)

François Jacob and Jacques Monod discovered a major clue in 1961 when they encountered and described the regulatory gene. The regulatory gene's function was stunning: to turn other genes on. In one breath it blew away all hopes of immediately understanding DNA and life. The regulatory gene set into motion the quintessential cybernetic dialogue: What controls genes? Other genes! And what controls those genes? Other genes! And what...    (4X)

That spiraling, darkly modern duet reminded Kauffman of his home image. Some genes controlling other genes, which in turn might control still others: it was the same tangled web of arrows of influence pointing in every direction as in his vision book.    (4Y)

Jacob and Monod's regulatory genes reflected a spaghetti-like vision of governance: a decentralized network of genes steering the cellular network to its own destiny. Kauffman was excited. His picture of "order for free" suggested to him a fairly far-out idea: that some of the differentiation (order) each egg underwent was inevitable, no matter what genes you started out with!    (4Z)

He could think of a test for this notion. Replace all the genes in the fruit fly with random genes. His bet: you would not get Drosophila, but you would get the same order of monsters and freak mutations Drosophila produced in the natural course of things. "The question I asked myself," Kauffman recalls, "was the following. If you just hooked up genes at random, would you get anything that looked useful?" His intuitive hunch was that simply because of distributed bottom-up control and everything-is-connected-to-everything type of cell management, certain classes of patterns would be inevitable. Inevitable! Now here was a germ of heresy. Something to devote one's years to!    (50)

"I had a hard time in medical school," he continues, "because instead of studying anatomy I was scribbling all these notebooks with little model genomes." The way to prove this heresy, Kauffman cleverly decided, was not to fight nature in the lab, but to model it mathematically. Use computers as they became accessible. Unfortunately there was no body of math with the ability to track the horizontal causality of massive swarms. Kauffman began to invent his own. At the same time (about 1970) in about a half-dozen other fields of research, the mathematically inclined (such as John Holland) were coming up with procedures that allowed them to simulate the effects of a mob of interdependent nodes whose values simultaneously depend on each other.    (51)

Net math: a counter-intuitive style of math  (52)

This set of math techniques that Kauffman, Holland and others devised is still without a proper name, but I'll call it here "net math." Some of the techniques are known informally as parallel distributed processing, Boolean nets, neural nets, spin glasses, cellular automata, classifier systems, genetic algorithms, and swarm computation. Each flavor of net math incorporates the lateral causality of thousands of simultaneous interacting functions. And each type of net math attempts to coordinate massively concurrent events, the kind of nonlinear happenings ubiquitous in the real world of living beings. Net math stands in contradistinction to Newtonian math, a classical math so well suited to most physics problems that it had been seen as the only kind of math a careful scientist needed. Net math is almost impossible to use practically without computers.    (53)

The wide variety of swarm systems and net maths set Kauffman to wondering whether this kind of weird swarm logic, and the inevitable order he was sure it birthed, was more universal than special. For instance, physicists working with magnetic material confronted a vexing problem. Ordinary ferromagnets, the kind clinging to refrigerator doors and pivoting in compasses, have particles that orient themselves with cultlike uniformity in the same direction, providing a strong magnetic field. Mildly magnetic "spin glasses," on the other hand, have wishy-washy particles that will magnetically "spin" in a direction that depends in part on which direction their neighbors spin. Their "choice" gives more clout to the influence of nearby particles but pays some attention to distant ones. Tracing the looping interdependent fields of this web produces the familiar tangle of circuits in Kauffman's home image. Physicists used a variety of net math to model spin glasses' nonlinear behavior, math that was later found to work in other swarm models. Kauffman was certain genetic circuitry was similar in its architecture.    (54)

Unlike classical mathematics, net math exhibits nonintuitive traits. In general, small variations in input in an interacting swarm can produce huge variations in output. Effects are disproportional to causes: the butterfly effect.    (55)

Even the simplest equations that feed their intermediate results back into themselves can produce such varied and unexpected turns that little can be deduced about their character merely by studying them. The convoluted connections between parts are so hopelessly tangled, and the calculus describing them so awkward, that the only way to even guess what they might produce is to run the equations out, or in the parlance of computers, to "execute" them. The seed of a flower is similarly compressed. So tangled are the chemical pathways stored in it that inspection of an unknown seed, no matter how intelligent, cannot predict the final form of the unpacked plant. The quickest route to describing a seed's output is therefore to sprout it.    (56)
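The point is easy to demonstrate in a few lines of code. Here is a minimal sketch (mine, not Kauffman's) using the logistic map, a textbook one-line feedback equation; the seeds and step counts are arbitrary:

    # A feedback equation must be executed to be known. Two seeds differing
    # only in the ninth decimal place diverge until they share nothing; no
    # amount of staring at x' = 4x(1 - x) predicts either trajectory.

    def logistic(x):
        return 4.0 * x * (1.0 - x)   # the intermediate result flows back in

    a, b = 0.300000000, 0.300000001  # nearly identical seeds
    for step in range(1, 41):
        a, b = logistic(a), logistic(b)
        if step % 10 == 0:
            print(f"step {step:2d}: a = {a:.6f}  b = {b:.6f}  gap = {abs(a - b):.6f}")

By step forty the gap is as large as the values themselves; the only way to describe the equation's output was to sprout it.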

Equations are sprouted on computers. Kauffman devised a mathematical model of a genetic system that could sprout on a modest computer. Each of the 10,000 genes in his simulated DNA was a teeny-weeny bit of code that could turn other genes either on or off. What the genes produced and how they were connected were assigned at random.    (57)
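A scaled-down sketch of such a simulated genome, in Python. The sizes, the seed, and the rule representation here are illustrative assumptions, not Kauffman's parameters, but the scheme is the one described: random wiring, random on/off rules, every gene updating at once.

    import itertools
    import random

    random.seed(1)
    N, K = 20, 2   # 20 model "genes," each switched by K randomly chosen genes

    # Both the wiring and what each gene does are assigned at random.
    inputs = [tuple(random.sample(range(N), K)) for _ in range(N)]
    rules = [{combo: random.randint(0, 1)
              for combo in itertools.product((0, 1), repeat=K)}
             for _ in range(N)]

    def step(state):
        """Every gene reads its inputs and switches on or off, all at once."""
        return tuple(rules[g][tuple(state[i] for i in inputs[g])] for g in range(N))

    # Start from a random on/off pattern and let the tangle settle.
    state = tuple(random.randint(0, 1) for _ in range(N))
    seen = {}
    while state not in seen:      # a finite system must eventually repeat itself
        seen[state] = len(seen)
        state = step(state)
    print(f"settled after {seen[state]} steps into a cycle of {len(seen) - seen[state]} states")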

How do we combine our work to get spontaneous order?

This was Kauffman's point: that the very topology of such complicated networks would produce order (spontaneous order!) no matter what the tasks of the genes.   (59)

While he worked on his simulated genome, Kauffman realized that he was constructing a generic model for any kind of swarm system. His program could model any bunch of agents that interact in a massive simultaneous field. They could be cells, genes, business firms, black boxes, or simple rules: anything that registers input and generates output interpreted as input by a neighbor.    (5A)

He took this swarm of actors and randomly hooked them up into an interacting network. Once they were connected he let them bounce off one another and recorded their behavior. He imagined each node in the network as a switch able to turn certain neighboring nodes off or on. The state of the neighbor nodes looped back to regulate the initial node. Eventually this gyrating mess of he-turns-her-who-turns-him-on settled down into a stable and measurable state. Kauffman again randomly rearranged the entire net's connections and let the nodes interact until they all settled down. He did that many times, until he had "explored" the space of possible random connections. This told him what the generic behavior of a net was, independent of its contents. An oversimplified analogous experiment would be to take ten thousand corporations and randomly link up the employees in each by telephone networks, and then measure the average effects of these networks, independent of what people said over them.    (5B)

By running these generic interacting networks tens of thousands of times, Kauffman learned enough about them to paint a rough portrait of how such swarm systems behaved under specific circumstances. In particular, he wanted to know what kind of behavior a generic genome would create. He programmed thousands of randomly assembled genetic systems and then ran these ensembles on a computer: genes turning off and on and influencing each other. He found they fell into "basins" of a few types of behaviors.    (5C)
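Extending the earlier sketch, the ensemble experiment can be rendered in miniature: build many random nets, drop many random starting states into each, and count how many distinct settled cycles, that is, basins, turn up. The sizes below are again illustrative assumptions.

    import itertools
    import random

    def random_net(N, K, rng):
        """A fresh random wiring and random rule set, as in the earlier sketch."""
        inputs = [tuple(rng.sample(range(N), K)) for _ in range(N)]
        rules = [{c: rng.randint(0, 1) for c in itertools.product((0, 1), repeat=K)}
                 for _ in range(N)]
        return lambda s: tuple(rules[g][tuple(s[i] for i in inputs[g])]
                               for g in range(N))

    def attractor(step, state):
        """Follow a trajectory until it repeats; return the cycle that traps it."""
        seen = set()
        while state not in seen:
            seen.add(state)
            state = step(state)
        cycle, s = {state}, step(state)
        while s != state:
            cycle.add(s)
            s = step(s)
        return frozenset(cycle)

    rng = random.Random(0)
    N, K = 16, 2
    for trial in range(3):
        step = random_net(N, K, rng)
        basins = {attractor(step, tuple(rng.randint(0, 1) for _ in range(N)))
                  for _ in range(200)}    # 200 random starting states
        print(f"random net {trial}: trajectories fell into {len(basins)} basins")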

At a slow speed water trickles out of a garden hose in one uneven but consistent pattern. Turn up the tap, and it abruptly sprays out in a chaotic (but describable) torrent. Turn it up full blast, and it gushes out in a third way like a river. Carefully screw the tap to the precise line between one speed and a slower one, and the pattern refuses to stay on the edge but reverts to one state or the other, as if it were attracted to a side, any side. Just as a drop of rain falling on the ridge of a continental divide must eventually find its way down to either the Pacific Basin or the Atlantic Basin, roll down one side or the other it must.    (5D)

Sooner or later the dynamics of the system would find its way to at least one "basin" that entrapped the shifting motions into a persistent pattern. In Kauffman's view a randomly assembled system would find its way to a stock pattern (a basin); thus, out of chaos, order for free emerges.    (5E)

As he ran uncounted genetic simulations, Kauffman discovered a rough ratio between the number of genes in a system and the number of basins the genes settled into: the basins numbered about the square root of the genes. This proportion matched the ratio between the number of genes in biological cells and the number of cell types (liver cells, blood cells, brain cells) those genes created, a ratio that is roughly constant in all living things. (With the 100,000 genes then attributed to the human genome, for instance, the square root predicts on the order of 300 cell types, in the neighborhood of the couple hundred that anatomists count.)    (5F)

Kauffman claims this universal ratio across many species suggests that the number of cell types in nature may derive from cellular architecture itself. The number of types of cells in your body, then, may have little to do with natural selection and more to do with the mathematics of complex gene interactions. How many other biological forms, Kauffman gleefully wonders, might also owe little to selection?    (5G)

He had a hunch about a way to ask the question experimentally. But first he needed a method to cook up random ensembles of life. He decided to generate all possible pools of prelife parts, at least in simulation, and let the virtual pool of parts interact randomly. If he could then show that out of this soup order inevitably emerged, he would have a case. The trick would be to allow molecules to converge into a lap game.    (5H)

Lap games, jets, and auto-catalytic sets   (5I)

The lap game peaked in popularity a decade ago. It is a spectacular outdoor game that advertises the power of cooperation. The facilitator of the lap game takes a group of 25 or more people and has them stand fairly close together in a circle, so that each participant is staring at the back of the head of the person in front of him. Just picture a queue of people waiting in line for a movie and connect them in a tidy circle.    (5J)

At the facilitator's command this circle of people bend their knees and sit on the spontaneously generated knee-lap of the person behind them. If done in unison, the ring of people lowering to sit are suddenly propped up on a self-supporting collective chair. If one person misses the lap behind him, the whole circling line crashes. The world's record for a stable lap game is several hundred people.    (5K)

Auto-catalytic sets and the selfish Ouroboros snake circle are much like lap games. Compound (or function) A makes compound (or function) B with the aid of compound (or function) C. But C itself is produced by A and D. And D is generated by E and C, and so on. Without the others, none can be. Another way of saying this is that the only way for a particular compound or function to survive in the long run is for it to be a product of another compound or function. In this circular world all causes are results, just as all knees are laps. Contrary to common sense, all existences depend on the consensual existence of all others.    (5L)

How do Patches find each other? What are the dependencies of each Patch on the others? How do they co-evolve?

As the reality of the lap game proves, however, circular causality is not impossible. Tautology can hold up 200 pounds of flesh. It's real. Tautology is, in fact, an essential ingredient of stable systems.    (5M)

Cognitive philosopher Douglas Hofstadter calls these paradoxical circuits "Strange Loops." As examples, Hofstadter points to the seemingly ever-rising notes in a Bach canon, or the endlessly rising steps in an Escher staircase. He also includes as Strange Loops the famous paradox of the Cretan who declares that all Cretans lie, and Gödel's proof that mathematics contains true but unprovable statements. Hofstadter writes in Gödel, Escher, Bach: "The 'Strange Loop' phenomenon occurs whenever, by moving upwards (or downwards) through the levels of some hierarchical system, we unexpectedly find ourselves right back where we started."    (5N)

Life and evolution entail the necessary strange loop of circular causality, of being tautological at a fundamental level. You can't get life and open-ended evolution unless you have a system that contains that essential logical inconsistency of circling causes. In complex adapting processes such as life, evolution, and consciousness, prime causes seem to shift, as if they were an optical illusion drawn by Escher. Part of the problem humans have in trying to build systems as complicated as our own human biology is that in the past we have insisted on a degree of logical consistency, a sort of clockwork logic, that blocks the emergence of autonomous events. But as the mathematician Gödel showed, self-reference, and with it statements that cannot be settled from within, is an inevitable trait of any sufficiently rich system built up out of consistent parts.    (5O)

Gödel's 1931 theorem demonstrates, among other things, that attempts to banish self-swallowing loopiness are fruitless, because, in Hofstadter's words, "it can be hard to figure out just where self-referencing is occurring." When examined at a "local" level every part seems legitimate; it is only when the lawful parts form a whole that the contradiction arises.    (5P)

In 1991, a young Italian scientist, Walter Fontana, showed mathematically that a linear sequence of function A producing function B producing function C could be very easily circled around and closed in a cybernetic way into a self-generating loop, so that the last function was a coproducer of the initial function. When Kauffman first encountered Fontana's work he was ecstatic with the beauty of it. "You have to fall in love with it! Functions mutually making one another. Out of all function space, they come gripping one another's arms in an embrace of creating!" Kauffman called such an autocatalytic set an "egg." He said, "An egg would be a set of rules having the property that the rules they pose are precisely the ones that create them. That's really not crazy at all."    (5Q)

To get an egg you start with a huge pool of different agents. They could be varieties of protein pieces or fragments of computer code. If you let them interact upon each other long enough, they will produce small loops of things producing other things. Eventually, if given time and elbow room, the spreading network of these local loops will crowd upon itself, until every producer in the circuit is a product of another, until every loop is incorporated into all the other loops in massively parallel interdependence. At this moment of "catalytic closure" the web of parts suddenly snaps into a stable game: the system sits in its own lap, with its beginning resting on its end, and vice versa.    (5R)

How does the world of information technology, peace and social justice activists, environmental visionaries, independent media pioneers and many others create a "catalytic closure" and snap into a stable system?

Life began in such a soup of "polymers acting on polymers to form new polymers," Kauffman claims. He demonstrated the theoretical feasibility of such a logic by running experiments of "symbol strings acting on symbol strings to form new symbol strings." His assumption was that he could equate protein fragments and computer code fragments as logical equivalents. When he ran networks of bits of code-which-produce-code as a model for proteins, he got autocatalytic systems that are circular in the sense of the lap game: they have no beginning, no center, and no end.    (5S)
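A toy rendering of "symbol strings acting on symbol strings" can show the flavor of this. Everything about the sketch below (the two-letter alphabet, the food set, the catalysis probability) is an illustrative assumption rather than Kauffman's actual model: each present string catalyzes each possible concatenation with some small probability, and past a threshold of catalytic richness the network of mutually produced strings tends to snap closed.

    import itertools
    import random

    def grown_set(p, maxlen=5, seed=0):
        """Strings reachable from a small "food set" when each present string
        catalyzes each concatenation reaction with probability p."""
        rng = random.Random(seed)
        memo = {}
        def catalyzes(c, x, y):          # one fixed random yes/no per pairing
            key = (c, x, y)
            if key not in memo:
                memo[key] = rng.random() < p
            return memo[key]

        present = {"a", "b", "aa", "ab", "ba", "bb"}   # freely supplied "food"
        grew = True
        while grew:
            grew = False
            for x, y in itertools.product(sorted(present), repeat=2):
                xy = x + y
                if len(xy) <= maxlen and xy not in present and \
                   any(catalyzes(c, x, y) for c in sorted(present)):
                    present.add(xy)
                    grew = True
        return present

    for p in (0.001, 0.01, 0.05, 0.2):
        print(f"catalysis probability {p:>5}: network sustains {len(grown_set(p))} string types")

With stingy catalysis the soup tends to stay inert; as the probability rises, strings beget strings and the loop closes on itself.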

Life popped into existence as a complete whole much as a crystal suddenly appears in its final (though miniature) form in a supersaturated solution: not beginning as a vague half-crystal, not appearing as a half-materialized ghost, but wham, being all at once, just as a lap game circle suddenly emerges from a curving line of 200 people. "Life began whole and integrated, not disconnected and disorganized," writes Stuart Kauffman. "Life, in a deep sense, crystallized."    (5T)

He goes on to say, "I hope to show that self-reproduction and homeostasis, basic features of organisms, are natural collective expressions of polymer chemistry. We can expect any sufficiently complex set of catalytic polymers to be collectively autocatalytic." Kauffman was creeping up on that notion of inevitability again. "If my model is correct then the routes to life in the universe are boulevards, rather than twisted back alleyways." In other words, given the chemistry we have, "life is inevitable."    (5U)

 A question worth asking (5V)

"We've got to get used to dealing in billions of things!" Kauffman once told an audience of scientists. Huge multitudes of anything are different: the more polymers, the exponentially more possible interactions where one polymer can trigger the manufacture of yet another polymer. Therefore, at some point, a droplet loaded up with increasing diversity and numbers of polymers will reach a threshold where a certain number of polymers in the set will suddenly fall out into a spontaneous lap circle. They will form an auto-generated, self-sustaining, self-transforming network of chemical pathways. As long as energy flows in, the network hums, and the loop stands.    (5W)

Codes, chemicals, or inventions can in the right circumstances produce new codes, chemicals, or inventions. It is clear this is the model of life. An organism produces new organisms which in turn create newer organisms. One small invention (the transistor) produces other inventions (the computer) which in turn permit yet other inventions (virtual reality). Kauffman wants to generalize this process mathematically to say that functions in general spawn newer functions which in turn birth yet other functions.    (5X)

What functions spawn newer functions? How does the ecosystem scaffold?

 

"Five years ago," recalls Kauffman, "Brian Goodwin [an evolutionary biologist] and I were sitting in some World War I bunker in northern Italy during a rainstorm talking about autocatalytic sets. I had this profound sense then that there's a deep similarity between natural selection-what Darwin told us-and the wealth of nations-what Adam Smith told us. Both have an invisible hand. But I didn't know how to proceed any further until I saw Walter Fontana's work with autocatalytic sets, which is gorgeous."    (5Y)

I mentioned to Kauffman the controversial idea that in any society with the proper strength of communication and information connection, democracy becomes inevitable. Where ideas are free to flow and generate new ideas, the political organization will eventually head toward democracy as an unavoidable self-organizing strong attractor. Kauffman agreed with the parallel: "When I was a sophomore in '58 or '59 I wrote a paper in philosophy that I labored over with much passion. I was trying to figure out why democracy worked. It's obvious that democracy doesn't work because it's the rule of the majority. Now, 33 years later, I see that democracy is a device that allows conflicting minorities to reach relatively fluid compromises. It keeps subgroups from getting stuck on some locally good but globally inferior solution."    (5Z)

It is not difficult to imagine Kauffman's networks of Boolean logic and random genomes mirroring the workings of town halls and state capitals. By structuring miniconflicts and microrevolutions as a continuous process at the local level, large-scale macro- and mega-revolutions are avoided, and the whole system is neither chaotic nor stagnant. Perpetual change is fought out in small towns, while the nation remains admirably stable, thus creating a climate that keeps the small towns in ceaseless compromise-seeking modes. That circular support is another lap game, and an indication that such systems are similar in dynamics to self-supporting vivisystems.    (60)

"This is just intuitive," Kauffman cautions me, "but you can feel your way from Fontana's 'string-begets-string-begets-string' to 'invention-begets-invention-begets-invention' to cultural evolution and then to the wealth of nations." Kauffman makes no bones about the scale of his ambition: "I am looking for the self-consistent big picture that ties everything together, from the origin of life, as a self-organized system, to the emergence of spontaneous order in genomic regulatory systems, to the emergence of systems that are able to adapt, to nonequilibrium price formation which optimizes trade among organisms, to this unknown analog of the second law of thermodynamics. It is all one picture. I really feel it is. But the image I'm pushing on is this: Can we prove that a finite set of functions generates this infinite set of possibilities?"    (61)

Whew. I call that a "Kauffman machine": a small but well-chosen set of functions that connect into an auto-generating ring and produce an infinite jet of more complex functions. Nature is full of Kauffman machines. An egg cell producing the body of a whale is one. An evolution machine generating a flamingo over a billion years from a bacterial blob is another. Can we make an artificial Kauffman machine? This may more properly be called a von Neumann machine, because von Neumann asked the same question in the early 1940s. He wondered, Can a machine make another machine more complex than itself? Whatever it is called, the question is the same: How does complexity build itself up?

"You can't ask the experimental question until, roughly speaking, the intellectual framework is in place. So the critical thing is asking important questions," Kauffman warned me. Often during our conversations, I'd catch Kauffman thinking aloud. He'd spin off wild speculations and then seize one and twirl it around to examine it from various directions. "How do you ask that question?" he asked himself rhetorically. His quest was for the Question of All Questions rather than the Answer of All Answers. "Once you've asked the question," he said, "there's a good chance of finding some sort of answer.    (63)

A Question Worth Asking. That's what Kauffman thought of his notion of self-organized order in evolutionary systems. Kauffman confided to me: "Somehow, each of us in our own heart is able to ask questions that we think are profound in the sense that the answer would be truly important. The enormous puzzle is why in the world any of us ask the questions that we do."    (64)

What are the real questions? What is worth asking your community?

There were many times when I felt that Stuart Kauffman, medical doctor, philosopher, mathematician, theoretical biologist, and MacArthur Award recipient, was embarrassed by the wild question he had been dealt. "Order for free" flies in the face of a conservative science that has rejected every past theory of creative order hidden in the universe. It would probably reject his. While the rest of the contemporary scientific world sees butterflies of random chance sowing out-of-control, nonlinear effects in every facet of the universe, Kauffman asks if perhaps the butterflies of chaos sleep. He wakes the possibility of an overarching design dwelling within creation, quieting disorder and birthing an ordered stillness. It's a notion that for many sounds like mysticism. At the same time, the pursuit and framing of this single huge question is the quasar source of Kauffman's considerable pride and energy: "I would be lying if I didn't tell you that when I was 23 and started wondering how in the world a genome with 100,000 genes controls the emergence of different cell types, I felt that I had found something profound, I had found a profound question. And I still feel that way. I think God was very nice to me."    (65)

"If you write something about this," Kauffman says softly, "make sure you say that this is only something crazy that people are thinking about. But wouldn't it be wonderful if somehow there are laws that make laws that make laws, so that the universe is, in John Wheeler's words, something that is looking in at itself!? The universe posts its own rules and emerges out of a self-consistent thing. Maybe that's not impossible, this notion that quarks and gluons and atoms and elementary particles have invented the laws by which they transform one another."    (66)

Deep down Kauffman felt that his systems built themselves. In some way he hoped to discover, evolutionary systems controlled their own structure. From the first glimpse of his visionary network image, he had a hunch that in those connections lay the answer to evolution's self-governance. He was not content to show that order emerged spontaneously and inevitably. He felt that control of that order also emerged spontaneously. To that end he charted thousands of runs of random ensembles in computer simulation to see which type of connections permitted a swarm to be most adaptable. "Adaptable" means the ability of a system to adjust its internal links so that it fits its environment over time. Kauffman views an organism, a fruit fly say, as adjusting the network of its genes over time so that the result of the genetic network (a fly body) best fits its changing surroundings of food, shelter, and predators. The Question Worth Asking was: what controlled the evolvability of the system? Could the organism itself control its evolvability?    (67)

How can a community create an adaptable network that controls its own evolution?

The prime variable Kauffman played with was the connectivity of the network. In a sparsely connected network, each node would connect, on average, to one other node or fewer. In a richly connected network, each node would link to ten or a hundred or a thousand or a million other nodes. In theory the limit to the number of connections per node is simply the total number of nodes, minus one. A million-headed network could have a million-minus-one connections at each node; every node is connected to every other node. To continue our rough analogy, every employee of GM could be directly linked to all 749,999 other employees of GM.    (68)

As Kauffman varied this connectivity parameter in his generic networks, he discovered something that would not surprise the CEO of GM. A system where few agents influenced other agents was not very adaptable. The soup of connections was too thin to transmit an innovation. The system would fail to evolve. As Kauffman increased the average number of links between nodes, the system became more resilient, "bouncing back" when perturbed. The system could maintain stability while the environment changed. It would evolve. The completely unexpected finding was that beyond a certain level of linking density, further connectivity only decreased the adaptability of the system as a whole.    (69)

What is the optimal linking density for your community, and where should these links be placed?

Kauffman graphed this effect as a hill. The top of the hill was optimal flexibility to change. One low side of the hill was a sparsely connected system: flat-footed and stagnant. The other low side was an overly connected system: a frozen grid-lock of a thousand mutual pulls. So many conflicting influences came to bear on one node that whole sections of the system sank into rigid paralysis. Kauffman called this second extreme a "complexity catastrophe." Much to everyone's surprise, you could have too much connectivity. In the long run, an overly linked system was as debilitating as a mob of uncoordinated loners.    (6A)

Somewhere in the middle was a peak of just-right connectivity that gave the network its maximal nimbleness. Kauffman found this measurable "Goldilocks" point in his model networks. His colleagues at first had trouble believing his value because it seemed so counterintuitive. The optimal connectivity for the distilled systems Kauffman studied was very low, "somewhere in the single digits." Large networks with thousands of members adapted best with fewer than ten connections per member. Some nets peaked at fewer than two connections on average per node! A massively parallel system did not need to be heavily connected in order to adapt. Minimal average connection, done widely, was enough.    (6B)
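The shape of the hill can be reproduced in miniature. The sketch below perturbs one gene in otherwise identical random Boolean nets and measures how far the disturbance spreads after some steps. Damage spread is my choice of proxy here, not Kauffman's published measure, but it separates the regimes: with one connection per node a disturbance tends to die out, near two it stays contained, and with many connections it engulfs the net.

    import itertools
    import random

    def make_net(N, K, rng):
        inputs = [tuple(rng.sample(range(N), K)) for _ in range(N)]
        rules = [{c: rng.randint(0, 1) for c in itertools.product((0, 1), repeat=K)}
                 for _ in range(N)]
        return lambda s: tuple(rules[g][tuple(s[i] for i in inputs[g])]
                               for g in range(N))

    def damage(N, K, trials=100, steps=30, seed=0):
        """Average number of genes left differing after one gene is flipped."""
        rng = random.Random(seed)
        total = 0
        for _ in range(trials):
            step = make_net(N, K, rng)
            a = tuple(rng.randint(0, 1) for _ in range(N))
            b = tuple(bit ^ (i == 0) for i, bit in enumerate(a))  # flip gene 0
            for _ in range(steps):
                a, b = step(a), step(b)
            total += sum(x != y for x, y in zip(a, b))
        return total / trials

    for K in (1, 2, 3, 5, 8):
        print(f"K = {K}: a one-gene perturbation spreads to {damage(40, K):5.1f} of 40 genes")

Too little spread and nothing new can propagate; too much and every flip shakes the whole net, the complexity catastrophe in miniature.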

Kauffman's second unexpected finding was that this low optimal value didn't seem to fluctuate much, no matter how many members a network contained. In other words, as more members were added to the network, it didn't pay (in terms of systemwide adaptability) to increase the number of links to each node. To evolve most rapidly, add members but don't increase average link rates. This result confirmed what Craig Reynolds had found in his synthetic flocks: you could load a flock up with more and more members without having to reconfigure its structure.    (6C)

Kauffman found that at the low end, with fewer than two connections per agent or organism, the whole system wasn't nimble enough to keep up with change. If the community of agents lacked sufficient internal communication, it could not solve a problem as a group. More exactly, the agents fell into isolated patches of cooperative feedback that didn't interact with each other.    (6D)

How do we make sure that the community establishes and maintains the "Goldilocks" level of interconnectedness? That all Patches (i.e., Environmental, Digital Democracy, Social Networks and Civil Society, etc.) are communicating with each other, but not to the point of gridlock. How do we grow this sweet spot?  (71)

At the ideal number of connections, the ideal amount of information flowed between agents, and the system as a whole consistently found the optimal solutions. Even if the environment changed rapidly, the network remained stable, persisting as a whole over time.    (6E)

Kauffman's Law states that above a certain point, increasing the richness of connections between agents freezes adaptation. Nothing gets done because too many actions hinge on too many other contradictory actions. In the landscape metaphor, ultra-connectance produces ultra-ruggedness, making any move a likely fall off a peak of adaptation into a valley of nonadaptation. Put another way, too many agents have a say in each other's work, and bureaucratic rigor mortis sets in. Adaptability conks out into grid-lock. For a contemporary culture sold on the virtues of connecting up, this low ceiling of connectivity comes as unexpected news.    (6F)

We postmodern communication addicts might want to pay attention to this. In our networked society we are pumping up both the total number of people connected (in 1993, the global network of networks was expanding at the rate of 15 percent additional users per month!) and the number of people and places to whom each member is connected. Faxes, phones, direct junk mail, and large cross-referenced databases in business and government in effect increase the number of links between each person. Neither expansion particularly increases the adaptability of our system (society) as a whole.    (6G)

Self-tuning vivisystems (6H)

Stuart Kauffman's simulations are as rigorous, original, and well-respected among scientists as any mathematical model can be. Maybe more so, because he is using a real (computer) network to model a hypothetical network, rather than the usual reverse of using a hypothetical to model the real. I grant, though, it is a bit of a stretch to apply the results of a pure mathematical abstraction to irregular arrangements of reality. Nothing could be more irregular than online networks, biological genetic networks, or international economic networks. But Stuart Kauffman is himself eager to extrapolate the behavior of his generic test-bed to real life. The grand comparison between complex real-world networks and his own mathematical simulations running in the heart of silicon is nothing less than Kauffman's holy grail. He says his models "smell like they are true." Swarmlike networks, he bets, all behave similarly on one level. Kauffman is fond of speculating that "IBM and E. coli both see the world in the same way."    (6I)

I'm inclined to bet in his favor. We own the technology to connect everyone to everyone, but those of us who have tried living that way are finding that we are disconnecting to get anything done. We live in an age of accelerating connectivity; in essence we are steadily climbing Kauffman's hill. But we have little to stop us from going over the top and sliding into a descent of increasing connectivity but diminishing adaptability. Disconnection is a brake to hold the system from overconnection, to keep our cultural system poised on the edge of maximal evolvability.    (6J)

The art of evolution is the art of managing dynamic complexity. Connecting things is not difficult; the art is finding ways for them to connect in an organized, indirect, and limited way.    (6K)

From his artificial life experiments with swarm models, Chris Langton, Kauffman's Santa Fe Institute colleague, derived an abstract quality (called the lambda parameter) that predicts the likelihood that a particular set of rules for a swarm will produce a "sweet spot" of interesting behavior. Systems built upon values outside this sweet spot tend to stall in two ways: they either repeat patterns in a crystalline fashion, or else space out into white noise. Values within the range of the lambda sweet spot generate the longest runs of interesting behavior.    (6L)
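Langton's lambda has a concrete definition: the fraction of entries in a cellular automaton's rule table that map a neighborhood to a non-quiescent state. The sketch below is a toy binary version (Langton worked with many-state automata and subtler measures of behavior than raw activity); it builds rule tables at several lambda values and tracks how much activity each sustains.

    import random

    def make_rule(lam, rng):
        """A rule table for a binary, nearest-neighbor 1-D cellular automaton
        whose fraction of non-quiescent (nonzero) outputs is about lambda."""
        return [1 if rng.random() < lam else 0 for _ in range(8)]

    def mean_activity(rule, rng, width=200, steps=100):
        cells = [rng.randint(0, 1) for _ in range(width)]
        active = 0
        for _ in range(steps):
            cells = [rule[cells[(i - 1) % width] * 4 + cells[i] * 2
                          + cells[(i + 1) % width]]
                     for i in range(width)]
            active += sum(cells)
        return active / (width * steps)

    rng = random.Random(4)
    for lam in (0.05, 0.25, 0.50, 0.75):
        rule = make_rule(lam, rng)
        print(f"lambda = {lam:.2f}: fraction of cells active = {mean_activity(rule, rng):.2f}")

In this binary toy the activity simply tracks lambda; Langton's many-state automata show the two stalls, crystalline repetition and white noise, and the long-lived structures in between, far more vividly.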

By tuning the lambda parameter Langton can tune a world so that evolution or learning can unroll most easily. Langton describes the threshold between a frozen repetitious state and a gaseous noise state as a "phase transition," the same term physicists use to describe the transition from liquid to gas or liquid to solid. The most startling result, though, is Langton's contention that as the lambda parameter approaches that phase transition (the sweet spot of maximum adaptability) the system slows down. That is, the system tends to dwell on the edge instead of zooming through it. As it nears the place it can evolve the most from, it lingers. The image Langton likes to raise is that of a system surfing on an endless perfect wave in slow motion; the more perfect the ride, the slower time goes.    (6M)

This critical slowing down at the "edge" could help explain why a precarious embryonic vivisystem could keep evolving. As a random system neared the phase transition, it would be "pulled in" to rest at that sweet spot where it would undergo evolution and would then seek to maintain that spot. This is the homeostatic feedback loop making a lap for itself. Except that since there is little "static" about the spot, the feedback loop might be better named "homeodynamic."    (6N)

Stuart Kauffman also speaks of "tuning" the parameters of his simulated genetic networks to the "sweet spot." Out of all the uncountable ways to connect a million genes, or a million neurons, some relatively few setups are far more likely to encourage learning and adaptation throughout the network. Systems balanced to this evolutionary sweet spot learn fastest, adapt more readily, or evolve the easiest. If Langton and Kauffman are right, an evolving system will find that spot on its own.    (6O)

Langton discovered a clue as to how that may happen. He found that this spot teeters right on the edge of chaotic behavior. He says that systems that are most adaptive are so loose they are a hairsbreadth away from being out of control. Life, then, is a system that is neither stagnant with noncommunication nor grid-locked with too much communication. Rather, life is a vivisystem tuned "to the edge of chaos": that lambda point where there is just enough information flow to make everything dangerous.    (6P)

Rigid systems can always do better by loosening up a bit, and turbulent systems can always improve by getting themselves a little more organized. Mitch Waldrop explains Langton's notion in his book Complexity thus: if an adaptive system is not riding on the happy middle road, you would expect brute efficiency to push it toward that sweet spot. And if a system rests on the crest balanced between rigidity and chaos, then you'd expect its adaptive nature to pull it back onto the edge if it starts to drift away. "In other words," writes Waldrop, "you'd expect learning and evolution to make the edge of chaos stable." A self-reinforcing sweet spot. We might call it dynamically stable, since its home migrates. Lynn Margulis calls this fluxing, dynamically persistent state "homeorhesis": the homing in on a moving point. It is the same forever almost-falling that poises the chemical pathways of the Earth's biosphere in purposeful disequilibrium.    (6Q)

Kauffman takes up the theme by calling systems set up in the lambda value range "poised systems." They are poised on the edge between chaos and rigid order. Once you begin to look around, poised systems can be found throughout the universe, even outside of biology. Many cosmologists, such as John Barrow, believe the universe itself to be a poised system, precariously balanced on a string of remarkably delicate values (such as the strength of gravity, or the mass of an electron); had any of these varied by a fraction as insignificant as 0.000001 percent, the universe would have collapsed in its early genesis, or failed to condense matter. The list of these "coincidences" is so long they fill books. According to mathematical physicist Paul Davies, the coincidences "taken together...provide impressive evidence that life as we know it depends very sensitively on the form of the laws of physics, and on some seemingly fortuitous accidents in the actual values that nature has chosen for various particle masses, force strengths, and so on." In brief, the universe and life as we know them are poised on the edge of chaos.    (6R)

What if poised systems could tune themselves, instead of being tuned by creators? There would be tremendous evolutionary advantage in biology for a complex system that was auto-poised. It could evolve faster, learn more quickly, and adapt more readily. If evolution selects for a self-tuning function, Kauffman says, then "the capacity to evolve and adapt may itself be an achievement of evolution." Indeed, a self-tuning function would inevitably be selected for at higher levels of evolution. Kauffman proposes that gene systems do indeed tune themselves by regulating the number of links, size of genome, and so on, in their own systems for optimal flexibility.    (6S)

Self-tuning may be the mysterious key to evolution that doesn't stop: the holy grail of open-ended evolution. Chris Langton formally describes open-ended evolution as a system that succeeds in ceaselessly tuning itself to higher and higher levels of complexity, or in his imagery, a system that succeeds in gaining control over more and more parameters affecting its evolvability and staying balanced on the edge.    (6T)

In Langton's and Kauffman's framework, nature begins as a pool of interacting polymers that catalyze themselves into new sets of interacting polymers in such a networked way that maximal evolution can occur. This evolution-rich environment produces cells that also learn to tune their internal connectivity to keep the system at optimal evolvability. Each step extends the stance at the edge of chaos, poised on the thin path of optimal flexibility, which pumps up its complexity. As long as the system rides this upwelling crest of evolvability, it surfs along.    (6U)

What you want in artificial systems, Langton says, is something similar. The primary goal that any system seeks is survival. The secondary search is for the ideal parameters to keep the system tuned for maximal flexibility. But it is the third-order search that is most exciting: the search for strategies and feedback mechanisms that will increasingly self-tune the system each step of the way. Kauffman's hypothesis is that if systems constructed to self-tune "can adapt most readily, then they may be the inevitable target of natural selection. The ability to take advantage of natural selection would be one of the first traits selected."    (6V)

As Langton and colleagues explore the space of possible worlds searching for that sweet spot where life seems poised on the edge, I've heard them call themselves surfers on an endless summer, scouting for that slo-mo wave.    (6W)

Rich Bagley, another Santa Fe Institute fellow, told me, "What I'm looking for are things that I can almost predict, but not quite." He explained further that what he sought was not regular, but not chaotic either: some almost-out-of-control, dangerous edge in between.    (6X)

"Yeah," replied Langton who overheard our conversation. "Exactly. Just like ocean waves in the surf. They go thump, thump, thump, steady as a heartbeat. Then suddenly, WHUUUMP, an unexpected big one. That's what we are all looking for. That's the place we want to find."    (6Y)