January 12, 2012
Frontiers of Design Science: Computational Irreducibility
No matter what we do, we will always have to leave something out.
There is a poorly understood restriction at the heart of the way we design today: virtually all our plans are mathematically doomed to be “incomplete.” But in a seeming paradox, if we can understand this restriction better, we can learn to make much better designs. One good way to do that is to learn from new scientific insights in the mathematics of computer processes, or “computations.” Here’s a guided tour through this fascinating and vitally important subject.
Let’s start by noting that, as designers, we rely on mental models to understand, change, and evaluate what we do — even if we do it unconsciously. We need these “pictures in our heads” to guide our actions around various alternatives, not unlike the way we choose routes with driving maps. Even our language is a kind of model of what we are doing, giving us a sense of the issues and problems, and what we can expect of various alternatives. (This paragraph is an example!) Such mental models, or maps, allow us to respond to real conditions, and act intelligently.
As designers, we can choose one or more models that help us to solve the problems we face, and meet the goals we have for the design. There is a fundamental problem with all models, however, as the logician and mathematician Kurt Gödel implied in a famous 1931 paper. They are “incomplete” — they always leave something out. In fact, this incompleteness is what makes a model useful in the first place. After all, a map that is just as detailed as the region it represents would leave us just as lost as the region itself! (The Argentine writer J. L. Borges wrote a wonderful one-paragraph story, “On Exactitude in Science,” that illustrates this beautifully.) So maps are useful because they are simpler — because they are abstract. Looking at it another way, the reality we are trying to understand is “computationally irreducible.” We can never reduce it to a formula, or a perfect blueprint.
No matter what we do, we will always have to leave something out. It’s just the way language works, and the way human ideas work — it’s how they help us to be more intelligent. But in practice this limitation can, and often does, cause two big problems. One, our models might leave out something really important: say, a map might not show that a bridge is out — and we try to drive over it! Two, more subtly, we might forget the difference between the model and reality, and fail to take any precautions. Then the model not only fails to help us behave more intelligently; it can actually make us “dumber” — we can make serious and even catastrophic errors in negotiating the real world. It seems that we lapse into just these kinds of mistakes all the time. We reduce our design ideas to abstractions, but then we re-impose them upon the world — and often, essential elements have been lost along the way, so the design feels “dead.” For example, we plan cities that are organized neatly into top-down hierarchical categories — and then we find that they lack the interaction and the vitality of “natural” cities, with their characteristic complex network structures (see our post “Frontiers of Design Science: The Network City”).
A way around this problem uses what mathematicians call “adaptive iteration.” We apply the model, check the result, modify the model if needed, and apply it again — continuing in this stepwise process. Gradually, the form of the result evolves and emerges as a coherent whole, well adapted to its environment, but also expressing (at least in part) the history of its own model-based evolution. There is never a perfect “final” result. But there are degrees of adaptation, and frequently they fall above (or below) critical thresholds of functionality.
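The logic of this loop is simple enough to sketch in code. Below is a minimal, hypothetical sketch in Python; the names adapt, evaluate, and vary are our own placeholders for whatever “checking” and “modifying” mean in a given design context, not an established method:

```python
def adapt(design, evaluate, vary, threshold=0.95, max_steps=200):
    """Adaptive iteration: apply, check, modify, and apply again.

    evaluate(design) returns a fitness score in [0, 1]; vary(design)
    proposes a small revision. The threshold plays the role of a
    'critical threshold of functionality': there is never a perfect
    final result, only a degree of adaptation judged good enough.
    """
    fitness = evaluate(design)
    for _ in range(max_steps):
        if fitness >= threshold:            # adapted well enough; stop
            break
        candidate = vary(design)            # propose a small change
        candidate_fitness = evaluate(candidate)
        if candidate_fitness > fitness:     # keep the change only if it helps
            design, fitness = candidate, candidate_fitness
    return design
```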
In architectural design, this process of adaptive iteration is very different from the common practice of building a sculptural expression on top of a separate programmatic frame. The programmatic frame is often conceived ahead of time as a fairly rigid “template” — so many square feet, such and such uses, and so on. Then the “artist-architect” comes along, and supplies a kind of magic. Seemingly by mysterious genius, the designer transforms the dry program into what is hoped to be a great piece of expressive art — or at least, creates a suitably attractive aesthetic package, perhaps along with witty schematic manipulations.
But this imposition of art (real or pretend) on top of life is likely to be highly damaging to both, as the urban scholar Jane Jacobs famously warned. Moreover, an architect is not merely a sculptor at giant scales, but a professional, not unlike a medical professional, with a “duty of care” to provide a living environment of high quality for the rest of us. The architect is not working in a private gallery for the benefit of connoisseurs alone, but deeply affecting the ordinary life and wellbeing of people and regions. We now recognize that the aesthetic qualities of our environments are not just consumable commodities to enjoy, to share, and perhaps to elevate to fine art (where objectivity so easily disintegrates into contradictory schools and fashions) — they are biologically related to our deepest needs as creatures. Accumulating evidence reveals that the aesthetic quality of our environments has a profound impact upon our physical and mental health (see our post “Frontiers of Design Science: Biophilia”). We are beginning to understand that these aesthetic qualities are the intimate result of this adaptive, iterative, structuring process. (Just as they are in the natural evolution of the plants and animals we find beautiful.)
The Red-tailed Hawk’s body is an exquisite set of computational adaptations to its environment and to the complex demands of flight — a form that we find beautifully expressive.
Image: Steve Jurvetson
Again, this theoretical knowledge must be put to effective practical use. We can understand the logical structure of adaptive design, and the details of its transformational processes, by thinking of them as “computations,” or calculations of the degree of adaptation and the need to adapt further (or not). These computations are the steps that good designers throughout history have actually gone through in evolving very beautiful, highly adaptive designs. But we have lost this process in recent decades, in the rush to embrace “shortcuts” in the form of static design models. Put simply, these “shortcut” models have proven crudely yet powerfully productive — but they have come at a tremendous hidden cost.
To illustrate the idea, we will describe here a computational approach to adaptive design, in which a sequence of steps evolves the system towards a coherent state. The process of design is threatened by two extremes:
- proposing a solution too casually, without following any adaptive steps
- getting lost in endless variations
Any intelligent designer sees the potential danger of the latter, and is then, unfortunately, drawn to the former. Yet either of these extremes can be easily overcome if one understands the design process mathematically. In practice, adaptive design solutions require one to closely follow definite steps that are tested to work. During the process of adaptive computation, the steps themselves influence and actually create the final state in solution space.
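To make the two extremes concrete, here is a toy run of the adapt() sketch above; the single-number “design” and the target value are purely illustrative:

```python
import random

# A toy 'design' problem: adapt a single number toward a target condition.
target = 42.0
evaluate = lambda x: 1.0 / (1.0 + abs(x - target))  # fitness: 1.0 only at the target
vary     = lambda x: x + random.uniform(-1.0, 1.0)  # a small, blind revision

result = adapt(0.0, evaluate, vary, threshold=0.95, max_steps=10_000)

# With max_steps=0 we would fall into the first extreme: the unadapted
# first guess (0.0) is 'proposed too casually.' With no threshold and no
# step cap we would fall into the second: endless variation. The two
# stopping conditions together keep the computation finite and purposeful.
```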
The pattern of a nautilus shell is generated by a common kind of computation that produces a fractal pattern — that is, a pattern that is self-similar at different scales. For a related reason, it also tends to have a characteristic harmonic proportioning.
Image: Wikimedia Commons/Chris 73
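In fact, the nautilus closely approximates a logarithmic spiral, which a very simple iterative cycle can generate: at each step, rotate by a fixed angle and multiply the radius by a fixed factor. A short sketch (the growth factor is illustrative, not a measured value for the nautilus):

```python
import math

growth_per_turn = 3.0                           # radius multiplier per full turn (illustrative)
steps_per_turn = 12
k = growth_per_turn ** (1.0 / steps_per_turn)   # per-step scale factor

r, theta, points = 1.0, 0.0, []
for _ in range(4 * steps_per_turn):             # trace four full turns
    points.append((r * math.cos(theta), r * math.sin(theta)))
    r *= k                                      # the same rule at every scale
    theta += 2.0 * math.pi / steps_per_turn

# Self-similarity: radii one turn apart always stand in the same ratio
# (growth_per_turn), which is where the consistent proportioning comes from.
```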
Adaptive design is thus essentially interactive, not unlike processes in quantum physics where the act of measurement changes the quantity being measured. In the macroscopic world, every step taking us towards an adaptive design transforms the configuration and molds it into the end product of the design process. It also happens that adaptive computations starting from different initial points, but adapting to the same conditions, tend to evolve the same solution. This is known as evolutionary convergence.
The surprisingly similar bodies of sharks and dolphins followed entirely separate evolutionary histories, separated by some 300 million years — yet both have nearly identical dorsal fins and other common features. The reason is that both went through similar evolutionary “computations” to adapt to the same set of forces — in this case, the complex patterns of turbulence in water.
Image: Wikimedia Commons/Poco a Poco, Arnaud25
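The same point can be made in miniature with the adapt() sketch above: two searches that start far apart, but adapt to the same “forces” (the same fitness function), end near the same form. This assumes a fitness landscape with a single peak; rugged landscapes need not converge this way:

```python
import random

evaluate = lambda x: 1.0 / (1.0 + (x - 7.0) ** 2)   # one optimum, at x = 7
vary     = lambda x: x + random.uniform(-0.5, 0.5)  # small random revisions

a = adapt(-100.0, evaluate, vary, threshold=0.99, max_steps=50_000)
b = adapt(+100.0, evaluate, vary, threshold=0.99, max_steps=50_000)

# a and b both finish within about 0.1 of 7.0: two separate 'evolutionary
# histories' converging on the same solution, in miniature.
```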
We wish to address the question: how much computation is needed to find an adaptive design solution? This problem is tied to a general understanding of how much work is required to create a complex system from scratch. A clear, step-wise process of adaptive design builds complex systems; understandably, though, everyone wants shortcuts. In generating complex systems, however, shortcuts compromise system coherence and functionality. In the very simplest, computationally reducible systems (like simple math problems) we don’t need iterated computational effort, but can shortcut to the final state — i.e., use a formula. But an adaptive design process is computationally irreducible, and we are fooling ourselves if we think that we can impose a template, or somehow reach a final configuration through a formula or shortcut. This is the “reducibility fallacy,” into which most architects and urbanists of the 20th century, and so far of the 21st, have fallen, with the consequence that their designs are non-adaptive and ultimately dysfunctional.
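The contrast is easy to see in code: for a reducible problem, a formula genuinely replaces the step-by-step work. A small illustration:

```python
# A computationally reducible problem: the sum 1 + 2 + ... + n.
def sum_by_steps(n):
    total = 0
    for i in range(1, n + 1):   # n steps of explicit work
        total += i
    return total

def sum_by_formula(n):
    return n * (n + 1) // 2     # one step: the shortcut to the final state

assert sum_by_steps(1000) == sum_by_formula(1000)
# For irreducible systems, no such second function exists: running the
# steps is the only way to learn the outcome.
```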
Why compute when you can simply use a formula? Because bypassing computation rules out any adaptation. Above is the Bacardi Rum Factory, designed for Santiago de Cuba in 1959 (unrealized); architect: Ludwig Mies van der Rohe. Instead, essentially the same building was built in Berlin in 1968 as the Neue Nationalgalerie. Does it adapt to the same function, location, users, and climate? Of course not: it is an object building, detached from adapted context. This is the beginning of commodification — of architecture as “industrial packaging.”
Image: Hans Knips
Any system that cannot be put together from a simple formula is computationally irreducible. That means that computing the final configuration requires the same effort the system has gone through to create itself — no computational reduction or shortcut is possible. This situation corresponds to what the physicist and computer scientist Stephen Wolfram has defined as “computational irreducibility” (see his book A New Kind of Science, 2002).
While there are no shortcuts in the process, there may indeed be simpler rules that guide the steps. We see this phenomenon in complex systems — say, fractal patterns that have at their heart quite simple iterative cycles. Traditional artists, such as carpet weavers, do something similar when they follow fairly simple pattern-like design rules, yet weave very rich and complex results. We admire the “handcrafted” quality of such works — but it is not a question of whether they are made by hand; indeed, such designs could, in principle, be made by machine too. (And in the case of fractals, they often are.) The key point is that you have to go through the steps. There are no template-based shortcuts for design computations.
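Wolfram’s emblematic example is the “Rule 30” cellular automaton: an update rule simple enough to state in one line, whose output, as far as anyone knows, cannot be predicted without actually running the steps. A minimal sketch:

```python
# Rule 30: a cell's next state is left XOR (center OR right).
# The rule is trivial; the pattern it unfolds is complex and, as far as
# anyone knows, has no predictive shortcut -- Wolfram's standard example
# of computational irreducibility.
def rule30_step(row):
    n = len(row)
    return [row[(i - 1) % n] ^ (row[i] | row[(i + 1) % n])  # wrap-around edges
            for i in range(n)]

row = [0] * 31
row[15] = 1                                    # a single seed cell
for _ in range(15):
    print("".join("#" if c else "." for c in row))
    row = rule30_step(row)
```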
Another important insight of Wolfram’s is that all computationally irreducible systems are computationally equivalent. All evolved natural systems are found to be computationally irreducible, and so share an irreducible computational complexity of equivalent difficulty. Nature used its maximal computational power to develop the multitude of complex structures (ourselves included) that we see in the universe. The adaptive computations we describe here should not be confused with irrelevant step-wise transformations, however. There is a crucial distinction to be made between computations that are adaptive to the grain of human activity and human life, and those that follow abstract rules created to produce extravagant aesthetic experiences. Some architects have taken a strong interest in such “morphogenetic design” while neglecting the need for architectural adaptation. But this is another form of the imposed “art-architecture” we mentioned before, which is very different from the adaptive iteration that occurs in real cities — and in a more vital, human-responsive architecture.
The results of such non-adaptive intellectual games are very unlikely to provide good quality environments for their users. In architecture, computational equivalence exists among distinct but equally adapted designs that use very different form languages (each one evolved independently). All adaptive design algorithms are thus computationally irreducible. Many architects working in traditional and vernacular idioms follow part of this adaptive computational algorithm for design. (It’s not enough simply to copy older prototypes — design must be computed!) The corollary is also true: any computationally inequivalent design algorithms, such as ones that are too simple, or are almost totally random, cannot be adaptive. To emphasize what has just been derived: any design method that is not equivalent in its computational steps to the more traditional methods of evolving a design can never be adaptive. This finding poses a radical challenge to the way we design and build today. If we want true sustainability, it seems, we are going to have to re-think our current template-based design technology. That, however, offers us an exciting new frontier to work on!
Michael Mehaffy is an urbanist and critical thinker in complexity and the built environment. He is a practicing planner and builder, and is known for his many projects as well as his writings. He has been a close associate of the architect and software pioneer Christopher Alexander. Currently he is a Sir David Anderson Fellow at the University of Strathclyde in Glasgow; a Visiting Faculty Associate at Arizona State University; a Research Associate with the Center for Environmental Structure, Chris Alexander’s research center founded in 1967; and a strategic consultant on international projects, currently in Europe, North America, and South America.
Nikos A. Salingaros is a mathematician and polymath known for his work on urban theory, architectural theory, complexity theory, and design philosophy. He has been a close collaborator of the architect and computer software pioneer Christopher Alexander. Salingaros published substantive research on algebras, mathematical physics, electromagnetic fields, and thermonuclear fusion before turning his attention to architecture and urbanism. He remains Professor of Mathematics at the University of Texas at San Antonio, and is also on the architecture faculties of universities in Italy, Mexico, and the Netherlands.
Read more posts from Michael and Nikos here.