Post originally authored for the blog of Strategie Digitali and published in three parts: – Part 1 on April 23rd 2018. – Part 2 on April 30th 2018. – Part 3 on May 7th 2018.
“Carpo takes the reader on a critically considered and well-informed expedition beyond the horizon of materiality, to a land ruled from the bottom up—a place without any need for scale or standards as we have known them. The book is beyond Cartesian and beyond digital.”
—Nicholas Negroponte, cofounder of the MIT Media Lab; author of Being Digital
The Second Digital Turn. Design beyond Intelligence.
October 13th, 2017. The MIT Press.
Do you remember The Digital Turn in Architecture? In that collection, published in December 2012, the author presented «twenty years of digital design», from hypersurfaces to BIM, stating that «a meaningful building of the digital age is not just any building that was designed and built using digital tools: it is one that could not have been either designed or built without them». A definition we’re still struggling with, given that some architects still deny the causal connection between what you are able to do and the instruments you use to do it.
Last autumn, Mario Carpo came back with a book that tries to investigate where we go from here, now that algorithms are (or at least can be) part of everyday practice.
As in other works by the same author (I’m thinking of both Architecture in the Age of Printing and The Alphabet and the Algorithm), it is full of different perspectives, both historical and sociological, on what is happening right now in our world without us even paying too much attention. It dabbles a lot in subjects outside architecture, because everything is becoming more contaminated or – as Douglas Adams’ Dirk Gently would say – everything is connected. By the author’s own admission, some disciplines are «manifestly outside» his expertise and ours. Still, the way he explains them is highly likely to create a connection with every digitally-aware architect, engineer, constructor and designer out there. A highly recommended book that brings back once again the centrality of digital instruments in today’s practice.
If you’re expecting to find 3d printing, augmented reality and BIM in this book, think again: it really digs deep into what’s going on in the world, outside the AEC bubble, and some of the conclusions it draws are audacious to say the least.
Following are some topics that are dear to us and tickled my curiosity.
1. Digital Fabrication vs. Mass Customization
«variability is a deep-rooted ambition of architects and designers, craftsmen and engineers of all times and places.»
The first idea, already visible from 1992 on, is that digital design, when connected with digital fabrication, is not a mimetic innovation. By mimetic innovation we mean an innovation simply meant to do better what was already being done with the previous technology. Since digital fabrication doesn’t use casts and molds, it allegedly doesn’t need to produce identical items in a certain amount in order to amortize initial costs: it has the luxury of producing original items at no significant increase in cost. The idea was also deeply explored in almost every article collected in Fabricating Architecture by Robert Corser (April 2010).
Of course, as designers, we immediately see the catch: even if it’s true that there is no physical mold, there is what we might call a digital one. The cost of designing the item you’re producing is still there and, when not overlooked, gets explicitly minimized by the author. And if this shift from mass production to mass customization has not happened and doesn’t seem to be happening, the fault seems to lie with the design process itself: in order to implement effortless variations in production, you need to implement equally effortless variations in design. I do believe it’s called generative something, although it doesn’t quite come to mind right now.
«the design professions seem to have flatly rejected a techno-cultural development that would weaken (or, in fact, recast) some of their traditional authorial privileges».
2. Data Compression
If you’re familiar with Carpo’s work, you know that he likes to approach this topic by talking about logarithms and an old anecdote involving an Olivetti Divisumma calculator.
The point he makes is that there’s a close connection between the computing power of… well, computers, and the techniques we use to compress data. Logarithms are an obsolete compression technique: they were used to calculate without computer aid and were handy as long as machines indeed had a digit limit, but right now there’s no point in using them. One might question the point of teaching them as well.
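To see why logarithms counted as a compression technique for computation, here is a minimal sketch of the pre-computer workflow: a multiplication is traded for an addition plus two table lookups, because log(a·b) = log(a) + log(b). The function name is ours, not Carpo’s.

```python
import math

def multiply_via_logs(a: float, b: float) -> float:
    """Multiply two positive numbers the pre-computer way: one addition of logs,
    then an antilogarithm (historically, both logs came from a printed table)."""
    log_sum = math.log10(a) + math.log10(b)  # the only "hard" arithmetic is an addition
    return 10 ** log_sum                     # antilog lookup in the old workflow

print(round(multiply_via_logs(1234.0, 5678.0)))  # 7006652, same as 1234 * 5678
```

With cheap computing power the detour through logs buys nothing, which is exactly Carpo’s point about obsolete compression.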
This doesn’t apply only to numbers. The alphabet, if you think about it, is «an old and effective compression technology for sound recording». Still, the rise of emojis, more than emoticons, is significant: just like letters, they are transmitted as a coded number, but they convey a meaning rather than a sound, and it’s possible to use them effectively only because of the search function within typing systems. «We used to think that one cannot easily type from a keyboard with thousands of keys», says Carpo, but the fact that people are successfully managing to surf through 46,000 pictograms, as in the Japanese smartphone application Line, is enough to give you pause.
The key to the success of this shift lies in the superior power of search methods, in direct opposition to the traditional sorting approach. Gmail is the perfect example, with its launch slogan: “Search, don’t sort”.
«computers do not have queries on the meaning of life, so they do not need taxonomies to make sense of the world, either—as we do, or did».
You don’t need to label, sort, classify, when you can search. Amazon has unorganized warehouses: items are simply tagged and tracked using RFID.
When this argument comes down to the digital model of, say, a building, we immediately see where this is going. Classification systems are obsolete, or at least they will be as soon as models are actually computed in the cloud and supported by an efficient search algorithm. Are we really heading towards a scenario in which quantity take-off is effortlessly done through a simple “search plasterboard” query? It certainly is a fascinating theory.
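The “search, don’t sort” take-off can be sketched in a few lines. Everything here is hypothetical – the element records, property names and values are invented for illustration and belong to no real BIM schema – but the point survives: no classification table is needed, just a search over flat records.

```python
# Toy "search, don't sort" quantity take-off over untagged, unclassified elements.
# The records and property names are invented, not any real BIM schema.
elements = [
    {"id": 1, "material": "plasterboard", "area_m2": 12.5},
    {"id": 2, "material": "concrete",     "area_m2": 48.0},
    {"id": 3, "material": "plasterboard", "area_m2": 9.75},
    {"id": 4, "material": "timber",       "area_m2": 6.0},
]

def quantity_takeoff(records, query: str) -> float:
    """Sum the area of every element whose material matches the search query."""
    return sum(e["area_m2"] for e in records if e["material"] == query)

print(quantity_takeoff(elements, "plasterboard"))  # 22.25
```

A real cloud model would swap the list for an indexed search service, but the query shape stays the same.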
3. Prediction vs. Simulation
Another area involved in this revolution of methods and concepts, descending from the simple fact that we have a previously inconceivable computing power on our hands, is the realm of modern science and, when it comes down to buildings, the realm of things such as structural calculation. With the new computing power, «instead of calculating predictions based on mathematical laws and formulas, the way they did, we can simply search for a precedent for the case we are trying to predict, retrieve it from the almost infinite, universal archive of all relevant precedents that ever took place, and replicate it». As it happens, this is pretty much how modern science was born: a set of experiments and observations was collected, and laws and formulas were deduced from it. Now, I know what you’re thinking: if we stopped calculating and simply referred back to what has already been done, wouldn’t this stop progress altogether? Wrong. As Carpo continues, «in today’s computational environment, the precedent we dig up need not be a real one; it may equally have been simulated on purpose». If you think about it, this would make us a little less engineers, a little less scientists, and a little more artisans: before the industrial revolution, things were invented, tested and trialed through physical maquettes, and discarded upon failure.
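A minimal sketch of “search for a precedent” instead of applying a formula: the archive of load/span/deflection cases below is entirely invented, and in Carpo’s scenario it could just as well be populated by simulations run on purpose. Prediction becomes a nearest-neighbour lookup.

```python
# Hypothetical archive of precedents (real or simulated); values are invented.
precedents = [
    {"load_kn": 10.0, "span_m": 4.0, "deflection_mm": 6.2},
    {"load_kn": 10.0, "span_m": 6.0, "deflection_mm": 21.0},
    {"load_kn": 20.0, "span_m": 4.0, "deflection_mm": 12.4},
    {"load_kn": 20.0, "span_m": 6.0, "deflection_mm": 42.1},
]

def predict_by_precedent(load_kn: float, span_m: float) -> float:
    """Predict by replicating the closest recorded case (1-nearest neighbour),
    instead of computing a deflection formula."""
    nearest = min(
        precedents,
        key=lambda p: (p["load_kn"] - load_kn) ** 2 + (p["span_m"] - span_m) ** 2,
    )
    return nearest["deflection_mm"]

print(predict_by_precedent(11.0, 4.2))  # 6.2 — replicates the nearest precedent
```

A dense enough archive makes the lookup as good as the formula, which is the big-data wager in a nutshell.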
This is a beautiful paradox, since architects are complaining about the exact opposite when it comes to algorithms being used for design purposes.
It’s also a complex issue. The more advanced and intelligent what Carpo calls «iterative digital simulations» become, the less aware we are of the reasons behind the success and failure of our solutions. Or are we? Just as breaking a physical maquette in the real world can teach you about physics, if you’re educated enough to listen and interpret results, breaking a digital simulation can educate you in the way the algorithm works.
«This heuristic design process is functionally equivalent to the big data»
4. Concerning Materials
The last field taken into consideration is a spin on the concept of fabrication, but a rather interesting one. We are accustomed to considering construction materials as homogeneous, even when they are not. Even in information modelling we do that all the time, at different levels, and sometimes this creates clashes in responsibilities. Think about reinforced concrete: up until a certain phase we consider it, model it and calculate it as a single block. There’s no other affordable way of dealing with it.
Or is there?
«we can now design and fabricate materials with variable properties at minuscule, almost molecular scales»
The reason we don’t model something as simple and straightforward as rebars in concrete from the very beginning is well known and completely tied to two factors: lack of time and lack of computing power. What if these two factors were solved with algorithms and the cloud? The frontier is already shifting to something else: the modelling and computing of irregularities within what we are conditioned to consider monolithic materials. Concrete, wood, bricks are far from homogeneous. So are the systems we obtain by assembling them. A huge amount of untapped power resides in these inhomogeneities, because current systems are bound to consider the strength of these materials at their weakest points. X-ray log scanning is already being used in the timber industry for similar purposes: to identify the inner structure of logs and instruct a CNC machine to cut them in the most efficient way.
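The “weakest point” argument can be made concrete with a toy calculation, using invented strength values rather than measured data: when a member is treated as homogeneous, the whole thing must be sized for its weakest sample, so the margin of every stronger region goes unused.

```python
# Invented local strength samples along, say, one timber log (MPa).
local_strengths_mpa = [32.0, 41.0, 38.0, 29.0, 44.0, 36.0]

# Homogeneous assumption: the weakest point governs the whole member.
design_strength = min(local_strengths_mpa)
average_strength = sum(local_strengths_mpa) / len(local_strengths_mpa)
untapped = average_strength - design_strength  # margin left on the table, on average

print(design_strength)     # 29.0
print(round(untapped, 1))  # 7.7
```

Scanning and modelling the inhomogeneities is what lets a cutting or design algorithm claw that margin back.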
«Calculus is a mathematics of continuity, which abhors singularities»
The Second Digital Turn – So what?
Last week we painted an outline of Mario Carpo’s The Second Digital Turn. Design beyond Intelligence: we talked about the new trends the author foresees for design practices, namely mass customization, the obsolescence of data compression, the rise of simulation against the tradition of predictive science, and the possibility to mimic the complexity and imperfection of natural materials within digital models and manufacturing.
The interesting thing about this book is that it doesn’t stop here.
If all these factors hold and we can expect higher computational power to have this impact on what we do, there are some interesting predictions we can try to make about how future architecture will look.
1. Enough with those splines
The first and perhaps most surprising of these predictions is about geometry itself. The first digital turn in architecture saw the rise of splines; logic dictates that the second should see their fall. The reasoning behind this prediction is ironclad and follows in the footsteps of everything we said so far: splines are, in their own way, a data compression system, just like logarithms. A spline is «a compact, economical, small-data shorthand we use to replace what is in fact an extraordinarily long list of numbers». We couldn’t deal with storing all the points it would take to define a really complex curve: splines and meshes were the best we could do until now. Generative forms based on shapes coming from nature, and I mean this at a cellular level, don’t and cannot take advantage of this kind of data compression: they need to be digitally described in every single detail.
The calculus-based spline is a quintessential small-data tool. As a design figure, splines are space-age technology; they belong with the Beatles and flared jeans.
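The “small-data shorthand” is easy to demonstrate. Below is a plain cubic Bézier evaluated with De Casteljau’s algorithm – a deliberately simple stand-in for the splines of CAD kernels, with invented control points: eight stored numbers generate as many curve points as you care to ask for.

```python
def bezier_point(ctrl, t: float):
    """Evaluate a Bezier curve at parameter t by De Casteljau's repeated
    linear interpolation of the control polygon."""
    pts = list(ctrl)
    while len(pts) > 1:
        pts = [
            ((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
            for (x0, y0), (x1, y1) in zip(pts, pts[1:])
        ]
    return pts[0]

control = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]   # 8 stored numbers
curve = [bezier_point(control, i / 100) for i in range(101)]  # 202 derived numbers
print(curve[0], curve[-1])  # (0.0, 0.0) (4.0, 0.0) — endpoints reproduced exactly
```

A generative, cell-by-cell form has no such compact generator: every point must be stored, which is exactly why big data makes the spline look dated.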
If we imagine a future in which everything goes hand in hand, from design to production through assembly, it is indeed possible to imagine an architecture looking more and more like Michael Hansmeyer and Benjamin Dillenburger’s digital grotto.
2. Bring decoration back
Another interesting consequence of these reasonings becomes immediately obvious if we take a closer look at the matter of producing variations at a lower cost. The whole theory of ornament as a supplement, developed under Taylorism and reinforced by opposition even during revivals of ornament such as art nouveau, is based on concepts related to mass production and the cost of labor. Decoration is useless and expensive. But what if it were neither of those things? When working with subtractive technologies, like CNC machines, the advantage of simpler shapes lies in both time and material. The same goes for the molds and presses of the previous industrial age. But what about additive technologies like 3d printing? What if decoration becomes not only effortless but even cost and performance friendly? Are we to expect richer and richer forms?
The installation “Making of Incidental Space” was part of the Swiss contribution to the XV Biennale di Architettura in Venice. It was designed by the architect Christian Kerez together with the curator Sandra Oehy.
«Neither designed nor scripted, Kerez’s walk-in grotto was the high-resolution, calligraphic transcription, 42 times enlarged, of a cavity originally produced by a random accident inside a container the size of a shoebox.»
This overload of data, of detail, of definition, could manifest itself in lots of ways. One of them is the perfect mimicking of nature’s accidents, as happened in Kerez’s grotto. But who knows what else the future can offer us.
3. Towards Collective Creation?
The last and perhaps most provocative thought this book offers concerns the collaborative aspects of all this big-data and computational talk. It was a belief of the first digital turn, alongside the idea of mass customization, that design would become collaborative, that the client and end consumer would participate actively, if not in the shaping of the project, at least in the fine-tuning and ranking of its possible variations. This, to put it candidly, didn’t happen. Carpo calls this “The Participatory Turn That Never Was”, and indeed it seems the most stretched part of the predictions. In a Postface dated 2016, the author admits that «the zero marginal cost society is unlikely to ever become an enticing prospect for the design professions, for two different reasons».
The first of them is rather obvious: the participative scheme is based on the assumption that trial and error can be performed ad libitum. Land and building materials, however, are not renewable, and mistakes in that field can be costly.
The second reason is provocative: designers’ interest in self-preservation resists these changes, as professionals state that they cannot be replaced by a machine and that their expertise has a unique value that justifies its cost. According to Carpo, «this is where digital technologies have already started to prove them wrong». Authorship is threatened by crowdsourcing, and artificial intelligence forces us to think in a different way, following a different logic. We should embrace the aid of digital tools rather than imitate their methods, keeping for ourselves only what we do best and leaving all the heavy lifting to the machines. The shadow that lurks is not machines overthrowing us but, as it often happens, us doing that to ourselves.
«In all aspects of contemporary culture, and most remarkably in economics and politics, theories today are universally reviled».
If we don’t remember that everyone should stick to their own trade, that the ability to contribute doesn’t necessarily mean you have something of value to say, that Wikipedia is in fact funded by the real jobs that put food on the table for all those competent editors who try to keep it clean for free, if we mistake collaboration for demagogy, we will be doomed. And, in contrast to computational simulations, «the next atomic blast in physical reality may not allow for a retrial».
Mentioned Projects: top 8
To close our reasoning around Mario Carpo’s The Second Digital Turn, which kept us company each Monday for the last two weeks (here and here), we give you a selection of projects mentioned in the book. It is significant that many of them are almost 5 years old: the book itself is a collection of older writings and contains a “postface” dated 2016. I selected 8 that were particularly significant in their time or bear particular relevance to the topics brought to attention in the book.
1. The 3d printed Proto House 1.0 by Softkill Design
2. ICD/ITKE Research Pavilions by Achim Menges
The book features a couple of them. The first one is the famous “spider” from 2012. Developed by Achim Menges within the ICD Institute for Computational Design, alongside prof. Jan Knippers from the ITKE Institute of Building Structures and Structural Design, it was produced through the robotic winding of a carbon/glass fiber structure. More details and pictures can be found at this page.
The second one is from 2014–15, same team, and relies on the sensor-driven real-time robot control for a cyber-physical fiber placement system.
3. PolyBrick by Jenny E. Sabin
A project developed between 2015 and 2016, therefore slightly more recent than the others, it’s a 3d printed high-fire clay body and was developed with Martin Miller, Nicholas Cassab, Jingyang Liu Leo, and David Rosenwasser. Prototypes can be seen on the designer’s website, alongside a video.
4. Heydar Aliyev Centre by Zaha Hadid Architects
You can’t have a book on parametric architecture without a project by Zaha Hadid. This time, Carpo picks this building in Baku, Azerbaijan, to make a point about splines and discreteness in relation to the finite element analysis method. If you’re curious about how to develop an adaptive component in Revit that gives you back the radius of curvature… well, ok, that’s a different story, but stay with us. Meanwhile, enjoy this video from Dezeen on the project.
5. EZCT Architecture & Design Research, Studies on Optimization: Computational Chair Design Using Genetic Algorithms
Before Autodesk and their nordic-looking algorithm-generated chair, there were Philippe Morel, Hatem Hamda and Marc Schoenauer from EZCT Architecture in 2004. The version featured in the book is the “T1-M 860”, developed through 860 iterations of 100 finite element analyses each, for a grand total of 86,000 tries performed by the algorithm. You can find a pamphlet on the research here.
6. BLOOM by Alisa Andrasek and Jose Sanchez
A crowdsourced garden, urban toy, and social game developed for the London Olympics. The story was featured in almost all prominent architecture and design magazines, including ArchDaily and Designboom, the latter also featuring a Rhino and Grasshopper screenshot to clear any doubt about how this was developed.
7. Plantolith by Marjan Colletti
A project from 2013, it takes its name from the idea that plants and monoliths are somehow opposites: the former grow and are complex, while the latter are static and monolithic. The result is 250 kg of 3d-printed silica sand.
«In geometric terms, plants and monoliths stand at the opposite sides of the spectrum. The first are growing, complex, multi-layered and convoluted systems, whilst the latter are static, homogeneous, heavy objects. Digital modelling techniques and 3D printing technologies allow the hybridization of the two.»
8. Quaquaversal Centrepiece by Iris Van Herpen
The famous ready-to-wear collection for Spring–Summer 2016, performed at the Musée d’Histoire de la Médecine in Paris on October 8th, 2015, where three robotic arms from REX|LAB, University of Innsbruck, dressed Gwendoline Christie in 3d print.