Conversations about data and architecture today seem split between naive euphoria and cynical paranoia. The former often stems from an enthusiastic embrace of the increasingly pervasive myth of parametric “optimization”: the idea that the ideal solution to any given design problem can be achieved by simply converting the appropriate architectural values (structural, programmatic, or formal) into computational variables and running them through the right algorithm. Believers in this approach are inspired by the extraordinary powers of prediction evinced by analytics in other fields; if algorithms tell us everything from what stocks to buy to what tomorrow’s weather will be like, why can’t they tell us what our buildings should look like, too? There is no question that optimization equips architects with a great sales pitch—it reinforces the idea that the “best” design will automatically evolve from the proper code. But it also places blind faith in computational analysis and digital design tools, assuming that architectural intangibles like inhabitation or experience can be directly translated into data points.
The market-driven evangelism of Building Information Modeling (BIM) is an equally optimistic response to new technology. Unlike algorithmic designers, who equate architectural design with data analysis by seeking to transform data into architectural form, BIM’s proponents want to reimagine architecture itself as data by doubling the physical world in the digital realm. BIM’s ultimate goal is as quixotic, and perhaps ultimately as dubious, as Borges’s famous map at the scale of the territory. It is a digital simulation of architecture so detailed and exact—including each nut and bolt, every tool path, and all possible air flows and sun angles—that it would radically streamline the processes of design and construction, allowing a building to be built entirely in the computer before a penny is spent or a finger is lifted in the real world. Given the trillions of dollars spent annually on construction in the US alone, there is tremendous pressure to implement any technology that increases efficiency. Yet, while BIM has already unquestionably transformed both the architectural profession and the construction industry, the primary effect of these technologies has been in prosaic areas such as workflow organization and project management. Far more troubling, and much more radical, is the underlying assumption BIM shares with parametrics: that all dimensions of architecture are objectively quantifiable.
The notion of architecture as a data system also drives more fearful responses to technology’s impact on architecture, which often revolve around the increasingly active role that architecture itself plays in gathering and analyzing data. This focus seems particularly urgent after the last Venice Architecture Biennale. Many of the Elements on view showcased the kind of “smart” architecture that is already a reality—floors that track footsteps and offer real-time wayfinding updates, toilets that monitor vital signs and even diagnose illnesses, walls that adapt to changing light levels, and environmental conditioning systems that monitor patterns of use and adjust themselves. Such technologies are often presented in the benevolent, even emancipatory, terms of flexibility and interaction. But given that good old-fashioned “dumb” architecture is already notoriously effective as a mechanism of control, it is not surprising that these technologies have elicited concerns that in the era of big data, targeted advertising, and NSA surveillance, “smart” architecture will serve as yet another vehicle for profiling and surveillance—extending the obsessive monitoring and data-mining that already tracks our every move online into the physical world.
These concerns, while not unreasonable, ultimately suffer from the same limitations as the overly optimistic embrace of BIM technology and computational design. All give technology too much credence, grant data too much centrality, and underestimate the richness and complexity of architecture itself. Computational analysis is extraordinarily effective in solving certain kinds of problems—many of which are undeniably useful in the field of architecture, from simulating a material’s structural performance under seismic stress to predicting traffic patterns in an urban development. But in the end, design is not a matter of prediction or simulation, precisely because, at its best, it is the process of arriving at something unprecedented and unexpected, something that transforms or transcends existing expectations and prior ideas. Nor is architecture reducible to the discrete data points that drive most computational approaches. Considered in the broadest sense, not just as a physical building but as a cultural phenomenon, architecture is inexorably analog—comprising an infinitely complex range of qualities as ineffable as the intentions of the designer and the interactions of multiple publics.
This critique may come as a surprise from Formlessfinder, an office named in part after a type of search engine. Although we have organized our office as a finder, there is a crucial distinction between adopting the algorithm as a design tool and using the search engine as a model for a broader methodology. When we set up our office, the search engine was a natural point of reference because it is absolutely formless—not only in the superficial sense that anything digital is formless insofar as it is immaterial, but in the way it reconfigures traditional organizational structures.
The field of architecture has long been organized around strict hierarchies, with design itself considered to be a rigorously linear process. This is nowhere more obvious than in the professional structure of design services (where the architect sells his or her services in a fixed progression from concept to schematics to construction documentation), and in the ways in which certain outputs are prioritized over others (orthographic projections, for example, were historically considered the architect’s primary deliverable, a status now occupied by renderings, and increasingly by animations). Ironically, the adoption of new technologies has done little to change these fundamental disciplinary structures.
In contrast, the finder has the potential to be radically non-hierarchical. It doesn’t necessarily valorize one category of result over another—whether an image, a video, a drawing, a text, or a song, information is information, and a file is a file. Perhaps even more importantly, a finder eliminates the need for organizational hierarchies. In a traditional system of information, say a library or an archive, material is findable (and useful) only if it remains in a fixed relationship to every other element in the organization. But if you have a good search engine, you don’t have to bother with an archive, because with enough processing power a user can work with information in a raw, disorganized, and essentially formless state. In other words, any particular piece of information can be retrieved from the most hopeless jumble; this is how search engines have rendered the fluid chaos of the internet not just navigable, but also useful, and even productive. Of course, as with any structure that controls the flow of information, hegemony is a problem—but the dominance of a handful of advertising-fueled mega-corporations over today’s internet should not necessarily undermine the fundamental potential of the search engine.
In a modest way, the software we call the Finder reveals some of this potential. Continually under development, the Finder functions as something between an in-house, open-source wiki, an architect’s data and graphic standards, a product catalog and materials database, and a visualizer. Crucially, however, it is not a design tool; it is not meant to deliver singular solutions or carry out uni-directional operations. Instead, it allows us to incorporate more, and more varied, information and material into our working process, continually pushing us to reevaluate our ideas and question our assumptions. For example, Tent Pile, our entry pavilion for the 2013 Design Miami fair, was produced through a strange collision of truss, tent, wall, and pile. This combinatory logic resulted from the jarring simultaneities and unforeseen interconnections revealed within the Finder, where we were able to compile (among many other things) found images of piles and infrastructural landscape projects, our own research into angles of repose, and videos of earth-moving equipment in operation. Our goal is to leverage the finder’s egalitarian spirit to expand the scope of our field, creating an architecture of surprising juxtapositions and scrambled inputs and outputs—an architecture in which data and material alike can be used in a raw, fluid state, and where a jumble of information can be just as handy as a pile of sand. This is the opposite of optimization; rather than narrowing design down to a single solution, we hope to open it up, finding those moments when architecture exceeds any given frame or system, where it is manifestly formless.
Formlessfinder was created by Garrett Ricciardi and Julian Rose and exists as the nexus of their ongoing collaboration. The studio received the 2012 AIA NY New Practices award and a 2012 National Endowment for the Arts grant, and was selected as a finalist for the MoMA PS1 Young Architects Program in 2011. Their design work, ranging from residential additions to public pavilions, has been exhibited at institutions such as The Museum of Modern Art in New York, the MAXXI in Rome, the Storefront for Art and Architecture, and Design Miami, and featured in publications including Architectural Record, Domus, Surface, Metropolis, and W Magazine. Formlessfinder recently published the book Formless Manifesto with Lars Müller and Storefront for Art and Architecture.