In November 2008, in the immediate aftermath of what has largely been labeled the “economic crisis” resulting from the collapse of the sub-prime mortgage market, the then chairman of IBM, Sam Palmisano, delivered a speech at the Council on Foreign Relations in New York City. The Council, one of the foremost think tanks in the United States, comprises senior figures in government and the intelligence community, including the CIA, as well as business leaders, financiers, lawyers and senior media figures. However, Palmisano was not there to discuss the fate of the economy. He was there to introduce his corporation’s vision of the future. In glowing terms, Palmisano laid out a future of fiber optic cables, high-bandwidth infrastructure, seamless supply chain and logistical capacity, clean environments and eternal economic growth through a new discourse of “smartness.” IBM, he argued, would lead the world to the next frontier, a network beyond social networks and mere Twitter chats; a future in which humans and machines are integrated into a seamless “Internet of Things” capable of generating data to make decisions, organize production and labor and enhance marketing. An “Internet of Things” with the ability to precipitate greater democracy and prosperity, and, perhaps most importantly, to guarantee the very existence and survival of the human species:1 bandwidth would come to mean, quite literally, survival.
What is most remarkable about this emerging discourse is the argument that corporations will be able to facilitate a new level of complexity and global integration by “infusing intelligence into decision making.”2 One must immediately ask what forms of instrumentation will produce this “decision making?” What methods, techniques and processes are being suggested by these global corporate entities? The so-called “smart mandate” has become perhaps one of the governing notions of global polity—creating novel infrastructures, methods and instruments, which now underpin design practice across many fields and have come to organize environment, economy, energy, supply chains, food and medicine distribution, and of course, security.
So how has “smartness” become a technology? In order to contemplate this question and contemporary concerns over “smartness,” as well as rethink the very nature of “instrument,” one must examine how reason was reformulated into logical networks in the middle of the twentieth century.
THE LOGICAL CALCULUS OF THE NERVOUS NET
One place to start is with the reconceptualization of memory and intelligence introduced by the sciences of communication and control that emerged from World War II. Within these sciences is a genealogy of methods, tools and practices that have also been said to underpin the “smart” world. Cybernetics offers one common way to rethink instrumentation and method in architecture as they relate to computing. As is well documented, cybernetics emerged at the Massachusetts Institute of Technology (MIT) from work at the Radiation Lab on anti-aircraft defense and servo-mechanisms during the Second World War. The MIT mathematician Norbert Wiener, working with the MIT-trained electrical engineer Julian Bigelow and the physiologist Arturo Rosenblueth, reformulated the problem of shooting down planes in terms of communication: between an airplane pilot and the anti-aircraft gun. Rather than simply attempting to model the future location of the plane, they assumed there was a feedback loop between the two entities. They postulated that under stress airplane pilots would act repetitively and therefore display algorithmic behaviors both analogous to servo-mechanisms and amenable to mathematical modeling and analysis. This understanding demanded that all entities be “black boxed” so as to be studied behaviorally and statistically.3 Cybernetics has no language for representation or presence. Entities are not described ontologically, but rather pragmatically. Cybernetics is a discourse for anticipating future behaviors from past data.
Inspired by the idea that machines and minds could be understood together through the language of logic and mathematics, the psychiatrist Warren McCulloch and the logician Walter Pitts attempted to demonstrate the possibility of a logic emerging from neurons at the University of Illinois in 1943. The pair would later move to MIT in 1952 at Norbert Wiener’s behest.4 Their article “A Logical Calculus of the Ideas Immanent in Nervous Activity,” which appeared in the Bulletin of Mathematical Biophysics, is now one of the most commonly referenced pieces in cognitive science, philosophy and computer science. The article provided a series of novel moves by which neurons could be made equivalent to logic gates, and therefore “thought” made materially realizable from the physiological actions of the brain. These moves reformulated psychology, but they also demonstrated a broader transformation in the constitution of evidence and truth in science.
This model of the neural net has two characteristics that are critical in producing contemporary ideas of rationality, and in transforming ideas of methodology.5 The first is that every neuron firing has a “semiotic character;” that is, it may be rendered mathematically as a proposition.6 To support this claim, Pitts and McCulloch imagined that each neuron operates on an “all or nothing” principle when firing electrical impulses over synaptic separations. The pair understood neurons’ action potentials and delays to be equivalent to the ability to effect a discrete decision. The effect affirms or denies a fact (or activation). From this, neurons can be thought of as signs (true/false), and nets as semiotic situations or communication structures (just like the structured scenarios of communication theory).7
This discrete decision (true or false, activate or not) also made neurons equivalent to logical propositions and Turing machines.
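The equivalence between the “all or nothing” neuron and a logic gate can be sketched in a few lines of code. This is a hypothetical illustration, not the notation of the 1943 paper: the function name, the veto rule for inhibitory inputs, and the particular thresholds are assumptions chosen to make the idea concrete.

```python
def mp_neuron(excitatory, inhibitory, threshold):
    """A McCulloch-Pitts-style threshold unit: fire (True) only if
    no inhibitory input is active and enough excitatory inputs are."""
    if any(inhibitory):              # a single inhibitory impulse vetoes firing
        return False
    return sum(excitatory) >= threshold   # "all or nothing" decision

# Different thresholds alone turn the same unit into different logic gates:
AND = lambda a, b: mp_neuron([a, b], [], threshold=2)
OR  = lambda a, b: mp_neuron([a, b], [], threshold=1)
NOT = lambda a:    mp_neuron([True], [a], threshold=1)
```

In this sketch the neuron’s firing is exactly a true/false proposition, which is why nets of such units can be read as circuits of logical statements.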
The second element of the model that is important here is the adoption of a strictly probabilistic and predictive temporality. Neuronal nets are determinate in terms of the future (they are predictive), but indeterminate in terms of the past.
Given a net in a particular time state (T), one can predict only the future action of the net (T+1), not the past action. From within the net, it is also impossible to determine which neuron fired to excite the current situation.
As an example, McCulloch offered the model of a circular memory neuron [Fig.2] activating itself with its own electrical impulses. What results as a conscious experience of memory is not the recollection of the activation of the neuron, but merely the awareness that it was activated in the past, at an indeterminate time. The firing of a signal, or the suppression of one, can only be known as a “true” or “false” declaration—true, there was an impulse, or false, there was no firing—not as an interpretative statement of context or meaning that might motivate such firing.
Within neural nets it is impossible to know at any moment which neuron sent the message, when the message was sent, or whether the message is the result of a new stimulus or merely a misfire. In this model the net cannot determine with any certitude whether a stimulus comes from within or from without the circuit, or whether it is a fresh input or simply a recycled “memory.” Put another way, from within a net (or network) the boundary between perception and cognition, the separation between interiority and exteriority and the organization of causal time are indistinguishable. Rather than consider this a disadvantage, McCulloch and Pitts brilliantly saw this as an advantage for the capacity of a neural net.
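This asymmetry between a determinate future and an indeterminate past can be made concrete with a toy net. The sketch below is illustrative rather than McCulloch and Pitts’s formalism: the two-neuron state, the self-looping “reverberating” cell, and the update rule are assumptions invented for the demonstration.

```python
def step(state):
    """Advance a two-neuron net one time step (T -> T+1)."""
    a, b = state
    # b loops back on itself (a reverberating "memory" cell);
    # a fires if either neuron fired at the previous moment.
    return (a or b, b)

# Forward in time the net is determinate: each state has one successor.
# Backward it is not: distinct pasts converge on the same present.
present = (True, True)
pasts = [s for s in [(x, y) for x in (False, True) for y in (False, True)]
         if step(s) == present]
# From inside the net at T+1, nothing distinguishes which of these
# past states actually produced the current firing.
```

Once the memory cell fires, the net “knows” only that an activation happened at some point, never when or why, which is the amnesia the essay describes.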
“Logical Calculus” ends on a triumphant note. McCulloch and Pitts announce an aspiration for a subjective science: “thus our knowledge of the world, including ourselves, is incomplete as to space and indefinite as to time. This ignorance, implicit in all our brains, is the counterpart of the abstraction which renders our knowledge useful.”8 If subjectivity had long been the site of inquiry for the human sciences, now, perhaps, it might—in its very lack of transparency to itself or its incompleteness—become an explicit technology. A technology to be programmed as machine learning; eventually, built into our social networks and financial systems.
McCulloch went even further to propose that logic was not reasonable, but actually, psychotic. He proudly announced that the nature of computing is analogous to a psychotic mind: “what we thought we were doing (and I think we succeeded fairly well) was treating the brain as a Turing machine; that is, as a device which could perform the kind of functions which a brain must perform if it is only to go wrong and have a psychosis.”9
Speaking at a conference on circuits and brains in Pasadena, California, to a room full of the most prominent mathematicians, psychologists and physiologists of the day, McCulloch sought to provoke his respectable audience by offering them a seemingly counterintuitive analogy. Finite state automata—the models of calculative and computational reason, the templates for programming, the very seats of repetition, reliability, mechanical, logical and anticipatable behavior—might be just as “psychotic” as brains can sometimes be.
These statements should not be thought of in terms of human subjectivity or psychology. While trained as a psychiatrist, McCulloch was not discussing psychosis in relation to patients in mental clinics. Rather, he was responding to a famous paper delivered by the mathematician John von Neumann on logical automata.10 McCulloch was not intending to argue about the essential characteristics, the ontology, of machines or minds. He recognized that computers were not the same as organic brains. The question of equivalence was not at stake.
What was at stake was a set of methodologies and practices in the human sciences, statistics, and engineering: the epistemology, if you will, that might build new machines—whether organic or mechanical. McCulloch and von Neumann aimed to develop a new form of logic, an epistemology they labeled “psychotic” and “rational.” Such an epistemology, they argued, might make processes usually assigned to analytic functions of the brain, perhaps associated with consciousness and psychology, amenable to technical replication.
McCulloch and Pitts were explicit that their work was a Gedankenexperiment: a thought experiment that produces a way of doing things, a methodological machine. They discussed this logical reasoning as an experiment, a machine like the one described by Deleuze and Guattari in A Thousand Plateaus—a machine that does not describe a reality, but rather helps scientists and engineers envision new types of brains and machines, as well as challenge pre-existing knowledge of mental processes. Cheerily, McCulloch admitted that this was an enormous “reduction” of the actual operations of the neurons:11 “but one point must be made clear: neither of us conceives the formal equivalence to be a factual explanation. Per contra!” At no point should one assume that neural nets are an exact description of a “real” brain.12 Nets are not representations; they are methodological models and processes.13
Reductive or not, the pair had established that the capacity for logic and sophisticated problem solving could emerge from small physiological units such as neurons linked up in circuits. In doing so, and by exploiting the amnesia of these circuits, McCulloch and Pitts were able to not only make neural nets analogous to communication channels, but also shift the dominant terms for dealing with human psychology and consciousness to communication, cognition and capacities. Their conception of the neural net informs a change in the attitude towards psychological processes that makes visible an epistemological transformation in what constituted truth, reason and evidence in science.
This new epistemology rests on three seemingly unimportant points that, when looked at together, join the history of logic, engineering practices and the human sciences into a new assemblage. The first is that logic is now both material and behavioral, and agents can be an- or non-ontologically defined or “black boxed.” Second, cybernetic attitudes rely on the repression of all concerns over documentation, indexicality, archive, learning and historical temporality. And third, the temporality of the net is pre-emptive. It always operates in the future perfect tense, without necessarily defined endpoints or contexts.14 Nets are about T+1. The past is indeterminate. McCulloch regularly dismissed the actual context or specific stimulus that incited trauma in patients or systems.15 Together, these points redefine rationality as both embodied and affective, and suggest that good science is not the production of certitude, but rather the account of chance and indeterminacy. For neural net researchers, the determining question was not how similar or different minds and machines are, but rather, as the anthropologist Joseph Dumit has said, “what difference does it make to be in one circuit or another?”16 According to Dumit, cybernetics does not ask what a mind or machine is, but instead asks what one could do with a mind (or a machine). What can one do with different types of networks or circuits?
Significantly, for us, McCulloch and Pitts inverted the question posed by the Entscheidungsproblem, or “decision problem,” answered in the negative by the Turing machine. Throughout the nineteenth and early twentieth centuries, an army of mathematicians and philosophers struggled to extend infinitely the limits of logical representation; the Turing machine stands as a negative proof in this effort, demonstrating that no general procedure can decide the validity of statements in first-order logic. However, McCulloch and Pitts proposed a different epistemology and frame.17 Accepting that many things are incomputable, McCulloch inverted Turing’s proposition. Instead of seeking an absolutely reasonable foundation for mathematical thought—attempted by other logicians and mathematicians including Gottlob Frege, Kurt Gödel, David Hilbert, Bertrand Russell, Alfred North Whitehead and Alan Turing—McCulloch and Pitts asked a different question. What if mental function could be demonstrated to emanate from the physiological actions of multitudes of logic gates? Instead of what could be proven, McCulloch and Pitts asked, what could be built? Similarly, rather than seeking the limits of calculation, the problem became about examining the possibilities for logical nets. What had been an absolute limit to mathematical logic became an extendable threshold for engineering. McCulloch implied that humans should accept our partial and incomplete perspectives, our inability to know ourselves, and consider this “psychosis” an “experimental epistemology.”18
These experiments were central in reformulating older ideas of agency and consciousness into agents. As numerous science historians have demonstrated, cybernetic and communicative concepts of mind were part of a broader reconceptualization of reason, psychology and consciousness;19 informing everything from finance and options trading equations, to the environmental psychology and urban planning programs headed by individuals like Kevin Lynch and Nicholas Negroponte (Media Lab), as well as MIT’s Architecture Machine Group, to the political science models of Karl Deutsch at Harvard and Herbert Simon’s “bounded rationality,” widely considered the foundation of contemporary finance. The post-war social sciences were repositories of techniques that transformed the question of political economy, value production and the organization of human desire and social relations into problems of circulation and communication by way of a new approach to modeling intelligence and agency.20
If this is true, then our financial instruments, markets, governments, organizations and machines are rational, affective, sensible and preemptive—not reasonable. To recognize the significance of this thinking in the present, it might help to contemplate Brian Massumi’s definition of “preemption.” Preemption, he argues, is not prevention; it is a different way of knowing the world. Prevention, he claims, “assumes an ability to assess threats empirically and identify their causes.” Preemption on the other hand is affective. It lacks representation. It is constant nervous anticipation at a neural if not molecular level, for a never fully articulated threat or future.21
Within ten years of the war, cyberneticians moved from working on anti-aircraft prediction to building systems without clear end points or goals—embracing an epistemology without final objectives or objectivity (though many practitioners have denied this). Even if a neuron can act definitively, nets, taken as systems, are probabilistic scenarios with multiple states and indefinite run times. In cognitive and early neuro-science, the forms of knowledge being espoused were framed in terms of experiment, never definitive conclusions. “Experimental epistemologies,” as McCulloch put it, came to mean ongoing experiments unconcerned with final facts.
These human and social scientists made operative the unknowable space between legibility and emergence, and turned it into a technological impulse to proliferate new tools of measurement, diagrams and interfaces. At the limits of this analysis is the possibility that emergence itself has been automated. The problem of how to act under conditions of uncertainty, or how to define a man or a machine, became instead a pragmatic mandate and a focus on process. Instead of asking what is a circuit, a neuron, or a market, human scientists and cyberneticians began asking, what do circuits do? How do agents act?22 They entangled calculation and life at the level of nervous networks, correlating the nervous system with the financial and political system.
Today, we continue to live in the legacy of this experimental epistemology. Performativity, affect and nervous networks are still the very instruments we use to architect our machines, environments and perhaps even ourselves.
- 1. From “A Smarter Planet,” speech by Sam Palmisano at the Council on Foreign Relations in New York City on November 6, 2008, accessed April 21, 2016, http://www.cfr.org/technology-and-foreign-policy/smarter-planet-next-leadership-agenda/p17696. ^
- 2. Ibid. ^
- 3. Peter Galison, “The Ontology of the Enemy: Norbert Wiener and the Cybernetic Vision,” Critical Inquiry 21 (1994): 228-66; Katherine Hayles, How We Became Post-Human (Chicago: University of Chicago Press, 1999). See also Arturo Rosenblueth, Norbert Wiener, Julian Bigelow, “Behavior, Purpose and Teleology,” Philosophy of Science 10, no. 1 (1943): 18-24. ^
- 4. Lily E. Kay, “From Logical Neurons to Poetic Embodiments of Mind: Warren McCulloch’s Project in Neuroscience,” Science in Context 14, no. 4 (2001): 591-4. ^
- 5. The model has been reviewed elsewhere; here I am briefly outlining the work with a focus on epistemology: Lily E. Kay, “From Logical Neurons to Poetic Embodiments of Mind: Warren McCulloch’s Project in Neuroscience,” Science in Context 14, no. 4 (2001); Tara Abraham, “(Physio)Logical Circuits: The Intellectual Origins of the McCulloch-Pitts Neural Networks,” Journal of the History of the Behavioral Sciences 38 (Winter 2002). ^
- 6. The logic used in the article was taken from Rudolf Carnap, with whom Pitts had worked. ^
- 7. Warren S. McCulloch and Walter Pitts, “A Logical Calculus of the Ideas Immanent in Nervous Activity,” in Embodiments of Mind (Cambridge: MIT Press, 1965), 21-24. ^
- 8. McCulloch and Pitts, “A Logical Calculus of Ideas Immanent in Nervous Activity,” 35. ^
- 9. John von Neumann, “The General and Logical Theory of Automata,” in Papers of John von Neumann on Computing and Computer Theory, ed. William Aspray and Arthur Burks (Cambridge, MA: MIT Press and Tomash Publishers, 1948/1986), 422 (my emphasis). For more on automata theory see also: William Aspray, John von Neumann and the Origins of Modern Computing (Cambridge, MA: MIT Press, 1990). ^
- 10. Von Neumann, “The General and Logical Theory of Automata,” 391-431. ^
- 11. The pair had based their assumptions about how neurons work on what, by that time, was the dominantly accepted neural doctrine in neuro-physiology. Drawing on the research of the Spanish pathologist Ramón y Cajal—who first suggested in the 1890s that the neuron was the anatomical and functional unit of the nervous system, and who was largely responsible for the adoption of the neuronal doctrine as the basis of modern neuro-science—and on the work of Cajal’s student Lorente de Nó on action potentials, synaptic delays between neurons, and reverberating circuits, McCulloch and Pitts had the neurological armory to begin thinking of neurons as logic gates. Santiago Ramón y Cajal, Texture of the Nervous System of Man and the Vertebrates (Wien and New York: Springer, 1999); Warren McCulloch, Embodiments of Mind, 1988 edition (Cambridge: MIT Press, 1965). ^
- 12. McCulloch and Pitts, “A Logical Calculus of Ideas Immanent in Nervous Activity,” 22. ^
- 13. Joseph Dumit, “Circuits in the Mind”, unpublished manuscript, April 2007. ^
- 14. See also: Orit Halpern, “Dreams for Our Perceptual Present: Temporality, Storage, and Interactivity in Cybernetics,” Configurations 13, no. 2 (Spring 2005). ^
- 15. See, for example, interviews conducted in Britain on the treatment of soldiers returning from World War II, in which McCulloch steadfastly spoke against narrative therapy and proactively promoted drug treatment to rewire circuits in the brain: British Broadcasting Corporation, “Physical Treatments of Mental Diseases” (United Kingdom: BBC, 1953); and the papers of Warren McCulloch, B:M 139: Series III, American Philosophical Society, Philadelphia, PA. ^
- 16. Joseph Dumit, “Circuits in the Mind”, 7. ^
- 17. Compare the following: Alan Turing, “On Computable Numbers, with an Application to the Entscheidungsproblem,” Proceedings of the London Mathematical Society s2-42, no. 1 (1936); Bertrand Russell, The Principles of Mathematics (London; New York: Routledge, 2009); From Frege to Wittgenstein: Perspectives on Early Analytic Philosophy (Oxford: Oxford University Press, 2002); Rebecca Goldstein, Incompleteness: The Proof and Paradox of Kurt Gödel (New York: W.W. Norton, 2005); Kurt Gödel, On Formally Undecidable Propositions of Principia Mathematica and Related Systems (New York: Basic Books, 1962); Alan Mathison Turing, The Essential Turing: Seminal Writings in Computing, Logic, Philosophy, Artificial Intelligence, and Artificial Life, Plus the Secrets of Enigma (Oxford: Clarendon Press; New York: Oxford University Press, 2004); Alan Mathison Turing, A.M. Turing’s ACE Report of 1946 and Other Papers (Cambridge, MA: MIT Press; Los Angeles: Tomash Publishers, 1986). ^
- 18. McCulloch, Embodiments of Mind, 359. ^
- 19. Paul Erickson, Judy L. Klein, Lorraine Daston, Rebecca Lemov, Thomas Sturm, and Michael D. Gordin, How Reason Almost Lost Its Mind: the Strange Career of Cold War Rationality, (University of Chicago Press, 2013). ^
- 20. See: Orit Halpern, Beautiful Data: A History of Vision and Reason (Durham: Duke University Press, forthcoming Fall 2014), chapter 3. See also: Herbert Simon, “A Behavioral Model of Rational Choice,” The Quarterly Journal of Economics 69, no. 1 (1955): 99-118, at 101. See also: Hunter Crowther-Heyck, Herbert A. Simon: The Bounds of Reason in Modern America (Baltimore: Johns Hopkins University Press, 2005); Economics, Bounded Rationality and the Cognitive Revolution (Northampton, MA: E. Elgar Pub. Co., 1992). ^
- 21. Brian Massumi, “Potential Politics and the Primacy of Preemption,” in Theory & Event 10, no. 2 (2007): 4. ^
- 22. Joseph Dumit, “Circuits in the Mind.” ^
Orit Halpern is an Associate Professor in the Department of Sociology and Anthropology and a Strategic Hire in Interactive Design and Theory at Concordia University, Montréal. Her work bridges the histories of science, computing, and cybernetics with design and art practice. Her most recent book Beautiful Data: A History of Vision and Reason since 1945 (Duke Press 2015) is a genealogy of interactivity and our contemporary obsessions with “big” data and data visualization. Currently, she is working on a history and theory of “smartness”. She has also published and created works for a variety of venues including The Journal of Visual Culture, Public Culture, Configurations, C-theory, and ZKM in Karlsruhe, Germany. www.orithalpern.net