hardware ¦ matter

by Cameron Cummings

A critical consideration of the relationship between seemingly weightless computational processes and software and their material, physical counterparts, as the following texts begin to unpack, is important to the general discussion of computation's role in architectural theory and practice. The texts, each of which positions an alternative way of understanding the materiality of computation, hover between a philosophical binary that either promotes or laments the role of computation in the processes of architectural making. The two categories in question, termed techno-utopianism and techno-pessimism in Antoine Picon's 2014 essay Robots and Architecture: Experiments, Fiction, Epistemology, embody a significant and contentious opposition in contemporary architectural discussion: as its name suggests, techno-utopianism sees technical innovation and computational modes of thinking and making as crucial to the progression of architecture and design disciplines. Techno-pessimism, by contrast, critiques this school of thought and poses human agency as a counter-force to current computational architectural methods1. Regardless of which camp each of the following texts falls within, they share a mutual recognition that, contrary to prevailing understanding, invisible computational languages are critically engaged with and dependent upon the material bodies which construct them, or those which they produce.

Daniel Cardoso Llach's 2015 essay, Software Comes to Matter: Toward a Material History of Computational Design, provides a brief historical record of early computation machines vis-à-vis their material limitations. Although, he writes, it has been generally acknowledged that material formations are effects of data translations (explored, for example, as digital fabrication), the important material history of such ethereal languages has not been explored. It is through an understanding of the digital/material relationship, however, that one may locate a design theory of computation's material possibilities. Llach's essay is historically centered in Cold War-era America, where the development of computation and digital technologies was directly associated with both military and industrial worlds. This connection is important, as it is a potential origin of the focus on digital automation that computation research continues to hold today: "Software started to become both a vehicle for and an expression of a technical and conceptual reconfiguration of design, linked to the manipulation of materials, engineering efficiency, and militaristic control"2. In earlier computation machines, the connection between logic and material limitations was much more visible than it has since become; Llach identifies two historically significant iterations of digital machines that directly responded to their physical bodies: Joseph Marie Jacquard's programmable loom, developed at the turn of the 19th century, and the numerically controlled milling machine, developed in the 1950s. In both cases, Llach points out, the software is grounded in materiality: "In both the milling machine and the loom, materiality and physical constraints in the storage media determined both the kind of information stored and the range of material actions they were able to prescribe"3. These technologies would soon after be developed into and replaced by machines which increasingly concealed their materiality and automated their processes of making.

The concept of automation is important in the historical uncovering of design ideologies, and is pertinent to contemporary discussions of digital fabrication and architectural progress. When automated design becomes a research goal, what becomes of the role of the designer? As a subsequent text by Tim Ingold will discuss, the shifting of creative agency from human designer to automated machine, and the removal of the designer from the process of design and manufacturing, poses major theoretical questions of object-agency and material life4. It seems appropriate to compare Llach's historical research on the ambition of automation with Tim Ingold's 2010 essay, Bringing Things to Life: Creative Entanglements in a World of Materials, which is deeply concerned with the physical existence of materials and their processes. Despite avoiding the direct discussion of computation, Ingold positions himself against the ambitions of automation through software and computation that dominated the previous text: if the process of making (form-giving) is life5, how are forms without a process of making, or forms whose material production has been collapsed by digital fabrication machines, able to contain life-ness? Ingold's inclusion of the philosophers Gilles Deleuze, Félix Guattari, and Henri Lefebvre is notable, as they may act as theoretical markers for readers to better position Ingold's text in the discussion of automation in architecture and design. Specifically, the Marxist thinker Lefebvre famously advocated for an approach to architecture and city-building that results directly from human creation6. This heavily popularized anti-commodification, anti-capitalist understanding of urbanism is the antithesis of modes of automation in architectural thinking and practice. Further, if we follow Ingold's conceptual differentiation between objects and things, we may more fully understand the possibilities of digital automation in architecture: is architecture an object or a thing? Can the design and fabrication of objects be automated? Can the same be said of things? These questions intend to prod the concept of machine agency through an architectural understanding.

In Knight and Vardouli's introduction to the 2015 special issue of Design Studies, Computational Making, the attachment of the term making to concepts of action and process evokes the inherent relationship between making and human agency. As the authors explain, the texts that comprise Computational Making respond to the question of the relationship between computation, design, and making, and together synthesize contemporary advances and ambitions in this field. Making is a prominent focus of contemporary design discussions, and digital fabrication and computation are closely tied to these discussions7. As is easily grasped from this text, there is no obvious limit to the questions about how making and computation may come together.

Here, however, the interest is in making explored through the relationships between makers and their tools and technologies. The authors index characteristics which they believe outline an approach to making: such a process is dynamic, improvisational (indeterminate), contingent, and embodied8. Through this understanding of making, the question of the relationship between digital technologies and making expands into questions of technology's openness to indeterminacy and its flexibility toward the human body.

I believe that this concise idea presented by Knight and Vardouli offers an interesting response to many of the much broader questions posed throughout the Llach, Ingold, and Picon texts: computational methods are perhaps best understood as tools which depend on a human user. Mid-20th-century industrial and military manufacturing approached computation and technologically advanced modes of material production through their anti-human, capitalist potentials; the field of architecture and design may find its most exciting possibilities once removed from this history.

  1. Antoine Picon. "Robots and Architecture: Experiments, Fiction, Epistemology." Architectural Design 84, no. 3 (May 1, 2014): 56.
  2. Daniel Cardoso Llach. "Software Comes to Matter: Toward a Material History of Computational Design." Design Issues 31, no. 3 (Summer 2015): 41.
  3. Daniel Cardoso Llach. "Software Comes to Matter: Toward a Material History of Computational Design." Design Issues 31, no. 3 (Summer 2015): 43.
  4. Tim Ingold. "Bringing Things to Life: Creative Entanglements in a World of Materials." NCRM Working Paper. Realities / Morgan Centre, University of Manchester, 2010. 2.
  5. Tim Ingold. "Bringing Things to Life: Creative Entanglements in a World of Materials." NCRM Working Paper. Realities / Morgan Centre, University of Manchester, 2010. 3.
  6. Henri Lefebvre and Kanishka Goonewardena. Space, Difference, Everyday Life: Reading Henri Lefebvre. New York: Routledge, 2008.
  7. Terry Knight and Theodora Vardouli. "Computational Making." Design Studies Special Issue: Computational Making 41, part A (2015): 1-7.
  8. Terry Knight and Theodora Vardouli. "Computational Making." Design Studies Special Issue: Computational Making 41, part A (2015): 1-7.

behaviors ¦ ⌈inter⌉actions

by Michael Carosello

_Warren M. Brodey | Soft Architecture

In his text, Brodey discusses the true merit of so-called intelligent environments insofar as they are capable of reacting and adapting creatively in relation to their surroundings. Current efforts result only in environments that are inherently dumb and lacking in creative flexibility – capable only of repetitive actions and hindering man's ability to evolve. Cybernetic thinking was ushered in with the writing of "Behavior, Purpose and Teleology" by Rosenblueth, Wiener, and Bigelow, who created a hierarchy of behaviours of an object in relation to its environment. These sets of characteristics ultimately lead to the implication of evolutionary behaviour and a redefinition of environment: "given any object relatively abstracted from its surrounding for study, the behaviouristic approach consists in the examination of the output of the object and of the relations of this output to the input" (Brodey, 1967).

Rosenblueth et al. also encourage a new method of defining an object and its surroundings by considering both aspects to inform each other, creating an active feedback loop that better implies a constant state of evolution. Brodey suggests that this method of feedback and prediction is key to teaching machines to ascend the established behavioural hierarchy, in the same way a human being must be taught in order to become intelligent. It is also important to differentiate between complicated and complex environments: complicated environments are incapable of adaptation and rely solely on redundancies and back-up components, therefore remaining stupid. Man cannot evolve in an environment incapable of its own evolution, which ideally must grow alongside the user. This is the soft environment.

  • Is such an environment truly attainable? If so, how immersive or extensive must it be? Does it potentially allow for the manipulation of the evolution of both the environment itself and of those inhabiting it? Consider the concept of induced evolution/feedback introduced by an external force.

_Molly Steenson | A Theory of Architecture Machines

In this section, Steenson discusses Nicholas Negroponte's ideas of AI and interaction through a theory of architecture machines. Negroponte drew from cybernetics and artificial intelligence in order to develop his theories into accessible and interactive interfaces, with the desire that they manifest as an integrated and fully immersive intelligent environment. This technologically symbiotic relationship between man and machine could be described only as a partnership, doing away with the idea of computers as mere slaves to human beings. Instead, both are mutually engaged in the creative process, bridging the gap between the human and the computational. An open dialogue exists between the two, creating the potential for ideas unrealizable by either individually.

An architecture machine would rely heavily on the use of heuristics – defined as rules of thumb and provisional techniques for problem solving – in order to develop its models of human-computer interaction. These techniques would allow the machine to not only learn how to learn, but also learn the desire to learn, maturing in a way similar to humans. Negroponte envisioned architecture machines as “a continued heuristic modelling of models that learned from one another and that reflected the adaptation of human, machine, and environment” (Steenson, 2011). Building upon human-computer interfaces, the term takes on a new application, with Negroponte picking up on the definition regarding an apparatus designed to connect two scientific instruments so that they can be operated jointly. He felt that the interface acted as a feedback loop between an object to be sensed and a tool with which to sense it.

Working with sensory qualities is key to the architectural process due to the possibilities for creating interactive environments, requiring architecture machines to act as sophisticated sensors. Negroponte believes that computers require a variety of sensory channels in order to accurately experience and manipulate the world, yet describes them as the most sensory-deprived “intellectual engines” in existence. Architecture machine interfaces would instead need to be sensory to an almost human degree – as our bodies are vital to our learning process – allowing them to just as accurately and thoroughly engage with the environment. This would require these machines to be made in the image of human beings, going so far as to potentially inhabit a body similar to our own. Finally, architecture machines must work at an immersive scale, acting as an intelligent environment. Relating back to Brodey’s beliefs, the physical environment should be experienced as an evolving organism capable of flexible creativity rather than its current state of being automated and stupidly complicated.

  • How human must a computer be in order to become sensory? To what extent can a machine become sensory?
  • Can a non-human intelligence truly be capable of tasks beyond those for which it was designed? Can an algorithm ever accurately replicate the subjectivity of human nature?

_Andrew Pickering | Cybernetics and the Mangle

Pickering begins his article by posing the question: why have historians turned to cybernetics? A key reason is the appeal of departing from the linear academic approaches with which we are familiar in the natural and social sciences. Cybernetics acts as a "shift from epistemology to ontology, from representation to performativity, agency and emergence" (Pickering 2002). The first chapter, and the one of interest here, discusses the work of Ross Ashby and, more specifically, his homeostat. As a current was passed through the device, a rotating needle acted as both an indicator and a controller of the flow which, once amplified, became the output. This output was then used as the input to three other units, interconnected through electrical feedback loops. As a result, the system could exist in one of two states: stability – during which the needles on all units were at relative rest in middle ranges – or instability – at which time the needles were driven to and stuck at their extreme positions. This duality was of particular interest to Ashby, who built the devices to randomly reconfigure themselves if the needle departed too far from a central, resting position. The system would then recalibrate and once again reach either stability or instability; if it remained unstable, the process would repeat until a stable configuration was reached. The inevitability of reaching a stable state was key to the design of the homeostat, an example of what Ashby referred to as an ultrastable system.

What was unique about this system was its apparent liveliness, otherwise absent in the machines of its time. No other machine was capable of random reconfiguration as a response to inputs. This gave the homeostat a sort of agency, allowing it to act upon internal factors rather than previously specified external factors. The purpose was to reiterate the perception of the world as a lively environment instead of one that is static and lends itself exclusively to representation. One could never tell from the outside how the homeostat would reconfigure itself next. It was made to be an electromechanical proto-brain: a model of the brain as an adaptive controller of behaviour, albeit one which would be inefficient in more complex scenarios with many variables to consider.
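The ultrastable mechanism described above – outputs fed back as inputs, with random reconfiguration whenever a needle leaves its middle range – can be sketched as a toy simulation. This is a loose illustration, not a model of Ashby's actual circuit: the unit count, coupling weights, damping factor, and limit below are invented parameters.

```python
import random

def simulate_homeostat(n_units=4, steps=200, limit=1.0, seed=0):
    """Toy sketch of an ultrastable system: each unit's needle is driven
    by a weighted sum of the other units' needles; a unit whose needle
    leaves the middle range randomly re-draws its coupling weights."""
    rng = random.Random(seed)
    # random electrical couplings between units (invented parameters)
    weights = [[rng.uniform(-1, 1) for _ in range(n_units)]
               for _ in range(n_units)]
    needles = [rng.uniform(-0.1, 0.1) for _ in range(n_units)]
    reconfigurations = 0
    for _ in range(steps):
        # feedback loop: every unit's output becomes the others' input
        needles = [
            0.5 * needles[i] + 0.5 * sum(weights[i][j] * needles[j]
                                         for j in range(n_units) if j != i)
            for i in range(n_units)
        ]
        for i in range(n_units):
            if abs(needles[i]) > limit:  # needle driven to an extreme
                # random reconfiguration and recalibration of this unit
                weights[i] = [rng.uniform(-1, 1) for _ in range(n_units)]
                needles[i] = rng.uniform(-0.1, 0.1)
                reconfigurations += 1
    stable = all(abs(v) <= limit for v in needles)
    return stable, reconfigurations
```

With a fixed seed the run is reproducible; varying `seed` shows the blind search settling after different numbers of reconfigurations, echoing the point that one could never tell from the outside how the device would reconfigure itself next.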

  • Pickering states that the attraction of scientists to cybernetics is the mystery involved in its deviation from linear or Newtonian practices. Does the cybernetic or linear approach seem to be better suited to the sciences?

_Lucy A. Suchman | The Problem of Human-Machine Communication

Suchman raises the question of the distinction between artifacts and intelligent others. Exploring the impact of computer-based artifacts on a child's concept of the difference between alive and not alive, or person and machine, it was found that children tended to attribute aliveness to objects on the basis of autonomous motion or reactivity. However, they would reserve the perception of humanity only for those showing signs of emotion, speech, or thought. The general view of computational artifacts thus becomes one of "almost alive", yet still distinct from human beings. The challenge that then arises is that of the separation between the physical and the social: the things that one designs, builds, and uses versus the things with which one communicates. Here, too, the definition of interaction now extends to describe what goes on between people and machines.

Historically, automata – devices capable of self-regulation in ways commonly associated with living beings – were tied solely to simulated animal forms. Human-like automata have long been present in the form of statues with mechanisms allowing for movement or speech, and have nevertheless been imbued with minds and souls by humans, even those familiar with those internal mechanisms. Julien Offray de La Mettrie argued that vitality relies on the organization of physical structure, and that the mind is best viewed as an implementable abstract structure. Two points then arise: first, that given the right form, a machine with finite states can produce infinite behaviours; and second, that what is essential to intelligence can be abstracted and embodied in an unknown range of alternate forms. The result is that reasoning and intelligence are not limited to human cognition and can be implemented in intelligent artifacts. The argument arises that cognition is literally computational, as there is no reason to differentiate between machines and people as "information processors" or "symbol manipulators". Intelligence becomes the manipulation of symbols, and the most successful automata, which process the largest amounts of data, are industrial robots made to perform routine tasks and repetitive assembly. Expert systems possess minimal capability for sensory input, often limited to a keyboard or a human operator. Industrial robots, by contrast, have relatively more developed sensory systems but are confined to specific tasks within extremely controlled environments. Still, the "intelligence" of these machines, when considering direct interaction with their surroundings, is less than that of a two-year-old child.

When considering human-computer interactions, the attribution of purpose to computers is due to their immediate reaction to actions performed by the user. This contrasts with previous systems, on which commands were queued and offered no immediate feedback. Modern advances in technology allow for more immediate interaction between user and computer, giving a sense of real-time control. The sociability of computer-based artifacts is then a result of controls and reactions becoming increasingly linguistic as opposed to mechanistic: there are fewer buttons to press or levers to pull, in favour of specifying operations through common language. The key difference between natural language processing and human communication is then the ability to respond to unanticipated circumstances. People bring to computers expectations based in human interaction, which brought on the desire to allow humans to communicate with computers using natural language. The ability to speak to and be heard by one's devices is now thoroughly integrated into our daily experiences, to the point of potential inconvenience when unavailable. It was also found that once a computer demonstrates minimally human abilities, users go on to attribute to it further human abilities of which it is incapable. It is the complexity and opacity of the computer's inner functionality that invites us to personify it, reinforced by apparently mysterious inner workings. Combining a somewhat predictable interface with internal opacity and unanticipated behaviour results in the use of a computer likely being seen as an interaction rather than the use of a simple tool.

Self-explanatory artifacts

With the creation of increasingly complex technology comes the expectation that it be usable with less prior training. Designers inherently hold the view that an object should be self-explanatory, but when considering computers, the term implies that the artifact itself be capable of said explanation (potentially to the extent a human could provide it). This would require the device to be intelligent rather than merely intelligible. Suchman goes on to compare the instructional approach of the coach with that of the written manual: the former maximizes the use of context sensitivity at the cost of constant repetition, while the latter relies on generalization yet can be duplicated and reused with ease. Interactive systems would then imply the possibility for technology to move from the manual's method of explanation to that of a coach.

  • At what point, if ever, can we consider an artificial intelligence to be “alive”?
  • There are several definitions of "intelligent" environments. How do you define an intelligent environment? What characteristics would you like to see included in so-called intelligent environments?

virtuality ¦ simulation

by Maxime Leblanc

This week’s readings deal with virtuality and simulation through four critical texts and essays. As is the case with all seminars in this class, these readings offer insight into selected topics of computer theory through historic, academic and critical texts aimed at providing a general understanding of issues and trends pertaining to the topic at hand. This essay will summarize all four readings and provide brief examples and opinions on them.

20/20 VR

The first text, 20/20 VR, by MIT Media Lab founder Nicholas Negroponte in his 1995 book Being Digital, begins by questioning the legitimacy of the term virtual reality. He states that it is a blatant example of an oxymoron due to its apparently contradictory nature. He then goes on to tentatively describe the origins of VR and its uses in flight simulation and military training, as exemplified by Ivan Sutherland's 1968 Department of Defense-led prototypes for tank and submarine operators. Negroponte goes on to pessimistically describe the faults of the VR systems of his time, stating: "VR is not yet fast enough [for credible immersion]", and "Most manufacturers will probably miss the point totally and will market early VR systems that have as much image resolution as possible, at the expense of response time" (Negroponte 1995). According to him, how real a virtual experience is perceived to be is directly correlated to the marriage of image quality and response time. In addition, extra-visual sensoria (sound and haptic feedback) seem to contribute significantly to one's sense of immersion, as shown in Russ Neuman's experiment on image quality and sound.

The Aesthetics of Virtual Worlds: Report from Los Angeles

The second text, The Aesthetics of Virtual Worlds: Report from Los Angeles by Lev Manovich, attempts to categorize some tentative propositions on the aesthetics of virtual worlds. Opening with a quasi-dystopian marketing pitch for a fictitious new virtual world, Manovich shares his view of VR as a commodity. He later states that "the reality effect of a digital representation can now be measured in dollars. Realism has become commodity. It can be bought and sold like anything else" (Manovich 1996). Manovich's 1996 predictions aren't far off from today's reality of VR marketing and sales. VR headset manufacturers offer differently-priced hardware depending on pixel count and frames per second. In addition, online 3D model libraries like Viewpoint DataLabs International (since acquired and operated by software giant Computer Associates) offer digital objects with price points that reflect an object's complexity or polycount, thus commodifying one's immersive experience. Similar to Negroponte's text, Manovich provides readers with a brief history of the applications of simulation technologies. Habitat, SIMNET, VRML, QuickTime VR, BattleTech Center, Worldchat, etc. are presented in the first part of the text in order to provide a concise starting point for the visual aesthetic propositions featured in the second part.

Starting with his first proposition, Realism as Commodity, the author argues that given digital media's inherent connection to numbers, it will become easy to commodify the digital world. For 2D images, spatial and color resolution constitute the core; for 3D images, 3D resolution (tied to temporal resolution) becomes the quantifiable unit that will likely provide the basis for eventual price determination. Just as phone sex lines charge clients on a minute-per-minute basis, "all dimensions of reality will be quantified and priced separately" (Manovich 1996).

The second proposition addresses the industrial method of producing art and how it affects the digital realm. The author argues that "the amount of labor involved in constructing three-dimensional reality from scratch in a computer makes it hard to resist the temptation to utilize pre-assembled, standardized objects" (Manovich 1996). Some creativity is therefore lost to this pre-packaged, pick-as-you-go method of creating virtual worlds. In addition, Manovich argues that if even professional designers rely on ready-made objects, similar behavior is all but guaranteed among consumers, who likely have little to no graphic or programming skills.

In his third point on the aesthetics of virtual worlds, the artificiality of virtual worlds comes into question. Through the example of web surfing, the author describes how we establish communication with the machine by glancing back and forth between the loading icon and the status bar of a web page. It is precisely these blips in continuity that allow the machine to reveal itself to us, thus breaking the illusion of a 'real' world. This cyclical hiding and revealing of the machine will become an important consideration when developing hyper-realistic environments. Today, our 21st-century VR devices attempt to be as unobtrusive as possible, with as high a framerate as possible, to ensure a completely immersive experience.

The final principle attempts to define virtual space in terms of Panofsky's neologisms: aggregate space and systematic space. Manovich begins by debunking the myth that virtual worlds deal with matters of space, stating that "virtual spaces are not true spaces but collections of separate objects. Or: there is no space in cyberspace" (Manovich 1996). At first, one might associate 3D environments with Panofsky's concept of systematic space, since 3D modeling environments typically deal with empty Cartesian perspectival spaces into which objects are inserted. However, the superimposition of elements (e.g., avatar characters onto pre-generated backgrounds) might lead to an aggregate view of virtual worlds. The author summarizes his point by stating: "although computer generated virtual worlds are usually rendered in linear perspective, they are really collections of separate objects, unrelated to each other" (Manovich 1996).

Simulacra and Simulation

Written by Jean Baudrillard in 1981, Simulacra and Simulations addresses concepts of reproducibility and the simulation of hyperreal environments. Baudrillard defines the simulacrum as a state in which copies or reproductions have lost their relation to the original, substituting the appearance of the real for actual reality. Beginning his text with an analogy for our society drawn from Jorge Luis Borges' tale of cartography, Baudrillard asserts that, through our reliance on models and maps, we have lost the true meaning of the world that preceded the map. Throughout his text, Baudrillard defines three orders of simulacra, which deal with increasing levels of dependency on representative models. The first order represents the status quo before the industrial revolution, when images were understood to be clear copies of original work. The second order represents a partial blurring of what is reality and what is simulation, due to an industrialization that accelerated the rate of copying; images begin to mask reality. The final order represents a paradigm shift in which simulation/representation precedes reality, in a process Baudrillard calls the "precession of simulacra" (Baudrillard 1981). These three orders of simulation are further defined as four successive phases of the image. Baudrillard writes: "In the first case, the image is a good appearance: the representation is of the order of sacrament. In the second, it is an evil appearance: of the order of malefice. In the third, it plays at being an appearance: it is of the order of sorcery. In the fourth, it is no longer in the order of appearance at all, but of simulation" (Baudrillard 1981). Ultimately, we can conclude that simulation is not simulated reality, because it has lost all notion of reality. He gives the example of Disneyland, in which the simulation of the theme park competes with our understanding of reality. This tension between the real and the simulated thus raises the question of proving the validity of a system through its opposite. Baudrillard states that operational negativity leads to the validation of a system: "It is always a question of proving the real by the imaginary; proving truth by scandal, proving the law by transgression; proving work by strikes …" (Baudrillard 1981). These concepts, although written in 1981, still hold true today, as contemporary digital media constantly challenge our understanding of reality.

What Does Simulation Want?

The final paper is a short chapter called What Does Simulation Want?, which appeared in Sherry Turkle's 2009 book Simulation and Its Discontents. She begins her piece with a conversation that took place over lunch at MIT in 1977 with a former colleague, who stated that "students had lost all sense of scale" (Turkle 2009) due to the advent of the calculator and, later, the personal computer. She then adds that "professionals who voice discontent about simulation in science, engineering and design run the risk of being seen as nostalgic or committed to futile protest" (Turkle 2009). Through this generational gap appears a generalized sense of vulnerability caused by the immersiveness of the simulated world. Turkle concludes the short essay with words of caution about the temptation to disregard reality in pursuit of total immersion: "simulation demands immersion and immersion makes it hard to doubt simulation. The more powerful our tools become, the harder it is to imagine the world without them" (Turkle 2009).


Through these readings it has become evident that the topics of virtual reality and virtual space pose deep philosophical concerns. Although the technology has existed for many decades, the means to create true immersion have only now begun to find their way to the consumer market. With the advent of such technology, these discussions of simulation, simulacra, and commodity are becoming increasingly important. As we shift toward more digital lifestyles, we must continuously question our fundamental understanding of the spaces in which we live.

Cited Works
Baudrillard, Jean. “Simulacra and Simulations.” In Jean Baudrillard, Selected Writings, edited by Mark Poster, 166-184. Stanford: Stanford University Press, 1988.
Manovich, Lev. “1.3: The Aesthetics of Virtual Worlds: Report From Los Angeles.” CTheory (May 22, 1996): 5-22.
Negroponte, Nicholas. “The Daily Me.” In Being Digital. New York, NY: Alfred A. Knopf, 1995.
Turkle, Sherry. “What Does Simulation Want?” In Simulation and Its Discontents, 3-9. Cambridge, Mass.: The MIT Press, 2009.


by Viktor Holubiev


Virtuality entered broad culture in recent years through VR headsets, chat rooms and video games, as well as through the simulation observed in contemporary cinematography and CGI. However, this field has been studied for decades, and such studies and reflections on the topic are the main themes of this week's readings.

 It is worth starting with the famous work of the French sociologist and philosopher Jean Baudrillard, ‘Simulacra and Simulations’. This book is a product of postmodern philosophy that was ahead of its time, with conceptions that remain deeply relevant today. The concept of the simulacrum is grounded in semiotics, in other words, in the sign. By definition, a simulacrum is a copy without an original in reality: a photograph, for example, or an artwork which, relative to the Eidos, is the copy of a copy. A contemporary advertisement usually does not match reality; consider renders, which are idealized models of reality without origins in it, in other words, hyperreal pictures.

 Simulation, in its origin, is the process of creating something from nothing, or of owning without having. Here Baudrillard draws a parallel with the simulation of illness by producing symptoms, something which can be observed in hypochondria, when the unconscious begins to transform a healthy organism into a sick one. In this situation it is very hard to draw the line between the true and the false: the symptoms are true both for the sick person and for the observer, yet in fact this is nothing but a simulation. “If he acts crazy so well, then he must be mad” is the quote that emphasizes that not only can the observer of the simulation not be confident of its truth, but neither can the person who simulates. This was the problem for the iconoclasts, whose motivation came from precisely this logic of simulation. To put it another way, the graphical description, or simulation, of God through icons is an imprecise representation and, according to the definition of the simulacrum, a simulation of something which does not exist. It is, of course, more comfortable to keep the idea of God abstract and transcendent, as, for example, Islam does.

 Baudrillard distinguishes four successive phases of the image, which can also be compared to types of simulacra. The first is a simple reflection of reality, which could describe photography or filmmaking. The second alters reality; painting or cartography, which are obviously not precise representations, belong here. The third hides the absence of a basic reality and, to my way of thinking, could describe advertisement or concept art. The last has no relation to reality at all and becomes its own pure simulacrum; this corresponds to the concept of a simulated world, as in the films ‘The Matrix’ or ‘The Thirteenth Floor’. These stages can also be named the real, the neo-real and the hyperreal, which, in general, could be titled a ‘second-hand truth’.

 We can observe such behaviours and pursuits in the first luna-parks established in Manhattan. The main idea was to create a parallel world behind the fence: when people cross the border they no longer belong to this land but to the world of imagination, an illusion supported by every piece of that place. The luna-park is no longer a real but a hyperreal place, in some way discrediting reality. Nevertheless, such an approach only strengthens the concrete position of reality, a kind of negative affirmation of the positive, like “proving system by crisis”.

 The frame of rules imposed on society can be referred to the second order of the simulacrum, whereas the simulation of a lawful or forbidden act within this frame is beyond the false and the true, and can therefore be marked as a third-order simulacrum. That is why it is “always a false problem to restore the truth beneath the simulacrum”.


It is worth emphasizing that digital reality works with digits, not with reality; correspondingly, its ‘reality’ is an idealised mathematical model which usually works, looks and acts even better than the real. That is the topic highlighted in the book by Nicholas Negroponte. It is a fact that car, plane and tank simulators model the most extreme or ‘ideal’ situations to react to; this is an understandable idea, even though such situations are far less probable in reality. The first implementations were used by NASA and constructed by Ivan Sutherland as far back as 1968, and were subsequently used intensively in military education.

 The first attempts to put someone into a VR simulation had a problem with frames per second, which destroyed the whole experience. Nevertheless, the resolution of that problem became another problem: the quality of the image was too good. Resolution and quality were many times higher than those of the organ VR should simulate, the eye. This resulted in a few attempts to simulate eye contact, in other words, to extend telecommunication by implementing a presence factor. These attempts were barren; however, they resulted in a series of interesting experiments involving our perceptual and psychological factors.

 Overall, the experiments with VR and the simulation of presence show that our sight is not the only thing that takes part in the process of seeing. That is to say, the perception of reality requires more sensory drivers than sight alone, which is up to now the main focus of virtual reality developers.


The attempt to anticipate the existence of such tools is vividly described in the publication by Lev Manovich dated November 1995. From the first page we see how the author represents his thoughts about the possibilities and capacities of Virtual Reality in the form of an advertisement, one fairly inspired by the final application of VR in daily life, which eventually became possible after years of theories and conferences: inevitably, it is “a new way to work, communicate and play”.

 The first virtual environments were constructed in the 1980s, in which participants could interact with the world as well as with each other through “graphical representations – avatars”. One of the first was SIMNET, developed by DARPA: the first three-dimensional graphical environment with the possibility of mutual interaction. Some Atari workers even said that cyberspace originates in the game industry; in those days, however, military simulators were rapidly transformed into entertainment systems.

 The main trend was the spatialisation of data, creating three-dimensional space for web users. One of the most noticeable attempts was the Virtual Reality Modeling Language (VRML). Using this language, users could construct 3D scenes which were connected to the web and, as a result, created a new type of media. The aim was to achieve the spatialisation of the WWW, in other words, to give data a spatial sense and the ability to be perceived. Nonetheless, it was still impossible for users to interact with each other, and for that purpose WorldChat and Virtual Places were created, the first attempts to connect and place people in the world of data.

 Such technologies helped game designers return to ancient forms of narrative, telling the plot through interaction and the personalization of the character, unlike previous games, which concentrated on moving through space.

 As can be observed, increasing interaction and spatialisation were the main trends of that time. And where it comes to the creation of a world, some aesthetics of that world must be created. The first and foremost is realism. Realism becomes a value in the new reality, a value which simultaneously has a monetary equivalent. It is evident that this is still the situation nowadays: high-polygon models are always worth more than low-polygon ones. Also, with the new era of 3D, we can observe the “death of the author”. Contemporary 3D artworks are mostly assembled from ready-made stock models, and the profession of the visualizer consists, most of the time, of composing already-made models into 3D scenes. That means the fantastic work of some author is, in reality, the work of multiple authors gathered into one picture. This is also clearly seen in 3D programs, where you already have sets of 3D objects, textures, lights, effects and so on. However, virtual creation has produced sublayers of creators which did not previously exist. For example, there are famous modellers who create objects from zero, like the 3D sculptors working in ZBrush, while simultaneously there are famous artists who take their models and combine them into scenes, making separate artworks assembled from parts. This could be compared to famous photography versus famous collage. All of this has led to a system of creation: “you push the button, we do the rest” has become “you push the button, we create your world”.

 Temporal dynamics are also something which inevitably becomes part of the virtual world and the world wide web. We can see them while waiting for a page to download, while the computer addresses the information. Translating this into the 3D world means that reality, as a portion of information, should be downloaded each second within the distance of perception. It is therefore highly rational to show and detail only the information which can be perceived. This observation connects with the conception of a virtual world in which the speed of light is the analogue of the spatial download capacity, in other words, the virtual border that prevents observation of the “not yet downloaded universe”. Nonetheless, our experience in VR is constantly shifted by menus and icons, the means of interaction, so the live experience turns into a roller-coaster of reality and hyperreality. The author asks the question, “could Brecht and Hollywood be married?”; up to now, the answer is yes. The new era of interactive movies has come, and we can observe such game masterpieces as “Detroit: Become Human” by Quantic Dream, which is nothing but an interactive movie in which all the actions are plot-forming acts.
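The economy of detail described above can be sketched as a toy level-of-detail rule (my own illustration, not Manovich's; the radius and thresholds are arbitrary assumptions):

```python
# Toy level-of-detail selection: detail only what can be perceived, and simply
# do not load the universe beyond the perceptual border.

PERCEPTION_RADIUS = 100.0   # the "virtual border" of the downloaded universe
FULL_DETAIL_RADIUS = 50.0   # near the viewer, reality arrives at full resolution

def detail_level(distance: float) -> str:
    """Return how much of the world to download for an object at this distance."""
    if distance > PERCEPTION_RADIUS:
        return "not loaded"     # beyond the border: no information at all
    if distance > FULL_DETAIL_RADIUS:
        return "low detail"     # perceivable but coarse
    return "full detail"

for d in (10.0, 70.0, 400.0):
    print(d, detail_level(d))
```

The "not loaded" branch is the point: the unperceived universe is not hidden, it simply does not exist for the viewer until it is downloaded.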

 “There is no space in cyberspace”, or in other words, “computers were born dimensionless and thus imageless”. We must keep in mind that a computer is not a “visual medium”. Everything we get is an illusion, precisely modelled with reference to our perception. That is why it is so complicated to create the environment, the atmosphere between objects, one spatial frame that connects everything in a single play of interaction. Hence there is a new definition of an object in the virtual world: it is “something which can be clicked, moved, opened”. For that reason, we could paraphrase Descartes for the virtual world as “I can be clicked on, therefore I exist”.

The whole picture of this world could first be observed in “Toy Story”, where an entire animated world was created. However, the author provides an example from reality, Los Angeles, the place where the cartoon was created. The city structure resembles the virtual world in its absence of hierarchy. Locations are defined by addresses rather than landmarks, and famous places are located in the middle of nowhere, the result of a generic city structure, just as in the virtual world.


Everything said above could be titled with the paraphrased words of Churchill: “first we make our technologies and our technologies shape us”. “What does the brick want?”, Louis Kahn once asked. As the “brick” of contemporary technology is simulation, we may fairly ask: what does simulation want? That is the question asked by the MIT professor Sherry Turkle.

 The key point here is to produce in ourselves new abilities for the new informational age, above all the ability to doubt. It is obvious that the new design skills open new possibilities to “research and learn”; however, to master your tools you must be aware of their presence. Because simulation creates immersion, it is hard to doubt it: “The more powerful our tools become, the harder it is to imagine the world without them”. Yet that is the vital skill of this century: to doubt.

graphics ¦ surfaces

by Lydia Liang

Hidden Surface Problems: On the Digital Image as Material Object by Jacob Gaboury (2015) argues that computer graphics should not be considered a genealogy of vision or a progression of earlier visual mediums, but rather a form of data that materializes from restriction. Gaboury contends that computer graphics is not only a visual medium but a material object; it is both image and object. In fact, the creation of computer graphics was based on an ontological approach focused on the simulation of a three-dimensional object. In his historical narrative of computer graphics, Gaboury identifies a common, and probably the most significant, challenge in early computer graphics research into how to simulate a digital object: which parts of the object should be visible and which should be hidden. The focus of the field at that time was to find a way to restrict an image so that it shows only what is visible to a viewing subject. In this way, the specificity of computer graphics lies not in “reproducing a way of seeing” but in “constructing the absence”. The materiality of computer graphics lies in the process of solving the hidden surface problem, where a complete graphical world of objects has to exist before the rendered output, and a means of removing the irrelevant is executed to make that world legible. The rendered image becomes one form of the data contained in that graphical world and an interface for vision.
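The hidden surface problem Gaboury describes can be made concrete with a toy depth buffer: for each pixel, only the surface nearest the viewer survives, and everything behind it is the "constructed absence". This is a minimal sketch of my own (the scene, glyphs and sizes are illustrative, not drawn from the essay):

```python
# Toy depth-buffer (z-buffer): the complete world of surfaces exists first,
# and rendering is the removal of everything a viewer cannot see.
import math

WIDTH, HEIGHT = 8, 4

# Each surface: (depth from the viewer, glyph, pixel-coverage test).
# Smaller depth means closer to the viewer.
surfaces = [
    (5.0, 'A', lambda x, y: x < 6),        # a far quad covering the left
    (2.0, 'B', lambda x, y: 3 <= x < 8),   # a near quad covering the right
]

depth = [[math.inf] * WIDTH for _ in range(HEIGHT)]
frame = [[' '] * WIDTH for _ in range(HEIGHT)]

for z, glyph, covers in surfaces:
    for y in range(HEIGHT):
        for x in range(WIDTH):
            # Draw only if this surface is nearer than what is already stored.
            if covers(x, y) and z < depth[y][x]:
                depth[y][x] = z
                frame[y][x] = glyph

for row in frame:
    print(''.join(row))  # where the quads overlap, the nearer 'B' hides 'A'
```

Note that surface A fully exists in the data even where it is never shown: the image is one legible extraction from a world that precedes it.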

Gaboury’s insight into the nature of computer graphics as the “construction of the absence” is, in my opinion, analogous to the architectural making of space. Many consider the making of space to be an exercise in the making of a void, constructing a relationship of “nothingness” to the other architectural elements.

Friedrich A. Kittler offers another insight into the nature of computer graphics in his work Computer Graphics: A Semi-Technical Introduction (2001). He attempts to answer the question “What are computer images?” by focusing on the ways computer graphics can replicate our perception of the world. He puts forward that computer graphics are a deception to the eye and can be easily forged. Through software and algorithms, all that we can perceive optically can be virtualized, limited in resolution and detail only by the RAM of the machine, and in aesthetics by the optic options available. In his text, two optic modes are discussed: raytracing and radiosity. These two modes are fundamentally contradictory, one focusing on locality where the other depends on globality, resulting in aesthetics that are mutually exclusive. Each contributes an aspect of the truth of our world as we receive it, be it the sharp highlight of an object or the diffuse edge of a shadow. Efforts are being made to bring the two optic options together; however, this has proven difficult because of their contradicting mechanisms. Can a perfect world picture be produced, then? According to Kittler, there will always be digital compromises, and part of the function of computer graphics is to “forge compromise from mutual exclusivity”. Finally, Kittler brings up the notion of “images appearing in accordance with injustice”: when you are concerned with one aspect of the image, you always neglect another. What, then, is computer graphic justice? He leads toward the answer with the idea of the “rendering equation”, and closes with the idea that computer graphics can only be called such if it can render what cannot be perceived by our vision but nonetheless exists.
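The locality/globality contrast between Kittler's two optic modes can be shown in a few lines: local shading evaluates each point in isolation, while radiosity couples every patch to every other through a global linear system. This is a deliberately simplified sketch under my own assumptions (the numbers are arbitrary, and neither mode is a full renderer):

```python
# Local optic mode: Lambert shading of one point needs only that point's
# orientation toward the light; nothing else in the scene matters.
def lambert(normal_dot_light, albedo):
    return albedo * max(0.0, normal_dot_light)

# Global optic mode: two diffuse patches facing each other exchange light
# until mutual illumination converges, i.e. a Jacobi iteration of the
# radiosity system B = E + rho * F * B.
emission = [1.0, 0.0]        # patch 0 emits, patch 1 only reflects
reflect  = [0.5, 0.8]
formfac  = [[0.0, 0.4],      # fraction of light leaving i that reaches j
            [0.4, 0.0]]

radiosity = emission[:]
for _ in range(50):
    radiosity = [emission[i] + reflect[i] * sum(formfac[i][j] * radiosity[j]
                 for j in range(2)) for i in range(2)]

print(lambert(0.7, 0.6))     # local: independent of the rest of the scene
print(radiosity)             # global: patch 1 glows only because patch 0 exists
```

The mutual exclusivity Kittler describes is visible in the structure itself: the local function can be evaluated per pixel in any order, while the global system only makes sense as a whole.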
Kittler’s final remark is especially intriguing to me in the sense that, if combined with Gaboury’s reading, one might ask whether computer graphics should ultimately be “projecting the invisible through hiding the invisible”.

In What Designers Do That Computers Should, George Stiny (1990) identifies two things that are essential if the computer is to serve as a creative and inventive design partner: ambiguity and parallel descriptions. Stiny argues that ambiguity is an important aspect of design because it “fosters imagination and creativity, and encourages multilayered expression and response”. However, ambiguity is difficult to achieve with computer-aided drawings because of their structured nature, which is not the case for drawing with ink on paper. In order to produce shapes that can be decomposed and manipulated as the designer desires, in the same manner as conventional drawings, Stiny proposes an algebra of shapes with no inherent structure, allowing decomposition and manipulation through established rules. Alongside the algebra of shapes, descriptions in different domains are also essential to the creation of a complete design, not unlike the way designers regularly combine plans, sections and elevations to grasp a three-dimensional relationship. Thus the algebra of shapes should be combined and computed in parallel with other algebras, namely the algebra of labeled shapes. These computations are also part of the discourse of what Stiny termed the shape grammar.
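The basic move of rule application in a shape grammar, replacing a matched subshape A with B so that S' = (S - A) + B, can be sketched in a deliberately simplified, set-based form. This is my own stand-in, not Stiny's formulation: real shape grammars operate on maximal lines under transformations and allow emergent subshapes, which this toy version does not.

```python
# Set-based caricature of shape-grammar rule application.
def apply_rule(shape, lhs, rhs):
    """shape, lhs, rhs are frozensets of line segments ((x1, y1), (x2, y2))."""
    if lhs <= shape:                 # subshape match (no transformations here)
        return (shape - lhs) | rhs   # Stiny's  S' = (S - A) + B
    return shape

# A horizontal unit segment starting at x.
unit = lambda x: frozenset({((x, 0), (x + 1, 0))})

# Rule "grow": a segment at x spawns a neighbour at x + 1.
shape = unit(0)
for step in range(3):
    shape = apply_rule(shape, unit(step), unit(step) | unit(step + 1))

print(sorted(shape))  # the line has grown into four collinear unit segments
```

What this toy version cannot do is precisely Stiny's point: because the segments are discrete set elements, the computer cannot "see" the emergent single long line the way a designer would.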

It is important to note that the approach outlined by Stiny assumed the use of computer drawing as a creative tool rather than a representational tool, as it is nowadays. In the early days of CAD research, the computer was intended as a problem-solver rather than a drafting and representation device (Celani 2002). Nowadays such a concept survives only in the theoretical and experimental realm. The shift occurred when the CAD system was simplified in order to appeal to the personal computer market. Another reason, not far from what Stiny proposes in his article, is the computer’s inability to recognize emergent or arbitrary shapes that are not defined by the programmer, which a designer can do easily in his or her head (Celani 2002). Shape grammars can amplify the number of possible combinations of shapes and be used as a generative tool, but they are still far from being used as a creative tool.

In “Visualizing”, a chapter of Beautiful Data: A History of Vision and Reason since 1945, Orit Halpern (2015) describes the ways in which the nature of image, perception and observation has been reformulated by cybernetic concepts. Citing the work of prominent designers in the field, Halpern states that a new form of perception emerges, one that is technically transformed, interactive, data-driven and autonomous.

Vision and cognition become amalgamated, resulting in a form of communication that is algorithmic, technological and autonomous. The senses, rather than language, become a critical part of education, and language becomes environment. The image can thus be perceived not through its visual quality but mentally, through psychology and memory. Halpern also argues that these designers re-conceptualized the screen and pushed forward the notion of space as interface, replacing the discourse of structure, class and race with that of environment, sensation and time. With the emergence of a new kind of observer, one that is both isolated and interactively networked, Halpern tries to illuminate its relationship to a new kind of knowledge rising from a sea of limitless information, a “communicative objectivity”. It was hoped that within the discourse of temporality in perception a new kind of sense optics could also be introduced, an extended perception that could bring together the future and the past. Halpern concludes by asking “…what other dreams for our perceptual future emerge from our interfaces…where we believe sense, perception, and cognition have been compressed into process, program, and algorithm to be regularly fed-back to us at every click and choice made at the screen.”

The overarching theme in all of these readings is a concern with perception. In a digitalized world of computer graphics and data overload, what informs our perception? Our perception struggles between the visible and the invisible, the visual and the cognitive, the directed and the arbitrary. Is a new sense of perception required, then? A perception that, rather than being visually dominated, deciphers instead. This again raises the question of coding and decoding discussed previously. If the graphics we perceive are codified, with pixels, lines and shapes neatly structured and displayed, rather than being merely the result of optic laws, should our perception not only perceive them but also decode them?

Celani, M. (2002). Beyond Analysis and Representation in CAD: a New Computational Approach to Design Education. Retrieved from http://papers.cumincad.org/data/works/att/c7e0.content.01381.pdf.
Gaboury, Jacob. “Hidden Surface Problems: On the Digital Image as Material Object.” Journal of Visual Culture 14, no. 1 (April 2015): 40–60. https://doi.org/10.1177/1470412914562270.
Halpern, Orit. “Visualizing. Design, Communicative Objectivity, and the Interface.” In Beautiful Data: A History of Vision and Reason since 1945. Durham: Duke University Press Books, 2015.
Kittler, Friedrich A., and Sara Ogger. “Computer Graphics: A Semi-Technical Introduction.” Grey Room (January 1, 2001): 30–45. https://doi.org/10.1162/152638101750172984.
Stiny, George. “What Designers Do That Computers Should.” In The Electronic Design Studio: Architectural Knowledge and Media in the Computer Era, edited by Malcolm McCullough, William J. Mitchell, and Patrick Purcell, 17-30. Cambridge, Mass.: The MIT Press, 1990.

object ¦ class

by Joel Friesen

The following paper presents a brief analysis of the concepts of objects and classes as they have been articulated in multiple contexts, through an examination of a series of selected texts ranging from the fields of software design to metaphysics. The analysis will proceed from the speculative to the explicitly applicable. The arguments and positions taken by these various authors will be synthesized and examined critically in relation to one another, and in their broader intellectual context, with a view towards how these concepts might have implications for architectural practice and software development.

When the philosopher of technology Gilbert Simondon penned the first of his primary theses, On the Mode of Existence of Technical Objects, he did so with the resolute belief that humanity could only foster a harmonious relationship to technology if we took seriously the task of understanding technical objects on their own terms.1 Though Simondon was writing in the middle of the 20th century, his plea for an evolved understanding of objects rings true today, especially as we confront the increasingly black-boxed context of computation. What exactly are these entities that we are interacting with and that have come to have such a dominant effect on our lives? If one is to ask the question of the nature of these so-called digital objects, one must begin by probing their ontology, their way of being in the world, their mode of existence. In a text whose title is clearly drawn from Simondon, Yuk Hui critiques the ontological presumptions often made in computer science through a treatment of various metaphysical systems. He demonstrates how Brian Cantwell Smith, for example, links phenomenology and computation, refuting the idea that semantics can be separated from syntax, and ultimately claiming that “computational data are like sense data” and computation “acts on this flux to categorically create the objective form of it.”2 Heidegger’s rebuke to modern science is also given attention, though Hui believes the two oppositions are reconcilable and necessary for a full picture of the mode of existence of digital objects.3 The particulars here are less important than the consideration of digital objects in a mode of analysis that seeks to understand them on their own terms, apart from their use value in our everyday lives.

Hui treats categories as ontological assumptions in their own right: always inextricable from the object is its class, which we might rudimentarily consider here a notion of category, that which subsumes objects according to certain predefined characteristics. In The Order of Things, Michel Foucault wrote on how we classify our world, claiming that the epistemological shift (or the episteme, the dominant paradigm for the possibility of thought in a certain historical era) from the Baroque to Classical rationalism involved a change in the way we order things.

While the universe was previously yoked together through observed similarities between disparate elements, measurement and deduction came to the fore, and as a consequence, “the entire episteme of Western culture found its fundamental arrangements modified.”4 Through semiotics, order was now given to thought and enumeration, which served to differentiate between elements, not invoke similarities. Foucault’s text highlights the importance that signs and classes have for our modes of thought, something which can be observed well into the 20th century, when classes, and the objects that fall under their purview, became of central importance to a certain strain of computer programming.

Object-oriented programming (OOP) is an approach to computer programming that involves the definition of computational “objects” placed into relation with one another, as opposed to a linear, sequential series of instructions to be executed. The first such object-oriented language was SIMULA, developed in Norway in the 1960s.5 In such languages, pieces of code known as classes provide a general template from which objects can be derived and further specified in a certain context. As with much software, the earlier object-oriented programming languages sought to model real-world processes, but as Matthew Fuller and Andrew Goffey explain, in this context modelling cannot be equated with representation, as “the complex dynamic systems and user relationships that object-oriented design models are those that are made possible by the silicon explosion,” effectively “creating models of things that don’t otherwise exist.”6 This has consequences for how users interact with the software as well, because our agency enters into a dynamic relationship with the agency of the object, conditioning how we use the software and what operations we perform. Fuller and Goffey note that the machine can actually begin to model the user—and with that the line between object and subject becomes unnervingly blurry. The classes from which these objects derive exert their own sense of agency on the programmer as well. That the objects come pre-designed with their own structure may be efficient, but it may also begin to dictate how they are used. Combined with the concept of encapsulation—the black-boxing of computational objects—we find ourselves increasingly estranged from a proper understanding of technology.7
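The vocabulary of class, object and encapsulation can be illustrated minimally (a hypothetical example of mine, not taken from Fuller and Goffey): the class is the general template, each object is derived from it, and its internal state is "black-boxed" behind an interface of methods.

```python
class Account:
    """Template (class) from which concrete objects are derived."""

    def __init__(self, owner: str) -> None:
        self._balance = 0          # leading underscore: internals are boxed in
        self.owner = owner

    def deposit(self, amount: int) -> None:
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount    # state changes only through the interface

    def balance(self) -> int:
        return self._balance

# Two objects instantiated from one class, each with its own encapsulated state.
a, b = Account("Ada"), Account("Charles")
a.deposit(100)
print(a.balance(), b.balance())
```

The pre-designed structure is exactly the point Fuller and Goffey make about agency: a user of Account can only do what its interface permits, and never sees how the balance is actually kept.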

It was not long after the advent of OOP that digital objects would find a place in architecture. By 1975, Charles M. Eastman had detailed the C-MU Building Description System, experimental software that aspired to overcome the shortcomings of hand drawing and physical models by facilitating the design of three-dimensional computer models of buildings, out of which all necessary drawings and information could be easily extracted.8 The objects discussed by Fuller and Goffey find an obvious connection here: the basis for the software was a series of classes of objects, such as wide-flange beams and gyp-rock, out of which a building could be formed. The same dangers discussed above are also present here, the agency of such objects and their encapsulation remaining mostly unconsidered. Today we call these building information modelling systems, and find them proliferating in architectural offices around the world. While they have surely refined the building coordination process, without a critical eye to what lies beyond the black box we will become further estranged from the technology we create. My intention here is not to resist technological change, only to remain vigilant and take seriously “the belief that ignorance begets oppression and understanding freedom.”9

End Notes

  1. Gilbert Simondon, On the Mode of Existence of Technical Objects, trans. Cecile Malaspina and John Rogove (Minneapolis: University of Minnesota Press, 2017).
  2. Yuk Hui, On the Existence of Digital Objects (Minneapolis, MN: University of Minnesota Press, 2016), 12.
  3. Ibid., 16.
  4. Michel Foucault, The Order of Things: An Archaeology of the Human Sciences (London and New York: Routledge, 1966), 60.
  5. Matthew Fuller and Andrew Goffey, “The Obscure Objects of Object Orientation,” in How To Be a Geek: Essays on the Culture of Software (Cambridge, UK: Polity, 2017), 17.
  6. Ibid., 20.
  7. Ibid., 27.
  8. Charles M. Eastman, “The Use of Computers Instead of Drawings In Building Design,”AIA Journal 63, no. 3 (1975): 46.
  9. Paul Dumouchel, “Gilbert Simondon’s Plea for a Philosophy of Technology,” Inquiry 35, no. 3–4 (September 1992): 408.

what is software

by Alexia Lavoie Landry

For a better understanding of software, we must ask how it started and with whom. Some answers can be found in a text by Michael S. Mahoney, a historian of science who published an article on the history of software. In his article, he emphasizes that software should not be seen as a single device, but rather as schemes provided with instructions that we humans give. Software has been designed by humans for their own purposes, and still we often see it as something that “happened to us”. If we want to understand how software was created, we need to acknowledge that its creation happened simultaneously in various communities that needed greater research capacities in their respective fields. We must be aware that software did not evolve along a single linear path, but through a long-lasting process of accumulating new paradigms, the theoretical frameworks in which theories are formulated. Those communities included mathematicians, scientists, production teams, maintenance people, government structures, military divisions, designers, etc. Each had its own specific needs and perceptions (interpretations) of its field of study and developed its respective languages and diagnostic compilers, as well as representation techniques to interpret data.

When we think about computation we need to stop separating the subjective from the theory, because in fact computation has much to do with the vision these communities had of the world at the moment they created it. Software is the product of our own decision processes, our will and our capacity to structure data. “There were never indications of how the machine should formally look or be programmed”1. We are the ones who designed and programmed it, in order to create the symbols which have the capacity to lead to actions. But those symbols do not mean anything to the computer; we still end up interpreting them.

Beyond its relatively complex heritage, how can we describe software? Software can be difficult to explain because it is “virtual”, not “physical” like computer hardware. Instead, software consists of intangible lines of code, written by computer programmers, that are compiled into a software program. The program and its code are stored as binary data, a numeric system that uses only two digits, 0 and 1, on a computer’s hard drive. The message of the software is a priori intangible and needs to be transmitted to the user; this too is the role of the software. The actual image that we read and interpret on the computer is made by signals passing through the tangible elements of the machine, creating voltage-contrast images that we see through a user interface, which is also part of the software. When we buy a software program, it may come on a CD-ROM, DVD or other type of media, but that is only a physical way of storing the software. In other words, it is not the software itself that is tangible, only the medium used to distribute it.
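The claim that a program's code is ultimately stored as the two digits 0 and 1 can be seen directly (my own small illustration, not from the text): the same line of source code viewed as characters, as numeric bytes, and as bits.

```python
# One intangible line of code, progressively reduced to its stored form.
source_line = "print('hi')"
as_bytes = source_line.encode("ascii")                 # characters -> numbers
as_bits = ''.join(f"{byte:08b}" for byte in as_bytes)  # numbers -> 0s and 1s

print(as_bytes[:5])   # the first characters as numeric byte values
print(as_bits[:16])   # the first two characters as raw binary digits
```

Every layer above the bits, from source text to the image on screen, is an interpretation added for human readers; the hard drive holds only the final string of 0s and 1s.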

In one of his articles, Alan Kay, an American computer scientist, uses the analogy of the human body to help us understand the concept of software. Computer hardware relies on many physical systems: shafts, cards, transistors, circuits, chips, and so on. These are the tangible elements, and we could compare them to the bones, veins, arteries, and tissues that form our organs. We could then compare our body to our own computer, "the summation of our human hardware", accumulating data from our senses (the user interface). At the center of the loop, our brain retrieves information from our senses and produces an interpretation of it that leads to actions. Our neurons, together with our senses, are our "human software".

Why is this concept so hard to understand? Mahoney explains that computing and software have not one but "many histories", which is part of what makes them so difficult to represent. In order to study the history of computation, he suggests turning the question of "how we have put the world into computers" into "how we have come to view the world through a computational lens". Having identified these primary motivations, we must recognize that it was only much later that automated programming environments allowed more people to use software. An automated programming environment restricts the number of choices the programmer has and directs his or her tasks. It brought a democratization of the process, and small teams and individuals were able to start exploring computation.

The user interface has also evolved considerably over time. It is used to amplify the software's effect on the user, so it is important here to define the concept of interface in the context of software and its impact on society. An interface is an abstraction that helps us read and interpret information, and it is presented to us in various forms. For example, the same application will appear differently on an Apple Watch than on a website, even if both provide the same function. The user interface has been a great tool in the context of capitalism, because it has helped democratize access to software and allows those behind it to influence the user. The way an interface is designed strongly shapes how data is read. In that sense, it plays a large role in setting trends and raising awareness. Through the creation and selection of content, designers have the power to transform the way we see things.

Since 1850, software has evolved from the desire to come up with unified operational theories to the development of complex communication patterns between humans and media. We are now far from the low-level programming called coding, developed by research communities. We have developed many languages and ways of programming, all driven by the will to increase efficiency and minimize mistakes. With its relative democratization, software occupies a central place in society, since it is at the heart of contemporary decision-making strategies. Its implementation comes upstream in the process, its repercussions are extremely difficult to control, and it has led to new contemporary challenges such as data storage and management.

Finally, with a narrative structured around human agency, how can we consider the possibility of software interpreting itself, learning, and reasoning? If computers can only do what they are programmed to do, we might doubt the possibility of artificial intelligence. Alan Kay argues: "it will be the same for a fertilized egg, and yet, it will develop an intelligence"2.
Biology demonstrates that simple materials can be formed into exceedingly complex organizations that can interpret themselves and change themselves dynamically. "Some of them even appear to think"3.

1 Mahoney, M. S. "What Makes the History of Software Hard." IEEE Annals of the History of Computing 30, no. 3 (July 2008): 8–18.
2 Kay, Alan. "Computer Software." Scientific American 251, no. 3 (September 1984): 52–59.
3 Ibid.


by Alice Cormier-Cohen

The term software is defined by the Oxford English Dictionary as "the programs and other operating information used by a computer." Software is written in a certain language by a programmer and translated into machine language by a device. It tells the hardware what to do. In a computer, software is the variable and hardware is the invariable; software is the changeable part of the system.

The history of software is often told as a sequence of the evolution of its form. According to Michael S. Mahoney in What Makes the History of Software Hard, it should not be about how the computer itself has changed activities, but about how people have used it to change their activities. It is not as if people had been waiting for the computer to appear. "The history of software is the history of how various communities of practitioners have put their portion of the world into the computer. That has meant translating their experience and understanding of the world into computational models, which in turn has meant creating new ways of thinking about the world computationally and devising new tools for expressing that thinking in the form of working programs."1
In this sense, a common misconception about software is what we think it does versus what it really does. What it can do relies entirely on its instructions, on the narrative it was given. Software has the potential to do anything; it only depends on what people want it to do and whether they can make themselves understood by the machine. Engineers bring a new universe to life by designing. Software is full of potential and open to imagination, but it has become a commodity. It is seen as a tool, something that you do something with. We often just adopt software instead of improving it. Software should be used in a way that is changeable, or hackable, as Matthew Fuller writes in Software Studies, in a way that "allows and encourages those networked or enmeshed within it to gain traction on its multiple scales of operation."2

The mix of data and information at different scales, and the freedom that its design allows, make software powerful and interesting. "It is a medium that can dynamically simulate the details of any other medium, including media that cannot exist physically, it is not a tool, although it can act like many tools. It is the first metamedium, and as such it has degrees of freedom for representation and expression never before encountered and as yet barely investigated." 3 Research and development have led to many new ideas. In Computer Software, Alan Kay argues that, given their important role, these ideas must become part of our knowledge and be valued as reading and writing are for personal and societal growth.

There is a wide variety of things we can program machines to do. To understand software, we need to understand not only what it does, but where it comes from, and to anticipate what it can do. The idea of change is very important in software, as most programs are born out of improvements. It is the designer who decides what to develop and what to change. Change is an important part of designing, as most software is based on existing symbols and scripts. Designing software is about understanding sequences and transforming and modifying existing ones. This can cause problems, as the complexity of software sometimes makes it too complicated (time-wise and cost-wise), or even impossible, to modify. Also, with time, older software becomes hard to use because of changes in technologies and therefore cannot be experienced and understood properly. The legacy of software is a challenge we will have to live with and learn to deal with, as it changes so rapidly.

Software "provides a way of talking about the materiality of abstraction."4 A lot of literature mentions the flexibility, creativity, and freedom software allows, and how this makes its design close to art. Software has entered the world of art like many other fields. Software is operative and, like art, is about experiencing. It allows us to trigger actions and create new worlds. It is a medium that opens the door to the 'invisible'. There is a fine line between art and software.

As in art, software in architecture can allow the development of new design strategies, or new ways of showing what one really wants to do. The field of architecture has been very slow to embrace digital technologies compared to other fields, and in ways other than drafting; architecture selected the more conservative aspect of technology. Today, most architecture offices use drafting and modeling software as tools. Building information modeling (BIM) has helped construction professionals visualize, plan, and design together on a central model more efficiently. These examples show how software, in architecture as in many fields, is used as a tool. In A-Ware, Keller Easterling argues that software could be used in a more complex way in architecture. Other fields have used it to multiply information and things, creating diverse and complex networks. Architects, for example, could employ these strategies to multiply geometric components with sites and data for a more complex understanding of the territory. A good example is Zaha Hadid Architects, who use existing software but also develop their own software tools and scripting techniques to achieve their design goals.

It appears today that we cannot live without software, which, for Jack Burnham in Notes on Art and Information Processing, is more dangerous and problematic than the dilemmas it is designed to solve, because we are used to using it as a tool. It has become a necessity for many tasks. It is clear, though, that software occupies an integral place in our lives at many levels and that it is important to understand it, use it, and change it. It is a skill that should be learned by all because of the possibilities it offers. Software is designed by us, for us. With the place it has in our lives, would it be dangerous if everyone were able to hack or change software?

1 Michael S. Mahoney, What Makes the History of Software Hard.
2 Matthew Fuller, Software Studies.
3 Alan Kay, Computer Software.
4 Matthew Fuller, Software Studies.



Timeline of Computer History

Alan Turing, "On Computable Numbers" (1936)

Demonstration of a Turing Machine

Time Magazine interview of Alan Kay (2013)

Seymour Papert on LOGO and coding literacy


function ¦ program

by Harriet Strachan

A function, according to Robinson, is "an abstract replica of causality", where the same input must result in the same output every time. One or many different inputs can result in the same output; however, this cannot be reversed: the same cannot be said about one input giving you many outputs. He uses logic to explain this, where "If A implies B, and A, then B", but this is not to say that "If A implies B, and B, then A"; that would require inverting the index. Functions return a value. These inverse problems act as a filter or a lens, but they do not result in a function; rather, they result in a relation, where one single value is mapped to many sets or lists of values. Different software takes data 'inputs' and creates certain 'outputs' by using and analyzing this information. However, it does not experience the results of these output actions.
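Robinson's distinction can be sketched in a few lines (the function chosen here is illustrative, not Robinson's): squaring is a function, because each input yields exactly one output, but inverting it yields only a relation, since one output maps back to a list of inputs.

```python
# A function: each input yields exactly one output (same input, same output).
def square(x):
    return x * x

inputs = [-2, -1, 0, 1, 2]

# Inverting the function does not give a function: 4 comes from both -2 and 2.
# The inverse is a relation, mapping one output to a *list* of inputs.
inverse = {}
for x in inputs:
    inverse.setdefault(square(x), []).append(x)

print(inverse)  # {4: [-2, 2], 1: [-1, 1], 0: [0]}
```

The dictionary of lists is exactly Robinson's "relation": asking "what squared gives 4?" filters the inputs rather than computing a single value.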

In Alexander's text "The Program", he writes about the process of design, in which designers are faced with a problem that consists of independent subsystems of different variables. He believes it is not possible to replace the designer with mechanically computed decisions; design requires more than an output made from a body of data. However, the designer is limited in solving these problems himself. Alexander creates three schemes of the designer's role in the process of design, which differentiate form (the design artifact) from context (the environment). In the unselfconscious scheme, the designer acts as the agent in the process and can manipulate the design in the case of any misfits. The selfconscious scheme differs from the first: the designer engages in the process, iterating between ideas, drawings, and diagrams of the form and context as a mental image. The third scheme offers a way to overcome the limitations of the selfconscious process by creating a formal picture of what exists in the imagination, resulting in a method (Ralph, 2015). Overall, a problem presents itself with a set of various misfits that need to be avoided when dealing with a form and a context. We can refer to this as program: providing directions or instructions to the designer.

In the text "Computer Synthesis", Cross discusses computer-aided design and the use of the computer in the synthesis stage of design problems, comparing this with the ability to actually generate design solutions. He references programs, starting in the 1960s, associated with the automation of building design and with simulation software, covering space allocation, optimizing floor plans, walking distances, and so on. These programs create an interactive dialogue between the machine and its user. Similarly, ASHRAE and DIVA are both simulation tools I used in my architecture studies to aid in the optimization of building function. The interactive design process can be described in three stages. The synthesis stage, usually associated with the human designer, is when a subset of variables and guidelines is created and recognized by the designer. The analysis stage, involving a program, is where this set of variables is applied. Finally, the evaluation stage is where the user and the machine iterate back and forth until the result satisfies the specifications and the designer (Encarnação, 1990). It involves the exchange of data between the external environment and the machine/program, creating a feedback loop (Carpo, 2015). However, these building optimization tools have little impact on the success of the overall design, so how can machines understand the task of design? Is architectural design a consequence of the tools we use to make it happen? Does the influence of simulation software take away from the designer's overall creative expression? During my undergraduate studies, the different software tools would influence students' design ideas. For example, Revit was better for boxier buildings, whereas Rhino accommodated more complex, organic forms. Will designers continue to adjust their work according to the capabilities of the software they are using?

A contemporary sense of program involves "the exploration of digital analysis and synthesis, in the increasing interest in the formal and spatial potential of new materials and structures, and above all in the migration of the exploration of social and cultural forms from the domain of art installation to public architecture" (Vidler, 2003). In his text "Toward a Theory of the Architectural Program", Vidler is interested in the Archigram Group and their formal strategy and contemporary, digital approach to the architectural program. Dating back to the 1960s, in particular for Banham, program would require a more scientific approach, including "aesthetics of perception, human response, technologies of the environment and the like" (Vidler, 2003), incorporating both form and function rather than the 20th-century modernist perception that 'form follows function'. For Koolhaas, science does not offer solutions; it offers pre-existing knowledge. Architects do not invent the program; their role is in "identifying its raw material" in its given context and actuality.

Functionality in the design process can involve the computer through computerisation, as a representational tool, and through computation, as a tool for discovery and experimentation in design (Zarei, 2012). Will there be a time when the program is no longer just an aid in the design, but rather acts as a tool for exploring what is not already understood? Can programs translate data into a meaningful form? Can the machine tackle the task of design? In that case, who is then considered the designer, the machine or its user? These tools and programs inspire architects to imagine unprecedented solutions. Today, as Mario Carpo states, "a meaningful building in the digital age is not just a building that was designed and built using digital tools: it is one that could not have been either designed or built without them" (Carpo, 2015). The relationship between architects and their tools is changing, the boundary between architect and machine is becoming blurrier, and our dependence on software is increasing. Patrik Schumacher discusses how, at Zaha Hadid Architects, the use of technology has become as much a requirement as a choice; they develop and tweak software according to what is necessary for the intended design concept (Sisson, 2016). Similarly, the design of Frank Gehry's Guggenheim required the development of a new iteration of the software CATIA, due to its geometric complexity (Chang, 2015). Grasshopper, the Rhino plug-in, is an example of a program used in architecture school and in many firms; it allows designers to map sets of design relationships graphically and programmatically into an interactive system. Virtual reality is also becoming a popular interactive tool in the process of architectural design: being able to walk through a space at one-to-one scale allows users to understand how the building functions.
The connection between machine and human interaction is increasing, but when will the machine become the critical thinker, the designer?

Alexander, C. (1964). The Program. Notes on the Synthesis of Form, 73-84.
Carpo, M. (2013). The Digital Turn in Architecture 1992-2012 (AD Reader). Chichester: Wiley.
Chang, L. (2015, May 12). The Software Behind Frank Gehry's Geometrically Complex Architecture. Retrieved from https://priceonomics.com/the-software-behind-frank-gehrys-geometrically/
Cross, N. (2001). Can a machine design? Design Issues, 17(4), 44-50.
Cross, N. (1977). Computer Synthesis. The Automated Architect, 73-84.
Encarnação, J., Lindner, R., & Schlechtendahl, E. (1990). Computer Aided Design: Fundamentals and System Architectures (Second, rev. and extended ed., Symbolic Computation, Computer Graphics – Systems and Applications). Berlin, Heidelberg: Springer. doi:10.1007/978-3-642-84054-8
Robinson, D. (2008). Functions and Logic. Software Studies: A Lexicon, 105-7.
Sisson, P. (2016, May 16). Zaha Hadid's Influence on Engineering in Design. Retrieved from https://www.curbed.com/2016/5/16/11683794/zaha-hadid-architects-patrik-schumacher-engineering-legacy
Vidler, A. (2003). Toward a Theory of the Architectural Program. 59-74. doi:10.1162/016228703322791025.
Zarei, Y. (2012). The Challenges of Parametric Design in Architecture Today: Mapping the Design Practice. 4-100.


by Amélie Savoie-Saumure

The four texts written about program and function vary in tone and approach towards their meaning in architecture. The introduction of computers has indisputably revolutionized the way we analyze, work, and think about design. Although often seen pessimistically, and accompanied by a certain fear about the future of the profession, the benefits and the evolution have been remarkable: a process of trial and error in a world of unknown variables and possibilities.

Toward a Theory of the Architectural Program by Anthony Vidler explores the concept of program and its evolution from the 1960s until today. More specifically, he analyzes Reyner Banham's and John Summerson's theories of architecture and the effect of technology on the discipline. The shift brought about by the use of software, transitioning to a scientific methodology for how we tackle design, resulted in a change in the meaning of program. He argues that program once worked hand in hand with function, when architecture was about functionalism and the purpose and use of space. Nowadays, architecture has become a spectacle, a tool to convey a symbol or an emblem as a whole, rather than being centered on occupancy. Although quite a generalization, it is impossible not to acknowledge this tendency when looking at contemporary projects like the Bird's Nest in Beijing by Herzog & de Meuron, where architecture becomes iconic to a city, bringing tourism and international fascination. With the freedom that technology brings to the field come many explorations of the way we express ourselves through images and other mediums in order to clarify the tormented relationship between the architect and technology. The comparison of Archigram and Rem Koolhaas is particularly interesting for their respective reflections on how technology has contributed, and will contribute, to architecture.

Computer Synthesis by Cross focuses on the development of architectural software in the 1960s and 70s. Although it takes a more descriptive approach to the different purposes and performance of each program, there is an overall will to use the computer as a more direct aid in the synthesis stage of design problems. The search for a mathematical value of spaces, a way to quantify the different elements of what is considered a 'good' space, is a theme shared with Vidler's text. Cross's examples give us a different view of software: fifty years ago, the computer was seen as a tool to evaluate and spark the architect's creativity. Each program tackled different constraints, whether optimization, cost, movement, and so on. There was a clear dialogue between the designer and the machine, a constant back and forth of evaluation and modification until the user was satisfied. We can easily compare this approach to today's vision of software in architecture, where programs are now integrated and an unavoidable part of the architect's life. They have propelled design to limits never achievable otherwise, pushing the boundaries of construction and design and making architecture incredibly faster and structurally complex. They also leave a clear imprint on the architecture they help produce. Taking Revit as an example, the software facilitates the construction drawing process and became a must in every major architectural firm. The benefits are countless, and it simplifies coordination between disciplines in many ways. The major downside comes with its limitations: being oriented toward engineering rather than design, it deeply affects the way we conceive buildings. Instead of relying on software as a complementary tool for the architect, we have entered a dangerous era where software shapes the design as much as the architect does.
In effect, the fear of the autonomous machine that Banham dreaded, the robot architect of the last decades, has not quite materialized as anticipated. It can definitely make us wonder whether it is still a possibility, whether we will ever reach a time when the architect is no longer needed.

This phenomenon is argued and dissected in The Program by Christopher Alexander. Contrary to Banham, he discusses the improbability of the machine taking over. A main component of the design process is invention, which in his opinion is not obtainable by technology. Although the machine surpasses the human in analyzing data and processing information, its inventive capacity is limited: the number of variables is too high and too unclear to replace the action of a trained designer. The design process is feasible in three schemes, all of which revolve around the idea of design being based on the architect's view of a context. There lies the main basis of his hypothesis: technology is impartial and consequently cannot answer a given situation with an adequate response. If that is the case, we can ask ourselves whether a shift needs to happen in the way we develop software, so that it is developed not as an extra hand for the human, but rather as a process solution to the computation of selected variables. It is possible to connect this idea to Rem Koolhaas' Wired magazine publication, where he described contemporary architecture not through its history or theories, but through facts about the world and culture surrounding the architecture. In that sense, technology and this day and age have allowed an opening of the profession: previously much centered on its own past and concepts, it now incorporates, at a conceptual level, the political, economic, environmental, and social dimensions of the world that surrounds it.

Functions and Logic by Robinson, although oriented towards a mathematical-logic view of technology, comes to the same conclusion as Alexander's text. Computers have a unidirectional and linear way of processing information, where a function results in an output or an action. The capacity for logic, being able to read between the lines and to assimilate information in order to conclude on a problem, is unique to humans. This process, although feasible by computers to a certain extent, is described not as a function but as a relation. Google's scripting, its creation of a matrix and ability to connect words to a result, is a new definition of logic, one specifically created for and by computers. An introspection is then necessary to re-evaluate what function and program mean nowadays in the use of software in architecture, as technology has utterly changed design.

form ¦ formalism

By Dina Abdul-Malak

Formalism and Forms of Practice
There is not one definition of formalism, only arguments and examples from the philosophy of the 19th and 20th centuries applied to practices like literature, music, visual arts, and architecture. It first came up in German philosophical aesthetics, where it referred to a property of the way we see objects. In terms of architecture, Hays claims that the formalist position is about how the parts have been put together and how the integrated system can be understood without the external references in which the building exists. Formalism is most clearly understood when compared to contextual criticism, which places the object in a broader network of human relationships within society, whereas formalism is not concerned with the context. Russian formalism, concerned with poetry, developed in reaction to the failure of interpreting texts by relating them to the historical context and politics of the time. In German formalism, form was no longer thought of as an expression of content but as co-existing with the idea. Hence, throughout the 20th century, formalism and contextual hermeneutics in architecture formed a constant debate, and some tried to form a synthesis between formalist autonomy and the societal change to be addressed. For Mieke Bal, aesthetics is also a context, so formalism necessarily fails; formalism then became a way of experiencing art and architecture rather than a form of criticism. For Greenberg, the visual aspects of a painting outweigh its narrative content, and he dismisses works engaging the environment. In architecture, form and formalism always return to the link between form and function, as the latter is the force through which form emerges. Architects like Wagner and Mies van der Rohe argue that form is a by-product of construction.
Those opposing formalism in architecture focus on its denial of the architect's ethical responsibilities, arguing that buildings like Eisenman's are not architecture; architecture must have a meaning and show it, some emphasizing the transparency of the building in relation to its purpose. There have also been experiments in formlessness that blur the boundaries between inside and outside and that sparked debates on how they attempt to undo form and formalism. Formalism is ironic in that it is constantly re-worked with the new theories and practices of the era involved; thus it is not a fixed argument.

The Science of Design: Creating the Artificial
Two to three decades after WWII, the natural sciences, which are concerned with how natural things are and how they work, practically drove out of professional training the sciences of the artificial, which are concerned with how to make artifacts that have desired properties and with how to design. The main cause is that universities sought academic respectability and chose subjects that are intellectually tough, as opposed to design and the artificial sciences, which were seen as intellectually soft. The text argues that it is possible to merge the natural sciences and the artificial sciences on a professional level, and presents the topics needed to incorporate such a theory of design in curricula: 1) utility theory and statistical decision theory as a logical framework for rational choice among given alternatives; 2) the body of techniques for actually deducing which of the available alternatives is the optimum; 3) adaptation of standard logic to the search for alternatives; 4) the exploitation of parallel, or near-parallel, factorization of differences; 5) the allocation of resources for search to alternative, partly explored action sequences.
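Topic 1, rational choice among given alternatives, can be sketched in a few lines (the schemes, probabilities, and utilities below are invented purely for illustration): each alternative is a set of possible outcomes, and the rational choice is the one with the highest expected utility.

```python
# A minimal sketch of rational choice among given alternatives:
# pick the design alternative with the highest expected utility.
# Alternatives, probabilities, and utilities are illustrative only.
alternatives = {
    "scheme_a": [(0.7, 10), (0.3, 2)],  # (probability, utility) pairs
    "scheme_b": [(0.5, 8), (0.5, 6)],
}

def expected_utility(outcomes):
    # Weight each outcome's utility by its probability and sum.
    return sum(p * u for p, u in outcomes)

best = max(alternatives, key=lambda a: expected_utility(alternatives[a]))
print(best)  # scheme_a
```

Here scheme_a wins (expected utility 7.6 versus 7.0), illustrating how decision theory reduces "rational choice" to a comparison of weighted sums.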

Weird Formalism
This text is about the logic of computation, where algorithms are not instructions to be performed but actualities that select, evaluate, transform, and produce data, and about computation's ingression into culture. It argues that incompleteness in axiomatics is at the core of computation. The addition of computational randomness to finite procedures allows a semi-open architecture of axioms and an automated processing that is not predetermined but tends toward new determinations. It puts forward a new digital space that no longer matches striated space, which is linear and whose points do not change over time; one that draws on morphogenesis and the curvilinear shapes of blob architecture made possible through mereotopology, which reveals that infinity is intrinsic to parts, and that infinity is random quantities of data reprogramming the algorithmic procedures in digital design. With prehensions, algorithms become actualities, prehending the formal system in which they are scripted and the data inputs they receive; the degree of prehension of algorithms then characterizes computational culture. This algorithmic production of digital spatiotemporalities implies 1) that logic is becoming an aesthetic operation and 2) that computational aesthetics is characterized by the algorithmic prehension of incomputable data. It also claims that digital architecture is unable to produce spatiotemporal qualities of experience because it deals only in quantities. Finally, these algorithms become actual modes of thought, soft thoughts, concerned with the existence of a mode of thought, decision-making, and mentality that does not have a direct relation to human thinking.

Languages of Architectural Form
This text introduces rules for the combination of shapes in architecture. One way is through the grammatical combination of parts, like Alberti, who avoided the combination of arch and columns. The simplest grammatical rule is to display various examples of what to do and what not to do, as Vitruvius did. Another approach is to state generalized prescriptive rules, like the Renaissance theorists. Others used diagrams to show how walls and entrances should be treated and substituted. These substitution rules become more interesting when applied repeatedly, as in the Taj Mahal. An elaboration of this technique considers a sentence, which always consists of a noun phrase followed by a verb phrase. Through the replacement rule and a set of rules establishing the properties of noun and verb phrases and the variables, the result is a derivation that is grammatically correct. Reductions are when the rules are applied in reverse to determine whether a string is a sentence in the language. These syntactic rules put forward an architectural type, and the goal of the designer is to introduce the type appropriately to the moment and context; the process of finding a solution to a design problem is then one of trial and error to determine whether candidate solutions are acceptable.
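The noun-phrase/verb-phrase derivation described above can be sketched as a tiny rewriting system (the rules and vocabulary here are illustrative, not the text's own): each rule replaces one symbol with a string of symbols until only terminal words remain.

```python
# A toy phrase-structure grammar: replacement rules map a symbol to
# the string of symbols that substitutes for it in a derivation.
rules = {
    "SENTENCE": ["NOUN-PHRASE", "VERB-PHRASE"],
    "NOUN-PHRASE": ["the", "NOUN"],
    "VERB-PHRASE": ["VERB"],
    "NOUN": ["column"],
    "VERB": ["stands"],
}

def derive(symbols):
    # Repeatedly apply replacement rules until no symbol is rewritable.
    while any(s in rules for s in symbols):
        out = []
        for s in symbols:
            out.extend(rules.get(s, [s]))
        symbols = out
    return symbols

print(" ".join(derive(["SENTENCE"])))  # the column stands
```

Running the rules in reverse on "the column stands" would be the reduction the text mentions: checking whether a string is a sentence of the language.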

A Boolean description of a class of built form
This text looks at Boolean algebra and exposes the concepts it shares with architectural form, which are inevitable although rarely made explicit. With algebra, the mathematical encoding of shapes becomes relevant to architecture, since operations such as bringing building components together, laying out planning and structural grids, and organizing space all have equivalents in mathematical algebra, and since both mathematicians and designers share an aesthetic desire to systematize and to order. The text works through the technical details of the encoding, showing how one can go from numbers to simple rectangular shapes, to 2D architectural plans, and finally to 3D building volumes, keeping in mind that the encoding can also accommodate some non-rectangular shapes, such as hexagonal and triangular ones.
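One way to see the spirit of this encoding (though not the text’s actual notation, which is richer) is to represent a rectangular plan as the set of unit grid cells it occupies, so that combining built forms reduces to Boolean set operations. Everything below, including the cell dimensions, is my own illustration:

```python
# Encode a plan as the set of grid cells it occupies; combining forms
# then maps directly onto Boolean algebra (union = OR, intersection = AND).

def rect(x0, y0, x1, y1):
    """Cells covered by an axis-aligned rectangle on a unit grid."""
    return {(x, y) for x in range(x0, x1) for y in range(y0, y1)}

main_block = rect(0, 0, 6, 4)     # a 6 x 4 bay
wing       = rect(4, 2, 8, 6)     # an overlapping wing

union     = main_block | wing     # Boolean OR:  combined footprint
overlap   = main_block & wing     # Boolean AND: shared cells
wing_only = wing - main_block     # difference:  cells unique to the wing

print(len(union), len(overlap), len(wing_only))  # 36 4 12
```

The same set-of-cells idea extends to 3D volumes by adding a third coordinate, which mirrors the text’s progression from plans to building volumes.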

What makes an architecture meaningful, the object in itself, what it represents, or how it is represented in its context?
Does form always need to follow function in architecture? Where is the limit of how much architecture can represent something other than itself?
How much do you think the McGill School of Architecture is successful in incorporating the science of the artificial in the curriculum?
What can soft thought bring to the design field, and how much can it replace cognitive thought in design?
Do you think that these rules limit the design process of the architect?
What defines universal rules that could be applied to design itself?
Can a set of rules in designing result in a generic architecture?
How limiting is mathematical encoding for design, given that you are designing a space not by thinking about its spatial qualities but quantitatively, through algebra?
How limiting is it to design in a virtual world where the limitations of real-world practice (scale, materials, engineering) are not taken into account, and where there is no connection to the real world?
How flexible is designing with computation or computer-aided design?


more on “form”

Adrian Forty on Form and Formalism (full reference: Forty, Adrian. 2012. Words and Buildings: A Vocabulary Of Modern Architecture. London: Thames and Hudson)

Sam Rose on the Significance of Form (full reference: Rose, Sam. 2017. “The Significance of Form.” Nonsite.Org).

more on generative design

Bill Mitchell on The Automated Generation of Architectural Form (full reference: Mitchell, William J. 1971. “The Automated Generation of Architectural Form.” In Proceedings of the 8th Design Automation Workshop, 193–207. DAC ’71. New York, NY, USA: ACM).

Herbert Simon on The Architecture of Complexity

Old “new” ideas:

language ¦ syntax

by Adriana Mogosanu

The readings address linguistic theory’s emergence as a highly influential system of thought during the 20th century, from which architects derived a methodological framework for producing theories and objects. Theoretical developments in linguistics influenced the structuralist and deconstructivist movements in architecture, notably through Saussure’s “system” grounded in humanistic thought, Chomsky’s generative grammar, and Derrida’s deconstructionist notions. In linguistics, the paradigm shift begins with Saussure’s view of laws as providing meaningful, credible explanations, satisfying models of thought. Jameson’s “The Linguistic Model” situates Saussure’s thought in the context of pre-war descriptive linguistics under the hegemony of the Neogrammarians, and tracks the evolution of his methodology toward the notion of a “system” comprising synchronic and diachronic dimensions. Saussure challenges the credibility of contemporary methods of investigation rooted in historicism and its codification as he begins to distinguish between causes external to a phenomenon and those intrinsic to it. In his search for meaningful explanations of the phenomenon of language, and in the absence of an object of study, he arrives at a distinct view that encapsulates previous methods and produces a counterpart liberated from historicism. By introducing the notion of a “system” based on values rather than units, complete at all times, with synchronic and diachronic dimensions, “he is able to function within the realm of two mutually exclusive forms of understanding”, addressing both the structural and the historic. The system is understood in terms of values and relationships, based on the perception of identities and differences. “What distinguishes a sign is what constitutes it”.

Patin’s “From Deep Structure to an Architecture in Suspense” follows the transition in Eisenman’s architecture and design theory from his concern with creating an autonomous architecture, rooted in the syntactic dimensions coined via Chomsky, to a state of conflict, aporia, leading to a decentralized notion of architecture. Eisenman’s early work seeks to generate meaning by structuring space through the inherent logic of forms. His early architecture relies on the absorption and interpretation of linguistic notions into design through his theoretical writings, which become “descriptive and prescriptive”. As his design process veers toward post-structuralism, his work becomes a fictional text, dependent on a misreading. No longer seeking autonomy, his architecture aims to be generative, revealing and actively participating in the interpretation of social construct and cultural context. The text is further concerned with how architectural theory bridges concepts from linguistic analysis to speculate on notions of meaning, culture, and power, looking at aesthetics and autonomy in art. As minimalist art achieved its goal of autonomy, independence from context, its interpretation as an art object became increasingly dependent on institutionalization. The museum curates, playing an active and coercive role in the viewer’s experience of space and interpretation of art, while itself remaining an invisible apparatus.
Eisenman looked to Derrida’s deconstructivist theories to formulate, in the Wexner Center, an architecture able to reveal the museum apparatus, to make its internal conflicts visible, and to challenge the autonomy of aesthetics in the objects presented.

Dutta’s “Linguistics, Not Grammatology” broadly situates the discussion of linguistic models in architectural discourse and education in the postwar period, with an overview of the streams of thought generated by the introduction of new concepts and vocabulary stemming from the study of language. These surround the tension between the natural and the artificial, creativity and technology, the reduction of nature to machine, cybernetics, and notions of organism. Architectural design sits uneasily as a boundary science, concerned with using the machine for design but not with designing the machine itself. The discussion then narrows from the broad sphere of debates surrounding the linguistic model to architectural education, particularly at MIT, where academic trends were permeated by behaviourism, a belief in organic grammars, and a general sense of pedagogical confusion.
It is fitting, then, to look at Discourse, Porter’s dissertation at MIT, as an applied example of the problem-solving tendencies in design schools of that period. Discourse is a computer language, a programmable assistant for designers. Porter finds that distilling the process of design into a usable descriptive code that can then be translated into a program becomes as problematic as writing and developing the program itself. Tension arises at the movable boundary between the human and computer fields of action, where the action of the designer is not well defined or predetermined, and where the designer’s process is mutable and will be influenced by the computer. Discourse’s intended capacity to help the designer achieve a sense of mastery, to aid the designer’s communication with himself, stands somewhat in contradiction to the disruption necessarily provoked within design behaviour by externalizing part of a typically interior process. Furthermore, design is not only concerned with manipulation and decision-making based on data; it also contains a visual, investigative, and productive dimension. How does the visual exist within such a codified, symbolic realm of data and concise descriptive entities? Porter describes the process of design in depth, but acknowledges a limit to how far those visual tools can be translated into the program.

Due to the absence of an object of study per se, the study of language called for a distinct methodological approach and produced analytical models distinguishable from those of other disciplines. Its methods for parsing intrinsic phenomena into morphological parts, its systems thinking, transformational rules, generative capacity, and processes based on signs and signifiers found particular appeal in a field of design concerned with aesthetic and formal autonomy, in the context of intellectual movements shifting from descriptive processes based on extrinsic, contingent influences and historicism toward intrinsic, universalist laws, and finally tending towards their deconstruction. Assimilating linguistic tenets into architectural theory was perhaps symptomatic of a concern for creation based on a nuanced understanding of the world that is not subordinate to external causality, a search for methods and vocabulary to conceive of creation as stemming from intrinsic laws, in a relational manner.

code ¦ script

by Alexander Bove

In the article Code (or, How You Can Write Something Differently), Friedrich Kittler traces the history of codes, more precisely how these sequences of signals evolved. Factual evidence implies that functional codes were preceded by true alphabets, dating back to ancient civilizations. The evolution of code can be observed at various moments in time, whether in cryptanalysis during the Roman Empire, in Alberti’s polyalphabetic code, or in the earliest iterations of ciphering and deciphering intrinsic to Morse code. This historical retrospective, provided by Kittler, equips us with both knowledge and perspective, allowing us to question the relationship between mathematics and encryption today. Alan Turing answers this question through his theory of computable numbers, which posits that “a finite quantity of signs belonging to a numbered alphabet can be reduced to zero and one” (Kittler 2003).
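The quoted idea, that any finite, numbered alphabet can be reduced to sequences of zeros and ones, is easy to demonstrate. The five-bit width in this sketch is my own choice, just wide enough to number the 26 Latin letters:

```python
# Reduce a numbered alphabet (A=0, B=1, ...) to sequences of 0 and 1.
# The 5-bit width is an illustrative choice: 2**5 = 32 covers 26 letters.

def to_binary(text, width=5):
    return " ".join(format(ord(c) - ord("A"), f"0{width}b") for c in text.upper())

print(to_binary("CODE"))  # 00010 01110 00011 00100
```

Nothing about the alphabet survives in the output except position numbers, which is precisely the reduction Kittler attributes to Turing.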

Turing also vehemently believed that the primary purpose for which computers were created was to decode plain human language. Kittler closes on several stimulating ideas, one of which remarks “how far computing procedures [can] stray from their design engineers” (2003). This statement raises the question of whether designers retain autonomy and control over architecture when it is subject to code-generated computer software. Kittler also states that “technology puts code into the practice of realities […], it encodes the world” (2003), which leads us to ask: if codification and scripting are intrinsic parts of present-day society, how will they begin to affect the social aspects of our daily lives?

Through a summary of the salient points of a panel discussion titled “Vivre et Parler”, the text Language, Life, and Code by Alexander Galloway analyzes the relationship between the three entities in its title. The subject is discussed by four guests: an anthropologist, a molecular biologist, a linguist, and a geneticist. During the discussion, the panelists share their differing views on the links between language and DNA. Jakobson and Lévi-Strauss offer the most interesting views on this matter: the former states that both “operate through permutations of a relatively small number of basic units” (Galloway 2006), while the latter speculates that DNA cannot be understood as a language because it carries no signification. Instead, Lévi-Strauss stresses that one should ask whether the meaning of DNA lies in the code itself or in the relationship between coding elements. Lévi-Strauss’s view on language and DNA was of particular interest because he implies that language is symbolic, whereas DNA is simply an inert molecule.

While language is understood and ‘decoded’ via a human cognitive process, DNA merely computes, whether through the living cell or through genetic algorithms. Nevertheless, Galloway emphasizes that code is omnipresent in both language and life. In light of these points, are humans and their cognitive decoding process a necessary interface between computer language and the physical world? Without humans, is computer language meaningless? More importantly, how would a liberated computer language, free from the finite nature of linguistic possibilities, change the relationship between DNA (humans) and computer language (technology)?

Written by Malcolm McCullough, the first Autodesk product manager for architecture in the mid-1980s, Scripting discusses the progress of scripting and coding and its relevance in architecture today. According to the author, it is simply too much work to conceive and construct every design element uniquely; for this reason, user-friendly design software, in which the programming work has been executed by programmers in the background, is emerging as the tool of choice in the marketplace. McCullough makes clear that one of the most revolutionary impacts of scripting is its ability to add a whole extra level to design thinking. In doing so, it also combats many of the misunderstandings people may have about it. Far from hampering creativity, an expressive medium that employs scripting within well-established constraints is where the richest iterations occur; likewise, dabbling in form on a computer is not a distraction, since software accelerates and articulates conjecture and tangible design speculation. In disproving these misconceptions, McCullough shows that “the role of computers in design is seldom one of automation” (2006), and that design computing is rather a world to be explored and tweaked with finesse. Historically, he traces the origins of scripting to AutoCAD, the earliest example of this technology in a host program. The evolution of scripting eventually led to shape grammars and to the parametric software that ensued; design computing was no longer just a tool, as it influenced the designer’s choices and, by extension, the resulting form. On the foundations of these technological advancements, “programming culture has been rediscovered in architecture” (McCullough 2006). The author cites several reasons for this resurgence, including digital fabrication, cultural expression in form, and craft personalization, of which the last is the most distinct.
In my opinion, it is precisely the ability to customize one’s workspace and scripting patterns that makes programming so enticing for architects and designers. Based on this, how can architectural software become an open-source design tool amongst industry professionals? In terms of efficiency and pragmatics, would a universal coding language allow greater communication and collaboration across disciplines, or would it simply constrain the freedom of the designer?
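McCullough’s “extra level” of design thinking can be made concrete with a small generative script: define a rule, inspect the family of forms it produces, then adjust the rule and regenerate. The bay-width rule below is entirely invented for illustration:

```python
# A generative rule for a facade: each bay is a fixed ratio of the
# previous one. Designing happens at the level of the rule (base, taper),
# not at the level of individual elements.

def facade_bays(n, base=3.0, taper=0.9):
    """Generate n bay widths (metres), each scaled by `taper` from the last."""
    widths, w = [], base
    for _ in range(n):
        widths.append(round(w, 2))
        w *= taper
    return widths

print(facade_bays(5))              # [3.0, 2.7, 2.43, 2.19, 1.97]
print(facade_bays(5, taper=0.8))   # tweak the rule: [3.0, 2.4, 1.92, 1.54, 1.23]
```

The point is the workflow, not the geometry: the second call changes one parameter and regenerates an entire variant, which is the sense in which scripting sits a level above drawing each element.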

In the article Digital Style, Mario Carpo examines digital tools in the context of architecture and how they have affected architectural thinking and style as a whole. Introducing the topic, Carpo puts forward the notion of collective intelligence by comparing wealth of knowledge and response accuracy to the statistical anecdote in which a crowd guesses an ox’s weight; he posits that collective intelligence is the root of code and scripting. Despite the convenience and searchability this provides, collectivity also gives rise to other important questions raised by Carpo in his text. Whereas documents in print are fixed entities, digital notations can change at any time and are thus in a “permanent state of interactive variability” (Carpo 2011). In theory, this fluid state sounds promising, as it implies unlimited possibilities for design proposals. However, what impact does interactive variability have on tangible aspects of the built environment? According to Carpo, although some users occasionally introduce faulty script into an open-source environment, the volume of interactions will eventually correct the code, creating an autonomous system. Moreover, this state of permanent drift has resulted in the “open-ended logic of ‘aggregatory’ design” (Carpo 2011). In addition to affecting aspects of our social life, ‘aggregatory’ digital making has influenced building design through parametricism and participatory design software (i.e. BIM). In discussing both, Carpo tackles the question of authorship in design, as these forms of software are participatory in nature and intrinsically imply hybrid agency. Concluding the text, Carpo speculates on the future of the design profession, proposing a scheme in which “one agency or person initiates the design process, then monitors, prods and curbs, and occasionally censors the interventions of others” (2011).
In such a scheme, with whom does the authorship of the resulting design lie? How is this form of working at times synonymous, and at others different from methods of practice currently being carried through in architectural firms?

Several parallels exist between the assigned texts, notably the ideas that code and language share a complex, interdependent relationship, that structured organization is inherent to a functional piece of code, that scripting adds an extra layer to design thinking, and that code and aggregatory thinking affect human social life, among many others. Considered holistically, the points raised by each author provide us with perspective and a contextual understanding both of the benefits of placing code and scripting at the epicenter of our professional work and of the dangers we should be wary of. While some of these dangers can be remedied simply by maintaining a broad, objective point of view, others are much more difficult to evade as code cements itself into present-day realities.


by Daniel Campanella

In Mario Carpo’s Digital Style, the author discusses design through digital tools in architecture. He begins by introducing the statistical concept of the accuracy of averages, in which precision grows with the number of estimates collected; market prices behave similarly. Hence, he argues that the design of objects must follow the same participatory methodology. Furthermore, a digital asset belongs to collective design and “cannot be reduced to the expression of a vote or a number” (Carpo 2011). If the assumption of open-source design is accepted, we must assume that any digital object is non-stable. Carpo relates software to digitally designed architecture through participatory design; in fact, he contends that architects are slowly losing authorship over their formal creations. Thus, with the advent of BIM and parametric design, buildings will come to be designed by committee, and architecture will become an approximation of different means.
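The accuracy-of-averages idea Carpo invokes is easy to simulate: individual guesses of a true value are noisy, but their mean tightens as the number of guesses grows. A rough sketch, with an invented noise level (1,198 lbs is the weight usually cited in Galton’s ox anecdote):

```python
import random

# Simulate a crowd guessing a true value: each guess is the truth plus
# Gaussian noise, and the mean of the guesses converges toward the truth.

rng = random.Random(42)
TRUE_WEIGHT = 1198  # lbs; the figure usually cited from Galton's ox story

def mean_guess(n, noise=150):
    """Mean of n noisy guesses of TRUE_WEIGHT (noise level is illustrative)."""
    guesses = [TRUE_WEIGHT + rng.gauss(0, noise) for _ in range(n)]
    return sum(guesses) / n

for n in (10, 100, 10000):
    print(n, round(mean_guess(n)))
```

With ten guesses the mean can still miss by tens of pounds; with ten thousand it lands within a pound or two, which is the statistical intuition behind Carpo’s collective intelligence.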

Scripting was written by Malcolm McCullough, Autodesk’s first product manager for architecture in the mid-1980s. McCullough states that, thanks to improved user interfaces, designers are now scripting unconsciously. He says, “it makes no more sense to design by drawing each line and modelling every surface than it does to drive an aeroplane down the highway” (McCullough 2006). There is no longer a need to be a specialist to use software. Moreover, these modern tools are similar to the codification of the alphabet, where a finite number of characters is set and other possibilities are excluded. In this way, digital design tools may only create a determinate number of possible creations; thus, computers would have to create their own language.

Alexander Galloway’s article Language, Life, and Code discusses the notions of language, information, and DNA through a panel consisting of an anthropologist, a molecular biologist, a linguist, and a geneticist. The first point of conversation is the advent of cryptography in the Second World War, when coded messages were created by swapping each letter for an alphabetical counterpart a set number of positions away in the list. The conversation then elaborates on the cross-disciplinary uses of code. The concept of code starts with organization, which can be seen both in language and in DNA. However, “DNA cannot be understood as a language”, as signification does not exist in it (Galloway 2006). Moreover, language is symbolic, but can the same be said of letters? Scripting has added a new dimension to design thinking. The author states, “[f]irst you set up some rules for generating forms, then you play them to see what kind of design world they create, and then you go back and tweak the rules” (2006). In architecture, the composition of scripts is thereby submitted to improvisation. In this way, software could create a discontinuity between the object and the designer, as “parametrics work better in domains whose subject is engineered form itself” (2006).
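The letter-swapping scheme described above is the classical Caesar shift, which can be sketched in a few lines (the example message is my own):

```python
# Caesar shift: replace each letter with the one a fixed number of
# positions further along the alphabet, wrapping from Z back to A.

def shift(text, key):
    return "".join(
        chr((ord(c) - ord("A") + key) % 26 + ord("A")) if c.isalpha() else c
        for c in text.upper()
    )

msg = shift("ATTACK AT DAWN", 3)
print(msg)             # DWWDFN DW GDZQ
print(shift(msg, -3))  # ATTACK AT DAWN
```

Because the scheme preserves letter frequencies and word boundaries, it still depends on the semantics of the underlying language, which is exactly the weakness Kittler notes before Morse code and machine ciphers optimized the idea.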

In Friedrich Kittler’s Code (or, How You Can Write Something Differently), the author examines the historical foundations of codes through signal sequencing. Codes started out as communication technology, and codification became viable through a finite set of characters, an alphabet. Code was first used in Roman times, when it was synonymous with cryptography; this method, however, still relied on semantics, and the advent of Morse code therefore represented an optimized code. Today, the code processed by computers must pass Kolmogorov’s test, by which the program should be shorter than the output it generates. This became conceivable with the Turing machine of 1936, which was able to process finite whole numbers into infinitely long numbers. The author ponders the idea of a non-human language and whether humans would be able to understand it; such code would have to be modelled for both syntax and semantics. At present, code is used to organize the world, and hence it simulates reality. The author ends by questioning whether computers could create their own code from their environment.
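Kolmogorov’s criterion, as paraphrased here, says a string counts as compressible (non-random) when a program shorter than the string can generate it. A toy illustration, with an invented example string:

```python
# A string is compressible when a description shorter than the string
# itself suffices to produce it. Here an 11-character expression
# generates a 2000-character string.

program = '"ab" * 1000'            # an 11-character description
output = eval(program)             # the 2000-character string it generates

print(len(program), len(output))   # 11 2000
```

A genuinely random 2000-character string would admit no such short description; its shortest program would be roughly as long as the string itself.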

The common thread in these articles is the need for organization through scripting. In fact, coding and scripting seem to be cautiously bound by a finite set of parameters, not allowing computers to create their own language. Would a language computers created for themselves be comprehensible to humans? Would it accelerate the simulation of reality? The concept of organization bleeds into the domain of digital architecture, where there is a preconceived notion that creativity is bound by the software. In my opinion, however, creativity is bound by the learning curve of these programs and the subsequent lack of scripting. Architecture has been a profession that strove for a total understanding of buildings and objects through drawings, models, and intuition. Currently, architectural practice lags behind its digital tools, and this failure to grasp new tools creates new specialists within the domain. Design by committee need not exist if the architect is the master of all.

model ¦ environment
