behaviors ¦ ⌈inter⌉actions

by Michael Carosello

_Warren M. Brodey | Soft Architecture

In his text, Brodey locates the true merit of so-called intelligent environments in their capability to react and adapt creatively to their surroundings. Current efforts result solely in environments that are inherently dumb and lacking in creative flexibility – capable only of repetitive actions and hindering man’s ability to evolve. Cybernetic thinking was ushered in with the writing of “Behavior, Purpose and Teleology” by Rosenblueth, Wiener, and Bigelow, who created a hierarchy of behaviours of an object in relation to its environment. These sets of characteristics ultimately imply evolutionary behaviour and a redefinition of environment: “given any object relatively abstracted from its surrounding for study, the behaviouristic approach consists in the examination of the output of the object and of the relations of this output to the input” (Brodey, 1967).

Rosenblueth et al. also encourage a new method of defining an object and its surroundings by considering the two as informing each other, creating an active feedback loop that implies a constant state of evolution. Brodey suggests that this method of feedback and prediction is key to teaching machines to ascend the established behavioural hierarchy, in the same way a human being must be taught in order to become intelligent. It is also important to differentiate between complicated and complex environments: complicated environments are incapable of adaptation and rely solely on redundancies and back-up components, therefore remaining stupid. Man cannot evolve in an environment incapable of its own evolution; ideally, the environment must grow alongside its user. This is the soft environment.
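
The feedback idea can be made concrete with a minimal sketch (my own illustration in Python, not anything from Brodey’s text): an object and its environment each adjust to the other’s last state, so neither can be described in isolation and the coupled pair settles into a shared trajectory.

```python
# A toy object-environment feedback loop: each informs the other at every
# step, so the "behaviour" belongs to the coupled pair, not to either alone.
object_state, env_state = 1.0, 0.0

for step in range(10):
    # The environment's response feeds back as the object's next input...
    object_state += 0.5 * (env_state - object_state)
    # ...and the object's output becomes the environment's next input.
    env_state += 0.3 * (object_state - env_state)
    print(f"step {step}: object={object_state:.3f}, env={env_state:.3f}")
```

After a few steps the two states converge on a shared value: the behaviour belongs to the loop, not to either part alone.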

  • Is such an environment truly attainable? If so, how immersive or extensive must it be? Does it potentially allow for manipulating the evolution of both the environment itself and of those inhabiting it? Consider also the concept of induced evolution/feedback introduced by an external force.

_Molly Steenson | A Theory of Architecture Machines

In this section, Steenson discusses Nicholas Negroponte’s ideas of AI and interaction through a theory of architecture machines. Negroponte drew from cybernetics and artificial intelligence to develop his theories into accessible and interactive interfaces, with the desire that they manifest as an integrated and fully immersive intelligent environment. This technological symbiosis between man and machine could be described only as a partnership, doing away with the idea of the computer as a mere slave to human beings. Instead, both are mutually engaged in the creative process, bridging the gap between the human and the computational. An open dialogue exists between the two, creating the potential for ideas unrealizable by either individually.

An architecture machine would rely heavily on the use of heuristics – defined as rules of thumb and provisional techniques for problem solving – in order to develop its models of human-computer interaction. These techniques would allow the machine to not only learn how to learn, but also learn the desire to learn, maturing in a way similar to humans. Negroponte envisioned architecture machines as “a continued heuristic modelling of models that learned from one another and that reflected the adaptation of human, machine, and environment” (Steenson, 2011). Building upon human-computer interfaces, the term “interface” takes on a new application, with Negroponte picking up on its definition as an apparatus designed to connect two scientific instruments so that they can be operated jointly. He felt that the interface acted as a feedback loop between an object to be sensed and a tool with which to sense it.
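
As a concrete, if simplistic, example of a heuristic in this rule-of-thumb sense (my own illustration, not Negroponte’s): a greedy nearest-neighbour routing rule that offers no guarantee of the best answer but usually yields a serviceable one quickly – exactly the kind of provisional, fallible shortcut a machine could monitor and revise as it learns.

```python
import math

# A rule-of-thumb heuristic: always visit the closest unvisited point next.
# Provisional and fallible -- it may miss the optimal route -- but cheap.
def nearest_neighbour_route(start, remaining):
    route = [start]
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(route[-1], p))
        remaining.remove(nxt)
        route.append(nxt)
    return route

points = [(0, 0), (5, 1), (1, 4), (6, 5), (2, 2)]
print(nearest_neighbour_route(points[0], points[1:]))
```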

Working with sensory qualities is key to the architectural process due to the possibilities for creating interactive environments, requiring architecture machines to act as sophisticated sensors. Negroponte believed that computers require a variety of sensory channels in order to accurately experience and manipulate the world, yet described them as the most sensory-deprived “intellectual engines” in existence. Architecture machine interfaces would instead need to be sensory to an almost human degree – as our bodies are vital to our learning process – allowing them to engage with the environment just as accurately and thoroughly. This would require these machines to be made in the image of human beings, going so far as to potentially inhabit a body similar to our own. Finally, architecture machines must work at an immersive scale, acting as an intelligent environment. Relating back to Brodey’s beliefs, the physical environment should be experienced as an evolving organism capable of flexible creativity rather than remaining in its current automated, stupidly complicated state.

  • How human must a computer be in order to become sensory? To what extent can a machine become sensory?
  • Can a non-human intelligence truly be capable of tasks beyond those for which it was designed? Can an algorithm ever accurately replicate the subjectivity of human nature?

_Andrew Pickering | Cybernetics and the Mangle

Pickering begins his article by posing the question: why have historians turned to cybernetics? A key reason is its appeal as a departure from the linear academic approaches familiar in the natural and social sciences. Cybernetics acts as a “shift from epistemology to ontology, from representation to performativity, agency and emergence” (Pickering, 2002). The first chapter, and the one of interest here, discusses the work of Ross Ashby and, more specifically, his homeostat.

As a current was passed through the device, a rotating needle acted as both an indicator and controller of the flow which, once amplified, became the output. This output was then used as the input to three other units, interconnected through electrical feedback loops. As a result, the system could exist in one of two states: stability – during which the needles on all units were at relative rest in their middle ranges – or instability – at which time the needles were driven to and stuck at their extreme positions. This duality was of particular interest to Ashby, who built the devices to randomly reconfigure themselves if a needle departed too far from its central, resting position. The system would then recalibrate and once again reach either stability or instability; if it remained unstable, the process would repeat until a stable configuration was found. This inevitability of eventually reaching a stable state was key to the design of the homeostat, an example of what Ashby referred to as an ultrastable system.
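
The logic of ultrastability is simple enough to sketch in a few lines of code. The following is a loose toy model (my own, with invented parameters, not a faithful reconstruction of Ashby’s circuitry): four coupled units drive one another’s needles, and any needle hitting an extreme triggers a random rewiring of its unit, until the system as a whole happens upon a stable configuration.

```python
import random

N, LIMIT, STEP = 4, 1.0, 0.05   # units, "extreme" needle position, time step

def random_wiring():
    # Stand-in for Ashby's uniselector: draw a fresh random configuration.
    return [random.uniform(-1.0, 1.0) for _ in range(N)]

weights = [random_wiring() for _ in range(N)]
needles = [random.uniform(-0.5, 0.5) for _ in range(N)]
rewirings = 0

for t in range(100_000):
    # Each needle is driven by the weighted sum of every unit's output.
    drives = [sum(w * n for w, n in zip(ws, needles)) for ws in weights]
    needles = [n + STEP * d for n, d in zip(needles, drives)]
    for i, n in enumerate(needles):
        if abs(n) >= LIMIT:                 # needle driven to an extreme:
            weights[i] = random_wiring()    # randomly reconfigure the unit
            needles[i] = random.uniform(-0.5, 0.5)
            rewirings += 1
    if all(abs(n) < 0.01 for n in needles):  # all needles at relative rest
        print(f"stable after {t} steps and {rewirings} rewirings")
        break
```

Each run settles after a different number of rewirings; as with the homeostat itself, one cannot tell from outside which stable configuration the search will land on.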

What was unique about this system was its apparent liveliness, otherwise absent in the machines of its time. No other machine was capable of random reconfiguration in response to inputs. This gave the homeostat a sort of agency, allowing it to act upon internal factors rather than previously specified external ones. The purpose of this was to reiterate the perception of the world as a lively environment rather than a static one that lends itself exclusively to representation. One could never tell from the outside how the homeostat would reconfigure itself next. It was conceived as an electromechanical proto-brain: a model of the brain as an adaptive controller of behaviour, albeit one which would be inefficient in more complex scenarios with many variables to consider.

  • Pickering states that the attraction of scientists to cybernetics is the mystery involved in its deviation from linear or Newtonian practices. Does the cybernetic or linear approach seem to be better suited to the sciences?

_Lucy A. Suchman | The Problem of Human-Machine Communication

Suchman raises the question of the distinction between artifacts and intelligent others. Studies exploring the impact of computer-based artifacts on a child’s concept of the difference between alive and not alive, or person and machine, found that children tended to attribute aliveness to objects on the basis of autonomous motion or reactivity; they reserved the perception of humanity for those showing signs of emotion, speech, or thought. The general view of computational artifacts thus becomes one of “almost alive”, yet still distinct from human beings. The challenge that then arises is the separation between the physical and the social: the things one designs, builds, and uses versus the things with which one communicates. Here, too, the definition of interaction now extends to what goes on between people and machines.

Historically, automata, or devices capable of self-regulation in ways commonly associated with living beings, were tied solely to simulated animal forms. Human-like automata have long existed in the form of statues with mechanisms allowing for movement or speech, and humans have imbued them with minds and souls even when familiar with those internal mechanisms. Julien de La Mettrie argued that the vitality of living beings relies on the organization of their physical structure; the mind, on this view, is best considered an implementable abstract structure. Two points then arise: first, that given the right form, a machine with finite states can produce infinite behaviours (sketched below), and second, that what is essential to intelligence can be abstracted and embodied in an unknown range of alternate forms. The result is that reasoning and intelligence are not limited to human cognition, and can therefore be implemented in intelligent artifacts. The argument then arises that cognition literally is computational, as there is no reason to differentiate between people as “information processors” or “symbol manipulators”.

Intelligence, on this account, is the manipulation of symbols. Yet the most successful automata are expert systems, which process large amounts of data, and industrial robots made to perform routine tasks and repetitive assembly. Expert systems possess minimal capability for sensory input, often limited to a keyboard or a human operator. Industrial robots have relatively more developed sensory systems but are confined to specific tasks within extremely controlled environments. Still, the “intelligence” of these machines, when it comes to direct interaction with their surroundings, is less than that of a two-year-old child.
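
The first of those two points – finite states, infinite behaviours – can be illustrated with a toy example of my own (not drawn from Suchman): a transducer with only two states whose visible behaviour nevertheless ranges over infinitely many distinct traces, one for each possible input stream.

```python
# A two-state machine: it toggles between "even" and "odd" on each '1' read.
# Finitely many states, but over all possible input strings it produces
# infinitely many distinct behaviours (state traces).
def step(state, symbol):
    if symbol == "1":
        return "odd" if state == "even" else "even"
    return state

def run(inputs, state="even"):
    trace = [state]
    for symbol in inputs:
        state = step(state, symbol)
        trace.append(state)
    return trace

print(run("0110"))  # ['even', 'even', 'odd', 'even', 'even']
print(run("111"))   # ['even', 'odd', 'even', 'odd']
```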

When considering human-computer interaction, the attribution of purpose to computers stems from their immediate reaction to actions performed by the user. This contrasts with earlier systems, on which commands were queued and offered no immediate feedback. Modern advances in technology allow for more immediate interaction between user and computer, giving a sense of real-time control. The sociability of computer-based artifacts is then a result of controls and reactions becoming increasingly linguistic as opposed to mechanistic: there are fewer buttons to press or levers to pull, in favour of specifying operations through common language. The key difference between natural language processing and human communication is the ability to respond to unanticipated circumstances. People bring to computers expectations grounded in human interaction, which fostered the desire to let humans communicate with computers in natural language. The ability to speak to and be heard by our devices is now thoroughly integrated into daily experience, to the point of inconvenience when unavailable. It was also found that once a computer demonstrates minimally human abilities, users go on to attribute to it further human abilities of which it is incapable. It is the complexity and opacity of the computer’s inner functionality that invites us to personify it, its workings apparently mysterious. Combining a somewhat predictable interface with internal opacity and unanticipated behaviour means that using a computer is likely to be seen as an interaction rather than the use of a simple tool.

Self-explanatory artifacts

As technology becomes increasingly complex, we introduce the expectation that it be usable with less prior training. Designers inherently hold the view that an object should be self-explanatory, but when considering computers, the term implies that the artifact itself be capable of that explanation (potentially to the extent a human could provide it). This would require the device to be intelligent rather than merely intelligible. Suchman goes on to compare the instructional approach of the coach with that of the written manual: the former maximizes context sensitivity at the cost of constant repetition, while the latter relies on generalization yet can be duplicated and reused with ease. Interactive systems would then imply the possibility for technology to move from the manual’s method of explanation to that of the coach, as sketched below.
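
The contrast can be caricatured in code (a hypothetical illustration of mine, not Suchman’s example): the manual issues one generalized text for every reader, while the coach selects its advice from the user’s immediate context, at the cost of having to be present for every interaction.

```python
# The manual: one generalized instruction, endlessly duplicable.
MANUAL = "To save your work, choose File > Save."

# The coach: context-sensitive advice, chosen from what the user just did.
def coach(context):
    if context.get("attempted_quit") and context.get("unsaved_changes"):
        return "You tried to quit with unsaved changes - save them first."
    if context.get("unsaved_changes"):
        return "You have unsaved changes; File > Save will keep them."
    return "Nothing needs saving right now."

print(MANUAL)
print(coach({"unsaved_changes": True, "attempted_quit": True}))
```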

  • At what point, if ever, can we consider an artificial intelligence to be “alive”?
  • We have seen several definitions of “intelligent” environments. How would you define an intelligent environment? What characteristics would you like to see included in so-called intelligent environments?
