graphics ¦ surfaces

by Lydia Liang

In “Hidden Surface Problems: On the Digital Image as Material Object” (2015), Jacob Gaboury argues that computer graphics should not be understood as a genealogy of vision or a progression from earlier visual media, but rather as a form of data that materializes through restriction. Gaboury contends that computer graphics is not only a visual medium but a material object: it is both image and object. Indeed, the creation of computer graphics was grounded in an ontological approach focused on the simulation of a three-dimensional object. In his historical narrative, Gaboury identifies a common, and probably the most significant, challenge facing early computer graphics research in simulating a digital object: determining which parts of the object should be visible and which should be hidden. The focus of the field at that time was to find a way to restrict an image so that it shows only what is visible to a viewing subject. In this way, the specificity of computer graphics lies not in “reproducing a way of seeing” but in “constructing the absence.” The materiality of computer graphics resides in the process of solving the hidden surface problem, where a complete graphical world of objects has to exist before the rendered output, and a means of removing the irrelevant is executed to make that world legible. The rendered image becomes one form of the data contained in that graphical world and an interface for vision.
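Gaboury discusses the hidden surface problem conceptually rather than algorithmically, but one classic solution, the depth buffer, makes his point concrete. The sketch below is my own illustration, not an example from his text: for every pixel, only the surface fragment nearest the viewer survives, and everything behind it is, quite literally, constructed as absent.

```python
# A minimal depth-buffer sketch of hidden-surface removal. The complete
# graphical world (all fragments) exists first; rendering is the act of
# discarding what the viewing subject cannot see.

def render(fragments, width, height, far=float("inf")):
    """fragments: iterable of (x, y, depth, color); returns the visible image."""
    depth = [[far] * width for _ in range(height)]
    image = [[None] * width for _ in range(height)]
    for x, y, z, color in fragments:
        if z < depth[y][x]:          # nearer than what is already recorded?
            depth[y][x] = z          # remember the new nearest depth
            image[y][x] = color      # this fragment is the one we see
    return image

# Two fragments overlap at the same pixel: the nearer (z=1.0) hides the farther.
img = render([(0, 0, 5.0, "red"), (0, 0, 1.0, "blue")], width=1, height=1)
print(img[0][0])  # blue
```

The “red” fragment still existed in the world that was rendered; it is simply made absent from the output, which is the restriction Gaboury identifies as the medium’s specificity.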

Gaboury’s insight into the nature of computer graphics as the “construction of the absence” is, in my opinion, analogous to the architectural making of space. Many consider the making of space to be an exercise in the making of a void, constructing a relationship of “nothingness” to the other architectural elements.

Friedrich A. Kittler offers another insight into the nature of computer graphics in his work “Computer Graphics: A Semi-Technical Introduction” (2001). He attempts to answer the question “What are computer images?” by focusing on the ways computer graphics can replicate our perception of the world. He puts forward that computer graphics are a deception to the eye and can be easily forged. Through software and algorithms, all that we can perceive optically can be virtualized, limited in resolution and detail only by the machine’s RAM and in aesthetics by the optical options available. Two optical modes are discussed in his text: raytracing and radiosity. These two modes are fundamentally contradictory: one focuses on locality while the other depends on globality, resulting in aesthetics that are mutually exclusive. Each contributes an aspect of the truth of our world as we receive it, be it the sharp highlight on an object or the diffuse edge of a shadow. Efforts have been made to bring the two optical options together, but this has proven difficult because of their contradictory mechanisms. Can a perfect world picture be produced, then? According to Kittler, there will always be digital compromises, and part of the function of computer graphics is to “forge compromise from mutual exclusivity.” Finally, Kittler raises the notion of “images appearing in accordance with injustice”: when you attend to one aspect of the image, you always neglect another. What, then, would computer graphic justice be? He approaches the answer through the idea of the “rendering equation,” and closes with the claim that computer graphics can be called such only if it can render what cannot be perceived by our vision but nevertheless exists.
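The “rendering equation” Kittler invokes presumably refers to Kajiya’s 1986 formulation; I reproduce a standard statement of it here for context (this is the conventional form, not a quotation from Kittler’s text):

```latex
L_o(x, \omega_o) \;=\; L_e(x, \omega_o) \;+\; \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, d\omega_i
```

Here the outgoing light $L_o$ at a surface point $x$ is the locally emitted light $L_e$ plus an integral gathering incoming light $L_i$ from every direction $\omega_i$ over the hemisphere $\Omega$, weighted by the surface’s reflectance $f_r$. One could read the two halves as the two optics Kittler opposes: the reflectance term carries raytracing’s local, specular highlights, while the integral over all directions carries radiosity’s global, diffuse interreflection, which is perhaps why he frames the equation as forging compromise from mutual exclusivity.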
Kittler’s final remark is especially intriguing to me in that, combined with Gaboury’s reading, one might ask: should computer graphics ultimately be “projecting the invisible through hiding the invisible”?

In “What Designers Do That Computers Should” (1990), George Stiny identifies two things that are essential if the computer is to serve creatively and inventively as a design partner: ambiguity and parallel descriptions. Stiny argues that ambiguity is an important aspect of design because it “fosters imagination and creativity, and encourages multilayered expression and response.” However, ambiguity is difficult to achieve in computer-aided drawings because of their structured nature, which is not the case when drawing with ink on paper. In order to produce shapes that can be decomposed and manipulated at the designer’s discretion in the same manner as conventional drawings, Stiny proposes an algebra of shapes with no inherent structure, allowing decomposition and manipulation through established rules. Alongside the algebra of shapes, descriptions in different domains are also essential to the creation of a complete design, not unlike the way designers routinely combine plans, sections, and elevations to grasp a three-dimensional relationship. The algebra of shapes should therefore be combined and computed in parallel with other algebras, namely the algebra of labeled shapes. These computations also form part of the discourse of what Stiny terms the shape grammar.
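The rule-based computation at the heart of shape grammars can be sketched in a toy form. The example below is my own drastic simplification, not Stiny’s: real shape grammars operate on unstructured maximal lines under Euclidean transformations, precisely so that emergent shapes can be recognized, whereas this set-based toy fixes the decomposition in advance, which is exactly the structural rigidity Stiny criticizes. It shows only the bare mechanics of applying a rule A → B.

```python
# A toy, set-based sketch of rule application in the spirit of a shape grammar.
# A "shape" here is a set of labeled parts (my own simplification); a rule
# lhs -> rhs rewrites any shape containing lhs by removing lhs and adding rhs.

def apply_rule(shape, lhs, rhs):
    """Apply the rule lhs -> rhs to shape if lhs occurs in it; else return shape unchanged."""
    if lhs <= shape:                  # does the left-hand side occur in the shape?
        return (shape - lhs) | rhs    # subtract it, then add the right-hand side
    return shape

square = frozenset({"edge_n", "edge_e", "edge_s", "edge_w"})
# Hypothetical rule: replace the north edge with two diagonals (a "roof").
result = apply_rule(square, frozenset({"edge_n"}),
                    frozenset({"diag_nw", "diag_ne"}))
print(sorted(result))  # ['diag_ne', 'diag_nw', 'edge_e', 'edge_s', 'edge_w']
```

Because the parts here are fixed labels, the program can never “see” an unanticipated sub-shape the way a designer scanning ink on paper can, which is the gap between generative and genuinely creative tools noted below.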

It is important to note that the approach outlined by Stiny assumed the use of computer drawing as a creative tool rather than, as is the case nowadays, a representational one. In the early days of CAD research, the computer was intended as a problem-solver rather than a tool for drafting and representation (Celani 2002). Nowadays that concept survives only in the theoretical and experimental realm. The shift occurred when CAD systems were simplified in order to appeal to the personal computer market. Another reason, not far from what Stiny proposes in his article, is the computer’s inability to recognize emergent or arbitrary shapes that were not defined by the programmer, something a designer can do easily in his or her head (Celani 2002). Shape grammars can multiply the number of possible shape combinations and serve as a generative tool, but they are still far from being used as a creative one.

In “Visualizing,” from Beautiful Data: A History of Vision and Reason since 1945, Orit Halpern (2015) describes the ways in which the nature of image, perception, and observation has been reformulated by cybernetic concepts. Citing the work of prominent designers in the field, Halpern states that a new form of perception has emerged, one that is technically transformed, interactive, data-driven, and autonomous.

Vision and cognition become amalgamated, resulting in a form of communication that is algorithmic, technological, and autonomous. The senses, rather than language, become a critical part of education, and language becomes environment. The image can thus be perceived not through its visual quality but mentally, through psychology and memory. Halpern also argues that these designers re-conceptualized the screen and pushed forward the notion of space as interface, replacing the discourse of structure, class, and race with that of environment, sensation, and time. With the emergence of a new kind of observer, one that is both isolated and interactively networked, Halpern tries to illuminate its relationship to a new kind of knowledge that rises from a sea of limitless information, a “communicative objectivity.” It was hoped that from this discourse of temporality in perception a new kind of sense optics could also be introduced, an extended perception that could bring together the future and the past. Halpern concludes by asking “…what other dreams for our perceptual future emerge from our interfaces … where we believe sense, perception, and cognition have been compressed into process, program, and algorithm to be regularly fed-back to us at every click and choice made at the screen.”

The overarching theme in all of these readings is a concern with perception. In a digitalized world of computer graphics and data overload, what informs our perception? Our perception struggles between the visible and the invisible, the visual and the cognitive, the directed and the arbitrary. Is a new sense of perception required, then? A perception that, rather than being visually dominated, deciphers instead. This brings back the question of coding and decoding discussed previously. If the graphics we perceive are codified, with pixels, lines, and shapes neatly structured and displayed rather than merely the result of optical laws, should our perception not only perceive them but also decode them?

References
Celani, M. “Beyond Analysis and Representation in CAD: A New Computational Approach to Design Education.” 2002. Retrieved from http://papers.cumincad.org/data/works/att/c7e0.content.01381.pdf.
Gaboury, Jacob. “Hidden Surface Problems: On the Digital Image as Material Object.” Journal of Visual Culture 14, no. 1 (April 2015): 40–60. https://doi.org/10.1177/1470412914562270.
Halpern, Orit. “Visualizing: Design, Communicative Objectivity, and the Interface.” In Beautiful Data: A History of Vision and Reason since 1945. Durham: Duke University Press, 2015.
Kittler, Friedrich A., and Sara Ogger. “Computer Graphics: A Semi-Technical Introduction.” Grey Room (January 1, 2001): 30–45. https://doi.org/10.1162/152638101750172984.
Stiny, George. “What Designers Do That Computers Should.” In The Electronic Design Studio: Architectural Knowledge and Media in the Computer Era, edited by Malcolm McCullough, William J. Mitchell, and Patrick Purcell, 17–30. Cambridge, Mass.: The MIT Press, 1990.

