In what follows, I would like to give an overview of the solutions computers offer users for the execution of operations, and, in light of these, to analyse the affordances that prevail in this environment with regard to artistic work carried out on computers.
The artist accesses the computer he or she works on through the graphical user interface (GUI) and the specialized software built upon that interface. Together, these constitute the environment in which operations can be executed.
A fundamental characteristic of this environment is that it is built on metaphors and simulations. When we execute an operation on a computer, what we really do is run algorithms on a given set of data; yet what we see is not that, but what the GUI visualizes on the display.
Even the basic metaphor of the world of operating systems, the term “desktop” with its reference to office work, is itself a simulation: it eliminates the need for specialized engineering skills to operate a computer, and thus makes the device accessible to the layman.
More precisely, it is not even just a device: according to Alan Kay, one of the pioneers of the graphical user interface, the computer is not a device (although it integrates a multitude of devices) but a “meta-medium” – in other words, a system that generates new media devices and new types of media. (1) Ted Nelson, another pioneer, called the same system “hypermedia”.
Specialized software serves the particular handling of different types of data (text, image, moving image, etc.), yet it communicates with the user in a similarly metaphorical language.
In a certain respect, the very representation of data can be considered a simulation, as it comprises the reconstruction of already existing media and methods (printing, photography, video, animation, for instance) in a digital environment. All of this raises the question: is the criterion of artistic work free of imitation sustainable in the context of computer operations?
Lev Manovich emphasizes that the engineers who developed the methods of using computer-based media (Kay, Engelbart, Nelson and their colleagues) (2) intended not merely to simulate existing media, but to create “a new kind of media with a number of historically unprecedented properties”, “which can be used for learning, discovery, and artistic creation”. (3) Simulation in this sense means the implementation of familiar methods – with new functions added.
It is important to emphasize these new functions: while some commands entered the digital toolset through the implementation of existing (analogue) communication techniques in a computer environment, other commands (such as copy, paste, delete, move, switch view) were born in this environment itself and can be executed universally, in any software and on any data format.
According to Manovich, future techniques will be affected just as much as already existing ones, as the sets of commands and the related data formats are brought together in new combinations that were not possible before.
Manovich calls this process hybridization, distinguishing it from multimedia, in the case of which different forms of media appear on a common platform, but they retain their own languages and characteristics. According to Manovich, the process of hybridization is an expansion of possibilities. (4)
All of this can be considered the opposite of the modernist paradigm dominant in the early 20th century: while back then the focus of interest was the discovery of the unique language characteristic of each medium, now the focus is on the compatibility of different languages.
This consistency can be observed across the user interfaces of operating systems and specialized software alike. I will take the Macintosh operating system (OS X) as an example – midway between the pioneering Smalltalk system developed at the Xerox PARC lab, where the graphical user interface was actually created, and Microsoft Windows, today’s most widespread OS – because it was this company that motivated the development of the most widely used graphics software, and because it has defined its design principles for software development in documented form from the very beginning. (5)
These design principles cultivated by Apple (keywords: metaphors, mental models, direct manipulation, feedback, consistency, forgiveness, WYSIWYG) (6) define the affordances a given software offers its users.
According to van den Muijsenberg, these affordances are as follows: “operations regardless of the laws of physics, representation through metaphorical objects and processes (simulating a direct interaction through visible feedback and the WYSIWYG), learnability and reversibility”. (7)
Consequently, one reason for simulation is visual feedback: it assures the user that the execution of the process has begun and that the required result will soon be available.
In another respect, ensuring reversibility can also be considered simulation, since any given operation only becomes final after the “save” command. The representation of three-dimensional space on a two-dimensional surface also counts as simulation, but this is a phenomenon familiar from the history of images – the new element here is the possibility of moving the image.
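This reversibility can be pictured as a simple history stack – a minimal sketch in Python, assuming a hypothetical Document object whose names (apply, undo, save) are illustrative and not any actual software’s internals:

```python
# A minimal sketch of reversibility: every operation is recorded
# on a history stack, so nothing becomes final until "save".
class Document:
    def __init__(self, data):
        self.data = data
        self.history = []  # stack of previous states

    def apply(self, operation):
        self.history.append(self.data)  # remember the old state
        self.data = operation(self.data)

    def undo(self):
        if self.history:
            self.data = self.history.pop()

    def save(self):
        self.history.clear()  # only now does the result become final


doc = Document("image")
doc.apply(lambda d: d + "+filter")
doc.undo()
print(doc.data)  # → "image"
```

Until “save” is issued, every intervention remains provisional – the interface simulates an undamaged original that, on the level of the data, has already been repeatedly rewritten.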
Simulation can occur as a model situation as well (in case of “operations regardless of the laws of physics”, for instance), but it is similarly possible to observe, process, and link the data of actual processes with others, in real time.
The development of Adobe Photoshop, the most widespread image processing software, began on the Macintosh platform, which is why its set of affordances converges with the design principles laid down by Apple. According to van den Muijsenberg, this software principally invites the manipulation of pixels within image files.
The manipulation of pixels, however, can vary in direction and extent. Applying the clone stamp, for instance, results in a radical intervention in the image structure (because of its concealing nature), while the fine-tuning of tones (a procedure familiar from the darkroom) causes no substantial change in the composition of the image.
Layers, today a fundamental function of Photoshop, interfere with the image structure in a different way. (8) This function extends the image spatially: distinct image elements or settings are managed separately and can be regrouped at will. In other words, it operates on the premise that an image also has a spatial composition; it is made up of layers – similarly to the technique of analogue cel animation or multi-channel sound recording. The same logic is implemented in nonlinear video editing, where layers are referred to as channels. (9)
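The spatial logic of layers can be illustrated with a toy compositing routine – a hedged sketch, not Photoshop’s actual blending engine, assuming grayscale pixel values (0–255) and a single opacity value per layer, blended from bottom to top:

```python
# Toy layer compositing: each layer carries its own pixels and
# opacity; the final image is built by blending from bottom to top.
def composite(layers):
    result = layers[0]["pixels"][:]  # background layer
    for layer in layers[1:]:
        a = layer["opacity"]
        result = [
            round(a * top + (1 - a) * bottom)
            for top, bottom in zip(layer["pixels"], result)
        ]
    return result


background = {"pixels": [0, 0, 0, 0], "opacity": 1.0}
overlay = {"pixels": [200, 200, 200, 200], "opacity": 0.5}
print(composite([background, overlay]))  # → [100, 100, 100, 100]
```

The flat image the viewer sees is thus only a projection; the layers behind it can be reordered or readjusted independently, exactly as the cel-animation analogy suggests.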
According to Manovich, the computer simulation of already existing physical media and operations involves their enhancement and augmentation (with new properties and functions). Each simulation (such as the filters whose names metaphorically refer to their precursors in art or engineering) can be controlled by entering numerical values, but these values have a certain threshold within which they “simulate” the given effect; beyond it, they begin distorting the image – that is, they exceed their metaphorical boundaries and make their unique, “algorithmic” nature all the more visible. (10)
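This threshold effect can be sketched with a simplified, one-dimensional “sharpen” filter (loosely modelled on unsharp masking; the function and its parameters are illustrative, not Photoshop’s implementation): at moderate amounts it simulates the darkroom technique it is named after, while extreme values push pixels past the valid range, and the resulting clipping exposes the purely algorithmic nature of the operation.

```python
# A parametric "sharpen" filter over a 1-D row of grayscale pixels:
# each pixel is pushed away from its local (3-pixel) average by
# `amount`, then clipped to the valid 0-255 range.
def sharpen(pixels, amount):
    blurred = [
        (pixels[max(i - 1, 0)] + pixels[i] + pixels[min(i + 1, len(pixels) - 1)]) / 3
        for i in range(len(pixels))
    ]
    return [
        min(255, max(0, round(p + amount * (p - b))))
        for p, b in zip(pixels, blurred)
    ]


edge = [50, 50, 200, 200]
print(sharpen(edge, 0.5))   # → [50, 25, 225, 200] – the edge is gently enhanced
print(sharpen(edge, 10.0))  # → [50, 0, 255, 200] – clipping, the metaphor breaks down
```

Within a narrow parameter range the result still reads as “sharpening”; at ten times that strength the same arithmetic produces pure black-and-white banding that no darkroom ever made.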
The use of metaphors is a general phenomenon in computer interfaces as well as in human language. According to cognitive linguistics, “our abstract concepts are metaphorical projections from sensorimotor experiences with our own body and the surrounding physical world.” (11) On the level of the interface, metaphors similarly support man-machine communication.
Simulation in a computerized environment can thus stand for concealment or imitation, but the system also allows for self-reflection, revealing the true nature of digital processes. Ultimately, the process takes one direction or the other along the lines of the – performatively implemented – decisions taken by the user.
(1) Alan Kay: Computer Software. Scientific American (September 1984), cited by Manovich in: Software Takes Command. Bloomsbury Academic, New York, 2013.
(2) Manovich emphasizes the difference in the path taken: while Engelbart and colleagues were developing an office environment, Kay and his team were developing media design methods.
(3) Lev Manovich: Software Takes Command. Bloomsbury Academic, New York, 2013. manuscript:
http://softwarestudies.com/softbook/manovich_softbook_11_20_2008.doc
(4) Op. cit.
(5) OS X Human Interface Guidelines
(6) What You See Is What You Get
http://hu.wikipedia.org/wiki/WYSIWYG
(7) Herman van den Muijsenberg: Identifying affordances in Adobe Photoshop, MA Thesis, Utrecht University, 2012.
http://igitur-archive.library.uu.nl/student-theses/2013-0128-200846/Thesis%20Herman%20van%20den%20Muijsenberg.pdf
(8) Layers first appeared in the third major release of the program; today it can handle thousands of layers.
(9) Lev Manovich: Inside Photoshop, Computational Culture, November 2011
http://computationalculture.net/article/inside-photoshop
(10) Op. cit.
(11) George Lakoff & Mark Johnson: Metaphors We Live By. University of Chicago Press, 1980, cited by Manovich in: Inside Photoshop, Computational Culture, November 2011.