Deep Remixability

Published online, spring 2006


During the heyday of the debates on postmodernism, at least one critic in America noticed the connection between post-modern pastiche and computerization. In his book ''After the Great Divide'' (1986), Andreas Huyssen writes: "All modern and avantgardist techniques, forms and images are now stored for instant recall in the computerized memory banks of our culture. But the same memory also stores all of pre-modernist art as well as the genres, codes, and image worlds of popular cultures and modern mass culture." <ref>Andreas Huyssen, "Mapping the Postmodern," in After the Great Divide (Bloomington and Indianapolis: Indiana University Press, 1986), 196.</ref> His analysis is accurate - except that these "computerized memory banks" did not really become commonplace for another fifteen years. Only when the Web absorbed enough of the media archives did it become this universal cultural memory bank accessible to all cultural producers. But even for professionals, the ability to easily integrate multiple media sources within the same project - multiple layers of video, scanned still images, animation, graphics, and typography - only came towards the end of the 1990s.


In 1985, when Huyssen's book was being prepared for publication, I was working for what was then one of the few computer animation companies in the world, ''Digital Effects''. <ref>See Wayne Carlson, A Critical History of Computer Graphics and Animation. Section 2: The Emergence of Computer Graphics Technology, http://accad.osu.edu/%7Ewaynec/history/lesson2.html.</ref> Each computer animator had his own interactive graphics terminal that could show 3D models, but only in wireframe and in monochrome; to see them fully rendered in color, we had to take turns, as the company had only one color raster display which we all shared. The data was stored on bulky magnetic tapes about a foot in diameter; finding the data from an old job was a cumbersome process which involved locating the right tape in the tape library, putting it on a tape drive, and then searching for the right part of the tape. We did not have a color scanner, so getting "all modern and avantgardist techniques, forms and images" into the computer was far from trivial. And even if we had had one, there was no way to store, recall and modify these images. The machine that could do that - the Quantel ''Paintbox'' - cost over USD 160,000, which we could not afford. And when in 1986 Quantel introduced ''Harry'', the first commercial non-linear editing system which allowed for digital compositing of multiple layers of video and special effects, its cost similarly made it prohibitive for everybody except network television stations and a few production houses. ''Harry'' could record only eighty seconds of broadcast-quality video. In the realm of still images, things were not much better: for instance, the digital still store ''Picturebox'' released by Quantel in 1990 could hold only 500 broadcast-quality images, and its cost was similarly very high.


In short, in the middle of the 1980s neither we nor other production companies had anything approaching the "computerized memory banks" imagined by Huyssen. And of course, the same was true for the visual artists who were then associated with post-modernism and the ideas of pastiche, collage and appropriation. In 1986 the BBC produced the documentary ''Painting with Light'', for which half a dozen well-known painters, including Richard Hamilton and David Hockney, were invited to work with a Quantel ''Paintbox''. The resulting images were not so different from the paintings that these artists were producing without a computer. And while some artists were making references to "modern and avantgardist techniques, forms and images," these references were painted rather than directly loaded from "computerized memory banks." Only in the middle of the 1990s, when relatively inexpensive graphics workstations and personal computers running image editing, animation, compositing and illustration software became commonplace and affordable for freelance graphic designers, illustrators, and small post-production and animation studios, could the situation described by Huyssen start to become a reality.


The results were dramatic. Within about five years, modern visual culture was fundamentally transformed. Previously separate media - live action cinematography, graphics, still photography, animation, 3D computer animation, and typography - started to be combined in numerous ways. By the end of the decade, the "pure" moving image media became an exception and hybrid media became the norm. However, in contrast to other computer revolutions, such as the rise of the World Wide Web around the same time, this revolution was not acknowledged by popular media or by cultural critics. What received attention were the developments that affected narrative filmmaking: the use of computer-produced special effects in Hollywood feature films, or the inexpensive digital video and editing tools outside of it. But another process which happened on a larger scale - the transformation of the visual language used by all forms of moving images outside of narrative films - has not been critically analyzed. In fact, while the results of these transformations had become fully visible by about 1998, at the time of this writing (early 2006) I am not aware of a single theoretical article discussing them.


One of the reasons is that in this revolution no new media per se were created. Just as ten years earlier, designers were making still images and moving images. But the aesthetics of these images was now very different. In fact, it was so new that, in retrospect, the post-modern imagery of just ten years before, which at the time looked strikingly different, now appears as a barely noticeable blip on the radar of cultural history.


In 1989 the former Soviet satellites of Central and Eastern Europe peacefully liberated themselves from the Soviet Union. In the case of Czechoslovakia, this event came to be referred to as the Velvet Revolution - to contrast it with typical revolutions in modern history, which were always accompanied by bloodshed. To emphasize the gradual, almost invisible pace of the transformations which occurred in moving image aesthetics between approximately 1993 and 1998, I am going to appropriate the term Velvet Revolution to refer to these transformations. Although it may seem presumptuous to compare political and aesthetic transformations simply because they share the same non-violent quality, as we will see later in the article, the two revolutions are actually related. But we can only make this connection after we analyze in detail how the aesthetics and the very logic of moving images changed during this period.


Although the Velvet Revolution I will be discussing involved many technological and social developments - hardware, software, production practices, new job titles and new professional fields - it is appropriate to highlight one software package as being at the center of these events. This software is ''After Effects''. Introduced in 1993, ''After Effects'' was the first software designed to do animation, compositing, and special effects on the personal computer. <ref>Actually, the NewTek Video ''Toaster'' released in 1990 was the first PC-based video production system that included a video switcher, character generation, image manipulation, and animation. Because of their low cost, Video ''Toaster'' systems were extremely popular in the 1990s. However, in the context of my article, ''After Effects'' is more important because, as I will explain below, it introduced a new paradigm for moving image design that was different from the familiar video editing paradigm supported by systems such as ''Toaster''.</ref> Its broad effect on moving image production can be compared to the effects of ''Photoshop'' and ''Illustrator'' on photography, illustration, and graphic design. Although today (2006) media design and post-production companies continue to rely on more expensive "high-end" software such as ''Flame'', ''Inferno'' or ''Paintbox'' that runs on specialized graphics workstations from SGI, because of its affordability and length of time on the market ''After Effects'' is the most popular and well-known application in this area. Consequently, ''After Effects'' will be given a privileged role in this text as both the symbol and the key material foundation which made the Velvet Revolution in moving image culture possible - even though today other programs in a similar price category, such as Apple's ''Motion'', Autodesk's ''Combustion'', and Macromedia's ''Flash'', have challenged ''After Effects''' dominance.


Finally, before proceeding I should explain the use of examples in this article. The visual language I am analyzing is all around us today (this may explain why academics have remained blind to it). After globalization, this language is spoken by all communication professionals around the world. You can see for yourself all the examples of the various aesthetics I will be mentioning below by simply watching television in practically any country and paying attention to the graphics, or going to a club to see a VJ performance, or visiting the web sites of motion graphics designers and visual effects companies, or opening any book on contemporary design. Nevertheless, I have included references to particular projects below so the reader can see exactly what I am referring to. <ref>I have drawn these examples from three published sources so they are easy to trace. The first is the DVD ''I Love Music Videos'', which contains a selection of forty music videos for well-known bands from the 1990s and early 2000s, published in 2002. The second is the ''onedotzero_select DVD'', a selection of sixteen independent short films, commercial work and a Live Cinema performance presented by the onedotzero festival in London and published in 2003. The third is the Fall 2005 sample work DVD from Imaginary Forces, which is among the best-known motion graphics production houses today. The DVD includes titles and teasers for feature films, TV show titles, station IDs and graphics packages for cable channels. Most of the videos I am referring to can also be found on the net.</ref> But since my goal is to describe the new cultural language which by now has become practically universal, I want to emphasize that each of these examples could be substituted for numerous others.


== Examples ==


The use of ''After Effects'' is closely identified with a particular type of moving image which became commonplace in large part because of this software and which is known as "motion graphics." Concisely defined by Matt Frantz in his master's thesis as "designed non-narrative, non-figurative based visuals that change over time," <ref>Matt Frantz (2003), "Changing Over Time: The Future of Motion Graphics," http://www.mattfrantz.com/thesisandresearch/motiongraphics.html.</ref> motion graphics today include film and television titles, TV graphics, dynamic menus, graphics for mobile media content, and other animated sequences. Typically, motion graphics appear as parts of longer pieces: commercials, music videos, training videos, narrative and documentary films, interactive projects.


While motion graphics definitely exemplify the changes that took place during this Velvet Revolution, these changes are broader. Simply put, the result of the Velvet Revolution is ''a new hybrid visual language of moving images in general''. This language is not confined to particular media forms. And while today it manifests itself most clearly in non-narrative forms, it is also often present in narrative and figurative sequences and films.


For example, a music video may use live action while also employing typography and a variety of transitions done with computer graphics (example: the video for ''Go'' by Common, directed by Convert / MK12 / Kanye West, 2005). Or it may embed the singer within an animated painterly space (the video for Sheryl Crow's ''Good Is Good'', directed by Psyop, 2005). A short film may mix typography, stylized 3D graphics, moving design elements, and video (''Itsu'' for Plaid, directed by the Pleix collective, 2002 <ref>Included on ''onedotzero_select DVD'' 1. Online version at http://www.pleix.net/films.html.</ref>).


In some cases, the juxtaposition of different media is clearly visible (examples: the music video for ''Don't Panic'' by Coldplay; the main title for ''The Inside'' by Imaginary Forces, 2005). In other cases, a sequence may move between different media so quickly that the shifts are barely noticeable (the GMC Denali "Holes" commercial by Imaginary Forces, 2005). In yet other cases, a commercial or a movie title may feature a continuous action shot on video or film, with the image periodically changing from a more natural to a highly stylized look.


While the particular aesthetic solutions vary from one piece to the next and from one designer to another, they all share the same logic: the appearance of multiple media simultaneously in the same frame. Whether these media are openly juxtaposed or almost seamlessly blended together is less important than the fact of this co-presence itself.


Today such a hybrid visual language is also common to a large proportion of short "experimental" (i.e., non-commercial) films being produced for media festivals, the web, mobile media devices, and other distribution platforms. <ref>In December 2005 I attended the Impakt media festival in Utrecht, and I asked the festival director what percentage of the submissions they received that year featured hybrid visual language as opposed to "straight" video or film. His estimate was about one half. In January 2006 I was part of the review team that judged the graduating projects of students at SCI-ARC, a well-known research-oriented architecture school in Los Angeles. According to my informal estimate, approximately half of the projects featured complex curved geometry made possible by ''Maya'', modeling software now commonly used by architects. Given that both ''After Effects'' and ''Maya's'' predecessor ''Alias'' were introduced in the same year - 1993 - I think that this quantitative similarity in the proportion of projects that use the new languages made possible by this software is quite telling.</ref> A large percentage of the visuals created by VJs and Live Cinema artists are also hybrid, combining video, layers of 2D imagery, animation, and abstract imagery generated in real time. <ref>For examples, consult ''The VJ Book'', ''VJ: Live Cinema Unraveled'', or web sites such as www.vjcentral.com and www.live-cinema.org. Paul Spinrad, ed., ''The VJ Book: Inspirations and Practical Advice for Live Visuals Performance'' (Feral House, 2005); Timothy Jaeger, ''VJ: Live Cinema Unraveled'' (available from http://www.vj-book.com).</ref> In the case of narrative feature films and TV programs, while they still rarely mix different graphical styles within the same frame, many now feature highly stylized aesthetics which would previously have been identified with illustration rather than filmmaking - for instance, the TV series ''CSI'', George Lucas's latest ''Star Wars'' films, or Robert Rodriguez's ''Sin City''.


== Media Remixability ==


What is the logic of this new hybrid visual language? ''This logic is one of remixability: not only of the content of different media or simply their aesthetics, but of their fundamental techniques, working methods, languages, and assumptions''. United within the common software environment, cinematography, animation, computer animation, special effects, graphic design, and typography have come to form a new metamedium. A work produced in this new metamedium can use all the techniques which were previously unique to these different media, or any subset of them.


If we use the concept of "remediation" to describe this new situation, we will misrepresent this logic - or the logic of media computing in general. <ref>Jay David Bolter and Richard Grusin, ''Remediation: Understanding New Media'' (The MIT Press, 1999).</ref> The computer does not "remediate" particular media. Instead, ''it simulates all media''. And what it simulates are not the surface appearances of different media but all the techniques used for their production and all the methods of viewing and interacting with works in these media.


Once all types of media met within the same digital environment - and this was accomplished by the middle of the 1990s - they started interacting in ways that could never have been predicted or even imagined previously. For instance, while particular media techniques continue to be used in relation to their original media, they can also be applied to other media. (This is possible because the techniques are turned into algorithms, all media are turned into digital data stored in compatible file formats, and software is designed to read and write files produced by other programs.) Here are a few examples: motion blur is applied to 3D computer graphics; computer-generated fields of particles are blended with live action footage to give it an enhanced look; a virtual camera is made to move around a virtual space filled with 2D drawings; flat typography is animated as though it were made from a liquid-like material (the liquid simulation coming from the field of computer graphics); and so on. And while this "cross-over" use by itself constitutes a fundamental shift in media history, today a typical short film or sequence may combine many such pairings within the same frame. The result is a hybrid, intricate, complex, and rich visual language - or rather, numerous languages that share the basic logic of remixability.
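

To make the parenthetical point concrete, here is a minimal Python sketch (an illustration added for clarity, with hypothetical data and names; it does not reproduce the API of any actual application). Once a technique exists as an algorithm and every medium has been reduced to arrays of pixels, the same operation - here a crude horizontal motion blur borrowed from cinematography - applies identically to frames that originated in three different media:

<pre>
# Illustration only: hypothetical frames, not any application's actual code.
import numpy as np
from scipy.ndimage import convolve

def motion_blur(frame, length=15):
    """Smear an H x W x 3 float image horizontally - a crude motion blur."""
    kernel = np.zeros((1, length, 1))
    kernel[0, :, 0] = 1.0 / length        # uniform horizontal averaging
    return convolve(frame, kernel, mode="nearest")

# Stand-ins for media of different origins, all reduced to the same format:
video_frame = np.random.rand(480, 640, 3)   # digitized live action
cg_render   = np.random.rand(480, 640, 3)   # frame rendered from a 3D scene
type_layer  = np.random.rand(480, 640, 3)   # rasterized typography

# The same algorithm applies to each, regardless of where the pixels came from.
blurred = [motion_blur(f) for f in (video_frame, cg_render, type_layer)]
</pre>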


I believe that the "media remixability" which begins around the middle of the 1990s constitutes a fundamentally new stage in the history of media. It manifests itself in different areas of culture and not only in moving images - although the latter do offer a particularly striking example of this new logic at work. Here software such as ''After Effects'' became a petri dish where computer animation, live cinematography, graphic design, 2D animation and typography started to interact, creating new hybrids. And as the examples mentioned above demonstrate, the results of this process of remixability are new aesthetics and new media species which cannot be reduced to the sum of the media that went into them. Put differently, the interactions of different media in the same software environment create new cultural species.


Media remixability does not necessarily lead to a collage-like aesthetics which foregrounds the juxtaposition of different media and different media techniques. As a very different example of what media remixability can result in, consider a more subtle aesthetics well captured by the name of the software under discussion - ''After Effects''. If in the 1990s computers were used to create highly spectacular special effects or "invisible effects," <ref>"Invisible effect" is a standard industry term. For instance, in 1997 the film ''Contact'', directed by Robert Zemeckis, was nominated for the 1997 VFX HQ Awards in the following categories: Best Visual Effects, Best Sequence (The Ride), Best Shot (Powers of Ten), Best Invisible Effects (Dish Restoration) and Best Compositing. http://www.vfxhq.com/1997/contact.html</ref> by the end of this decade we see something else emerging: a new visual aesthetics which goes "beyond effects." In this aesthetics, the whole project - a music video, commercial, short film, or a large part of a feature film - displays a hyper-real look in which the enhancement of live action material is not completely invisible, but at the same time does not call attention to itself the way special effects usually did (examples: the Reebok I-Pimp ''Black Basketball'' commercial and ''The Legend of Zorro'' main title, both by Imaginary Forces, 2005). This new hyper-real aesthetics is yet another example of how, in the hands of designers, the petri dish of software containing all the media creation and manipulation techniques created during human history now produces new hybrids. In fact, it produces only hybrids.


== Layers, Transparency, Compositing ==


Let us now look at the details of the new visual language of moving images which emerged from the Velvet Revolution, and at the material and social conditions - software, user interface, design workflow - which made remixability possible. Probably the most dramatic among the changes that took place during 1993-1998 was the new ability to combine ''multiple levels of imagery with varying degrees of transparency via digital compositing''. If you compare a typical music video or TV advertising spot from circa 1986 with its counterpart from circa 1996, the differences are striking. (The same holds for still images.) As I already noted, in 1986 "computerized memory banks" were very limited in their storage capacity and prohibitively expensive, and therefore designers could not quickly and easily cut and paste multiple image sources. But even when they did assemble multiple visual references, a designer could only place them next to, or on top of, each other. She could not modulate these juxtapositions by precisely adjusting, for instance, the transparency levels of different images. Instead, she had to resort to the same photocollage techniques popularized in the 1920s. In other words, the lack of transparency restricted the number of different image sources that could be integrated within a single composition without it starting to look like the photomontages of John Heartfield, Hannah Höch, or Robert Rauschenberg - a mosaic of fragments without any strong dominant. <ref>In the case of video, one of the main reasons which made the combination of multiple visuals difficult was the rapid degradation of the video signal when an analog video tape was copied more than a couple of times. Such a copy would no longer meet broadcasting standards.</ref>
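

The following minimal Python sketch (again an illustration with hypothetical data, not any application's actual implementation) shows what per-layer transparency adds to hard cut-and-paste collage: each source is given its own opacity, and the stack is blended back to front:

<pre>
# Illustration only: hypothetical images; opacities are per-layer scalars.
import numpy as np

def blend_stack(layers):
    """Blend (image, opacity) pairs, ordered back to front, onto one canvas."""
    canvas = np.zeros_like(layers[0][0])
    for image, opacity in layers:
        canvas = opacity * image + (1.0 - opacity) * canvas
    return canvas

background = np.random.rand(480, 640, 3)   # e.g., a scanned photograph
archival   = np.random.rand(480, 640, 3)   # e.g., an archival film frame
graphic    = np.random.rand(480, 640, 3)   # e.g., a drawn element

# A 1920s-style photocollage is the degenerate case where every opacity is 1.0;
# modulating the opacities lets many sources coexist in one composition.
result = blend_stack([(background, 1.0), (archival, 0.4), (graphic, 0.7)])
</pre>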


Compositing also made trivial another operation which was previously very cumbersome. Until the 1990s, different media types such as hand-drawn animation, lens-based recordings (i.e., film and video), and typography practically never appeared within the same frame. Instead, animated commercials, publicity shorts, industrial films, and some feature and experimental films that did include multiple media usually placed them in separate shots. A few directors managed to build whole aesthetic systems out of such temporal juxtapositions - most notably, Jean-Luc Godard. In his 1960s films such as ''Week End'' (1967), Godard cut bold typographic compositions in between live action, creating what can be called "media montages." Also in the 1960s, the pioneering motion graphics designer Pablo Ferro, who appropriately called his company Frame Imagery, created promotional shorts and TV graphics that played on juxtapositions of different media replacing each other in rapid succession. <ref>Jeff Bellantoni and Matt Woolman, ''Type in Motion'' (Rizzoli, 1999), 22-29.</ref> In a number of Ferro's spots, static images of different letterforms, line drawings, original hand-painted artwork, photographs, very short clips from newsreels, and other visuals would come one after another with machine-gun speed.


Within cinema, the superimposition of different media within the same frame was usually limited to two media placed on top of each other in a standardized manner - i.e., static letters appearing on top of still or moving lens-based images in feature film titles. Both Ferro and another motion graphics pioneer, Saul Bass, created a few title sequences where visual elements of different origins were systematically overlaid - such as the opening for Hitchcock's ''Vertigo'' designed by Bass (1958). But I think it is fair to say that such complex juxtapositions of media within the same frame (rather than in an edited sequence) were rare exceptions in the otherwise "unimedia" universe where filmed images appeared in feature films and hand-drawn images appeared in animated films. The only twentieth-century feature film director I know of who built his unique aesthetics by systematically combining different media within the same shot is the Czech director Karel Zeman. A typical shot by Zeman may contain filmed human figures, an old engraving used for the background, and a miniature model. <ref>While special effects in feature films often combined different media, they were used together to create a single illusionistic space, rather than being juxtaposed for aesthetic effect as in the films and titles of Godard, Zeman, Ferro and Bass.</ref>


The achievements of these directors and designers are particularly remarkable given the difficulty of combining different media within the same frame during the film era. Doing so required the services of special effects departments or separate companies which used optical printers. The techniques that were cheap and more accessible, such as double exposure, were limited in their precision. So while a designer of static images could at least cut and paste multiple elements within the same composition to create a photomontage, creating the equivalent effect with moving images was far from trivial.


In short, digital compositing now allowed designers to easily ''mix any number of visual elements regardless of the media in which they originated'' and to control each element in the process. We can make an analogy between multitrack audio recording and the digital compositing of moving images. In multitrack recording, each sound track can be manipulated individually to produce the desired result. Similarly, in digital compositing each visual element can be independently modulated in a variety of ways: resized, recolored, animated, etc. Just as a music artist can focus on a particular track while muting all other tracks, a designer often turns off all visual tracks except the one she is currently adjusting. Similarly, both a music artist and a designer can at any time substitute one element of a composition with another, delete any element, and add new ones. Most importantly, just as multitrack recording redefined the sound of popular music from the 1960s onward, once digital compositing became widely available during the 1990s, it changed the visual aesthetics of moving images in popular culture.
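

A minimal sketch of this multitrack analogy (my illustration, with hypothetical names and data) might represent each visual "track" as an object carrying its own adjustments and an on/off switch, so one layer can be adjusted while the others are muted:

<pre>
# Illustration only: a toy layer stack with per-track controls.
import numpy as np
from dataclasses import dataclass

@dataclass
class Track:
    image: np.ndarray              # H x W x 3 float image
    opacity: float = 1.0
    enabled: bool = True           # the visual equivalent of a "mute" switch
    tint: tuple = (1.0, 1.0, 1.0)  # per-channel recoloring

def render(tracks):
    """Blend enabled tracks, back to front, applying per-track adjustments."""
    canvas = np.zeros_like(tracks[0].image)
    for t in tracks:
        if not t.enabled:          # muted tracks are skipped entirely
            continue
        layer = t.image * np.array(t.tint)
        canvas = t.opacity * layer + (1.0 - t.opacity) * canvas
    return canvas

tracks = [Track(np.random.rand(480, 640, 3)),
          Track(np.random.rand(480, 640, 3), opacity=0.5, tint=(1.0, 0.8, 0.8))]
tracks[0].enabled = False   # mute the first track while adjusting the second
preview = render(tracks)
</pre>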


This brief discussion has only scratched the surface of my subject in this section, i.e., layers and transparency. For instance, I have not analyzed the actual techniques of digital compositing and the fundamental concept of the alpha channel, which deserves a separate and detailed treatment. I also did not go into the possible media histories leading to digital compositing, nor its relationship to optical printing, video keying, and the video effects technology of the 1980s. These histories and relationships were discussed in the "Compositing" chapter (1999) of my ''The Language of New Media'', but from a different perspective than the one used here. At that time I was looking at compositing from the point of view of questions of cinematic realism, practices of montage, and the construction of special effects in feature films. Today, however, it is clear to me that in addition to disrupting the regime of cinematic realism in favor of other visual aesthetics, compositing also had another, even more fundamental effect.


By the end of the 1990s digital compositing had become the basic operation used in creating ''all'' forms of moving images, and not only big-budget features. So while compositing was originally developed in the context of special effects production in the 1970s and early 1980s <ref>Thomas Porter and Tom Duff, "Compositing Digital Images," ACM ''Computer Graphics'' vol. 18, no. 3 (July 1984): 253-259.</ref>, it had a much broader effect on contemporary visual and media cultures. Compositing played a key part in turning the digital computer into an experimental lab where different media can meet and where their aesthetics and techniques can be combined to create new species. In short, digital compositing was essential in enabling the development of a new hybrid visual language of moving images which we see everywhere today. In other words, compositing enabled media remixability in the moving image. Thus compositing, which was at first a particular digital technique designed to integrate two particular media - live action film and computer graphics - became a "universal media integrator." And although compositing was originally created to support the aesthetics of cinematic realism, over time it actually had the opposite effect. Rather than forcing different media to fuse seamlessly, compositing led to the flourishing of numerous media hybrids where the juxtapositions between live and algorithmically generated, two-dimensional and three-dimensional, raster and vector are made deliberately visible rather than being hidden.
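
Since the Porter-Duff paper is cited above, it may help to see how small the core operation actually is. Below is a minimal sketch of their "over" operator for a single pair of premultiplied-alpha RGBA pixels - an illustration of the principle, not production compositing code:

<pre>
# Minimal sketch of the Porter-Duff "over" operator for one pair of
# RGBA pixels with premultiplied alpha (all values in 0.0-1.0).
# A real compositor applies this per pixel across whole images.
def over(fg, bg):
    """Composite a foreground pixel over a background pixel."""
    weight = 1.0 - fg[3]  # how much of the background shows through
    return tuple(f + b * weight for f, b in zip(fg, bg))

# A 50%-opaque red pixel over an opaque blue one:
print(over((0.5, 0.0, 0.0, 0.5), (0.0, 0.0, 1.0, 1.0)))
# -> (0.5, 0.0, 0.5, 1.0)
</pre>

Repeating this operation up a stack of layers is, computationally, all that "layers with transparency" amounts to; masks, effects, and blending modes are elaborations of it.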


== From "Time-based" to "Composition-based" ==


My thesis about media remixability applies both to cultural forms and to the software used to create them. Just as the moving image media made by designers today mix the formats, assumptions, and techniques of different media, the toolboxes and interfaces of the software they use are also remixes. Let us again use ''After Effects'' as the case study to see how its interface remixes previously distinct working methods of different disciplines.

The center of this interface is a ''Composition'' window, conceptualized as a large canvas that can contain visual elements of arbitrary sizes and proportions. When I first started using ''After Effects'' soon after it came out, I remember feeling shocked that the software did not automatically resize the graphics I dragged into the ''Composition'' window to make them fit the overall frame. The fundamental assumption of cinema that accompanied it throughout its whole history - that a film consists of many frames which all have the same size and aspect ratio - was gone.


In the film and video editing paradigms of the twentieth century, the minimal unit the editor works on is the frame. She can change the length of an edit, adjusting where one film or video segment ends and another begins, but she cannot interfere with the contents of a frame. The frame as a whole functions as a kind of "black box" that cannot be "opened." That was the task of special effects departments. But in the ''After Effects'' interface, the basic unit is not a frame but a visual element placed in the ''Composition'' window. Each element can be individually accessed, manipulated, and animated. In other words, each element is conceptualized as an independent object. Consequently, a media composition is understood as a set of independent objects that can change over time. The very word "composition" is important in this context as it references 2D media (drawing, painting, photography, design) rather than filmmaking - i.e. space as opposed to time.


Where does the ''After Effects'' interface come from? Given that this software is commonly used to create animated graphics (i.e., "motion graphics") and visual effects, it is not surprising that we can find interface elements which can be traced to three separate fields: animation, graphic design, and special effects. In traditional cell animation practice, an animator places a number of transparent cells on top of each other. Each cell contains a different drawing - for instance, the body of a character on one cell, the head on another, the eyes on a third. Because the cells are transparent, the drawings get automatically "composited" into a single composition. While the ''After Effects'' interface does not use the metaphor of a stack of transparent cells directly, it is based on the same principle. Each element in the ''Composition'' window is assigned a "virtual depth" relative to all other elements. Together all elements form a virtual stack. At any time, the designer can change the relative position of an element within the stack, delete it, or add new elements.
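
The principle of a virtual stack with per-element depth is simple enough to sketch in code. The following toy model (class and method names are invented for illustration; this is not how ''After Effects'' is actually implemented) shows a composition whose elements can be reordered, removed, or added at any time, and which renders by compositing the stack bottom-up:

<pre>
# Toy model of a composition as a virtual stack of independent layers.
# Names and structure are illustrative, not After Effects' internals.
def over(fg, bg):  # Porter-Duff "over", as sketched earlier
    weight = 1.0 - fg[3]
    return tuple(f + b * weight for f, b in zip(fg, bg))

class Layer:
    def __init__(self, name, pixel):
        self.name = name
        self.pixel = pixel  # a single premultiplied RGBA value, for brevity

class Composition:
    def __init__(self):
        self.stack = []  # index 0 = bottom of the stack

    def add(self, layer, depth=None):
        # New layers go on top unless an explicit depth is given.
        self.stack.insert(len(self.stack) if depth is None else depth, layer)

    def move(self, name, new_depth):
        layer = next(l for l in self.stack if l.name == name)
        self.stack.remove(layer)
        self.stack.insert(new_depth, layer)

    def remove(self, name):
        self.stack = [l for l in self.stack if l.name != name]

    def render(self):
        result = (0.0, 0.0, 0.0, 0.0)  # fully transparent background
        for layer in self.stack:       # composite bottom-up
            result = over(layer.pixel, result)
        return result

comp = Composition()
comp.add(Layer("background", (0.0, 0.0, 1.0, 1.0)))  # opaque blue
comp.add(Layer("type", (0.5, 0.0, 0.0, 0.5)))        # 50% red, on top
print(comp.render())   # -> (0.5, 0.0, 0.5, 1.0)
comp.move("type", 0)   # send the type to the bottom of the stack
print(comp.render())   # -> (0.0, 0.0, 1.0, 1.0): the blue now hides it
</pre>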


We can also see a connection between the ''After Effects'' interface and stop motion, another popular twentieth century animation technique. With the stop motion technique, puppets or any other objects are positioned in front of a camera and manually animated one frame at a time. The animator exposes one frame of film, changes the objects a tiny bit, exposes another frame, and so on.


As with both cell and stop-motion animation, ''After Effects'' does not make any assumptions about the size or positions of individual elements. Rather than dealing with standardized units of time, i.e. film frames containing fixed visual content, a designer now works with separate visual elements positioned in space and time. An element can be a digital video frame, a line of type, an arbitrary geometric shape, etc. The finished work is the result of a particular arrangement of these elements in space and time. In this paradigm we can compare the designer to a choreographer who creates a dance by "animating" the bodies of dancers - specifying their entry and exit points, their trajectories through the space of the stage, and the movements of their bodies. (In this respect it is relevant that while the ''After Effects'' interface did not evoke this reference, Macromedia ''Director'', the key multimedia authoring software of the 1990s, did directly use the metaphor of the theatre stage.)


While we can link the ''After Effects'' interface to traditional animation methods as used by commercial animation studios, the working method put forward by the software is closer to graphic design. In the commercial animation studios of the twentieth century all elements - drawings, sets, characters, etc. - were prepared beforehand. The filming itself was a mechanical process. Of course, we can find exceptions to this industrial-like separation of labor in experimental animation practice, where a film was typically produced by one person. For instance, in 1947 Oskar Fischinger made the eleven-minute film ''Motion Painting 1'' by continuously modifying a painting and exposing film one frame at a time after each modification. However, because Fischinger was shooting on film, he had to wait a long time before seeing the results of his work. As the historian of abstract animation William Moritz writes, "Fischinger painted every day for over five months without being able to see how it was coming out on film, since he wanted to keep all the conditions, including film stock, absolutely consistent in order to avoid unexpected variations in quality of image." <ref>Qtd. in Michael Barrier, ''Oskar Fischinger. Motion Painting No. 1'' http://www.michaelbarrier.com/Capsules/Fischinger/fischinger_capsule.htm</ref> In other words, in the case of this project by Fischinger, creating a design and seeing the result were even more separated than in the commercial animation process.


In contrast, a graphic designer works "in real time." As the designer introduces new elements, adjusts their locations, colors, and other properties, tries different images, changes the size of the type, and so on, she can immediately see the result of her work. <ref>While a graphic designer does not have to wait until film is developed or a computer has finished rendering the animation, design has its own "rendering" stage - making proofs. With both digital and offset printing, after the design is finished it is sent to the printer, which produces test prints. If the designer finds any problems such as incorrect colors, she adjusts the design and asks for proofs again.</ref> ''After Effects'' simulates this working method by making the ''Composition'' window the center of its interface. Like a traditional designer, the ''After Effects'' user interactively arranges the elements in this window and can immediately see the result. In short, the ''After Effects'' interface turns filmmaking into a design process, and a film is re-conceptualized as graphic design that can change over time.


When physical media are simulated in a computer, we do not simply end up with the same media as before. By adding new properties and working methods, computer simulation fundamentally changes the identity of a given medium. For example, in the case of "electronic paper" such as a Word document or a PDF file, we can do many things which were not possible with ordinary paper: zoom in and out of the document, search for a particular phrase, change fonts and line spacing, etc. Similarly, the current (2006) online interactive map services provided by ''Mapquest'', ''Yahoo'', and ''Google'' augment the traditional paper map in multiple and amazing ways - just take a look at ''Google Earth'' <ref>http://earth.google.com/</ref>.


A significant proportion of contemporary software for creating, editing, and interacting with media developed in this way - by simulating a physical medium and augmenting it with new properties. But if we consider media design software such as ''Maya'' (used for 3D modeling and computer animation) or ''After Effects'' (motion graphics, compositing, and visual effects), we encounter a different logic. These software applications ''do not simulate any single physical media that existed previously''. Rather, ''they borrow from a number of different media, combining and mixing their working methods and specific techniques''. (And, of course, they also add new capabilities specific to computers - such as the ability to automatically calculate the intermediate values between a number of keyframes.) For example, 3D modeling software mixes form-making techniques which were previously "hardwired" into different physical media: the ability to change the curvature of a rounded form as though it is made from clay, the ability to build a structure from simple geometric primitives the way a house can be built from identical rectangular building blocks, etc.
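
The parenthetical example above - automatically calculating intermediate values between keyframes - reduces to interpolation. A minimal sketch of the linear case follows (the function name is mine; animation software also offers Bézier and other easing curves):

<pre>
# Minimal sketch of keyframe "in-betweening": given (time, value)
# keyframes sorted by time, compute the interpolated value at time t.
# Real animation software also offers Bezier and eased interpolation.
def interpolate(keyframes, t):
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    if t >= keyframes[-1][0]:
        return keyframes[-1][1]
    for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            fraction = (t - t0) / (t1 - t0)  # position within the segment
            return v0 + (v1 - v0) * fraction

# Fade a layer in over two seconds, hold, then fade it out:
opacity = [(0.0, 0.0), (2.0, 1.0), (4.0, 1.0), (5.0, 0.0)]
print(interpolate(opacity, 1.0))   # -> 0.5
print(interpolate(opacity, 4.5))   # -> 0.5
</pre>

Applied to any animatable property - position, scale, transparency, filter settings - this one mechanism is what lets a static composition "change over time."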


Similarly, as we saw, the original ''After Effects'' interface, toolkit, and workflow drew on the techniques of animation and the techniques of graphic design. (We can also find traces of filmmaking and 3D computer graphics.) But the result is not simply a mechanical sum of all the elements that came from earlier media. Rather, as software remixes the techniques and working methods of the various media it simulates, the result is new interfaces, tools, and workflows with their own distinct logic. In the case of ''After Effects'', the working method which it puts forward is neither animation, nor graphic design, nor cinematography, even though it draws from all these fields. It is a new way to make moving image media. Similarly, the visual language of media produced with this and similar software is also different from the languages of moving images which existed previously.

== 3D Compositing: Three-dimensional Space as a New Platform for Media Design ==


As I was researching what users and industry reviewers have been saying about ''After Effects'', I came across a somewhat condescending characterization of this software as "''Photoshop'' with keyframes." I think that this characterization is actually quite useful. <ref>Soon after the initial release of ''After Effects'' in January 1993, the company that produced it was purchased by Adobe, which was already selling ''Photoshop''.</ref> Think about all the different ways of manipulating images available in ''Photoshop'' and the degree of control provided by its multiple tools. Think also about its concept of a visual composition as a stack of, potentially, hundreds of layers, each with its own level of transparency and multiple alpha channels. The ability to animate such a composition and to continue using ''Photoshop'' tools to adjust visual elements over time on all layers independently does indeed constitute a new paradigm for creating moving images. And this is what ''After Effects'' and other animation, visual effects, and compositing software make possible today. <ref>''Photoshop'' and ''After Effects'' were originally designed by different people at different times, and even after both came to be owned by Adobe (it released ''Photoshop'' in 1989 and ''After Effects'' in 1993), it took Adobe a number of years to build close links between the two, eventually making it easy to move back and forth between the programs.</ref> And while the paradigm of working with a number of layers placed on top of each other is itself not new - consider traditional cell animation, optical printing, photocollage, and graphic design - going from a few non-transparent layers to hundreds and even thousands, each with its own controls, fundamentally changes not only how a moving image looks but also what it can say.


But innovative as it was, by the beginning of the 2000s the 2D digital compositing paradigm had already come to be supplemented by a new one: 3D compositing. The new paradigm has even fewer connections to previous media than 2D compositing. Instead, it takes the relatively new medium that was born with computers in the 1960s - 3D computer graphics - and transforms it into a general platform for moving media design.


The language used in the professional production milieu today reflects an implicit understanding that 3D graphics is a new medium, unique to computers. When people use terms such as "computer visuals," "computer imagery," or "CGI" (an abbreviation for "computer generated imagery"), everybody understands that they refer to 3D graphics as opposed to any other image source such as "digital photography." But what is my own reason for thinking of 3D computer graphics as a new medium - as opposed to considering it an extension of architectural drafting, projection geometry, or set making? Because it offers a new method for representing physical reality - both what actually exists and what is imagined. This method is fundamentally different from what has been offered by the main media of the industrial era: still photography, film recording, and audio recording. With 3D computer graphics, we can represent the three-dimensional structure of the world - this versus capturing only a perspectival image of the world, as in lens-based recording. We can also manipulate our representation, using various tools, with an ease and precision qualitatively different from the much more limited "manipulability" of a model made from any physical material (although nanotechnology promises to change this in the future). And, as the case of contemporary architecture makes clear, 3D computer graphics is not simply a faster way of working with geometric representations such as the plans and cross-sections used by draftsmen for centuries. When a generation of young architects and architectural students started to systematically work with 3D software such as Alias in the middle of the 1990s, the ability to directly manipulate a 3D shape (rather than only dealing with its projections, as in traditional drafting) quickly led to a whole new language of complex non-rectangular shapes. In other words, designers working with the medium of 3D computer graphics started to imagine different things.


To come back to our topic of discussion: when the Velvet Revolution of the 1990s made it possible to easily combine multiple media sources in a single moving image sequence via digital compositing, CGI was added to the mix. Today, 3D models are routinely used in media compositions created in ''After Effects'' and similar software, along with all other media sources. But in order to be a part of the mix, they need to be placed on their own 2D layers and thus treated as 2D images. This was the original ''After Effects'' paradigm: all image media can meet as long as they are reduced to 2D. <ref>I say "original" because later versions of ''After Effects'' added the ability to work with 3D layers.</ref>


In contrast, in the 3D compositing paradigm all media types are placed within a single 3D space. This works as follows. A designer positions all image sources which are inherently two-dimensional - for instance, digital video or digitized film, hand-drawn elements, typography - on separate 2D planes. These planes are situated within a single virtual 3D space. One advantage of this representation is that since 3D space is "native" to 3D computer graphics, 3D models can stay as they are, i.e. three-dimensional. An additional advantage is that the designer can now use all the techniques of virtual cinematography as developed in 3D computer animation. She can define different kinds of lights, fly the virtual camera around and through the image planes along any trajectory, and use depth of field and motion blur effects. <ref>If 2D compositing can be understood as an extension of twentieth century cell animation, where a composition consists of a stack of flat drawings, the conceptual source of the 3D compositing paradigm is different. It comes out of the work on integrating live action footage and CGI done in the 1980s in the context of feature film production. Both the film director and the computer animator work in a three-dimensional space: the physical space of the set in the first case, the virtual space defined by 3D modeling software in the second. Therefore it conceptually makes sense to use three-dimensional space as a common platform for the integration of these two worlds. It is not accidental that ''NUKE'', one of the leading programs for 3D compositing today, was developed in-house at Digital Domain, a company co-founded in 1993 by James Cameron - the Hollywood director who systematically advanced the integration of CGI and live action in films such as ''The Abyss'' (1989), ''Terminator 2'' (1991), and ''Titanic'' (1997).</ref>
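
The geometric core of this paradigm - flat layers positioned in a shared 3D space and viewed through a virtual camera - can be sketched schematically. The code below (all names and numbers are mine, purely illustrative) only projects the corner points of two layer planes onto the screen; an actual 3D compositor would also sample the images, sort them by depth, light them, and blur them:

<pre>
# Schematic of 3D compositing: 2D image layers become planes in a
# shared 3D space, viewed through a virtual pinhole camera at the
# origin looking down the +z axis. Layer contents are omitted; only
# the corner points are projected. All values are illustrative.
def project(point, focal_length=35.0):
    """Perspective-project a 3D point onto the 2D screen plane."""
    x, y, z = point
    return (focal_length * x / z, focal_length * y / z)

layers = {
    # typography close to the camera, on a small plane
    "type":  [(-1, -1, 10), (1, -1, 10), (1, 1, 10), (-1, 1, 10)],
    # video footage further back, on a much larger plane
    "video": [(-4, -3, 40), (4, -3, 40), (4, 3, 40), (-4, 3, 40)],
}
for name, corners in layers.items():
    print(name, [project(c) for c in corners])
# The distant video plane ends up about the same screen size as the
# near type plane - perspective at work. Moving the camera between
# frames yields the fly-throughs typical of 3D motion graphics.
</pre>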


In 1995 I published the article ''What is Digital Cinema?'', which was my first attempt to describe the changes in the logic of moving image production I was witnessing. In that article I proposed that the logic of hand-drawn animation, which throughout the twentieth century was marginal in relation to cinema, became dominant in the computer era. Because software allows the designer to manually manipulate any image regardless of its source, as though it was drawn in the first place, the ontological differences between different image media become irrelevant. Both conceptually and practically, they are all reduced to hand-drawn animation.


Having discussed the use of layers in 2D compositing using the example of ''After Effects'', I can now add that animation logic moves from the marginal to the dominant position in another way as well. The paradigm of a composition as a stack of separate visual elements, as practiced in cell animation, becomes the default way of working with all images in a software environment - regardless of their origin and final output media. In short, a moving image in general is now understood as a composite of layers of imagery. A "single layer image" such as unmanipulated digital video becomes an exception.


The emergence of the 3D compositing paradigm can also be seen as following the logic of temporal reversal. The new representational structure developed within the computer graphics field - a 3D virtual space containing 3D models - has gradually moved from a marginal to the dominant role. In the 1970s and 1980s computer graphics were used only occasionally, in a dozen or so feature films such as ''Alien'' (1979), ''Tron'' (1982), ''The Last Starfighter'' (1984), and ''The Abyss'' (1989), and in selected television commercials and broadcast graphics. But by the beginning of the 2000s, the representational structure of computer graphics, i.e. a 3D virtual space, came to function as an umbrella which can hold all other image types regardless of their origin. An example of an application which implements this paradigm is ''Flame'', enthusiastically described by one user as "a full 3D compositing environment into which you can bring 3D models, create true 3D text and 3D particles, and distort layers in 3D space." <ref>Alan Okey, post to forums.creativecow.net, Dec 28, 2005 http://forums.creativecow.net/cgi-bin/dev_read_post.cgi?forumid=154&postid=855029 .</ref>


This does not mean that 3D animation itself became visually dominant in moving image culture, or that the 3D structure of the space within which media compositions are now routinely constructed is necessarily made visible (usually it is not). Rather, the way 3D computer animation organizes visual data - as objects positioned in a Cartesian space - became the way to work with all moving image media. As already stated above, a designer positions all the elements which go into a composition - 2D animated sequences, 3D objects, particle systems, video and digitized film sequences, still images and photographs - inside a shared 3D virtual space. There, these elements can be further animated, transformed, blurred, filtered, etc. So while all moving image media have been reduced to the status of hand-drawn animation in terms of their manipulability, we can also say that all media have become layers in 3D space. In short, the new medium of 3D computer animation has "eaten up" the dominant media of the industrial age - lens-based photo, film, and video recording.


This is a good moment to pause and reflect on the very term of our discussion - the moving image. When cinema in its modern form was born at the end of the nineteenth century, the new medium was understood as the extension of an already familiar one - that is, as a photographic image which is now moving. This understanding can be found in the press accounts of the day and also in at least one of the official names given to the new medium - "moving pictures." On the material level, a film indeed consisted of separate photographic frames which, when driven through a projector, created the effect of motion for the viewer. So the concept used to understand the medium indeed fit its material structure.


But is this concept still appropriate today? When we record video and play it back, we are still dealing with the same structure: a sequence of frames. But for professional media designers, the terms have changed. The importance of these changes is not just academic, nor purely theoretical. Because designers understand their media differently, they are creating media that look different and have a new logic.


Consider the conceptual changes, or new paradigms - which at the same time are new ways of designing - that we have discussed so far. Theoretically they are not necessarily all compatible with each other, but in production practice these different paradigms are used together. A "moving image" became a hybrid which can combine all the different visual media invented so far - rather than holding only one kind of data such as camera recording, hand drawing, etc. Rather than being understood as a singular flat plane - the result of light focused by the lens and captured by the recording surface - it is now understood as a stack of separate layers, potentially infinite in number. And rather than "time-based," it becomes "composition-based," or "object oriented." That is, instead of being treated as a sequence of frames arranged in time, a "moving image" is now thought of as a two-dimensional composition that consists of a number of objects that can be manipulated independently. And finally, in yet another paradigm, that of 3D compositing, the designer works in a three-dimensional space that holds both CGI and lens-recorded flat image sources.


Of course, frame-based representation did not disappear - but it became simply a recording and output format rather than the space where the actual design takes place. And while the term "moving image" can still be used as an appropriate description of how the output of a design process is experienced by its viewers, it no longer captures how designers - who today think very differently from those of twenty years ago - conceive of what they create.


If we focus on what the different paradigms summarized above have in common, we can say that filmmakers, editors, special effects artists, animators, and motion graphics designers are working on ''a composition in 2D or a 3D space that consists of a number of separate objects''. The spatial dimension became as important as the temporal dimension. From the concept of a "moving image" understood as a sequence of static photographs we have moved to a new concept: ''a modular media composition.''


== Motion Graphics ==


Let me invoke the figure of the inversion from marginal to mainstream in order to introduce yet one more paradigmatic shift. Another media type which until the 1990s was even more marginal to live action filmmaking than animation - typography - has now become an equal player along with lens-based images and all other types of media. The term "motion graphics" has been used at least since 1960, when a pioneer of computer filmmaking, John Whitney, named his new company Motion Graphics. However, until the Velvet Revolution only a handful of people and companies had systematically explored the art of animated typography: Norman McLaren, Saul Bass, Pablo Ferro, R. Greenberg, and a few others. <ref>For a rare discussion of motion graphics prehistory, as well as an equally rare attempt to analyze the field by using a set of concepts rather than as the usual coffee table portfolio of individual designers, see Jeff Bellantoni and Matt Woolman, ''Type in Motion'' (Rizzoli, 1999).</ref> But in the middle of the 1990s, moving image sequences or short films dominated by animated type and abstract graphical elements rather than by live action started to be produced in large numbers. The material cause of the motion graphics take-off? ''After Effects'' running on PCs, and other software running on relatively inexpensive graphics workstations, became affordable to smaller design, visual effects, and post-production houses, and soon to individual designers. Almost overnight, the term "motion graphics" became well known. The five-hundred-year-old Gutenberg galaxy sprang into motion.


Along with typography, the whole language of twentieth-century graphic design was "imported" into moving image design. This development did not receive a name of its own, but it is obviously at least as important. Today (2006) the term "motion graphics" is often used to refer to all moving image sequences which are dominated by typography and/or design and embedded in larger forms. But we should recall that, while in the twentieth century typography was indeed often used in combination with other design elements, for five hundred years it formed its own world. Therefore I think it is important to consider the two kinds of "import" operations that took place during the Velvet Revolution - of typography and of twentieth-century graphic design - as two distinct historical developments.


== Deep Remixability ==

Although the previous discussion did not cover all the changes that took place during the Velvet Revolution, the magnitude of the transformations should by now be clear. While we can name many social factors that could have played, and probably did play, some role - the rise of branding, the experience economy, youth markets, and the Web as a global communication platform during the 1990s - I believe that these factors alone cannot account for the specific design and visual logics which we see today in media culture. Similarly, they cannot be explained by simply saying that contemporary consumer society requires constant innovation, constantly novel aesthetics, and new effects. This may be true - but why do we see these particular visual languages as opposed to others, and what is the logic that drives their evolution? I believe that to properly understand this, we need to carefully look at media creation, editing, and design software and its use in production environments (which can range from a single laptop to a number of production companies collaborating on the same large-scale project).


The makers of software used in production do not usually set out to create a revolution. On the contrary, software is created to fit into already existing production procedures, job roles, and familiar tasks. But software are like species within the common ecology - in this case, a shared computer environment. Once &quot;released,&quot; they start interacting, mutating, and making hybrids. The Velvet Revolution can therefore be understood as the period of systematic hybridization between different software species originally designed to do work in different media. In the beginning of the 1990s, we had - ''Illustrator'' for making vector-based drawings, ''Photoshop'' for editing of continuous tone images, ''Wavefront'' and ''Alias'' for 3D modeling and animation, ''After Effects'' for 2D animation, and so on. By the end of the 1990s, a designer could combine operations and representational formats such as a bitmapped still image, an image sequence, a vector drawing, a 3D model and digital video specific to these programs within the same design - regardless of its destination media. I believe that the hybrid visual language that we see today across &quot;moving image&quot; culture and media design in general is largely the outcome of this new production environment. While this language supports seemingly numerous variations as manifested in particular media designs, its general logic can be summed up in one phrase: remixability of previously separate media languages.
The makers of software used in production do not usually set out to create a revolution. On the contrary, software is created to fit into already existing production procedures, job roles, and familiar tasks. But software programs are like species within a common ecology - in this case, a shared computer environment. Once "released," they start interacting, mutating, and making hybrids. The Velvet Revolution can therefore be understood as the period of systematic hybridization between different software species originally designed to work in different media. In the beginning of the 1990s, we had ''Illustrator'' for making vector-based drawings, ''Photoshop'' for editing continuous-tone images, ''Wavefront'' and ''Alias'' for 3D modeling and animation, ''After Effects'' for 2D animation, and so on. By the end of the 1990s, a designer could combine operations and representational formats specific to these programs - a bitmapped still image, an image sequence, a vector drawing, a 3D model, digital video - within the same design, regardless of its destination media. I believe that the hybrid visual language that we see today across "moving image" culture and media design in general is largely the outcome of this new production environment. While this language supports seemingly numerous variations as manifested in particular media designs, its general logic can be summed up in one phrase: remixability of previously separate media languages.


As I stressed in this text, the result of this hybridization is not simply a mechanical sum of the previously existing parts but a new species. This applies both to the visual language of particular designs and to the operations themselves. When an old operation is integrated into the overall digital production environment, it often comes to function in a new way. I would like to conclude by analyzing in detail how this process works in the case of a particular operation - in order to emphasize once again that media remixability is not simply about adding the content of different media, or adding together their techniques and languages. And since "remix" in contemporary culture is commonly understood as these kinds of additions, we may want to use a different term for the kinds of transformations the example below illustrates. Let us call it ''deep remixability''.


What does it mean when we see a depth-of-field effect in motion graphics, films, and television programs which use neither live action footage nor photorealistic 3D graphics but have a more stylized look? Originally an artifact of lens-based recording, depth of field was simulated in a computer when the main goal of the 3D computer graphics field was to create maximum "photorealism," i.e. synthetic scenes indistinguishable from live action cinematography. <ref>For more on this process, see the chapter "Synthetic Realism and its Discontents" in ''The Language of New Media''.</ref> But once this technique became available, media designers gradually realized that it could be used regardless of how realistic or abstract the visual style is - as long as there is a suggestion of a 3D space. Typography moving in perspective through an empty space; drawn 2D characters positioned on different layers in a 3D space; a field of animated particles - any composition can be put through the simulated depth of field.


The fact that this effect is simulated and removed from its original physical media means that a designer can manipulate it in a variety of ways. The parameters which define what part of the space is in focus can be independently animated, i.e. set to change over time, because they are simply numbers controlling the algorithm and not something built into the optics of a physical lens. So while simulated depth of field can be said to maintain the memory of the particular physical media (lens-based photo and film recording) from which it came, it has become an essentially new technique which functions as a "character" in its own right. It has a fluidity and versatility not available previously. Its connection to the physical world is ambiguous at best. On the one hand, it only makes sense to use depth of field if you are constructing a 3D space, even if that space is defined in a minimal way - by a few or even a single depth cue, such as lines converging towards the vanishing point or foreshortening. On the other hand, the designer can be said to "draw" this effect in any way desirable. The axis controlling depth of field does not need to be perpendicular to the image plane, the area in focus can be anywhere in space, it can also quickly move around the space, etc.
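
To make this concrete, here is a minimal Python sketch - with invented names such as focus_distance and blur_for, and not any actual compositing package's implementation - of depth of field reduced to animatable numbers:

    # Simulated depth of field as pure parameters (illustrative sketch).
    def focus_distance(t):
        """Animated focus distance: just a number changing over time,
        here racking from depth 2.0 to depth 10.0 over one second."""
        return 2.0 + 8.0 * min(max(t, 0.0), 1.0)

    def blur_for(element_depth, t, focus_width=1.5):
        """Blur grows with an element's distance from the plane in focus.
        Because this is an algorithm rather than lens optics, every term -
        focus_width, or even the axis along which 'depth' is measured -
        can itself be animated."""
        return abs(element_depth - focus_distance(t)) / focus_width

    # Any layer with a depth value can be run through the effect,
    # whether it holds video, typography, or a field of particles.
    for t in (0.0, 0.5, 1.0):
        print([round(blur_for(d, t), 2) for d in (1.0, 5.0, 9.0)])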


Following the Velvet Revolution, the aesthetic charge of many media designs is often derived from more "simple" remix operations - juxtaposing different media in what can be called "media montage." However, for me the essence of this Revolution is the more fundamental ''deep remixability'' illustrated by the example analyzed above. Computerization virtualized practically all media creation and modification techniques, "extracting" them from their particular physical media and turning them into algorithms. This means that, in most cases, we will no longer find any of these techniques in their pure original state.


== Footnotes ==

Latest revision as of 12:57, 19 February 2013

Author: Lev Manovich

Published online, spring 2006

During the heyday of debates on the post-modern, at least one critic in America noticed the connection between post-modern pastiche and computerization. In his book After the Great Divide (1986), Andreas Huyssen writes: "All modern and avantgardist techniques, forms and images are now stored for instant recall in the computerized memory banks of our culture. But the same memory also stores all of pre-modernist art as well as the genres, codes, and image worlds of popular cultures and modern mass culture." [1] His analysis is accurate - except that these "computerized memory banks" did not really become commonplace for another fifteen years. Only when the Web had absorbed enough of the media archives did it become this universal cultural memory bank accessible to all cultural producers. But even for professionals, the ability to easily integrate multiple media sources within the same project - multiple layers of video, scanned still images, animation, graphics, and typography - only came towards the end of the 1990s.

In 1985, when Huyssen's book was in preparation for publication, I was working for what was then one of the few computer animation companies in the world, Digital Effects [2]. Each computer animator had his own interactive graphics terminal that could show 3D models, but only in wireframe and in monochrome; to see them fully rendered in color, we had to take turns, as the company had only one color raster display which we all shared. The data was stored on bulky magnetic tapes about a foot in diameter; finding the data from an old job was a cumbersome process which involved locating the right tape in the tape library, putting it on a tape drive, and then searching for the right part of the tape. We did not have a color scanner, so getting "all modern and avantgardist techniques, forms and images" into the computer was far from trivial. And even if we had had one, there was no way to store, recall, and modify these images. The machine that could do that - the Quantel Paintbox - cost over USD 160,000, which we could not afford. And when in 1986 Quantel introduced Harry, the first commercial non-linear editing system which allowed for digital compositing of multiple layers of video and special effects, its cost similarly made it prohibitive for everybody except network television stations and a few production houses. Harry could record only eighty seconds of broadcast-quality video. In the realm of still images, things were not much better: for instance, the digital still store Paintbox, released by Quantel in 1990, could hold only 500 broadcast-quality images, and its cost was similarly very high.

In short, in the middle of the 1980s neither we nor other production companies had anything approaching the "computerized memory banks" imagined by Huyssen. And of course, the same was true for the visual artists who were then associated with post-modernism and the ideas of pastiche, collage, and appropriation. In 1986 the BBC produced a documentary, Painting with Light, for which half a dozen well-known painters, including Richard Hamilton and David Hockney, were invited to work with a Quantel Paintbox. The resulting images were not so different from the paintings that these artists were producing without a computer. And while some artists were making references to "modern and avantgardist techniques, forms and images," these references were painted rather than directly loaded from "computerized memory banks." Only in the middle of the 1990s, when relatively inexpensive graphics workstations and personal computers running image editing, animation, compositing, and illustration software became commonplace and affordable for freelance graphic designers, illustrators, and small post-production and animation studios, could the situation described by Huyssen start to become a reality.

The results were dramatic. Within about five years, modern visual culture was fundamentally transformed. Previously separate media - live action cinematography, graphics, still photography, animation, 3D computer animation, and typography - started to be combined in numerous ways. By the end of the decade, the "pure" moving image media became an exception and hybrid media became the norm. However, in contrast to other computer revolutions such as the rise of the World Wide Web around the same time, this revolution was not acknowledged by popular media or by cultural critics. What received attention were the developments that affected narrative filmmaking - the use of computer-produced special effects in Hollywood feature films, or the inexpensive digital video and editing tools outside of it. But another process which happened on a larger scale - the transformation of the visual language used by all forms of moving images outside of narrative films - has not been critically analyzed. In fact, while the results of these transformations had become fully visible by about 1998, at the time of this writing (early 2006) I am not aware of a single theoretical article discussing them.

One of the reasons is that in this revolution, no new media per se were created. Just as ten years earlier, designers were making still images and moving images. But the aesthetics of these images was now very different. In fact, it was so new that, in retrospect, the post-modern imagery of just ten years earlier, which at the time looked strikingly different, now appears as a barely noticeable blip on the radar of cultural history.

Visual Hybridity

This article is the first part of a series devoted to the analysis of the new hybrid visual language of moving images that emerged during the period of 1993-1998. Today this language dominates our visual culture. While narrative features mostly stick to live cinematography, and video shot by ordinary people with consumer video cameras and cell phones is similarly usually left as is, everything else - commercials, music videos, motion graphics, TV graphics, and the other types of short non-narrative films and moving image sequences being produced around the world by media professionals, including companies, individual designers and artists, and students - is hybrid.

Of course, I could have picked different dates, for instance starting a few years earlier - but since the After Effects software, which will play the key role in my account, was released in 1993, I decided to pick this year as my first date. And while my second date could also have been different, I believe that by 1998 the broad changes in the aesthetics of the moving image had become visible. If you want to quickly see this for yourself, simply compare demo reels from the same visual effects companies made in the early 1990s and the late 1990s (a number of them are available online - look, for instance, at the work of Pacific Data Images [3]). In the work from the beginning of the decade, computer imagery in most cases appears by itself - that is, we see whole commercials and promotional videos done in 3D computer animation, and the novelty of this new medium is foregrounded. By the end of the 1990s, computer animation becomes just one element integrated in a media mix that also includes live action, typography, and design.

Although these transformations happened only recently, the ubiquity of the new hybrid visual language today (2006) is such that it takes an effort to recall how different things looked before. Similarly, the changes in production processes and equipment that made this language possible also quickly fade from both public and professional memory. To quickly evoke these changes as seen from the professional perspective, I am going to quote from a 2004 interview with Mindi Lipschultz, who has worked as an editor, producer, and director in Los Angeles since 1979:

If you wanted to be more creative [in the 1980s], you couldn't just add more software to your system. You had to spend hundreds of thousands of dollars and buy a paintbox. If you wanted to do something graphic - an open to a TV show with a lot of layers - you had to go to an editing house and spend over a thousand dollars an hour to do the exact same thing you do now by buying an inexpensive computer and several software programs. Now with Adobe After Effects and Photoshop, you can do everything in one sweep. You can edit, design, animate. You can do 3D or 2D all on your desktop computer at home or in a small office. [4]

In 1989 the former Soviet satellites of Central and Eastern Europe peacefully liberated themselves from the Soviet Union. In the case of Czechoslovakia, this event came to be referred to as the Velvet Revolution - to contrast it with typical revolutions in modern history, which were always accompanied by bloodshed. To emphasize the gradual, almost invisible pace of the transformations which occurred in moving image aesthetics between approximately 1993 and 1998, I am going to appropriate the term Velvet Revolution to refer to these transformations. Although it may seem presumptuous to compare political and aesthetic transformations simply because they share the same non-violent quality, as we will see in a later article, the two revolutions are actually related. But we can only make this connection after we analyze in detail how the aesthetics and the very logic of moving images changed during this period.

Although the Velvet Revolution I will be discussing involved many technological and social developments - hardware, software, production practices, new job titles, and new professional fields - it is appropriate to highlight one software package as being at the center of these events. This software is After Effects. Introduced in 1993, After Effects was the first software designed to do animation, compositing, and special effects on the personal computer. [5] Its broad effect on moving image production can be compared to the effects of Photoshop and Illustrator on photography, illustration, and graphic design. Although today (2006) media design and post-production companies continue to rely on more expensive "high-end" software such as Flame, Inferno, or Paintbox that runs on specialized graphics workstations from SGI, because of its affordability and length of time on the market After Effects is the most popular and well-known application in this area. Consequently, After Effects will be given a privileged role in this text as both the symbol and the key material foundation which made the Velvet Revolution in moving image culture possible - even though today other programs in a similar price category, such as Apple's Motion, Autodesk's Combustion, and Macromedia's Flash, have challenged After Effects' dominance.

Finally, before proceeding I should explain the use of examples in this article. The visual language I am analyzing is all around us today (this may explain why academics have remained blind to it). After globalization, this language is spoken by communication professionals around the world. You can see all the examples of the various aesthetics I will be mentioning below for yourself by simply watching television in practically any country and paying attention to graphics, or going to a club to see a VJ performance, or visiting the web sites of motion graphics designers and visual effects companies, or opening any book on contemporary design. Nevertheless, I have included references to particular projects below so the reader can see exactly what I am referring to. [6] But since my goal is to describe the new cultural language which by now has become practically universal, I want to emphasize that each of these examples could be substituted by numerous others.

Examples

The use of After Effects is closely identified with a particular type of moving image which became commonplace in large part because of this software and is known as "motion graphics." Concisely defined by Matt Frantz in his master's thesis as "designed non-narrative, non-figurative based visuals that change over time," [7] motion graphics today include film and television titles, TV graphics, dynamic menus, graphics for mobile media content, and other animated sequences. Typically, motion graphics appear as parts of longer pieces: commercials, music videos, training videos, narrative and documentary films, interactive projects.

While motion graphics definitely exemplify the changes that took place during this Velvet Revolution, these changes are broader. Simply put, the result of the Velvet Revolution is a new hybrid visual language of moving images in general. This language is not confined to particular media forms. And while today it manifests itself most clearly in non-narrative forms, it is also often present in narrative and figurative sequences and films.

For example, a music video may use live action while also employing typography and a variety of transitions done with computer graphics (example: the video for Go by Common, directed by Convert / MK12 / Kanye West, 2005). Or it may embed the singer within an animated painterly space (video for Sheryl Crow's Good Is Good, directed by Psyop, 2005). A short film may mix typography, stylized 3D graphics, moving design elements, and video (Itsu for Plaid, directed by the Pleix collective, 2002 [8]).

In some cases, the juxtaposition of different media is clearly visible (examples: music video for Don't Panic by Coldplay; main title for The Inside by Imaginary Forces, 2005). In other cases, a sequence may move between different media so quickly that the shifts are barely noticeable (GMC Denali "Holes" commercial by Imaginary Forces, 2005). In yet other cases, a commercial or a movie title may feature a continuous action shot on video or film, with the image periodically changing from a more natural to a highly stylized look.

While the particular aesthetic solutions vary from one piece to the next and from one designer to another, they all share the same logic: the appearance of multiple media simultaneously in the same frame. Whether these media are openly juxtaposed or almost seamlessly blended together is less important than the fact of this co-presence itself.

Today such a hybrid visual language is also common to a large proportion of short "experimental" (i.e. non-commercial) films being produced for media festivals, the web, mobile media devices, and other distribution platforms. [9] A large percentage of the visuals created by VJs and Live Cinema artists are also hybrid, combining video, layers of 2D imagery, animation, and abstract imagery generated in real time. [10] In the case of narrative feature films and TV programs, while they still rarely mix different graphical styles within the same frame, many now feature highly stylized aesthetics which would previously be identified with illustration rather than filmmaking - for instance, the TV series CSI, George Lucas's latest Star Wars films, or Robert Rodriguez's Sin City.

Media Remixability

What is the logic of this new hybrid visual language? It is a logic of remixability: not only of the content of different media or simply their aesthetics, but of their fundamental techniques, working methods, languages, and assumptions. United within a common software environment, cinematography, animation, computer animation, special effects, graphic design, and typography have come to form a new metamedium. A work produced in this new metamedium can use all the techniques which were previously unique to these different media, or any subset of them.

If we use the concept of "remediation" to describe this new situation, we will misrepresent this logic - or the logic of media computing in general. [11] The computer does not "remediate" particular media. Instead, it simulates all media. And what it simulates are not the surface appearances of different media but all the techniques used for their production and all the methods of viewing and interacting with works in these media.

Once all types of media met within the same digital environment - and this was accomplished by the middle of the 1990s - they started interacting in ways that could never have been predicted or even imagined previously. For instance, while particular media techniques continue to be used in relation to their original media, they can also be applied to other media. (This is possible because the techniques are turned into algorithms, all media are turned into digital data stored in compatible file formats, and software is designed to read and write files produced by other programs.) Here are a few examples: motion blur is applied to 3D computer graphics; computer-generated fields of particles are blended with live action footage to give it an enhanced look; a virtual camera is made to move around a virtual space filled with 2D drawings; flat typography is animated as though it were made from a liquid-like material (the liquid simulation coming from the computer graphics field); and so on. While this "cross-over" use by itself constitutes a fundamental shift in media history, today a typical short film or sequence may combine many such pairings within the same frame. The result is a hybrid, intricate, complex, and rich visual language - or rather, numerous languages that share the basic logic of remixability.
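
A small illustration of this point: once a technique is an algorithm over arrays of pixels, it cannot tell which medium produced its input. The Python sketch below, which assumes NumPy and uses random arrays as stand-ins for actual images, applies one naive box blur to three "different" media sources with the identical call:

    import numpy as np

    def box_blur(img, radius=1):
        """Naive box blur: average each pixel with its neighbours.
        The function only sees an array of numbers - it cannot tell
        (and does not care) what medium produced them."""
        out = np.zeros_like(img, dtype=float)
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                out += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
        return out / (2 * radius + 1) ** 2

    scanned_photo = np.random.rand(8, 8)   # stand-in for a digitized photograph
    rendered_cgi  = np.random.rand(8, 8)   # stand-in for a 3D render
    raster_type   = np.random.rand(8, 8)   # stand-in for rasterized typography
    for source in (scanned_photo, rendered_cgi, raster_type):
        blurred = box_blur(source)          # identical call for all three media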

I believe that the "media remixability" which begins around the middle of the 1990s constitutes a fundamentally new stage in the history of media. It manifests itself in different areas of culture and not only in moving images - although the latter offer a particularly striking example of this new logic at work. Here software such as After Effects became a petri dish where computer animation, live cinematography, graphic design, 2D animation, and typography started to interact, creating new hybrids. And as the examples mentioned above demonstrate, the result of this process of remixability is new aesthetics and new media species which cannot be reduced to the sum of the media that went into them. Put differently, the interactions of different media in the same software environment create new cultural species.

Media remixability does not necessarily lead to a collage-like aesthetics which foregrounds the juxtapositions of different media and different media techniques. As a very different example of what media remixability can result in, consider a more subtle aesthetics well captured by the name of the software under discussion - After Effects. If in the 1990s computers were used to create highly spectacular special effects or "invisible effects," [12] by the end of that decade we see something else emerging: a new visual aesthetics which goes "beyond effects." In this aesthetics, the whole project - music video, commercial, short film, or a large part of a feature film - displays a hyper-real look in which the enhancement of live action material is not completely invisible, but at the same time does not call attention to itself the way special effects usually did (examples: Reebok I-Pimp Black Basketball commercial and The Legend of Zorro main title, both by Imaginary Forces, 2005). This new hyper-real aesthetics is yet another example of how, in the hands of designers, the petri dish of software containing all the media creation and manipulation techniques created during human history now produces new hybrids. In fact, it produces only hybrids.

Layers, Transparency, Compositing

Let us now look at the details of the new visual language of moving images which emerged from the Velvet Revolution, and at the material and social conditions - software, user interface, design workflow - which make remixability possible. Probably the most dramatic among the changes that took place during 1993-1998 was the new ability to combine multiple levels of imagery with varying degrees of transparency via digital compositing. If you compare a typical music video or TV advertising spot circa 1986 with its counterpart circa 1996, the differences are striking. (The same holds for still images.) As I already noted, in 1986 "computerized memory banks" were very limited in their storage capacity and prohibitively expensive, and therefore designers could not quickly and easily cut and paste multiple image sources. But even when a designer did assemble multiple visual references, she could only place them next to, or on top of, each other. She could not modulate these juxtapositions by precisely adjusting, for instance, the transparency levels of different images. Instead, she had to resort to the same photocollage techniques popularized in the 1920s. In other words, the lack of transparency restricted the number of different image sources that could be integrated within a single composition without it starting to look like the photomontages of John Heartfield, Hannah Höch, or Robert Rauschenberg - a mosaic of fragments without any strong dominant. [13]
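
A hypothetical sketch of what this modulation amounts to computationally - per-layer opacity as a simple weighted blend, with NumPy arrays standing in for image sources (the layers and weights below are invented):

    import numpy as np

    def blend(layers, opacities):
        """Stack image layers back-to-front, each scaled by its own
        opacity - the modulation unavailable to the photocollagist."""
        canvas = np.zeros_like(layers[0], dtype=float)
        for layer, alpha in zip(layers, opacities):
            canvas = (1.0 - alpha) * canvas + alpha * layer
        return canvas

    layers = [np.random.rand(4, 4) for _ in range(6)]   # six image sources
    # A dominant base layer plus five faint overlays, rather than
    # six fragments fighting for attention at equal strength:
    result = blend(layers, [1.0, 0.15, 0.1, 0.1, 0.05, 0.05])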

Compositing also made trivial another operation which was previously very cumbersome. Until the 1990s, different media types such as hand-drawn animation, lens-based recordings (i.e. film and video), and typography practically never appeared within the same frame. Instead, animated commercials, publicity shorts, industrial films, and the feature and experimental films that did include multiple media usually placed them in separate shots. A few directors managed to build whole aesthetic systems out of such temporal juxtapositions - most notably Jean-Luc Godard. In his 1960s films such as Week End (1967), Godard cut bold typographic compositions in between live action, creating what can be called "media montages." Also in the 1960s, the pioneering motion graphics designer Pablo Ferro, who appropriately called his company Frame Imagery, created promotional shorts and TV graphics that played on juxtapositions of different media replacing each other in rapid succession. [14] In a number of Ferro's spots, static images of different letterforms, line drawings, original hand-painted artwork, photographs, very short clips from newsreels, and other visuals would come one after another with machine-gun speed.

Within cinema, the superimposition of different media within the same frame was usually limited to two media placed on top of each other in a standardized manner - i.e., static letters appearing on top of still or moving lens-based images in feature film titles. Both Ferro and another motion graphics pioneer, Saul Bass, created a few title sequences where visual elements of different origin were systematically overlaid - such as the opening for Hitchcock's Vertigo designed by Bass (1958). But I think it is fair to say that such complex juxtapositions of media within the same frame (rather than in an edited sequence) were rare exceptions in an otherwise "unimedia" universe where filmed images appeared in feature films and hand-drawn images appeared in animated films. The only twentieth-century feature film director I know of who built his unique aesthetics by systematically combining different media within the same shot is the Czech Karel Zeman. A typical shot by Zeman may contain filmed human figures, an old engraving used for a background, and a miniature model. [15]

The achievements of these directors and designers are particularly remarkable given the difficulty of combining different media within the same frame during the film era. Doing so required the services of a special effects department or a separate company working with optical printers. The techniques that were cheaper and more accessible, such as double exposure, were limited in their precision. So while a designer of static images could at least cut and paste multiple elements within the same composition to create a photomontage, creating the equivalent effect with moving images was far from trivial.

To put this in general terms, we can say that before the computerization of the 1990s, the designer's capacities to access, manipulate, remix, and filter visual information, whether still or moving, were quite restricted. In fact, they were practically the same as those of a hundred years earlier - regardless of whether filmmakers and designers used in-camera effects, optical printing, or video keying. In retrospect, we can see that they were at odds with the flexibility, speed, and precision of data manipulation already available to most other professional fields which by that time were computerized - sciences, engineering, accounting, management, etc. Therefore it was only a matter of time before all image media would be turned into digital data and illustrators, graphic designers, animators, film editors, video editors, and motion graphics designers would start manipulating them via software instead of their traditional tools. But this is only obvious today - after the Velvet Revolution has taken place.

In 1985 Jeff Stein directed a music video for the new wave band The Cars. This video had a big effect in the design world, and MTV gave it the first prize at its first annual music awards. [16] Stein managed to create a surreal world in which a video cutout of the singing head of a band member was animated over different video backgrounds. In other words, Stein took the aesthetics of animated cartoons - 2D animated characters superimposed over a 2D background - and recreated it using video imagery. In addition, simple computer-animated elements were added in some shots to enhance the surreal effect. This was shocking because nobody had ever seen such juxtapositions before. Suddenly, modernist photomontage came alive. But ten years later, such moving video collages not only became commonplace but also became more complex, more layered, and more subtle. Instead of two or three, a composition could now feature hundreds or even thousands of layers. And each layer could have its own level of transparency.

In short, digital compositing now allowed designers to easily mix any number of visual elements, regardless of the media in which they originated, and to control each element in the process. We can make an analogy between multitrack audio recording and digital compositing of moving images. In multitrack recording, each sound track can be manipulated individually to produce the desired result. Similarly, in digital compositing each visual element can be independently modulated in a variety of ways: resized, recolored, animated, etc. Just as a music artist can focus on a particular track while muting all the other tracks, a designer often turns off all visual tracks except the one she is currently adjusting. Similarly, both a music artist and a designer can at any time substitute one element of a composition with another, delete any element, and add new ones. Most importantly, just as multitrack recording redefined the sound of popular music from the 1960s onward, once digital compositing became widely available during the 1990s, it changed the visual aesthetics of moving images in popular culture.
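
The analogy can be made literal in a few lines. The sketch below is purely illustrative, with an invented Track class: it shows the mixing-desk logic of gain and muting that digital compositing transposes to visual layers:

    from dataclasses import dataclass

    @dataclass
    class Track:
        samples: list
        gain: float = 1.0     # like a layer's opacity or scale
        muted: bool = False   # like turning off a visual track

    def mix(tracks):
        """Sum all unmuted tracks sample by sample, like a mixing desk."""
        n = max(len(t.samples) for t in tracks)
        out = [0.0] * n
        for t in tracks:
            if t.muted:
                continue
            for i, s in enumerate(t.samples):
                out[i] += t.gain * s
        return out

    # 'Solo' one track the way a designer turns off all visual layers
    # except the one currently being adjusted:
    tracks = [Track([0.1, 0.2]), Track([0.3, 0.4]), Track([0.5, 0.6])]
    for i, t in enumerate(tracks):
        t.muted = (i != 1)      # audition only the second track
    print(mix(tracks))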

This brief discussion has only scratched the surface of my subject in this section, i.e. layers and transparency. For instance, I have not analyzed the actual techniques of digital compositing and the fundamental concept of an alpha channel, which deserves a separate and detailed treatment. I also did not go into the possible media histories leading to digital compositing, nor its relationship to optical printing, video keying, and the video effects technology of the 1980s. These histories and relationships were discussed in the "Compositing" chapter (1999) of my The Language of New Media, but from a different perspective than the one used here. At that time I was looking at compositing from the point of view of the questions of cinematic realism, practices of montage, and the construction of special effects in feature films. Today, however, it is clear to me that in addition to disrupting the regime of cinematic realism in favor of other visual aesthetics, compositing also had another, even more fundamental effect.
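
For readers who want at least the one-line version of that concept: an alpha channel gives every pixel its own coverage value, and the standard "over" operator (due to Porter and Duff) combines foreground and background per pixel. A minimal sketch with invented data, assuming premultiplied RGBA arrays and NumPy:

    import numpy as np

    def over(fg, bg):
        """Composite premultiplied-alpha foreground over background:
        out = fg + (1 - fg_alpha) * bg, applied at every pixel."""
        fg_a = fg[..., 3:4]   # per-pixel alpha, not one global number
        return fg + (1.0 - fg_a) * bg

    fg = np.zeros((2, 2, 4))
    fg[0, 0] = [0.5, 0.0, 0.0, 0.5]   # one half-opaque red pixel
    bg = np.ones((2, 2, 4))           # opaque white background
    print(over(fg, bg))               # red pixel blends; the rest stays white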

By the end of the 1990s digital compositing had become the basic operation used in creating all forms of moving images, not only big-budget features. So while compositing was originally developed in the context of special effects production in the 1970s and early 1980s [17], it had a much broader effect on contemporary visual and media cultures. Compositing played a key part in turning the digital computer into an experimental lab where different media can meet and where their aesthetics and techniques can be combined to create new species. In short, digital compositing was essential in enabling the development of the new hybrid visual language of moving images which we see everywhere today. In other words, compositing enabled media remixability in the moving image. Thus compositing, which was at first a particular digital technique designed to integrate two particular media - live action film and computer graphics - became a "universal media integrator." And although compositing was originally created to support the aesthetics of cinematic realism, over time it actually had the opposite effect. Rather than forcing different media to fuse seamlessly, compositing led to the flourishing of numerous media hybrids where the juxtapositions between the live and the algorithmically generated, the two-dimensional and the three-dimensional, the raster and the vector are made deliberately visible rather than hidden.

From "Time-based" to a "Composition-based"

My thesis about media remixability applies both to cultural forms and to the software used to create them. Just as the moving image media made by designers today mix the formats, assumptions, and techniques of different media, the toolboxes and interfaces of the software they use are also remixes. Let us again use After Effects as the case study to see how its interface remixes the previously distinct working methods of different disciplines.

When moving image designers started to use compositing/animation software such as After Effects, its interface encouraged them to think about moving images in a fundamentally new way. Film and video editing systems, and the computer simulations of them that came to be known as non-linear editors (today exemplified by Avid and Final Cut [18]), conceptualized a media project as a sequence of shots organized in time. Consequently, while NLE (the standard abbreviation for non-linear editing software) gave the editor many tools for adjusting the edits, it took for granted a constant of film language that came from its industrial organization - that all frames have the same size and aspect ratio. This is an example of a larger phenomenon: as physical media were simulated in a computer, many of their fundamental properties, interface conventions, and constraints were methodically re-created in software - even though the software medium itself has no such limitations. In contrast, from the beginning the After Effects interface put forward a new concept of the moving image: a composition organized both in time and in 2D space.

The center of this interface is the Composition window, conceptualized as a large canvas that can contain visual elements of arbitrary sizes and proportions. When I first started using After Effects soon after it came out, I remember feeling shocked that the software did not automatically resize the graphics I dragged into the Composition window to make them fit the overall frame. The fundamental assumption that accompanied cinema throughout its whole history - that a film consists of many frames which all have the same size and aspect ratio - was gone.

In the film and video editing paradigms of the twentieth century, the minimal unit on which the editor works is a frame. She can change the length of an edit, adjusting where one film or video segment ends and another begins, but she cannot interfere with the contents of a frame. The frame as a whole functions as a kind of "black box" that cannot be "opened" - that was the task of special effects departments. But in the After Effects interface, the basic unit is not a frame but a visual element placed in the Composition window. Each element can be individually accessed, manipulated, and animated. In other words, each element is conceptualized as an independent object. Consequently, a media composition is understood as a set of independent objects that can change over time. The very word "composition" is important in this context, as it references 2D media (drawing, painting, photography, design) rather than filmmaking - i.e. space as opposed to time.
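
The contrast between the two paradigms can be sketched as data structures. The class names below are invented for illustration and do not reflect After Effects' actual object model:

    from dataclasses import dataclass, field

    @dataclass
    class Clip:                 # NLE paradigm: opaque frames in time
        frames: list            # every frame same size, a "black box"
        in_point: int           # the editor adjusts only where it starts...
        out_point: int          # ...and where it ends

    @dataclass
    class Element:              # composition paradigm: independent object
        source: str             # video, type, shape - any media
        width: float            # arbitrary size, no shared aspect ratio
        height: float
        x: float = 0.0          # position on the canvas
        y: float = 0.0
        start: float = 0.0      # its own entry point in time
        duration: float = 1.0

    @dataclass
    class Composition:          # a canvas holding independent objects
        width: float
        height: float
        elements: list = field(default_factory=list)

    comp = Composition(1280, 720)
    comp.elements.append(Element("title.ai", 600, 120, x=340, start=0.5))
    comp.elements.append(Element("footage.mov", 320, 240, x=80, y=400))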

Where does the After Effects interface come from? Given that this software is commonly used to create animated graphics (i.e., "motion graphics") and visual effects, it is not surprising that we can find interface elements which can be traced to three separate fields: animation, graphic design, and special effects. In traditional cel animation practice, an animator places a number of transparent cels on top of each other. Each cel contains a different drawing - for instance, the body of a character on one cel, the head on another, the eyes on a third. Because the cels are transparent, the drawings get automatically "composited" into a single composition. While the After Effects interface does not use the metaphor of a stack of transparent cels directly, it is based on the same principle. Each element in the Composition window is assigned a "virtual depth" relative to all other elements. Together all elements form a virtual stack. At any time, the designer can change the relative position of an element within the stack, delete it, or add new elements.

We can also see a connection between the After Effects interface and stop motion, another popular twentieth-century animation technique. With the stop motion technique, puppets or any other objects are positioned in front of a camera and manually animated one frame at a time. The animator exposes one frame of film, changes the objects a tiny bit, exposes another frame, and so on.

As in both cel and stop-motion animation, After Effects does not make any assumptions about the size or position of individual elements. Rather than dealing with standardized units of time, i.e. film frames containing fixed visual content, a designer now works with separate visual elements positioned in space and time. An element can be a digital video frame, a line of type, an arbitrary geometric shape, etc. The finished work is the result of a particular arrangement of these elements in space and time. In this paradigm we can compare the designer to a choreographer who creates a dance by "animating" the bodies of dancers - specifying their entry and exit points, their trajectories through the space of the stage, and the movements of their bodies. (In this respect it is relevant that while the After Effects interface did not evoke this reference, Macromedia Director, the key multimedia authoring software of the 1990s, did directly use the metaphor of the theatre stage.)

While we can link the After Effects interface to traditional animation methods as used by commercial animation studios, the working method put forward by the software is closer to graphic design. In the commercial animation studios of the twentieth century, all elements - drawings, sets, characters, etc. - were prepared beforehand. The filming itself was a mechanical process. Of course, we can find exceptions to this industrial-like division of labor in experimental animation practice, where a film was typically produced by one person. For instance, in 1947 Oskar Fischinger made the eleven-minute film Motion Painting 1 by continuously modifying a painting and exposing film one frame at a time after each modification. However, because Fischinger was shooting on film, he had to wait a long time before seeing the results of his work. As the historian of abstract animation William Moritz writes, "Fischinger painted every day for over five months without being able to see how it was coming out on film, since he wanted to keep all the conditions, including film stock, absolutely consistent in order to avoid unexpected variations in quality of image." [19] In other words, in the case of this project, creating a design and seeing the result were even more separated than in a commercial animation process.

In contrast, a graphic designer works "in real time." As the designer introduces new elements, adjusts their locations, colors and other properties, tries different images, changes the size of the type, and so on, she can immediately see the result of her work. [20] After Effects simulates this working method by making the Composition window the center of its interface. Like a traditional designer, the After Effects user interactively arranges the elements in this window and can immediately see the result. In short, the After Effects interface makes filmmaking into a design process, and a film is re-conceptualized as graphic design that can change over time.

When physical media are simulated in a computer, we do not simply end up with the same media as before. By adding new properties and working methods, computer simulation fundamentally changes the identity of a given medium. For example, in the case of "electronic paper" such as a Word document or a PDF file, we can do many things which were not possible with ordinary paper: zoom in and out of the document, search for a particular phrase, change fonts and line spacing, etc. Similarly, the current (2006) online interactive map services provided by Mapquest, Yahoo, and Google augment the traditional paper map in multiple and amazing ways - just take a look at Google Earth [21].

A significant proportion of contemporary software for creating, editing, and interacting with media developed in this way - by simulating a physical medium and augmenting it with new properties. But if we consider media design software such as Maya (used for 3D modeling and computer animation) or After Effects (motion graphics, compositing, and visual effects), we encounter a different logic. These software applications do not simulate any single physical medium that existed previously. Rather, they borrow from a number of different media, combining and mixing their working methods and specific techniques. (And, of course, they also add new capabilities specific to computers - such as the ability to automatically calculate the intermediate values between a number of keyframes.) For example, 3D modeling software mixes form-making techniques which were previously "hardwired" into different physical media: the ability to change the curvature of a rounded form as though it were made from clay, the ability to build a structure from simple geometric primitives the way a house can be built from identical rectangular building blocks, etc.
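
That keyframe capability is simple enough to sketch. The following linear interpolation is illustrative only - real programs add easing curves and much more - but the principle of computing in-between values from a few specified ones is the same:

    def interpolate(keyframes, t):
        """keyframes: list of (time, value) pairs sorted by time.
        Returns the value at time t by linear interpolation."""
        if t <= keyframes[0][0]:
            return keyframes[0][1]
        for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
            if t0 <= t <= t1:
                u = (t - t0) / (t1 - t0)
                return v0 + u * (v1 - v0)
        return keyframes[-1][1]

    # Animate a layer's opacity with just three keyframes; the software
    # fills in every frame in between:
    opacity = [(0.0, 0.0), (1.0, 1.0), (3.0, 0.5)]
    print([round(interpolate(opacity, t), 2) for t in (0.0, 0.5, 2.0, 4.0)])
    # -> [0.0, 0.5, 0.75, 0.5]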

Similarly, as we saw, After Effects' original interface, toolkit, and workflow drew on the techniques of animation and the techniques of graphic design. (We can also find traces of filmmaking and 3D computer graphics.) But the result is not simply a mechanical sum of all the elements that came from earlier media. Rather, as software remixes the techniques and working methods of the various media it simulates, the result is new interfaces, tools, and workflows with their own distinct logic. In the case of After Effects, the working method which it puts forward is neither animation, nor graphic design, nor cinematography, even though it draws from all these fields. It is a new way to make moving image media. Similarly, the visual language of media produced with this and similar software is also different from the languages of moving images which existed previously.

In other words, the Velvet Revolution unleashed by After Effects and other software did not simply make more commonplace the animated graphics that artists and designers - John and James Whitney, Norman McLaren, Saul Bass, Robert Abel, Harry Marks, R/Greenberg, and others - were previously creating using stop motion animation, optical printing, the video effects hardware of the 1980s, and other custom techniques and technologies. Instead, it led to the emergence of numerous new visual aesthetics that did not exist before.

3D Compositing: Three-dimensional Space as a New Platform for Media Design

As I was researching what users and industry reviewers have been saying about After Effects, I came across a somewhat condescending characterization of this software as "Photoshop with keyframes." I think that this characterization is actually quite useful. [22] Think about all the different ways of manipulating images available in Photoshop and the degree of control provided by its multiple tools. Think also about its concept of a visual composition as a stack of potentially hundreds of layers, each with its own level of transparency and multiple alpha channels. The ability to animate such a composition and to continue using Photoshop tools to adjust visual elements over time on all layers independently does indeed constitute a new paradigm for creating moving images. And this is what After Effects and other animation, visual effects, and compositing software make possible today. [23] And while the paradigm of working with a number of layers placed on top of each other is itself not new - consider traditional cel animation, optical printing, photocollage, and graphic design - going from a few non-transparent layers to hundreds and even thousands, each with its own controls, fundamentally changes not only how a moving image looks but also what it can say.

But innovative as it was, by the beginning of the 2000s the 2D digital compositing paradigm had already come to be supplemented by a new one: 3D compositing. The new paradigm has even fewer connections to previous media than 2D compositing. Instead, it takes the relatively new medium that was born with computers in the 1960s - 3D computer graphics - and transforms it into a general platform for moving image design.

The language used in the professional production milieu today reflects an implicit understanding that 3D graphics is a new medium, unique to computers. When people use terms such as "computer visuals," "computer imagery," or "CGI" (an abbreviation for "computer generated imagery"), everybody understands that they refer to 3D graphics as opposed to any other image source such as digital photography. But what is my own reason for thinking of 3D computer graphics as a new medium - as opposed to considering it an extension of architectural drafting, projection geometry, or set making? Because it offers a new method for representing physical reality - both what actually exists and what is imagined. This method is fundamentally different from what was offered by the main media of the industrial era: still photography, film recording, and audio recording. With 3D computer graphics, we can represent the three-dimensional structure of the world - versus capturing only a perspectival image of the world, as in lens-based recording. We can also manipulate our representation, using various tools, with an ease and precision qualitatively different from the much more limited "manipulability" of a model made from any physical material (although nanotechnology promises to change this in the future). And, as the case of contemporary architecture makes clear, 3D computer graphics is not simply a faster way of working with the geometric representations, such as plans and cross-sections, used by draftsmen for centuries. When a generation of young architects and architectural students started to systematically work with 3D software such as Alias in the middle of the 1990s, the ability to directly manipulate a 3D shape (rather than only dealing with its projections, as in traditional drafting) quickly led to a whole new language of complex non-rectangular shapes. In other words, designers working with the medium of 3D computer graphics started to imagine different things.

To come back to our topic of discussion: When the Velvet Revolution of the 1990s made it possible to easily combine multiple media sources in a single moving image sequence via digital compositing, CGI was added to the mix. Today, 3D models are routinely used in media compositions created in After Effects and similar software, along with all other media sources. But in order to be a part of the mix, they need to be placed on their own 2D layers and thus treated as 2D images. This was the original After Effects paradigm: all image media can meet as long as they are reduced to 2D. [24]

In contrast, in the 3D compositing paradigm all media types are placed within a single 3D space. This works as follows. A designer positions all the image sources which are inherently two-dimensional - for instance, digital or digitized film, hand-drawn elements, typography - on separate 2D planes. These planes are situated within a single virtual 3D space. One advantage of this representation is that since 3D space is "native" to 3D computer graphics, 3D models can stay as they are, i.e. three-dimensional. An additional advantage is that the designer can now use all the techniques of virtual cinematography as developed in 3D computer animation. She can define different kinds of lights, fly the virtual camera around and through the image planes along any trajectory, and use depth of field and motion blur effects. [25]
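
In miniature, and with all names and values invented, the paradigm looks something like this: flat layers receive positions in a shared 3D space, and a virtual camera - itself just a set of animatable numbers - projects them onto the screen. A toy pinhole projection with NumPy (no camera rotation, for brevity):

    import numpy as np

    def project(point3d, camera_pos, focal=1.0):
        """Pinhole projection of a 3D point into 2D screen coordinates,
        relative to a movable virtual camera."""
        x, y, z = np.asarray(point3d, float) - np.asarray(camera_pos, float)
        return focal * x / z, focal * y / z

    # Three 2D planes - video, hand-drawn art, typography - placed at
    # different depths in the same virtual space:
    planes = {"footage.mov": (0.0, 0.0, 5.0),
              "drawing.psd": (1.0, 0.5, 8.0),
              "title.ai":    (-1.0, 0.2, 3.0)}

    # Fly the camera forward and watch every layer's screen position
    # (and apparent scale) change consistently:
    for cam_z in (0.0, 1.0, 2.0):
        for name, pos in planes.items():
            x, y = project(pos, (0.0, 0.0, cam_z))
            print(cam_z, name, round(float(x), 3), round(float(y), 3))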

In 1995 I published the article "What is Digital Cinema?", which was my first attempt to describe the changes in the logic of moving image production I was witnessing. In that article I proposed that the logic of hand-drawn animation, which throughout the twentieth century was marginal in relation to cinema, became dominant in the computer era. Because software allows the designer to manually manipulate any image regardless of its source, as though it were drawn in the first place, the ontological differences between different image media become irrelevant. Both conceptually and practically, they are all reduced to hand-drawn animation.

Having discussed the use of layers in 2D compositing using the example of After Effects, I can now add that animation logic moves from the marginal to the dominant position in another way as well. The paradigm of a composition as a stack of separate visual elements, as practiced in cel animation, becomes the default way of working with all images in a software environment - regardless of their origin and final output media. In short, a moving image in general is now understood as a composite of layers of imagery. A "single-layer image" such as unmanipulated digital video becomes an exception.

The emergence of the 3D compositing paradigm can also be seen as following the logic of temporal reversal. The new representational structure developed within the computer graphics field - a 3D virtual space containing 3D models - has gradually moved from a marginal to the dominant role. In the 1970s and 1980s computer graphics were used only occasionally, in a dozen or so feature films such as Alien (1979), Tron (1982), The Last Starfighter (1984), and The Abyss (1989), and in selected television commercials and broadcast graphics. But by the beginning of the 2000s, the representational structure of computer graphics, i.e. a 3D virtual space, came to function as an umbrella which can hold all other image types regardless of their origin. An example of an application which implements this paradigm is Flame, enthusiastically described by one user as "a full 3D compositing environment into which you can bring 3D models, create true 3D text and 3D particles, and distort layers in 3D space." [26]

This does not mean that 3D animation itself became visually dominant in moving image culture, or that the 3D structure of the space within which media compositions are now routinely constructed is necessarily made visible (usually it is not). Rather, the way 3D computer animation organizes visual data - as objects positioned in a Cartesian space - became the way to work with all moving image media. As already stated above, a designer positions all the elements which go into a composition - 2D animated sequences, 3D objects, particle systems, video and digitized film sequences, still images and photographs - inside the shared 3D virtual space. There, these elements can be further animated, transformed, blurred, filtered, etc. So while all moving image media have been reduced to the status of hand-drawn animation in terms of their manipulability, we can also say that all media have become layers in 3D space. In short, the new medium of 3D computer animation has "eaten up" the dominant media of the industrial age - lens-based photo, film, and video recording.

This is a good moment to pause and reflect on the very term of our discussion - the moving image. When cinema in its modern form was born at the end of the nineteenth century, the new medium was understood as the extension of an already familiar one - that is, as a photographic image which now moves. This understanding can be found in the press accounts of the day and also in at least one of the official names given to the new medium - "moving pictures." On the material level, a film indeed consisted of separate photographic frames which, when driven through a projector, created the effect of motion for the viewer. So the concept used to understand the medium indeed fit its material structure.

But is this concept still appropriate today? When we record video and play it back, we are still dealing with the same structure: a sequence of frames. But for professional media designers the terms have changed. The importance of these changes is not merely academic or theoretical. Because designers now understand their media differently, they are creating media that look different and follow a new logic.

Consider the conceptual changes - or new paradigms, which at the same time are new ways of designing - we have discussed so far. Theoretically they are not necessarily all compatible with each other, but in production practice these different paradigms are used together. A "moving image" became a hybrid which can combine all the different visual media invented so far - rather than holding only one kind of data such as a camera recording or a hand drawing. Rather than being understood as a singular flat plane - the result of light focused by the lens and captured by the recording surface - it is now understood as a stack of separate layers, potentially infinite in number. And rather than being "time-based," it becomes "composition-based," or "object-oriented." That is, instead of being treated as a sequence of frames arranged in time, a "moving image" is now thought of as a two-dimensional composition consisting of a number of objects that can be manipulated independently. And finally, in yet another paradigm, that of 3D compositing, the designer works in a three-dimensional space that holds both CGI and lens-recorded flat image sources.
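
The difference between the two representations named above can be sketched in a few lines of code. The following is a deliberately schematic illustration - the names and structures are mine, not those of any actual program: a frame-based moving image is an ordered list of finished frames, while a composition-based one is a set of independent objects whose keyframed properties are evaluated, and only then rendered to frames, at output time:

<pre>
# Frame-based: the design *is* the sequence of finished frames.
frame_based = ["frame_0001.png", "frame_0002.png", "frame_0003.png"]

# Composition-based: independent objects with independently
# animated properties (here, just a keyframed opacity per object).
composition = {
    "video layer": {"source": "clip.mov",  "opacity": {0: 1.0, 30: 1.0}},
    "type layer":  {"source": "title.txt", "opacity": {0: 0.0, 30: 1.0}},
}

def value_at(keyframes: dict[int, float], t: int) -> float:
    """Linearly interpolate a keyframed property at time t."""
    (t0, v0), (t1, v1) = sorted(keyframes.items())[:2]
    if t <= t0:
        return v0
    if t >= t1:
        return v1
    return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

# Each object is manipulated on its own; frames exist only as output.
for t in (0, 15, 30):
    state = {name: value_at(obj["opacity"], t)
             for name, obj in composition.items()}
    print(t, state)
</pre>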

Of course, frame-based representation did not disappear - but it became simply a recording and output format rather than the space where the actual design takes place. And while the term "moving image" can still be used as an appropriate description of how the output of the design process is experienced by its viewers, it no longer captures how designers think about what they create - and today they think very differently from their counterparts of twenty years ago.

If we focus on what the different paradigms summarized above have in common, we can say that filmmakers, editors, special effects artists, animators, and motion graphics designers all work on a composition in a 2D or 3D space that consists of a number of separate objects. The spatial dimension became as important as the temporal dimension. From the concept of a "moving image" understood as a sequence of static photographs, we have moved to a new concept: a modular media composition.

Motion Graphics

Let me invoke the figure of the inversion from marginal to mainstream in order to introduce yet one more paradigmatic shift. Another media type which until the 1990s was even more marginal to live action filmmaking than animation - typography - has now become an equal player alongside lens-based images and all other types of media. The term "motion graphics" has been used at least since 1960, when a pioneer of computer filmmaking, John Whitney, named his new company Motion Graphics. However, until the Velvet Revolution only a handful of people and companies had systematically explored the art of animated typography: Norman McLaren, Saul Bass, Pablo Ferro, R. Greenberg, and a few others. [27] But in the middle of the 1990s, moving image sequences or short films dominated by animated type and abstract graphical elements rather than by live action started to be produced in large numbers. The material cause of this take-off? After Effects running on PCs, and other software running on relatively inexpensive graphics workstations, became affordable to smaller design, visual effects, and post-production houses, and soon to individual designers. Almost overnight, the term "motion graphics" became well known. The five-hundred-year-old Gutenberg galaxy sprang into motion.

Along with typography, the whole language of Twentieth century graphic design was "imported" into moving image design. This development did not receive a name of its own, but it is obviously at least as important. Today (2006) the term "motion graphics" is often used to refer to all moving image sequences which are dominated by typography and/or design and embedded in larger forms. But we should recall that, while in the Twentieth century typography was indeed often used in combination with other design elements, for five hundred years it formed its own world. Therefore I think it is important to consider the two kinds of "import" operations that took place during the Velvet Revolution - of typography and of Twentieth century graphic design - as two distinct historical developments.

Deep Remixability

Although the previous discussion did not cover all the changes that took place during the Velvet Revolution, the magnitude of the transformations should by now be clear. While we can name many social factors that could have played, and probably did play, some role - the rise of branding, the experience economy, youth markets, and the Web as a global communication platform during the 1990s - I believe that these factors alone cannot account for the specific design and visual logics which we see today in media culture. Similarly, these logics cannot be explained simply by saying that the contemporary consumption society requires constant innovation and constantly novel aesthetics and effects. This may be true - but why do we see these particular visual languages as opposed to others, and what is the logic that drives their evolution? I believe that to properly understand this, we need to look carefully at media creation, editing, and design software and its use in production environments (which can range from a single laptop to a number of production companies collaborating on the same large-scale project).

The makers of software used in production do not usually set out to create a revolution. On the contrary, software is created to fit into already existing production procedures, job roles, and familiar tasks. But software programs are like species within a common ecology - in this case, a shared computer environment. Once "released," they start interacting, mutating, and making hybrids. The Velvet Revolution can therefore be understood as a period of systematic hybridization between different software species originally designed to work in different media. At the beginning of the 1990s we had Illustrator for making vector-based drawings, Photoshop for editing continuous-tone images, Wavefront and Alias for 3D modeling and animation, After Effects for 2D animation, and so on. By the end of the 1990s, a designer could combine the operations and representational formats specific to these programs - a bitmapped still image, an image sequence, a vector drawing, a 3D model, digital video - within the same design, regardless of its destination media. I believe that the hybrid visual language that we see today across "moving image" culture and media design in general is largely the outcome of this new production environment. While this language supports seemingly endless variations as manifested in particular media designs, its general logic can be summed up in one phrase: the remixability of previously separate media languages.

As I stressed in this text, the result of this hybridization is not simply a mechanical sum of the previously existing parts but a new species. This applies both to the visual language of particular designs and to the operations themselves. When an old operation is integrated into the overall digital production environment, it often comes to function in a new way. I would like to conclude by analyzing in detail how this process works in the case of one particular operation - in order to emphasize once again that media remixability is not simply about adding the content of different media, or adding together their techniques and languages. And since "remix" in contemporary culture is commonly understood in terms of these kinds of additions, we may want to use a different term for the kinds of transformations the example below illustrates. Let us call it deep remixability.

What does it mean when we see a depth of field effect in motion graphics, films, and television programs which use neither live action footage nor photorealistic 3D graphics but have a more stylized look? Originally an artifact of lens-based recording, depth of field was simulated in a computer at a time when the main goal of the 3D computer graphics field was maximum "photorealism," i.e. synthetic scenes indistinguishable from live action cinematography. [28] But once this technique became available, media designers gradually realized that it could be used regardless of how realistic or abstract the visual style is - as long as there is a suggestion of a 3D space. Typography moving in perspective through an empty space; drawn 2D characters positioned on different layers in a 3D space; a field of animated particles - any such composition can be put through the simulated depth of field.

The fact that this effect is simulated and removed from its original physical media means that a designer can manipulate it in a variety of ways. The parameters which define what part of the space is in focus can be independently animated, i.e. set to change over time, because they are simply numbers controlling an algorithm rather than something built into the optics of a physical lens. So while simulated depth of field can be said to maintain the memory of the particular physical media (lens-based photo and film recording) from which it came, it has become an essentially new technique which functions as a "character" in its own right. It has a fluidity and versatility not available previously. Its connection to the physical world is ambiguous at best. On the one hand, it only makes sense to use depth of field if you are constructing a 3D space, even if that space is defined only minimally, through a few or even a single depth cue such as lines converging towards a vanishing point or foreshortening. On the other hand, the designer can "draw" this effect in any way desired. The axis controlling depth of field does not need to be perpendicular to the image plane, the area in focus can be anywhere in space, and it can also quickly move around the space.
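
A small sketch may make this concrete. Assuming, purely for illustration, a focus depth that is keyframed over time and a blur that grows with each layer's distance from that depth, the "rack focus" of a physical lens becomes two lines of arithmetic that can be redrawn at will:

<pre>
def focus_depth(t: float) -> float:
    """Keyframed focus distance: racks from z=100 to z=300 over one second."""
    return 100.0 + 200.0 * min(max(t, 0.0), 1.0)

def blur_radius(layer_z: float, t: float, strength: float = 0.05) -> float:
    """Blur grows with a layer's distance from the animated focus plane."""
    return strength * abs(layer_z - focus_depth(t))

# Stylized layers - no camera was ever involved - at different depths.
layers_z = {"typography": 100.0, "drawn characters": 200.0, "particles": 300.0}

for t in (0.0, 0.5, 1.0):
    blurs = {name: round(blur_radius(z, t), 1) for name, z in layers_z.items()}
    print(f"t={t}: blur per layer {blurs}")
</pre>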

Following the Velvet Revolution, the aesthetic charge of many media designs is often derived from simpler remix operations - juxtaposing different media in what can be called "media montage." However, for me the essence of this Revolution is the more fundamental deep remixability illustrated by the example analyzed above. Computerization virtualized practically all media creation and modification techniques, "extracting" them from their particular physical media and turning them into algorithms. This means that in most cases we will no longer find any of these techniques in their pure original state.

Footnotes

  1. Andreas Huyssen, "Mapping the Postmodern," in After the Great Divide (Bloomington and Indianapolis: Indiana University Press, 1986), 196.
  2. See Wayne Carlson, A Critical History of Computer Graphics and Animations. Section 2: The Emergence of Computer Graphics Technology http://accad.osu.edu/%7Ewaynec/history/lesson2.html .
  3. http://accad.osu.edu/~waynec/history/lesson6.html
  4. Mindi Lipschultz, interviewed by The Compulsive Creative, May 2004 http://www.compulsivecreative.com/interview.php?intid=12
  5. Actually, the NewTek Video Toaster, released in 1990, was the first PC-based video production system that included a video switcher, character generation, image manipulation, and animation. Because of their low cost, Video Toaster systems were extremely popular in the 1990s. However, in the context of my article, After Effects is more important because, as I will explain below, it introduced a new paradigm for moving image design that was different from the familiar video editing paradigm supported by systems such as the Toaster.
  6. I have drawn these examples from three published sources so they are easy to trace. The first is the DVD I Love Music Videos (2002), which contains a selection of forty music videos for well-known bands from the 1990s and early 2000s. The second is the onedotzero_select DVD, a selection of sixteen independent short films, commercial works, and a Live Cinema performance presented by the onedotzero festival in London and published in 2003. The third is the Fall 2005 sample work DVD from Imaginary Forces, which is among the most well-known motion graphics production houses today. This DVD includes titles and teasers for feature films, TV show titles, and station IDs and graphics packages for cable channels. Most of the videos I refer to can also be found on the net.
  7. Matt Frantz (2003), "Changing Over Time: The Future of Motion Graphics" http://www.mattfrantz.com/thesisandresearch/motiongraphics.html.
  8. Included on onedotzero_select DVD 1. Online version at http://www.pleix.net/films.html.
  9. In December 2005 I attended the Impakt media festival in Utrecht, and I asked the festival director what percentage of the submissions received that year featured hybrid visual language as opposed to "straight" video or film. His estimate was about one half. In January 2006 I was part of the review team that judged the graduating projects of students at SCI-ARC, a well-known research-oriented architecture school in Los Angeles. According to my informal estimate, approximately half of the projects featured complex curved geometry made possible by Maya, a modeling program now commonly used by architects. Given that both After Effects and Maya's predecessor Alias were introduced in the same year - 1993 - I think this quantitative similarity in the proportion of projects that use the new languages made possible by these programs is quite telling.
  10. For examples, consult The VJ Book, VJ: Live Cinema Unraveled, or web sites such as www.vjcentral.com and www.live-cinema.org. Paul Spinrad, ed., The VJ Book: Inspirations and Practical Advice for Live Visuals Performance (Feral House, 2005); Timothy Jaeger, VJ: Live Cinema Unraveled (available from http://www.vj-book.com).
  11. Jay David Bolter and Richard Grusin, Remediation: Understanding New Media (The MIT Press, 1999.)
  12. An "invisible effect" is the standard industry term. For instance, in 1997 the film Contact directed by Robert Zemeckis was nominated for 1997 VFX HQ Awards in the following categories: Best Visual Effects, Best Sequence (The Ride), Best Shot (Powers of Ten), Best Invisible Effects (Dish Restoration) and Best Compositing. http://www.vfxhq.com/1997/contact.html www.vfxhq.com/1997/contact.html
  13. In the case of video, one of the main reasons which made the combination of multiple visuals difficult was the rapid degradation of the video signal when an analog video tape was copied more than a couple of times. Such a copy would no longer meet broadcasting standards.
  14. Jeff Bellantoni and Matt Woolman, Type in Motion (Rizzoli, 1999), 22-29.
  15. While special effects in feature films often combined different media, of course, these media were used together to create a single illusionistic space, rather than juxtaposed for aesthetic effect as in the films and titles of Godard, Zeman, Ferro, and Bass.
  16. See http://dreamvalley-mlp.com/cars/vid_heartbeat.html#you_might.
  17. Thomas Porter and Tom Duff, "Compositing Digital Images," ACM Computer Graphics vol. 18, no. 3 (July 1984): 253-259.
  18. I should note that compositing functionality was gradually added over time to most NLEs, so today the distinction between the original After Effects or Flame interfaces and the Avid and Final Cut interfaces is less pronounced.
  19. Qtd. in Michael Barrier, "Oskar Fischinger: Motion Painting No. 1," http://www.michaelbarrier.com/Capsules/Fischinger/fischinger_capsule.htm
  20. While a graphic designer does not have to wait until film is developed or a computer has finished rendering an animation, design has its own "rendering" stage - making proofs. With both digital and offset printing, after the design is finished it is sent to the printer, which produces test prints. If the designer finds any problems, such as incorrect colors, she adjusts the design and asks for proofs again.
  21. http://earth.google.com/
  22. Soon after the initial release of After Effects in January 1993, the company that produced it was purchased by Adobe, which was already selling Photoshop.
  23. Photoshop and After Effects were originally designed by different people at different times, and even after both were acquired by Adobe (Photoshop was released in 1989 and After Effects in 1993), it took Adobe a number of years to build close links between the two applications, eventually making it easy to move back and forth between them.
  24. I say "original" because in later versions of After Effects Adobe added the ability to work with 3D layers.
  25. If 2D compositing can be understood as an extension of Twentieth century cel animation, where a composition consists of a stack of flat drawings, the conceptual source of the 3D compositing paradigm is different. It comes out of the work on integrating live action footage and CGI done in the 1980s in the context of feature film production. Both a film director and a computer animator work in a three-dimensional space: the physical space of the set in the first case, the virtual space defined by 3D modeling software in the second. Therefore it makes conceptual sense to use three-dimensional space as a common platform for the integration of these two worlds. It is not accidental that NUKE, one of the leading programs for 3D compositing today, was developed in-house at Digital Domain, a company co-founded in 1993 by James Cameron - the Hollywood director who systematically advanced the integration of CGI and live action in films such as The Abyss (1989), Terminator 2 (1991), and Titanic (1997).
  26. Alan Okey, post to forums.creativecow.net, Dec 28, 2005 http://forums.creativecow.net/cgi-bin/dev_read_post.cgi?forumid=154&postid=855029 .
  27. For a rare discussion of the prehistory of motion graphics, as well as an equally rare attempt to analyze the field using a set of concepts rather than as the usual coffee-table portfolio of individual designers, see Jeff Bellantoni and Matt Woolman, Type in Motion (Rizzoli, 1999).
  28. For more on this process, see the chapter "Synthetic Realism and its Discontents" in The Language of New Media.

This text was written as part of a Research Fellowship in the Media Design Research programme at the Piet Zwart Institute, Willem de Kooning Academie Hogeschool Rotterdam: http://www.pzwart.wdka.hro.nl/