To start things off, the foundation of machine-generated content lies in a subject area that encompasses a large part of this class: machine-assisted generation of content. While it could be argued that all machine-generated content is, in fact, just “machine-assisted generation”, in the context of this work, “machine-assisted” content is loosely defined as content created using a machine where every “creative decision” made in the creation of the work is made by a human; the machine only serves to expedite or enable the process. For most of human history, this was the extent to which machines were able to generate media: in the case of Walter Benjamin, think of a machine that merely copies an existing work onto a new canvas; in the case of Copyright Criminals/RIP: A Remix Manifesto, think of the computers and DAWs (digital audio workstations) that allow music to be electronically mixed into new compositions; and the list goes on. Essentially, if you picture what Adobe has built a large part of its business model around (Photoshop, Premiere Pro, After Effects, Audition, Illustrator), you’ve got the main idea (even if non-electronic machines, like the printing press, fit into this category as well).

We spent roughly a quarter of the class talking about the potential implications of this style of content generation, but it’s still worth noting here, if only because of how foundational it is to the topic at hand. Overall, the main cycle of development in this area has been the back-and-forth effort of enabling more creative functionality while trying to make that functionality as accessible as possible.
For example, in the context of photographs, we started simply with the ability to copy them (setting aside darkroom tricks such as masking, burning, and dodging); then, once we reached the digital age, we could modify images at the pixel level: an incredibly granular approach that offers a great deal of precision but requires a large amount of effort to perform substantial transformations. Thus, we made the tools themselves more accessible by delegating more control to the machine (e.g. Photoshop/Paint.NET filters, patch tools, fill tools, etc.).
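To make that tradeoff concrete, here is a minimal sketch (assuming Python and the Pillow imaging library; the filenames are placeholders, not anything from the class) contrasting hand-editing individual pixels with delegating the work to a built-in filter:

    from PIL import Image, ImageFilter

    img = Image.open("example.jpg").convert("RGB")

    # Pixel-level control: precise, but laborious for substantial transformations.
    pixels = img.load()
    width, height = img.size
    for x in range(width):
        for y in range(height):
            r, g, b = pixels[x, y]
            pixels[x, y] = (r // 2, g // 2, b // 2)  # darken each pixel by hand

    # Delegated control: a single call hands the "how" over to the machine.
    blurred = img.filter(ImageFilter.GaussianBlur(radius=2))
    blurred.save("example_out.jpg")

The nested loop gives total control over every pixel value but scales poorly as the edits get more ambitious; the single filter call trades that precision for accessibility, which is exactly the exchange described above.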
In fact, this engine of progress, “enable more capabilities; make those capabilities more accessible”, can serve both as a guiding principle in the development of media generation and as a segue into the many other areas of content generation we’ll cover. Now, this engine, being more cerebral than corporeal, can be “implemented” in many ways, so to speak. One of the key forms, and the one we’ll talk about next, is the “trivial” implementation of machine-generation systems.