Here is a link to my final project, “Lefting the User-Friendly: An Experiment in Altered Images.”
And here are a few excerpts from it:
Here is the rationale behind my final project: One of my very first experiences in the MAL was playing on the Atari with Kyle. We were playing the football cartridge, and for the life of me I could not figure out how to control my on-screen players with the joystick. The movements seemed inverted and alien until I discovered I had been operating the joystick with the “wrong hand” (my left hand) and had twisted the controller from its intended position to accommodate this. Perhaps if it had been a newer console I would have caught my mistake sooner, conforming my play to the design without a second thought. But, because I had struggled for several minutes (under the assumption that the gameplay was just different because the console was old), my mistake was something of a revelation. Just how often were the supposedly “user-friendly” designs touted by our technologies (and our technology companies) not really user-friendly at all? Or, perhaps the better question: who is the “user” in “user-friendly”?
This question does not only apply to issues of handedness; as I looked into custom, one-handed game controllers later in the semester, I saw just how quickly issues of accessibility and disability came to the fore when discussing the ideologies and limitations inherent in the user-friendly (as well as in the creation of this “baseline” user by designers). Technology’s second grandest claim, that it opens up worlds and opportunities to all bodies, seemed at odds with this quest for ease of use and operation (or, at the least, there’s often a sharp price difference when comparing “user-friendly” technologies and technologies available for differently abled bodies). Perhaps this could be excused as nothing more than an adherence to the logic of supply and demand, but then there it is–the inextricable link between “user-friendly” and capitalist enterprise. “User-friendly” meets the needs of mass production, and one result is that these technologies lock out/block out/change the experience of themselves for many users. And another unintended result is that this need for mass-producible user-friendliness limits the ways we can/could engage with and subsequently innovate our technologies, present and future.
And here is my plan: In my project, I am taking up issues of handedness and other accessibility concerns when it comes to the design of technologies like keyboards, controllers, phones, hand-held games, etcetera. I am particularly interested in how left-handedness works within systems designed for the right-handed user, and how those with different bodies “undermine” (in a good way) these devices for their own use (especially when private companies would prefer to make money off their various conditions with expensive, “specialty” devices). I would like to highlight these experiences in a visual way, transforming common, user-friendly designs into images such as this through Photoshop:
Along with these images, I want to use quotes from usability/design textbooks and guidebooks as well as creative commentary to underline/riff off the ideas presented in my rationale. The end result will be a booklet of text and images (as coffee-table-like in style as you can get with 8.5×11 paper), hopefully encapsulating and expressing my intended messages.
Issues I’m still wrestling with: Because I’m still in the midst (and mist) of this project, my biggest concern at this point is balancing the issues of left-handedness with other differences which cause the “user-friendly” to be questioned. While I initially only wanted to focus on left-handedness, it seemed a disservice to overlook the much larger problems with (and implications of) user-friendly designs for differently abled bodies. Another challenge this project presents is how to convey more traditional criticism in artistic and visual ways; it is not something I’ve been given the chance to do in the past, and I want to be sure I select and transform devices in ways that will give me the most bang for my buck (speaking of capitalism). Finally, because I am far from a Photoshop expert, I am discovering some of my limitations with the software when it comes to manipulating these images. At this point, I am trying to think of how this struggle could be incorporated into the project itself (which perhaps could flesh out my reasoning for making this a creative work in the first place).
Recently, I had an unexpected opportunity to tinker with my Nintendo 64, after a road-tripping friend (now my best friend) brought it from my hometown. It seemed a perfect chance to continue my inquiry into a media archaeological study of video games and video game consoles, since that seems to be a developing theme for me this semester. So I sat down with some old favorites and got to it.
What was initially obvious is that my N64’s AV cable is likely on its last legs, or that my flat-screen TV is somewhat incompatible with the 64. While the sound came in perfectly, the display was mostly in black and white (with flickers of color). I’ve ordered a replacement AV cable to give it a whirl, but–of course–this slight hiccup unfortunately reveals the mortality of these devices, whether they simply give out over time or the necessary surrounding technology closes them out (I would say becomes too advanced, but, oh, the linear progression in that).
But, okay, I’ve already talked about the problems with both valuing and preserving these older consoles as something more than nostalgia machines. And I’ve looked for new in the old when it came to the SNES TMNT II game and its blatant product placement; in the same regard I could talk about the absence of long load screens, the strange design of the 64 controller, and the immediacy and physicality of cartridge games. But I just can’t help but feel that this is surface scratching. I end up sitting there, blasting away enemies in the Frigate level of Goldeneye, wondering what else I can do, how one can dig deeper into this media without making it a progress narrative or a meditation on violence (the seemingly dominant ways of thinking about video games). Perhaps I feel so restricted when discussing them because media archaeology (as I understand it) encourages experimentation, and video game consoles provide the least amount of room for or access to experimentation, tinkering—thinkering.
Unlike a computer that even a standard user can try to program (and, barring catastrophe, will still work even if your program doesn’t), the “guts” of video game consoles seem to be some of the most blackboxed and unalterable, especially when considering older consoles like the 64, SNES, and NES. Even modding newer consoles seems to require expert-level knowledge so one does not brick the device (and typically, any such tinkering voids warranties or can even block access to online content). While newer consoles essentially model computers (and newer computers/computer-like devices like tablets become increasingly “just-trust-us” blackboxed), there seems to be a huge gulf between the standard video gamer and their device. I guess I look at it this way: I can fix basic problems with my computer and even make slight changes (admittedly, not in its OS) to make it run in ways I prefer. But, as my N64’s visual output flickers in black and white, I can only hope it’s a problem with the AV cord and nothing internal. The internal workings are a total mystery and far too precious to attempt altering.
This raises another question about the limits of experimentation in media archaeology, and how you can maintain an archive which allows for experimentation (i.e. the potential to destroy or permanently alter a device). I suppose if I had the money I could buy another 64 off eBay to tinker with and keep my own, providing the opportunity to transgress while still having a (mostly) usable device–my cake and eating it too. But I wonder, as time goes on and these devices become rarer (or it becomes rarer that they have all their original parts), whether there’s almost a sort of ethics surrounding how much we can/should tinker, if we want to preserve these devices (like gaming consoles) in their original form/maintain those that remain. At what point do you need to put it behind bulletproof glass in a museum, I guess? And, then again, what are these devices without users?
As I put my 64’s game cartridges away, I noticed–for the first time–the list of warnings on the back of them. One of the warnings read “Do not blow on the edge connector or touch with your fingers.”
Of course, the childhood standard for a glitching game had been to blow into the cartridge and even the device itself, removing dust or whatever other bogeymen could be found. And, surely enough, when I discovered the output was in black and white I had repeated the same action as of old. I don’t know if it’s necessarily a moment worthy of media archaeological inquiry, but it seems to capture the mystery between gamer and console and the disconnect between user and designer.
Upon finishing my perusal of The Gorgeous Nothings collection (and perusal seems to be the best way to describe my interaction with this myriad of text/textual fragments), here are some of my thoughts:
As a general statement, I enjoy the accessibility of this collection, and wish that the manuscripts and various ephemera from other notable authors would be published in a similar way. This would not only preserve their work (and would do it better than simply sealing off the decaying originals to a few privileged hands), but it is also far better than having some middleman transcribe it with his/her own intentions and impositions. (I can quickly see the injustice that would be done to Dickinson’s envelopes if they were transcribed and typeset in a standard font with no mention of their materiality.) And I say this as someone who did considerable work on the Scribner’s version of Ernest Hemingway’s The Garden of Eden, famous for being a highly edited posthumous text based off fragments from Hemingway’s incomplete manuscripts. While it was clear that Scribner’s could never give me the full story, I was equally perturbed by how academic scholarship inherently privileged these difficult-to-access manuscripts. There was not only a complete disregard for how quickly the criticism game became an elitist enterprise under such restrictions (needing a “good enough” excuse, travel expenses, lost wages, it goes on), but also an undervaluing of the text that was readily available for the common (commoner?) reader. Of course, I realize half the battle is securing publishing rights from the estate (along with the sticky question of whether or not the writer would have wanted these things to see the light of day) while also convincing someone that it is worth publishing. (It is clear that a collection like this, with so many large images, was not at all cheap to produce.) 
However, the intellectual dividends from putting together something like The Gorgeous Nothings are numerous, allowing any reader to gaze at the intimate work of the writer which sometimes escapes easy categorization, and, from there, to draw his/her own conclusions (rather than seeking out some high critic who has seen the “plates”). I’d like to continue seeing such “democratization” of materials like this, but I realize–with issues of money and prestige on the line–it’ll probably continue to be a slow process.
More specifically, I enjoyed how The Gorgeous Nothings played with notions of the archive in the presentation of these envelope writings. While there was certainly method to the madness in their arrangement (I don’t believe it was chronological though, unless I overlooked something?), the indexes provided at the end offered whole new ways to (re)visualize and engage with these scraps, anything from the shape of the envelopes to how Dickinson wrote on them. Admittedly, the pictures are small, but it would not be difficult to find the original pages and read them in a new way based off the index–creating a new “flock.” The inclusion of the backs of the letters also revealed how the writings engage with the materiality of the envelopes they were written on. This statement from the editors especially stuck with me: “Flocked together for a few moments before dispersing again into other equally provisional constellations, they remind us that a writer’s archive is not a storehouse of easily inventoried contents– i.e., ‘poems,’ ‘letters,’ etc.–but also a reservoir of ephemeral remains, bibliographical escapes” (207). Even choosing to collect these ephemeral scraps which lack any clear sign of intent (were they a collection, was Dickinson really playing with space and materiality, were these complete works, what do we make of added words, crossed out lines) seems to be an anti-archive move.
What was additionally engaging about this collection was that one could actually see Dickinson’s handwriting. And, because I have never dealt with handwritten manuscripts (because I don’t have the money or habitus to do so), I found that it was a particularly different emotional experience. There was a sense, however silly it is to describe, that someone really wrote this–that Dickinson was real and, however I may have slighted it in the past, that there was something more bodily and physically present about handwriting. And, while I’m pretty unfamiliar with her work, that was still very moving. Of course, when I wanted to read more quickly I often found myself “cheating” by looking at the transcriptions, which, I think, is a whole discussion in itself…
This post is getting long, so I’ll just mention I’m also interested in how we even begin to deal with the fraught question of intent with a collection like this, and how we avoid reading too much into it.
I’ll close with one of the small scraps I particularly enjoyed, of course altered since I’m typing it:
“There are those
who are shallow
by accident” (158)
I know I should probably be writing about concrete poems, but a recent personal discovery on the Internet has my mind aflutter about archives.
Maybe I’m the last to know about this, but while searching for a source for my master’s thesis I inadvertently brought up a page under The Internet Archive’s Wayback Machine (http://archive.org/web/). I didn’t even know something like this existed: a service that takes “screenshots” (in a loose sense) of web pages and preserves them for posterity. Of course, before I learned anything more about the organization, I performed a search almost instantly:
That, in all its embarrassing, nerdy, early-teenage years glory, is my old Yahoo! Geocities website dedicated to Ewan McGregor–movie actor, motorbike aficionado, world’s best looking man. Yep. I said it. This page was last updated in 2005 and last crawled by the Wayback Machine in 2009 shortly before Yahoo! Geocities shut down and I lost the site for good.
I suppose I had imagined that major player websites were being archived by someone somewhere out there, but I had no idea about this indiscriminate archiving process which had picked up a random fangirl’s now nearly decade-old website. My nostalgia was palpable. But more importantly, I saw how The Internet Archive seemed to challenge the notion of the archive itself by being inclusive of pretty much anything it could get its “hands” on. And, as the Internet Archive’s website shows, there’s really no sense of order to the information. Just search the address you’re looking for, or select some of the popular options scrolling by on the screen. It’s a kind of archive, kind of anarchy.
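Incidentally, the Wayback Machine serves each snapshot at a predictable address: the archive host, a fourteen-digit timestamp (YYYYMMDDhhmmss), and then the original URL. Here’s a minimal sketch of the pattern in Python (the Geocities address below is a made-up stand-in, not my site’s actual URL):

```python
# Sketch of the Wayback Machine's public snapshot URL scheme.
# Timestamps are 14 digits: YYYYMMDDhhmmss.

def wayback_url(site: str, timestamp: str) -> str:
    """Build the address of an archived snapshot of `site` at `timestamp`."""
    return f"https://web.archive.org/web/{timestamp}/{site}"

# e.g. a hypothetical Geocities page as crawled on Jan 1, 2009:
print(wayback_url("http://www.geocities.com/somefanpage", "20090101000000"))
```

Paste an address like that into a browser and the archive will either show you the snapshot or redirect you to the nearest one it has.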
This archive is equally interesting in the ways it is fragmented. Like lost pages in a book, my website is certainly far from complete, with most of the links and images now broken (only certain pages were crawled). It is almost a piece of ephemera in that sense, never meant to stand the test of time but here it is regardless.
Of course, what’s kind of alarming about thinking of my old website as ephemera is that it is not that old. I started ewymc back in 2003 when I was in eighth grade with (clearly) little else to do. By comparison, drawings of mine from kindergarten have stood the test of time in a cardboard box far better. And, reading The Internet Archive’s about page, it seems those behind this program recognize the need to document (and more fully document) what is going on with the Internet, given the pace of the modern day. They write, “With the explosion of the Internet, we live…[a] ‘digital dark age.'” And the Internet does seem to be somewhat ahistorical in the dark age sense, as websites (or hosts) that shut up shop often entirely disappear. Even if the page content is archived, the layouts change, everything adopts or shifts to newer and newer platforms, and we can no longer see what the content really looked like at that moment in time. I did another two searches, picking a certain infamous date, and found the results of archiving can be particularly powerful:
Nevertheless, some traditional ideologies still dictate the construction of this internet archive. Major websites (those with high traffic) are indexed far more often, revealing some traditional curating bias. (Admittedly, this seems as understandable as an art museum preferring Picasso’s paintings over my own scribbles.) Additionally, there’s a heavy emphasis on chronology as it’s pretty much the only organizing factor. And much of the language around the utility of this archive (“show how far we’ve come,” “see where we once were”) still emphasizes the linear narrative of progress.
And there’s something even more insidious and Ministry of Truth-like about the Internet Archive as it relates to “robots.txt.” Websites can choose to not be indexed—even retroactively—by setting up this command that the archive must obey. It’s a way to avoid potential legal snafus about “reproducing” copyrighted content. (Facebook won’t let you, for example; apparently they’re concerned about their own privacy. *cue joke*) But it all seems so Orwellian to me, that certain histories can be hidden or erased from public view. Granted, anyone can request that their site not be archived (or employ robots.txt), which is democratic. I guess it just raises all sorts of questions about public v. private, and whether the Internet can really be “owned” in these ways.
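To make the mechanism concrete: robots.txt is just a plain-text file of per-crawler rules that well-behaved bots agree to honor. A minimal sketch using Python’s standard-library parser (the rules here are invented, though “ia_archiver” is the user agent the Internet Archive’s crawler has historically used):

```python
# A made-up robots.txt that blocks only the Internet Archive's crawler.
from urllib.robotparser import RobotFileParser

rules = """
User-agent: ia_archiver
Disallow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# The archive's crawler is shut out of the whole site...
print(parser.can_fetch("ia_archiver", "http://example.com/diary.html"))  # False
# ...while other bots with no matching rule are unaffected.
print(parser.can_fetch("Googlebot", "http://example.com/diary.html"))    # True
```

Two lines of text, voluntarily obeyed, and a whole site’s history vanishes from the public record.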
In closing, I am thinking about ways to play with this archive which may be more Ernst-like, mixing up pages to undo chronology, showing the brokenness of links, maybe some mashing up in Photoshop—playing with the “materiality” (or should I say “visuality,” if that’s even a word?) of the pages…
The other day I decided to repair two Xbox controllers, in hopes of not having to spend $40 each on replacements. One of controller #1’s joysticks had been ripped off by a very willful, nearly year-old corgi (who cannot be left alone for five minutes). And controller #2 had been spiked into the ground about six months prior, during a particularly frustrating sequence in Dead Space (video games can sometimes bring out the worst in me). While they were minor repairs (replacing the joystick and one of the LB/RB buttons), I am the epitome of your blissfully blackboxed consumer. It’s not surprising how money/socio-economic status plays into this either…if the controllers were cheaper or I had more money to spend, I probably would just buy new. (Completing the repairs, this seems silly…and sheds light on other similar decisions…)
After purchasing the special security screwdriver online (I found out I needed it the hard way) and a set of buttons (for a grand total of no more than $6), I set to work educating myself on how this thing I had held in my hands for hours on end really worked—at least on the most basic of levels. It was a strangely liberating experience—really a testament to how it is possible for our society to be so technologically advanced yet so technologically ignorant at the same time.
Oh, and as for further blackboxing after the security screws? One was hidden under the serial number sticker. I’m sure it’s a way to test if you’ve tinkered and subsequently voided the warranty on the controller…haven’t looked it up, but wouldn’t be surprised.
Consequently, perhaps this kind of “education” was another way to think about technologies in media archaeology and to challenge our grand narratives of progress—by not just taking someone’s word for it and by knowing your machine inside and out. It certainly does seem to alter perspective, and I really only skimmed the surface by opening up the controller and replacing a few buttons.
When I was looking up replacement parts for controllers though, something else showed up: information on how to mod controllers for one-handed use, among individual sellers who were custom-making such devices.
Clearly, these controllers were being made for gamers who (presumably) did not have use of both hands. (I say presumably because I’ve learned in a few small circles that this is just a way to up the difficulty for some extreme gamers.) It was something I hadn’t thought about, because, with my able-bodied privilege, it hadn’t been a blip on my radar.
But suddenly, here was a whole demographic of gamers being overlooked by mainstream, dominant designs. And creativity was a consequence, as disabled and able-bodied gamers alike came together to make up their own unique designs that would allow them to play games in an industry that assumes the ability to play with both hands, or hands at all (and this creativity could exist because there was no top-down influence). This was a bit of a revelation, as so often it seems that technology is championed as equipping those with physical disabilities in ways never thought possible; here was an instance where these technologies excluded. My searching led me to organizations like The AbleGamers Foundation, which addresses such concerns and seeks to “improve the overall quality of life for those with disabilities through the power of video games.” (Website here: http://www.ablegamers.com/). The biggest problem is that specialized controllers alone can’t solve the problem, as many game developers continue to ignore the needs and abilities of those in this community. (Perhaps because it is seen as predominantly an “entertainment” device?)
Overall, the discovery of these modded controllers got me thinking about the potential intersections between media archaeology and disability studies. When the conception of the consumer (and the consumer’s abilities) is shifted, something emerges that questions not only the sense of “user-friendly” but reveals the assumptions our machines make and the physicality of them as well.
This week, I dwelt particularly on this passage from Wolfgang Ernst’s Digital Memory and the Archive:
“Although most current theories of media archaeology aim at forming counterhistories to the dominant traditional histories of technology and of mass media, their textual performance still adheres to the historiographical model of writing, owing to a chronological and narrative ordering of events. Admittedly, the claim to perform media-archaeological analysis itself sometimes slips back into telling media stories; the cultural inclination to give sense to data through narrative structures is not easy for human subjectivity to overcome.” (56)
Alas, every time I think that I just about grasp this concept, I get thrown for a loop. Lately Zielinski’s “new in the old” has dominated my thinking when considering how to approach something from a media-archaeological standpoint. (Perhaps it’s the only thing I’ve grasped with any certainty this semester.) By looking for the new in the old, I had felt like I was freeing my object of study from the restrictions and repressions of the dominant progress narrative.
But–taking this Ernst passage into consideration–I find that I have still been creating a counterhistory and have very much been a narrator telling a story (even if it’s a particularly unriveting one about product placement in old video games, as I did a few posts ago). There is still a chronology too by simply defining old game vs. newer game. Just great.
But how in the world do you not narrativize when you are putting down words in meaningful sequences onto a page? What is left when you take those things away? Ernst continues that “it takes machines to temporarily liberate us from such conditions” (56). But how? Is he talking about the experience we have in the moment with the machine (not when we later must write about it), or is he suggesting “thinking like a machine” and whatever that approach may reveal? It seems, if we were to follow something like this, we wouldn’t be writing complete sentences in paragraph format as we have seen in his book. To me, it seems that–in order to follow Ernst’s advice to the letter–you would need to get wholly experimental.
This is what brings me to Writing Surfaces, a collection of John Riddell’s fiction. Not going to lie, when I first opened it I thought I had seen this before:
But when I read the introduction upon finishing the collection, in a desperate attempt to grasp what I had just read(?)/looked at(?)/experienced(?)/struggled with (partly because the resolution of the digital book is awful), I was drawn to Professor Emerson and derek beaulieu’s discussion of the first proposed title for the collection, Media Studies. They write that this title would still be representative of the contents because “Riddell’s work is a kind of textbook for the study of media through writing, or the writing of writing” (loc 57). Therefore, by undoing standard formatting, standard narration, and standard anything altogether, could I say that Riddell was being more Ernst than Ernst? Or, to put it another way, were Riddell and other artists like him really showing us how to conduct/perform media studies?
My thinking was led particularly by Riddell’s “a shredded text” and “Traces” pieces, which employed a shredder and a copier. I thought about how these technologies are generally thought of: one destroys sensitive information and the other makes clean copies (and that’s certainly how they are both marketed). But Riddell’s use of these technologies seemed to do something new with both of them. There was a sense of sadness in both pieces, expressed by the writing destroyed by the shredder and the unreadable copy-of-a-copy-of-a-copy look of “Traces.” Both works show unintended consequences by reminding us of the physicality of paper, that it can be shredded and that a copy is not a perfect replica but something else–they both are something else entirely. They also remind us of how these technologies have been implemented by not allowing us to simply glean the words off the page; they are jumbled, in disarray, or simply no longer readable. While I don’t have all the dots connected, it seems that this says something about these technologies that their dominant histories do not–all while avoiding the creation of a counterhistory.
Perhaps what would better serve me now, then, is to approach these artifacts artistically.
After tinkering around with the XO laptop, I decided to look a bit more into the non-profit group behind it: One Laptop per Child. Much like the motto “if you build it, they will come” from Field of Dreams, their mission is simple: if a kid has a laptop, he/she will learn. More specifically, if a kid in a poor, developing nation has access to a laptop, he/she will learn, and that education will allow them to eventually solve whatever other problems plague their community or country. A bit too starry-eyed for my tastes, though I think everyone has the right to access such technologies and decide for themselves their worth or place in their lives.
As I thought more about the mission statement though, I couldn’t help but see how it conflicted with opinions on the relationship between children and technology in developed nations. It seems that, in places like the United States, we often cite our preoccupation with technology (like computers, cell phones, video games, and the Internet which connects them) as detracting from education and learning. Thus, all the conversations about increasingly short attention spans, textspeak, difficulty reading, etcetera etcetera. While “educational” uses are out there, in the first world they are too often subverted for playtime, Facebook, and looking for the name of the band that played that one song. Our dream for these technologies in developing nations seems to be that they will be used “purely,” eventually solving complex problems like war, disease, and poverty–all without us really having to give or change much of ourselves.
It’s complicated. In class we often talk about the darker sides of internet access, the loss of privacy, the mining and selling of data, even the ways in which our thinking is structured or limited by computers. The picture I’ve provided of the kid looking intently at the Google homepage is subsequently disturbing, then–further entanglement into the machinations of late capitalism disguised as educational and financial opportunity, tsk tsk. (Man, am I talking about XO laptops or higher education in general now?) This kind of thinking will always make programs like OLPC (even if it’s carried out in a far less bumbling fashion) seem shady, potentially even dangerous for the communities they target.
However, I think what is more dangerous is if we decide what is and isn’t dangerous for others who are not seated in a similarly privileged position (which is due, in major part, to these technologies). And perhaps that’s what I found most bothersome about the XO laptop, that it has made such decisions in its design and promotion (especially with its Sugar interface), even while trying to be about open access. With such a strong emphasis on learning, and on young learners in particular, it seems that the laptop discourages adult users (too old for change) while insisting that kids only use the device to learn in traditional ways (much of the time, the Internet is still out of reach, so opportunities are limited to a word processor, a painting program, and a rudimentary music program…from what I gleaned). The result is an ideology of truly pure educational technology, with no threat of play (like video games, for example). Of course, the word “play” might often be used in the advertising, but it’s a kind of “play” with high hopes behind it, a high-stakes kind of “play” that will eventually figure out a way to bring clean water to the village and so on. I can understand the angle from a marketing perspective (investors more sold by “education” than “cool toy”) as well as a budget perspective (more features equals higher cost), but if the idea is to give these children equal access and opportunity, the XO falls short.
Then again, something like the XO is better than nothing, or at least might help hatch better programs and devices in the future. I always want to be sure, when I critique a non-profit like OLPC, that I am not throwing out the baby with the bathwater. Given a lifetime of always having access to these kind of technologies and never understanding what it is to be without them, I’m not sure I am able to truly judge their efficacy or their dangers.
I’m feeling far from the Indiana Jones of media archaeology lately.
I know that analogy doesn’t work on so many levels, but perhaps it best captures my rather bumbling cluelessness as I attempt to grasp what it means to experiment with old technologies without slipping down the cave entrance into nostalgia with all its “it was better back in the day” sentiments. After all, Zielinski makes it clear that’s not the intention:
“My quest in researching the deep time of media constellations is not a contemplative retrospective nor an invitation to cultural pessimists to indulge in nostalgia” (10).
Or here’s another way of putting it a page later:
“These excursions into deep time of the media do not make any attempt to expand the present nor do they contain any plea for slowing the pace” (11).
So, already I feel like I’m off to a rather bad start. When I approached the NES in the lab last week, it was difficult not to begin putting the machine in a nostalgic context. I recalled with fondness the hours spent playing it in my grandparents’ basement on an old, wooden framed television set, what it meant to be young in the 90s and oops, here we go. Instantly, it feels like I’ve put the machine into a “had to be there” category. Perhaps it would be easier if I selected something outside my realm of experience and memory, though it seems that media archeology does not require these artifacts to be alien or long buried and nearly forgotten before being examined; in fact, it seems crucial–in unhinging this dominant, homogenizing narrative of technological progress–that we always look to the recent past, no matter what our involvement (perhaps our witnessing makes our reflections all the more poignant).
Therefore, back to the NES. Will and I fired up some Contra (we didn’t remember the 99 lives cheat code, so that was short lived), followed by Super Mario Bros, followed by Teenage Mutant Ninja Turtles 2. I tried to keep in mind Zielinski’s vision of what such study should be:
“The goal is to uncover dynamic moments in the media archaeological record that abound and revel in heterogeneity and, in this way, to enter into a relationship of tension with various present-day moments, revitalize them, and render them more decisive” (11).
Or, to say this in another way:
“On the contrary, we shall encounter past situations where things and situations were still in a state of flux, where the options for development in various directions were still wide open, where the future was conceivable as holding multifarious possibilities of technical and cultural solutions for constructing media worlds” (10).
But yikes, that second excerpt’s use of the past tense: when situations were still in flux, when options were still wide open, when the future was conceivable as holding multifarious possibilities. It was hard not to read that like the fall of man, or the opening song to All in the Family nostalgically titled “Those Were the Days.” Because if things were open, what were they now? Was all hope lost now that we were no longer in that past time of openness, that we had moved to a monopolized technological present where user-friendly ideology had squashed innovation and experimentation? I could certainly get behind that. But, in avoiding cultural pessimism–the thinking that the past was always better–didn’t this conclusion directly conflict with the main tenets of media archeology study? Perhaps we’re not supposed to assign any value to “open” and “closed” (as good and bad), but it’s a binary that I feel is being set up and not entirely dismantled. I would be more comfortable thinking that as some things streamline, others are moving into flux or chaos, therefore making the past tense unnecessary. (An example of this may be how the music industry streamlined to CDs, then mp3s, but now, between internet piracy and hipster demand for vinyl, we’ve seen changes in both media and profits.)
“Do not seek the old in the new, but find something new in the old” (3)–despite my other issues with understanding the examination of the deep time of media, perhaps this was a statement I could truly get behind. After all, it did not seem to be passing judgment, but instead erasing the distinction between new and old. And, this process of finding the new in the old is paramount to showing how narratives of technological progress have thrown out anything that hasn’t lived up to the present under the guise that it simply wasn’t as good (a survival of the fittest mindset for technology).
So, as Will and I played TMNT 2, I tried to keep thinking new in the old, new in the old. And I came across its overt product placement.
Every few feet in the platform-style game (as we fought ninjas down the hallway of April’s burning apartment complex), there were posters on the wall with the Pizza Hut logo. In some ways it was unsurprising (considering product placement today), but the ways it was implemented were strange. Of course, modern sports video games (Madden, Gran Turismo) make great use of product placement which often–crazily enough–adds to their “realistic” factor. But what surprised me–even in a game where it’s well known that the characters dig pizza–is that we don’t see more in-game product placement today. Why not have Commander Shepard sipping on a Coca-Cola in a Mass Effect cut scene? Why not have the Joker eating McDonalds as he taunts Batman in the Arkham series? (The clown angle even!)
Product placement is so often done in movies and television shows today; it does seem strange that video games have not been similarly NASCARed. Of course, a notable exception may be mobile device gaming, where ads either interrupt gameplay or run as banners above and below the screen. However, if one chooses to “buy” the game, the ads usually go away. We pay much, much more for TV shows (commercials aside) and movies, but we don’t get to “pay away” their product placement. Though I am somewhat glad that we aren’t advertised to as much on top of all the other ideologies broadcast by video games, it does seem that something new was left in something old. Don’t tell “satan’s little helpers,” as the late Bill Hicks would say. (I kid because I’m a moonlight marketer; I’ll expect rotten cabbages tomorrow…)
Nevertheless, I worry that this is still not the point of media archeology. With such discussion of flux and possibilities and heterogeneity, pointing out some minor detail feels like I’m missing the bigger picture. I suppose it’s because I feel that media archeology functions more successfully on a macro level than a micro one. Then again, maybe it’s just going to take time to undo this grand narrative indoctrination that the new is always an improvement on the old, or that the things left behind were left behind with good reason. Right now I still feel stuck in the snake pit, or between the closing walls….whatever Indiana Jones reference you’d like.
Unfortunately, my experience with Google Glass was rather short-lived, as its battery was dead. Even in the 45 minutes I was there, Kyle and I could not get the glasses to re-juice enough to flicker anything more than the low battery light. Alas, I will have to try them out again post-software upgrade if possible.
However, to glean from my brief experience with the glasses in class, they did leave me wondering whether they were a more seamless integration of human/machine than current devices (as they seem to be marketed). To me, nothing felt seamless about them; I found not just one eye but both eyes desperately trying to make out the tiny screen (I felt distinctly cross-eyed, and one of my contacts almost popped out). This extra concentration made me feel even further removed from my surroundings than my phone or iPad would. Granted, I guess there is always the issue of the learning curve. But more importantly, maybe I’m approaching Google Glass with the wrong mindset, focusing more on what it’s *not* (not a smart phone, not a tablet, not a PC) than on what it is or eventually could be, imposing the ways I interact with my current technologies onto how I will interact with the Glass and other future technologies like it.
Especially because of its digital photography/video abilities, I can see a future for Glass where the camera is so in line with the eye that we simply “trust” the device and don’t have to focus on a screen to make sure it’s the shot we want—it’s precisely what we see (“trust,” I know, a scary word). Perhaps the same goes for voice recognition software too, which would or could do away with all the tapping and swiping and other crazy gestures. It does seem a long way off, still, and hard to imagine all its uses….but, then again, I thought the same thing about tablets when they came out (bigger smart phones that aren’t phones and aren’t full computers?), and now I see the benefits of having one in my day-to-day. The cynic in me knows that we will always find or create uses for our technologies, even if they aren’t truly “useful,” but I’m also not ready to do the grandpa cane-shake either.
Since the glasses would not work during my time at the lab, I decided to introduce myself to the original Atari instead. As a casual gamer who still has a Sega Genesis, Nintendo 64, and Dreamcast (and would have a SNES and NES were they not given to other cousins after my grandparents’ passing *sighs*), it was a delight to use one of the original home video game systems. Kyle and I decided to play the “Football” cartridge. At first, it seemed that the game (from a modern day perspective) was extremely simplistic, but as we continued to play I saw how certain moves of the joystick caused subtle effects in the gameplay and that there was the possibility for strategy (and you can certainly see how games like Madden have built off this initial setup). But I struggled with controlling my players, as the controls seemed to be inverted and Kyle ran in touchdown after touchdown. It suddenly occurred to me, looking down at the controller, that—as a left-hander—I was holding the joystick in my left hand, and had turned its base so the button on it would be on the right side (accessible to my right hand). This had subsequently caused my players to run in every direction but the one I wanted. I hadn’t even thought about how the joystick was set up for right-handed use only, and how modern controllers, while not nearly as cruel to left-handers, still value the right hand more than the left (in terms of what the left and right sides of the controller *control*, the amount of things mapped to one side’s usage over another). It’s such a small thing, but it seems significant when considering just how ideological and market-based “user friendly” really is. Because right-hand preferred technology still seems to be everywhere…right-side keypads, the mouse defaulting on the right side (I know, you can move it, but still), the electronic pens on the credit card swipes being anchored on the right, so on and so on….lefthanders know all of this all too well. And then there’s Google Glass. 
And it’s clear what side of the glasses you’re supposed to tap and swipe.
PS – For the lefties, http://www.buzzfeed.com/katienotopoulos/the-18-worst-things-for-left-handed-people
I struggled with Jonathan Crary’s 24/7. While his writing style certainly evoked emotion and interest through its tight, oftentimes poetic/polemic prose (and perhaps too tight—the footnotes/endnotes were far too scant or “just trust me on this”), I simply didn’t buy (pun intended?) his nightmarish vision (or should I say, his insomniac’s vision) of our sleepless future. And I didn’t buy it at the level of his thesis statement, which—for the purposes of discussion—I’ll quote at length here:
“In its profound uselessness and intrinsic passivity, with the incalculable losses it causes in production time, circulation, and consumption, sleep will always collide with the demands of a 24/7 universe…. Sleep is an uncompromising interruption of the theft of time from us by capitalism. Most of the seemingly irreducible necessities of human life—hunger, thirst, sexual desire, and recently the need for friendship—have been remade into commodified or financialized forms. Sleep poses the idea of a human need and interval of time that cannot be colonized and harnessed to a massive engine of profitability, and that remains an incongruous anomaly and site of crisis in the global present. In spite of all the scientific research in this area, it frustrates and confounds any strategies to exploit or reshape it. The stunning, inconceivable reality is that nothing of value can be extracted from it.” (10-11)
What I can buy is this: You certainly can’t work when you’re sleeping, so sleep in that sense is useless from an economic standpoint (and perhaps only tolerated right now due to its status as a human necessity). And I can fully get behind the idea that we are increasingly plugged in all day, every day. (The first thing I do when I wake up in the morning is check my emails on my phone while still in bed…and it is the last thing I do at night.) People work first, second, and third shifts to keep up with the demands of contemporary capitalism…the 24/7. For me, it’s scary to think about how we demand those unnatural nocturnal schedules—the second and third shifts—which disconnect the (typically) most economically disadvantaged from the dominant tides of daily life (we should demand an “unplugging” and a time for rest…but then again there’s the issue of police officers, firefighters, doctors, so welcome to modern life).
Therefore, it’s not entirely impossible to imagine ways in which humans might try to pare down our necessity for sleep in the future, and how—despite noble intentions—that process could be exploited (if, and only if, we ever can eliminate sleep). But what time scale is this happening on? While Crary is quick to open with the image of the sleepless soldier as being just around the corner, in this excerpt he conversely states how “in spite of all the scientific research in this area, [sleep] frustrates and confounds any strategies to exploit or reshape it” (11). Which one is it, then? Are we only a few decades away or (more realistically) centuries away when it comes to “conquering” sleep? And while perhaps Crary’s point is that it doesn’t matter, that we need to act early—I can’t help but feel there are more pressing issues being caused by capitalism that are going to shape human life far sooner–such as climate change, nuclear war, and bioterrorism.
But Crary’s thesis soon takes an even more troubling turn for me. He writes, “Most of the seemingly irreducible necessities of human life—hunger, thirst, sexual desire, and recently the need for friendship—have been remade into commodified or financialized forms. Sleep poses the idea of a human need and interval of time that cannot be colonized and harnessed to a massive engine of profitability…” When Crary states that hunger, thirst, sexual desire, etcetera have been commodified—I can understand that. We buy food, we buy drinks, we can buy sex, you can “rentafriend”… okay. Humans commodified hunger when we stopped hunting and gathering, though…is that really the sole result of capitalism? And have we really commodified the human feeling itself? It seems that to make that argument you would have to say that, if the apocalypse happened and we could no longer purchase food, we would simply all die off. And while I’m not saying that a majority of us wouldn’t die, the cause would not be that we had entirely lost the instinct to go out and hunt/gather our own food—we’d just suck at it and there would be too many of us for pre-modern systems to support. Whether intentionally or not, Crary makes it sound like nothing about food or fucking is part of a natural bodily process anymore, entirely “remade” by capitalism despite “commodification” that happened before capitalism. And while capitalism can bring us food and the other “f” much faster than other means (and in that way, frees up time for working in a way that sleep cannot be freed up), we still have the option of not buying food and not paying for sex (or not choosing to commodify romantic love in the way of flowers, jewelry, expensive weddings, etcetera, etcetera).
Besides this, how has sleep not been commodified? While Crary states that “nothing of value can be extracted from it,” I would think many a bed manufacturer, a real estate agent, and a pharmaceutical company hawking sleeping pills (not to mention countless other industries around sleep) would be inclined to disagree. Crary brings this up momentarily when talking about capitalism “encroaching” upon sleep and requiring drugs like Ambien and Lunesta to deal with the stresses of the 24/7 (18), but he doesn’t talk about how much certain parts of our capitalistic machine would stand to lose if people no longer required sleep. Beds would no longer be needed, houses would not have to be as big, sleeping aids would be unnecessary….and that’s millions, billions lost. How, then, is sleep any less commodified than hunger, thirst, and sexual desire by our modern world? If we are talking about the very act itself, that the body will sleep whether or not it has a bed—the body will also eventually eat and drink too whether or not it has a McDonald’s (and in that way, those functions have not been fully commodified). I don’t understand how Crary can hold up this distinction, making sleep some kind of agitator to modern capitalism.
Above all else, though, what bothers me most about a work like this is that it offers so little in the way of solutions. Crary does not seem to be pushing for a limitation or reevaluation of capitalism, as he states on the last page that “because capitalism cannot limit itself, the notion of preservation or conservation is a systemic impossibility” (128). In that way, it seems the only true answer becomes an entire overthrow of our global capitalistic society. (Simple enough, right?) But what replaces it? How do we meet the demands of billions without commodifying human need in some ways? Maybe I’m just that heartless neoliberal who fails to see the people who don’t fit within this modern capitalistic system and die because of it (44); maybe I’m more willing to submit out of a fear of “falling behind or being deemed outdated” (46). I’d like to think what I am more interested in finding are tangible solutions that can both work in and against current power structures, small steps that can add up, as opposed to some dream of a revolution with no clear goals, no clear processes—at worst, nothing more than a lot of sound and fury.
So, upon finishing Parikka’s What is Media Archaeology? (a before e, remember, a before e…), I’m not sure I have–or perhaps it’s good I don’t have–a stable definition of this study. Or is it a theory, or a process, or a progression (non-linear), or an experimentation, or a “cartography?” It’s somewhat unnerving to arrive at the end of an introductory text without being able to clearly answer the question that is its title, but perhaps that is Parikka enacting the methodologies he describes/builds upon–in that way, encouraging us to take part in constructing meaning out of our technological “progress” that is not ideologically rooted in the dominant, Enlightenment-style histories we have been given or any other history we would readily accept (as we might, had Parikka set out answering the question point-blank). Here are five reflections, in no particular order:
1) What was most clear to me was Parikka’s discussion on steampunk and artistic uses for/ repurposing of old and dead media. This seems to suggest a new interaction between the technical (mathematical, scientific) and the artistic/aesthetic that can serve theoretical and political purposes.
2) Understanding that a connection exists between technologies past and present was also noteworthy – thinking about what was left behind, why it was, and how it has shaped our present media (alternate histories, marginalized histories, layered histories, non-linear histories). This demands an investigation of how our present media still interacts with/answers past needs – developing a sense of “We’re not so different, you and I.”
3) The notion of noise in media was a particularly tough section of the book, but the striving for “noise reduction” and the advent of spam, viruses, filtering technology, and hacker culture stuck out for me, along with the realization that information security was a problem even in the telegraph age.
4) Film theory was significant to Parikka’s work too–how some of those theoretical techniques could be applied to studying/analyzing/opening up media. Granted, this is probably an overwhelming over-generalization.
5) And wow, this text contained a lot of high theory when I was initially expecting more tinkering/thinkering with artifacts. Consequently I felt thrown into the deep end, sometimes because of the argument’s construction, other times by the argument’s content (overall, there seemed to be a definite assumption of familiarity). While Parikka asks “do we have to become engineers to say and do anything interesting and accurate about current media culture?” and answers that, fortunately, that’s not the case (loc 3449)…I’m not entirely convinced, as it seems the level of engagement desired would require such expertise.
Alas, to close with an attempted exercise in “thinkering”, Parikka’s section on noise did make me think about my own experiences as an intern at a radio station. There, I would have to edit my breaths out of recordings (which had been picked up and were often amplified by the microphone). I did this for normally pre-recorded materials like news updates and commercials, but elected to make such edits to informal bumps between songs as well. Upon playback, the breaths did jar me as “noise,” distracting interruptions where “natural” pauses should be, even though the natural should be the breath and it was not noticeable outside of the medium. Breaths were also not economical; when trying to edit a newscast to fit a 2-minute timeslot (when 2 or 3 sponsors needed to be mentioned) that noise–the human aspect–took up too much time. Perhaps I’m making too much of it, but thinking about it in these ways opens up critiques about increasing disembodiment and (maybe, if it’s not a stretch?) capitalism.
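In the spirit of “thinkering,” that breath-editing workflow can even be sketched in code. Below is a hypothetical Python sketch (not the station’s actual tooling, and far cruder than editing by hand in an audio workstation): it treats any sustained low-amplitude span as a “breath” and cuts it out with a simple amplitude gate. The `threshold` and `min_len_s` parameters, and the function name itself, are my own illustrative assumptions.

```python
import numpy as np

def remove_quiet_spans(samples, rate, threshold=0.05, min_len_s=0.2):
    """Drop any span whose amplitude stays below `threshold` for at
    least `min_len_s` seconds -- a crude amplitude gate standing in
    for manually cutting breaths out of a voice recording."""
    min_len = int(min_len_s * rate)
    quiet = np.abs(samples) < threshold
    keep = np.ones(len(samples), dtype=bool)
    start = None
    for i, q in enumerate(quiet):
        if q and start is None:
            start = i                      # a quiet run begins
        elif not q and start is not None:
            if i - start >= min_len:       # long enough to count as a "breath"
                keep[start:i] = False
            start = None
    if start is not None and len(samples) - start >= min_len:
        keep[start:] = False               # quiet run at the very end
    return samples[keep]

# Demo: 1 s of "speech" (loud tone), 0.5 s of "breath" (quiet noise), 1 s more speech
np.random.seed(0)
rate = 8000
t = np.linspace(0, 1, rate, endpoint=False)
speech = 0.8 * np.sin(2 * np.pi * 440 * t)
breath = 0.01 * np.random.randn(rate // 2)
track = np.concatenate([speech, breath, speech])
edited = remove_quiet_spans(track, rate)
print(len(track), len(edited))  # the breath span has been cut out
```

Fittingly, the gate enacts exactly the critique above: anything below the loudness threshold is declared uneconomical “noise” and excised, no matter how human it is.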