Author Archive: kylebickoff

M(A)LI – MAL Accessibility Initiative [codename “Mali”]

Tuesday, May 6th, 2014 by kylebickoff

Hi all—what a great class, and what a great end to the semester. I was ridiculously impressed with everyone’s finals—it seems to me that we take too few courses where we all have a chance to see the amazing individual work everyone produces for the final project.

Here are a few notes on my own final, and a bit of further description. When developing an idea for this project, I set out to make it practical and to create a real impact. I asked: how can my project benefit the Media Archaeology Lab, and how can I use the knowledge I have gained from this course to improve the experience of a researcher at the lab?

My project morphed as I continually developed new ideas for MAL improvement projects. What I found was that I really wanted to take this opportunity to work outside the bounds of the work I normally engage in. Within the past few weeks, the MAL has received donations from several different donors, which have added weeks of full-time work to the tasks already at hand. My archival work in the lab is very often dominated by accessioning materials, writing up descriptions of donations, completing documentation for each acquisition, and finally curating these new materials. This sort of work is necessary, but at times such a process prevents one from thinking abstractly about the collection as a whole. Moreover, it can force one to approach the materials from the perspective of the archivist rather than that of the researcher.

Taking on the perspective of the researcher has shifted how I work in the lab. I began by engaging primarily with the theoretical approaches to media archaeology I have gleaned from Parikka. Moreover, Zielinski’s variantological approach to understanding media archaeology has very consciously shifted how I understand the media in the lab.

As described in my presentation, I continued making cards for each system on display in the lab. I have created 45 of these cards for the workstations. The cards take a minimalist approach to addressing the immediate needs of researchers in the facility. Although this may not seem significant, the cards have taken me many hours to make. I carefully researched each system and tested the software demonstration on each to be sure the descriptions are accurate; this took at least 14 hours. Adding each fact and the basic “powering on” instructions took at least an additional 10 hours (given the 45 systems and their oftentimes slow speeds). Between the research and the card layout, design, and printing, I have come to appreciate the true difficulty of this process, and how much care similar projects at other institutions must take.

I have also taken on some additional work in the back room. I have removed the Compaq 386, the Apple Portable, an iBook G3 clamshell, and a PowerBook 165 from display. I have instead improved the aesthetic and placed the OLPC and digital Etch-a-Sketch on display. I have also connected the two Timex Sinclair systems (ZX-81 and 1500) to a television in the back, though it seems that there are still some compatibility issues to work out. Come by the lab and see all the changes!

To take a look at my Prezi, which I presented to the class and created to visually indicate the problems I address and the importance of my project, I’ll include the link here.

Again, thank you all very much!



The MAL as (dis)organized Archive

Monday, April 21st, 2014 by kylebickoff

As I thought through alternative ways of understanding archival order, and the ways in which the lab systems are organized, I decided to organize some photos chronologically. In the photo below I have arranged images of Apple desktop keyboards, specifically focusing on the placement (or lack thereof) of the arrow keys. In descending order, we have the Apple II, the Apple III, Apple IIe, Apple Lisa, Apple IIc, Apple Macintosh 512, Apple Mac Classic II, Apple Macintosh Centris 610, Apple iMac G3, Apple eMac, and Apple iMac G4. These systems are arranged chronologically from the earliest (1977) to the most recent (2002). But is there really any sense of order apparent in these photos? What would Zielinski say? Certainly, it seems that to create any meaning in this ordered list, we would have to construct a narrative around it. But does that mean a linear chronology is best for an archive? How about an archive of digital content?
The first photo contains only left and right arrows. The second contains ‘all four,’ but in a horizontal alignment. The third contains four, but arranged in a strange ‘L’ shape. The fourth, a modified ‘L’ shape. The fifth, a reversion to the horizontal. The sixth, none at all! Etcetera, etcetera, etcetera… What does this all tell us? Very little. In fact, the layouts are representative of navigation, of interface. Maybe these systems should be organized based on their operating system? Do they have a GUI? Do they operate through a command line? These seem to be questions that help to define better categories in the archive.
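To make the alternative concrete: the same catalogue of machines can be sorted chronologically or grouped by interface paradigm, and the two orderings tell very different stories. Here is a minimal sketch in Python, with entirely illustrative entries (the attributes below are placeholders, not lab records):

```python
from itertools import groupby

# A toy catalogue; years and interface labels are illustrative, not lab records.
systems = [
    {"name": "Apple II", "year": 1977, "interface": "command line"},
    {"name": "Apple Lisa", "year": 1983, "interface": "GUI"},
    {"name": "Apple IIc", "year": 1984, "interface": "command line"},
    {"name": "Macintosh 512K", "year": 1984, "interface": "GUI"},
]

def by_attribute(items, key):
    """Group catalogue entries by an arbitrary attribute rather than by date."""
    ordered = sorted(items, key=key)  # groupby needs the list pre-sorted on the key
    return {k: [i["name"] for i in g] for k, g in groupby(ordered, key=key)}

chronological = sorted(systems, key=lambda s: s["year"])          # the current logic
by_interface = by_attribute(systems, key=lambda s: s["interface"])  # an alternative
```

The point of the sketch is only that the grouping key is a curatorial choice: swap `"interface"` for any other attribute and a different archive emerges from the same objects.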
I don’t have a single clear vision for ‘the one best way’ the systems in the Media Archaeology Lab should be arranged. But I do recognize that the lab’s current arrangement is not the best one available. I am certainly open to suggestions on how a more ‘user friendly’ layout of the computers in the lab might better serve us as students and researchers.


This photo shows the keyboard progression I note.


Reading the Stars

Monday, April 14th, 2014 by kylebickoff

Like Deven and Renee, I will discuss my visit to Counterpath and my experience at Stephanie Strickland’s and Eric Baus’ reading on 14 April 2014. As Deven has described, Baus’ talk prefaced Strickland’s—he prepared the audience for a sonic interpretation of the work. I might call his interpretation a ‘close reading’ of audio, yet he describes it as ‘distant listening.’ This is not to be confused with established fields such as ‘distant reading,’ which uses computational analysis to study text. Baus, as opener, prepared the audience in an unexpected way for a subsequent ‘close/distant listening’ experience with Strickland’s performance.

Strickland read first from the text of Dragon Logic, then from the recently published V : WaveTercets / Losing L’una. The latter work was presented to the audience via a projected iPad app. In the dark room and among the silent audience, I was most drawn to the stars and constellations ‘created’ before the viewer’s eyes. Strickland’s app drew up connections, linked words, and created textual ‘associations’ in this starfield. When I use the word association, I use it to recall the term that Latour and Moretti employ when discussing network theory. I felt that within this vast starfield the audience experienced a sense of disorientation—when constellations (familiar and new) are created, the audience regains a sense of location. Such associations within this grid, and within a created set of associations, lend clear structure to the work for the viewer. When such constellations were created, Strickland subsequently read them aloud—her voice audibly expressed these associations, yet remained silent when associations faded away. I invoke network theory here to suggest that this manner of ‘close reading’ the text might allow the audience to create a cohesive narrative. I believe there to be no ‘right’ or ‘wrong’ way to interpret her text, but I find this approach to be a helpful roadmap to reading the stars.


Sunday, April 6th, 2014 by kylebickoff

I wanted to take this week to write another of my five posts on materials in the Media Archaeology Lab. This week I have chosen to write about Tetris. Most know it as the greatest video game ever made—those who don’t might consider reassessing. The original Tetris was released in 1984, yet the game did not make it to the USA until 1988: it was an export of the USSR, and was finally ported to the C64 for export at that time. Sold with the tagline “From Russia With Fun,” this structured, tile-matching puzzle game aligns Soviet bloc-styled Soviet blocks in grids, for fun! More than this, the game functions within the constructs of a defined grid. Although (in most proper adaptations) the grid is invisible, it is the blocks (or Tetriminos) that create the grid when inserted into the digital space. As a user, what I can’t help but think about is control. Although it seems that the blocks need only be aligned vertically and horizontally to achieve success, the control here is not distributed but quite centralized. Alexander Galloway and Eugene Thacker, as we have read previously, talk in depth about vertical versus horizontal control systems. As much as I want Tetris to resemble horizontally distributed control systems, it cannot—the Tetriminos are distributed by the sole individual in power over the game. Literally, all of the control rests in one individual’s hands (in the NES controller, in this case). Really, it’s no surprise to see the love affair America has with Tetris, with control, and with a game that depends upon maintaining centralized power. Maybe it comes as no bombshell to witness the USSR under Gorbachev produce such a control-centric game.

Just today, we were privileged to watch “the largest Tetris game in the world” played on the side of a skyscraper in Philadelphia (New York Times article). But really, is it any wonder that a great, $180 million skyscraper that hosts the office space of multiple private equity firms, a large hedge fund, and (wait for it) a major digital forensics company would have a vested interest in perpetuating control? And that it might, to the awe of onlookers across the city of Philadelphia, project the largest Tetris game ever, the most popular game of control, across the “City of Brotherly Love,” the “Birthplace of America”?

(Ab)using ASCII in the Apple IIe

Monday, March 17th, 2014 by kylebickoff

I wanted to talk about the layering of text in Zelevansky’s Swallows. Although this is a work we covered a while ago, I wanted to be sure that I concurrently study physical objects in the lab for the course. Moreover, I wanted to take this chance to work with a piece of software that runs on a computer that existed pre-WWW. What I noticed when interacting with this text was the ‘layering’ inherently embedded, and nested, in the media. I want to consider this text on multiple levels, engaging in a critical course of study that avoids the fallacies inherent in ‘screen essentialism.’ If we consider Niebisch’s Media Parasites in the Early Avant-Garde: On the Abuse of Technology and Communication in relation to Zelevansky’s Swallows, we might consider how exactly, beneath the surface level of the screen, Swallows is able to subvert traditional uses of Apple II systems.

Niebisch notes that the ‘abuse of media’ requires one to “(ab)use media technologies … in the system in a way not intended by hegemonic powers.” In the history of computing on Apple systems, the Apple IIe was released in 1983, while the Apple Macintosh was released in 1984. If the Apple II line represents the open, DIY intent of computing, the Apple Macintosh line represents the black-boxed, proprietary, worst option for Apple (aka Jobs vs. the Woz). In Swallows, Zelevansky engages in a non-traditional representation of text and image for the system. Specifically, text represented alone typically makes use of the ASCII character set on the Apple II, using text in Applesoft (Microsoft BASIC for Apple computers). When an image is represented on the top portion of the screen and text is represented below (like a caption), the text is also displayed with the Applesoft character set. But when text is represented atop (layered over) an image, the text is represented in a non-Applesoft ASCII character set. In this case, the words are represented either through an undefined font, or in a font intended to imitate the ASCII font through image. In such imitations, the borders of the characters lose their sharpness and some of their contrast, becoming slightly blurry. Whether intentional or unintentional, I notice that the text repeatedly follows this pattern. Would the author (and programmer) have embedded text differently if Applesoft supported alternative character sets and character displays? I suspect fonts would have been used differently. In this case, image indeed subverts the ‘hegemonic power’s’ desire to define how a user will both enter and display text. This work is quite early in using the Apple II in this unintended way, and it indeed, I would argue, subverts the intentions of text display on the Apple II line of computers. My further notes on subversion go on, but I shall stop here.

Rethinking The Etch-A-Sketch

Sunday, March 9th, 2014 by kylebickoff

This week I went to the Media Archaeology Lab and chose to work with some new materials. I wanted to find something obscure, forgotten, and ideally a dead end. Then I found the Etch-A-Sketch Animator. This system is from 1986, and one can quickly tell. I was not quite sure what to make of the device, so I used my natural instincts: I created, on this digital medium, what I find the digital world is most successful at circulating—cat memes. I recreated, as best I could, Nyan Cat, the greatest of all cat memes. Nyan Cat him/herself may indeed be 8-bit, but I chose to work in this 1-bit medium of black/white, on/off, 1/0. In this binary dot-matrix interface, I recreated Nyan Cat, who became no longer ‘neon.’ Rather, a binary cat emerges, a Byan Cat, if you will.



But jests aside, such interfaces have a propensity for re-creating the familiar, the known, the producible/reproducible, and that which brings one joy. As a child, I never had an Etch-A-Sketch Animator, but I did have a traditional Etch-A-Sketch. Within its frame I used the aluminum powder to create what I will now describe as binary images. Ultimately, this Etch-A-Sketch Animator may be forgotten and a technological dead end, but it functions here as a piece of technology that forces the user to rethink the experience of an interface, and of similar interfaces. The Etch-A-Sketch Animator, self-consciously digital, forces one to reconsider the traditional Etch-A-Sketch. The Etch-A-Sketch, produced in 1960, may pre-date the digital. But this ever-present childhood toy, I argue, popularized a lineographic, binary writing surface for the masses. This 1960s toy propagated the binary frame, predating the home computer but preparing an entire generation for the adoption of a new binary interface in the first monochromatic computer monitors.

Professor Matt Soar’s Lost Leaders at Counterpath (Denver, CO)

Monday, March 3rd, 2014 by kylebickoff

On Saturday, March 1st, I attended Professor Matt Soar’s talk and screening of his work at Counterpath Press and Gallery in Denver. His presentation, titled Lost Leaders, investigates film leaders—the additional film/mylar strip at the head or tail of a film reel. Embedded in this material is data: handwritten marks, still images, moving images, magnetic audio tracks, optical audio tracks, etchings, and inscriptions of various other kinds. Such inscription correlates strongly with marginalia, evidenced in current thrift-shop books and early modern manuscripts alike. More than anything, the process that Dr. Soar engages in is both elaborate and absorbing. Rather than just viewing the material, Dr. Soar participates in the production process—he hand-processes his own film, then creates original work with his own creative stamp imprinted on it in order to learn the process of interacting with the medium. Dr. Soar literally puts his work under a microscope, investigating every facet of it.

Professor Soar engages with the content of this metadata, its formalisms, and ultimately the form of the data. While his work embraces such an understanding, it does not limit itself to such a specific reading—rather, it helps to open up conversation on the film leader, a medium that is becoming “doubly ‘lost’” as it both escapes the audience’s vision and slides toward obsolescence in the digital age.

Centralizing Control of a Distributed System?

Monday, February 17th, 2014 by kylebickoff

As I have been working through Alexander Galloway’s Protocol: How Control Exists After Decentralization, I cannot help but continue to think through the implications of the control society that he explicates in his work. To contextualize this briefly: the binary of the metaphorical network and the non-metaphorical (infrastructural) network forces us to consider which mode of thinking is more accurate. Answer: both (Galloway 15). Moreover, when we think through networks, we need to employ tools (media-archaeological tools of investigation) in order to begin to understand them. The bi-level logic that underlies TCP/IP (Transmission Control Protocol/Internet Protocol) enables a horizontally distributed mode of communication, while DNS (the Domain Name System) vertically stratifies that horizontal logic through regulatory bodies that manage internet addresses and names (Galloway 16).
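Galloway’s point about DNS’s vertical stratification can be sketched in a few lines of Python: resolution walks a name right-to-left through successive zones, each delegated by the one above it. The tree below is a toy stand-in for the real root/TLD hierarchy, and the address is purely illustrative:

```python
# A toy delegation tree standing in for the DNS hierarchy: root -> TLD -> domain.
ROOT = {
    "com": {
        "example": {"www": "93.184.216.34"},  # illustrative address only
    },
}

def resolve(name: str, zone=ROOT) -> str:
    """Walk the labels right-to-left, descending one zone per label.

    Each step mirrors a delegation: whoever controls a zone controls
    everything beneath it -- the 'vertical' logic Galloway describes."""
    node = zone
    for label in reversed(name.split(".")):
        node = node[label]
    return node  # the leaf holds the address record

addr = resolve("www.example.com")
```

The hierarchy is the whole argument in miniature: however horizontally the packets travel afterward, the name must first pass through this vertical chain of authorities.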

I cannot help but attempt to consider the implications of this theoretical construct—we might consider that even though data nodes are distributed worldwide, data is so widely disseminated and moved such long distances that institutional control can exert influence more easily in a control society. Consider how the EU is weighing legislation to force domestic EU data to remain in the EU—specifically avoiding transference through the US. Merkel, who advocates this move, hopes specifically to avoid the prying eyes of American mass-surveillance institutions—notably the US National Security Agency. A present German tendency to minimize government surveillance manifests itself in this practice; such legislation indicates how vertical institutions of power in a control society still exert influence on horizontal networks, more than Galloway represents. Such a move lessens the horizontal distribution of data worldwide and centralizes it, to an extent, by retaining it within the EU. Conversely, by retaining greater control over domestic EU data, the EU might be seen as reducing overall institutional control over data, and thus biopolitical control. This would create a greater good for a greater number of individuals.

Such legislation forces any media theorist to consider how exactly control is exerted over data, and how data might exert biopolitical control more/less as a result of such legal agreements. Do the benefits of reducing some horizontal control protocols overall increase the security and sanctity of the internet’s infrastructure? Or does such control indicate the closing of the web, and a tendency back towards a more centralized web? If the EU creates its own network, still interconnected with the global network, will other regional powers follow? How might such an ‘internet,’ if that term is still accurate, appear?

On Mainstream Glitch

Monday, February 10th, 2014 by kylebickoff

I’d like to begin by citing Trevor Owens (at the Library of Congress), citing Mark Sample (George Mason), citing Matthew Kirschenbaum (University of Maryland) in order to define screen essentialism: when “digital events on the screen … become the sole object of study at the expense of underlying computer code, the hardware, the storage devices, and even the non-digital inputs and outputs that make digital objects’ [existence] possible.” It is within this area of study, the study of events on-screen, that digital media studies takes ground and interprets the message from the inside, the outside, and everywhere in between. Just as Kittler describes in “There Is No Software,” the message we encounter on-screen is simply an extension of the software we are running to read that message, which is an extension of the GUI, the OS itself, then the BIOS, and finally the binary digits at the level of machine language that our processor interacts with. Only by studying this continual translation of the message can one seek to engage in a media-studies reading of the material. This, I will argue, is why critical studies of glitch and artistic works in the glitch aesthetic are so appealing to the reader/user, and why the glitch aesthetic remains hidden and largely inaccessible to the masses.

We might say that a void in the interface prevents humans from logically comprehending the translation of data that occurs between the machine language and the message’s visual output on-screen, but it is only our inability to process this data at the rapid speed of a current microprocessor that constructs this apparent barrier. Regardless, this barrier makes the content beneath the interface seem hidden. It is within this hidden space that either the glitch or the proper translation of data occurs. From this origin, I note, the glitch aesthetic arises—out of the hidden, the unfamiliar, and the seemingly inaccessible. It is this place that the hackers, the artists, and the digital explorers navigate to bring the glitch aesthetic to the fore. The opportunity to work in such an aesthetic remains highly accessible to most computer users, yet the term glitch has only recently made it into the mainstream: Barack Obama’s repeated use of the word, and the 24/7 news cycle’s adoption of it into a certain mainstream American vocabulary. I began thinking through these ideas after reading Trevor Owens’s piece in the Library of Congress’s online publication, The Signal, which presents itself as a user-friendly guide for creating glitches in just a few easy steps. I realize that the term, and recognition of the glitch, have entered the mainstream American consciousness, but the glitch itself remains hidden behind our interfaces, layered one over another, over another. The glitch may be a part of popular culture, but popular culture still does not own the glitch. For now, we still depend upon hackers and artists to bring this aesthetic to the fore.
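The kind of “few easy steps” glitching that guide describes—opening a file and corrupting bytes by hand in a hex or text editor—can also be sketched programmatically. A minimal, hypothetical version (the header offset and flip count here are arbitrary choices of mine, not the guide’s recipe):

```python
import random

def glitch_bytes(data: bytes, n_flips: int = 10, skip_header: int = 100, seed: int = 0) -> bytes:
    """Return a copy of `data` with a few bytes overwritten at random.

    The first `skip_header` bytes are left untouched so the file header
    stays parseable and the image still opens -- glitched, not broken."""
    rng = random.Random(seed)
    buf = bytearray(data)
    for _ in range(n_flips):
        i = rng.randrange(skip_header, len(buf))
        buf[i] = rng.randrange(256)
    return bytes(buf)

# Stand-in payload; in practice this would be the raw bytes of a JPEG.
original = bytes(range(256)) * 4
glitched = glitch_bytes(original)
```

The essential gesture is exactly the one the post describes: reach beneath the interface, touch the bytes the screen normally hides, and let the decoder’s confusion become the aesthetic.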

Google Glass: Too Early. Maybe Ever?

Monday, January 27th, 2014 by kylebickoff

Google Glass. Google Glass… I find myself torn (not literally) at every attempt to use GG. Of course, I look ridiculous wearing them; I can’t really use them without putting in contact lenses (which approaches the feeling of putting sandpaper on your eyes in a place of such low humidity as Colorado); my rooted Android version of ICS won’t sync with the phone (I suspect I shouldn’t expect that Google might permit this); and I must create a Google+ account to sync my Glass to my phone—which is necessary to do anything remotely productive with it… This part scares me. The information conglomerate known as Google might lie more at the source of it, which, again, I knew coming into this. My nightmares are not set on the sidewalks of Her, with identical masses talking to their technology devices, navigating them through the world, falling in love. Rather, my nightmares are of a world in which we cannot interact with our technologies without first syncing to the cloud, agreeing to ransom off our personal information and all recorded metadata, and all the while looking pretty ridiculous. Our devices should not lock themselves down until they receive this information. We should have control over our devices, not the reverse. In typical Google fashion, there is no micro-SD expansion for memory on the pair either…

I think the specs in the device are solid—at least as dated TI OMAP 4430 SoC processors go—but this can be easily remedied. I think the tile-based interface on Android 4.0.4 (ICS) for Glass is user-friendly and intuitive. Flash-heavy pages remain almost unnavigable on Glass, and for this and other reasons a significant amount of the web remains highly burdensome to navigate. Thus, the user must for now download the various apps that have been developed for Glass—there are some boring ones, such as Facebook, and some pretty cool ones functioning as “the WolframAlpha of Glass” or a live “bitcoin ticker.” But HTML5, which allows better access to content across a variety of platforms and browsers, runs decently on Glass and represents a space for improvement in the future. As more sites adopt HTML5 and platform compatibility grows (along with the still quickly growing mobile market), web access across a multiplicity of devices will continue to improve. Glass, of course, is part of this. What I will reiterate is that Glass’s web navigation is decent, but built for a web of the future. The system is still a bit buggy: it froze multiple times on me, and its short battery life interrupted me other times. I see potential for Glass. At the same time, I hope the future of wearable technology and these new devices will not follow the black-boxing that both Google and Apple are wont to employ. Moreover, I hope that conscious consumers (and, literally, highly conspicuous consumers) might note this and voice their concern vocally or through their spending power. In Glass I see potential—I am not afraid of seeing text and letters fly through my vision constantly; in fact, I’m maybe a little bit excited for the right tech company to do it.

Permissions for Consumption

Monday, January 27th, 2014 by kylebickoff

I found myself incredibly impressed at how Between Page and Screen expresses the complications of interface. I began by first looking at every page—each page contains an image that alludes to the QR code; the images remain aesthetically pleasing, minimal, plain enough not to alienate the user. Similarly, I find they recall a certain glitch aesthetic as well as icons from 1980s game culture. But beyond these visually alluring images, I find that the interface is the most alluring aspect of this creation to consider critically. Do the pages recall written letters or emails? Stock tickers or narratives? Charcuterie or swinish consumption? The material page, the material screen, and the immaterial interface at the center of this intersection is the point to which the most attention is paid, and the point with the fewest clear answers. Certainly, capitalism lies near the ‘heart’ of this work.

Moreover, the book forced me to consider consumption. The book’s site encourages me to interact with the book online, which then, after asking permission, starts Adobe Flash on my computer. Instantly, my processor kicked up as the window took a consistent 18–25% of my CPU for the duration of my session. Now, there is certainly a way to calculate the energy consumption, the coal burned, to run this interface (Boulder, CO’s electricity comes from coal), but I will not engage in such a cost analysis. Rather, this forced me to reconsider the consumption inherently required to read both the digital text and the physical text. I realize that not just this text but all interfaces require similar consumption. My smartphone, my computer, my access to the cloud all require resource consumption. Each additional interface I interact with requires additional consumption—every Gmail server I access, the syncing of this file to Dropbox, the electronic confirmation for the funds that purchased this book, my access to the course’s website to submit this post for the week—all require additional consumption, which I actively permit at every step of the way. This ‘permission’ for consumption, which I find myself even more conscious of now, has been actively allowed by me. Every user agreement I approve (including the several needed to run this digital work) is active permission to consume. We may tell our systems to ‘remember this choice,’ as I did, but the question is: do we remember it?
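For a sense of scale, the cost analysis declined above can at least be roughed out in a few lines; every figure below is an assumption of mine (a nominal CPU power draw, a half-hour session, a generic coal emissions factor), not a measurement:

```python
# Back-of-envelope energy estimate; all constants are illustrative assumptions.
TDP_WATTS = 65.0         # assumed full-load CPU power draw
CPU_SHARE = 0.20         # the ~20% utilisation observed while the piece ran
HOURS = 0.5              # assumed length of one reading session
CO2_G_PER_KWH = 1000.0   # rough emissions factor for coal-fired generation

energy_kwh = TDP_WATTS * CPU_SHARE * HOURS / 1000.0  # watt-hours -> kWh
co2_grams = energy_kwh * CO2_G_PER_KWH
```

Under these assumptions a single session costs on the order of a few grams of CO2—trivial alone, but the post’s point stands: multiply it by every interface, every sync, every server, and the permission we grant is never free.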


-Kyle Bickoff