Archive for January, 2014

Rethinking Romantic Texts with Media Archeology

Tuesday, January 28th, 2014 by dparker90

This is a repost from my other blog, the NASSR Graduate Student Caucus, but Lori thought it might also be of interest here. Worlds collide!

In my first post for this blog, I wrote about how my background in archeology influences my perception of texts as physical objects, and how I’d like to move towards an “archeological hermeneutics” that takes into account a text’s material conditions as contributing to its content and significance. Moving forward, I’d like to complicate our understanding of text-as-object by introducing what I’ve so far learned in my “Media Archeology” seminar taught by Lori Emerson. It came as a surprise to my family and friends that I enrolled in this course, because I tend to take classes that focus on the study of 18th and 19th century literatures. Although I won’t be reading any texts “in my period” for this class, I’ve found it has in fact supplied me with a variety of alternative methodologies for my Romantic-era research.

Although those who work in the field tend to resist a concrete definition, Jussi Parikka calls media archeology “a way to investigate the new media cultures through insights from past new media, often with an emphasis on the forgotten, the quirky, the non-obvious apparatuses, practices and inventions” (Parikka loc 189). We’re encouraged to take apart machines in order to understand how they operate, and in turn expose the conditions and limits of our technologically mediated world. Relying on Foucault’s Discipline and Punish, among other texts, media archeologists expose structures of power embedded within the hardware of modern technology, revealing the ways in which media exert control over communication and provide the limits of what can be said and thought.

I find this way of thinking about the structures and limitations imposed by media particularly useful for the study of 18th and 19th century texts. Instead of thinking about how printing and publication practices give rise to individual texts, as I have in the past, I’ve started to consider texts from the inside out: what do books tell us about the cultural conditions and constraints imposed by the media in which they were (and are) written, manufactured, and consumed? Like the authors of the ASU Colloquium’s post, I wonder what three-volume novels, for example, might tell us about communal reading practices and the circulation of texts and, importantly, our modern reading practices in comparison. I’d hypothesize that circulating texts and libraries would have contributed to communities of readers in which reading was, perhaps, a shared experience. In contrast, modern reading tends to be a solitary experience which involves owning texts (especially when the library has only one copy of the book you need).

I’ve also found media archeology’s rethinking of linear time and notions of progress particularly useful and interesting. Collapsing “human time” allows us to bring together seemingly unrelated technologies for comparison and analysis. I’m thinking here of the Amazon Kindle and 18th century circulating libraries, which both create spaces for communal reading. In contrast to the private reading practices I described above, I think the Kindle – and specifically the “popular highlight” feature – presents an opportunity for readers to become aware of their participation in collective readerships. When you click on a pre-underlined sentence, it shows how many other people have also highlighted it. While at first I found this feature annoying – perhaps evidence of the private relationship I tend to have with books – I’ve begun to enjoy the way it makes me aware that I’m one of many readers who are enjoying this particular text. Furthermore, I wonder if my newfound sense of collective readership would also give me a better understanding of Romantic-era reading practices that were likewise characterized by shared texts and mutual engagement. The ASU Colloquium posed an important question about whether we should attempt to read texts as their original readers would have; since many of us no longer have access to the original three-volume novels and their circulating libraries, maybe we can gain insight into these texts and reading practices from the vantage point of our own collaborative technologies.

To close this post, I want to introduce one more concept from my media archeology reading that I’ve also found particularly applicable to the study of Romanticism: glitch aesthetics. Glitches are typically understood as accidents and hiccups within games, videos, and other digital media, and glitch artists exploit them in order to “draw out some of [that technology’s] essential properties; properties which either weren’t reckoned with by its makers or were purposefully hidden” (McCormack 15). Again, media archeologists are concerned with exposing the power structures embedded in technologies, this time by giving us a peek at what lies beneath. While looking at glitch art, I couldn’t help but think of an experience I’d had in the British Library reading Keats’s manuscripts. I remember finding an additional verse to “Isabella: Or, the Pot of Basil” in George Keats’s notebook, in what I think was Keats’s hand, etched nearly invisibly on the opposite page. Of course, this mysterious stanza threw a wrench in the carefully constructed argument I’d planned, and I had no idea what to make of it. Now that I look back on it, I’d like to think of that stanza as a textual glitch – it’s possible that Keats never intended for it to be read. Perhaps it had even been erased from the page. For me, this “glitch” reveals the textual instability of the poem and disrupts the sense of solidity and permanence with which I’ve come to regard Keats’s oeuvre.

I still have much to learn about media archeology and its methodologies (which I’ve certainly oversimplified), but I think this field could lead our work in Romanticism in new and exciting directions.



Google Glass: Too Early. Maybe Ever?

Monday, January 27th, 2014 by kylebickoff

Google Glass. Google Glass… I find myself torn (not literally) at every attempt to use GG. Of course, I look ridiculous wearing them, I can’t really use them without putting in contact lenses (which approaches the feeling of putting sandpaper on your eyes in a place of such low humidity as we have in Colorado), my rooted Android version of ICS won’t sync with the phone (I suspect I shouldn’t expect that Google might permit this), and I must create a Google+ account to sync my Glass to my phone—which is necessary to do anything remotely productive with it… This part scares me. The information conglomerate known as Google might lie more at the source of it, which, again, I knew coming into this. My nightmares are not set on the sidewalks of Her, with identical masses talking to their technology devices, navigating them through the world, falling in love. Rather, my nightmares are of a world in which we cannot interact with our technologies without first syncing to the cloud, agreeing to ransom off our personal information and all recorded metadata, and all the while looking pretty ridiculous. Our devices should not lock themselves down until they receive this information. We should have control over our devices, not the reverse. In typical Google fashion, there is no micro-SD expansion for memory on the pair either…

I think the specs in the device are solid—at least as dated TI OMAP 4430 SoC processors go—but this can be easily remedied. I think the tile-based interface on Android 4.0.4 (ICS) for Glass is user friendly and intuitive. I think Flash-heavy pages remain almost unnavigable on Glass, and for this and other reasons, a significant amount of the web remains highly burdensome to navigate. Thus, the user must for now download the various apps that have been developed thus far for Glass—there are some boring ones such as Facebook, or some pretty cool ones functioning as “The WolframAlpha of Glass,” or a live “bitcoin ticker app.” But HTML5, allowing better access to content from a variety of devices and browsers, remains decent on Glass and represents a space for improvement in the future. As more sites begin to adopt HTML5 and platform compatibility grows (even as the still quickly growing mobile market widens the disparity), web access across a multiplicity of devices will continue to improve. Glass, of course, is part of this. What I will reiterate is that Glass’s web navigation is decent, but built for a web of the future. The system is still a bit buggy; it froze multiple times on me, and its short battery life interrupted me at other times. I see potential for Glass. At the same time, I hope the future of wearable technology and these new devices will not follow this black-boxing that both Google and Apple are wont to employ. Moreover, I hope that conscious consumers (and literally highly conspicuous consumers) might similarly note this and voice their concern vocally or through their spending power. In Glass I see potential—I am not afraid of seeing text and letters fly through my vision constantly; in fact, I’m maybe a little bit excited for the right tech company to do it.

Permissions for Consumption

Monday, January 27th, 2014 by kylebickoff

I found myself incredibly impressed at how Between Page and Screen expresses the complications of interface. I began by looking at every page—each page contains an image which alludes to the QR code, and the images remain aesthetically pleasing, minimal, plain enough not to alienate the user. Similarly, I find they recall a certain glitch aesthetic as well as icons from 1980s game culture. But beyond these visually alluring images, I find that the interface is the most alluring aspect of this creation to consider critically. Do the pages recall written letters or emails? Stock tickers or narratives? Charcuterie or swinish consumption? The material page, the material screen, and the immaterial interface at the center of this intersection is the point to which the most attention is paid, and the point with the fewest clear answers. Certainly, capitalism lies near the ‘heart’ of this work.

Moreover, the book forced me to consider consumption. This book’s site encourages me to interact with the book online, which then, after asking permission, starts Adobe Flash on my computer. Instantly, my processor kicked up as the window took up a consistent 18-25% of my CPU for the duration of my reading. Now, there is certainly a way to calculate the energy consumption, the coal burned to run this interface (Boulder, CO’s electricity comes from coal), but I will not engage in such a cost analysis. Rather, this forced me to reconsider the inherent consumption required to read both the digital text and the physical text. I realize that not just this text, but all interfaces require similar consumption. My smartphone, my computer, my access to the cloud, all require resource consumption. Each additional interface I interact with similarly requires additional consumption—every gmail server I access, the syncing of this file to dropbox, the electronic confirmation for funds which purchased this book, my access to the course’s website to submit this post for the week—all require additional consumption, which I actively permit at every step of the way. This ‘permission’ for consumption, which I find myself even more conscious of now, has been actively allowed by me. Every user agreement I approve (including the several needed to run this digital work) is active permission to consume. We may tell our systems to ‘remember this choice,’ as I did, but the question is: do we remember it?


-Kyle Bickoff

The Humble Rebellion of the Everyday

Monday, January 27th, 2014 by lasu9006

Crary’s 24/7 makes me want to live in the woods. It makes me want to escape to a place untouched by technology, to a place where there is no Internet or Netflix (gasp!), to a place where, if I want to play Solitaire, I’ll need an actual deck of cards. But, for now, I will continue to live in this buzzing city, will continue to attend graduate school, will continue to participate in society.

While I agree with many that Crary has done little to suggest any actual remedies to the problems facing our contemporary, capitalist society, I am quite enthralled with the solutions he does suggest, because as far as I can tell, those solutions entail doing nothing much at all. How intriguing!

In particular, I have been thinking a lot about the notion of the everyday, a notion that Crary describes as “the vague constellation of spaces and times outside what was organized and institutionalized around work, conformity, and consumerism” (70). Not too long ago, the everyday was a relatively fixed part of life. Now, of course, it’s nearly impossible to detach from the constant onslaught of technologized manipulation that we encounter on our portable devices and in our television sets. In fact, corporations and political entities do their best to suppress the remnants of the everyday, to eradicate the time spent on individuated decision-making and unmediated introspection (40).

If there is one suggestion that I can take away from Crary’s text, it is that the individual must take great care to preserve those fragmentary moments of the quotidian, to maintain the private (for now) experience of internal contemplation and reflection, and, of course, to continue to enjoy the phenomenon of sleep. For these moments of temporary disconnection allow us to be human, to just be, to experience the limitations of our humanity. Furthermore, the everyday, while seemingly innocuous, is, at its root, rebellious, because it implies a momentary withdrawal from society, from consumerism. In fact, the everyday can be seen as downright dangerous because of its uneventfulness—it is “both unconcealed and unperceived”; it is beyond the gaze of the empowered (70).

I’m going to try to turn over a new leaf. I’m going to try to relish moments of the quotidian, to put down my goddamned phone when I’m waiting in line at the post office. I’m going to try to just sit and think, like I used to as a kid, instead of letting my phone do the thinking for me.

Of course, there are limitations to this humble form of rebellion. Like the escapism that I considered in the beginning of my post, the preservation of the everyday really only works on the level of the individual. But isn’t that what’s so great about it? That only you can experience it, and that it doesn’t need a hashtag slapped on it in order for it to count? I think so. I’m going to close my laptop now and am going to sit and watch the snowflakes fall.


Six Reasons Why I Will Never Get Behind Google Glass Regardless of How Often I Praise Google Otherwise

Monday, January 27th, 2014 by lola192

Like those who have blogged before me, I too would like to jump on the “Google Glass ain’t that great” bandwagon. To air my grievances in a succinct manner and to avoid subjecting you all to a soapbox address, this post shall be presented in a friendly list format.

  1. I found the entire concept bothersome. You can’t possibly remain aware of your surroundings or, more importantly, those within them whilst wearing that bright orange monstrosity (I’m telling you how I really feel). We have already become a society that doesn’t fully participate in everyday life because of our need for immediate access to our technology – there is no longer such a thing as uninterrupted social interaction – and Google Glass will only make it worse.
  2. The screen invades your field of vision as obnoxiously as a tween Justin Bieber fan with a Twitter account.
  3. It is neither user-friendly nor intuitive. Evidence: I accidentally sent a picture of the Media Archaeology Lab to a stranger at 8pm on a Tuesday. I can’t even tell you how I did it.
  4. As Will pointed out in his post, Google Glass serves to collect information about its consumers – it “hijacks our eyeballs” and mines away at our data. Disturbingly, the majority of consumers seem okay with that. In permitting our technology (and those behind it) access to our personal preferences, wish-lists, and how inefficiently we may ride a bike, we allow marketers to not only sell us a product, but to sell us the idea that we simply aren’t good enough people without that product. To use Will’s example, for Google, it isn’t sufficient that we are out riding bikes, that we are trying to leave a smaller carbon footprint or are merely trying to lead a healthier lifestyle. Rather, Google focuses on the fact that we aren’t doing it well enough. As a result, they are better able to sell us fitness software, cycling gear (because the right shorts will not only make you pedal faster, but boy, will you look good doing it), and a speedier, lighter bike.
  5. As Scott Fitzgerald, a popular glitch artist, said, people become empowered when they “understand the tools and the underlying structures… [when they] know what is going on in the computer”. Google denies its consumers this power. The hardware inside Google Glass is even more inaccessible than that powering our smartphones, our laptops, and our desktop computers. Since Apple began denying their customers knowledge of the inner workings of their machines, technology has become a greater mystery. Google Glass, with its minuscule computer that only the most qualified, decaffeinated, and tech-savvy could pick apart, seems the epitome of this denial. With Glass, Google is denying real knowledge and understanding of its product to the consumers it relies so heavily upon.

Constant Consumption

Monday, January 27th, 2014 by asobol

I found many of Crary’s claims dubious, hyperbolic. All this doom and gloom about the possibility of the sleepless consumer, née sleepless soldier. Can one even be a consumer all day long? I tried it for a day to see how my body handled it. I tried being a conscious consumer of everything, which meant physically buying things at a store as well as subjecting myself to the internet and various media intake. I tried to limit sleep (more accurately: to put it off as long as my body could handle it) and see what would occur over the course of the day. Running around stores became exhausting and annoying and eventually I could no longer remember any potential desires I may have had coming in. Why am I in this store to begin with? The longer I spend in the store, the more I begin to resent it, the less I want to spend money there.

When I returned home, I propped all my devices around me. Laptop on lap. Phone on the left, Kindle on the right. Television on. As this continued, numbness came over me. I could only provide attention to one thing at a time. The TV would eventually get lost to my browsing. The browsing interrupted at times by TV. At a certain point in the night, TV became mostly infomercials, prompting me to ask why I even had it on anymore. My attempts to continue to consume on the internet were a wash once I ran out of ideas. I have a limited set of interests and while I can scan Amazon and eBay for a long time, I eventually exhaust them. Boredom was constantly resurfacing. Even short bursts onto Twitter or Facebook left me with nothing to chew on. The later it became, the less interested I was in the things I was searching for. What remained was this zombie-like clicking through. It ended when I realized I was searching Amazon for pillows and linens — a cue from the unconscious to go to bed, perhaps? The next morning, perhaps coincidentally, I woke with a nasty cold, which, with its migraine-like sinus pain, made looking at screens a painful chore, making me into a terrible consumer.

My roundabout point being that the attention span is finite. And even if we didn’t have to sleep, how long could things really keep us invested, especially 24/7? There’s only so much stimulation we can take before we need to reset. I can only stare so long at a screen before my eyes begin to sting. I can only buy so many consumable objects before I have my fill. In the abstract, sure, capitalism may be insatiable, but people (body and mind) are not.


Stray question: Anyone else feel Crary is guilty of Tyler Durdenisms? I mean, the entire last paragraph of chapter two feels like (with the exception of a certain charm or charisma) it would fit straight into Brad Pitt’s mouth:

Even in the absence of any direct compulsion, we choose to do what we are told to do, we allow the management of our bodies, our ideas, our entertainment, and all our imaginary needs to be externally imposed. We buy products that have been recommended to us through the monitoring of our electronic lives, and then we voluntarily leave feedback for others about what we have purchased. We are the compliant subject who submits to all manner of biometric and surveillance intrusion, and who ingests toxic food and water and lives near nuclear reactors without complaint. The absolute abdication of responsibility for living is indicated by the titles of the many bestselling guides that tell us, with a grim fatality, the 1,000 movies to see before we die, the 100 tourist destinations to visit before we die, the 500 books to read before we die.

Yikes. (I don’t think I’m entirely off-base comparing the two, especially given that Durden was born out of a character’s lack of sleep.)


Monday, January 27th, 2014 by willm2

Google Glass is one part utopian cyborg interface and nine parts aggressive modification. Returning to a thread from last week’s discussion, Google Glass is explicitly intended to mold the user into a subject that not only sees the world differently, but interacts with it in a radically new way. Before the device can use us properly it must condition us to its mode of operation. Glass would like to extend data-mining and self-surveillance to our immediate physical world. Whereas smartphones and modern web browsers are only able to capture traces of our consumer identity (through cookies, targeted advertising, Amazon’s associative product recommendations, etc.), Google Glass literally wants to see through our eyes and walk in our shoes. Glass is not an information resource like the internet, or a tool for exchange, but a pure data-miner. If Eric Schmidt of Google wants to capitalize on the limited ‘eyeball-time’ of the consumer, Glass is a brazen attempt to literally hijack our eyeballs from us like the sandman of folklore. Whatever data Glass can glean from our physio-optical behavior can be processed and routed back to us with suggestions for improvement, thus creating a feedback loop of user-modification similar to that used by e-commerce sites for some time now. As a hypothetical example: Glass knows that you bike in such a way that much of your pedaling energy is wasted. Glass suggests either electronic or mechanistic mods to your bike to remedy this. In a prosaic way your physical form has been shaped by the exchange of data with Google.

However, the user at this point in time is not ready to become a vector for such a device. Compare Glass to iOS devices, which seem to effortlessly meld with our minds and bodies, creating a sense that the device is a utilitarian extension of ourselves, no different than a walking stick or a pair of running shoes. Glass, on the other hand, is an interface disaster. Again, unlike other devices which can be figured out without any instruction manual, Glass is counter-intuitive. Neither voice commands nor manual manipulation lead to predictable results, at least not from my hour or so spent with the device. A video tutorial is necessary for achieving even basic functionality. It reminded me of my earliest childhood experiences with DOS (Dark Obscure System) and the feeling that my method of interacting with the world was being forcibly reshaped into something different, like a plowshare being beaten into a sword. Due to the tortuous attempt at turning the user into a Glass-subject, Glass becomes a glitch masquerading as a device. Through the frequent mistakes (did I just delete that?), navigational maroonings (how did I get here?), navigational imprisonment (how do I get out of here?), and near catastrophic misuse (did I really just send that to every friend in the Google+ network?), Glass calls attention to itself through error. If it had been a seamless experience like my first iPhone, perhaps I would have been swallowed up by the 24/7 continuity of hypercapitalism Crary writes on. Instead, through systemic glitch, I could see clearly what Glass wanted from me, and how little I wanted it back.

Late Capitalism and the Sleeper

Sunday, January 26th, 2014 by brandontruett

Having read a couple posts that lament the lack of solutions offered in Jonathan Crary’s 24/7, I want to focus on the solution as demonstrated by Crary’s ambitious attempt to identify an opposing temporality in the form of the “sleeper.” He makes a strong claim that techno-conglomerates have interpellated us to exist as compliant subjects in a world of unfettered late capitalism. Crary elucidates a social reality that is surveyed biopolitically, which is to say, we have been conditioned to self-administer our own compliance. While 24/7 certainly contains much that should be unpacked, I am specifically interested in how the argument pertains to social relations, how we interact with one another through various technologies that seemingly offer connection. Having established the pernicious nature of 24/7 temporalities, Crary claims that “[w]ithin 24/7 capitalism, a sociality outside of individual self-interest becomes inexorably depleted, and the interhuman basis of public space is made irrelevant to one’s fantasmatic digital insularity” (89).  Crary critiques the marketed ideology that champions the newest technology, which connects one to his or her loved ones, ensuring more frequent communication. Crary also points out that neoliberalism successfully demonized the dreams of communality that existed in the 1960s. As late capitalism ensures the marketability of every part of the day, eventually even sleep, we move further away from a healthy sociality outside of shallow digital communication.

As a solution, Crary champions a type of temporality that allows for “waiting” in order to have “time-in-common” (127). I’ll admit the irony of my blogging about Crary, who points to blogging as the end of politics because bloggers do not wait to hear one another; they endlessly chatter into the ether. I agree with Crary’s solution for what might ameliorate or mitigate our entrenchment in capitalism. The solution, as he states, takes the form of “the sleeper [who] inhabits a world in common, a shared enactment of withdrawal from the calamitous nullity and waste of 24/7 praxis” (126). I wonder how we can tap into this “sleeper” ontology in order to stave off the encroachment of global capitalism. Does he mean that we should disconnect from our technologies more often, attempting to opt out for short periods in which we might dream or imagine other ways of being-in-the-world?

Capitalists and Those Other Guys

Sunday, January 26th, 2014 by eadodge

Reading Between Page and Screen threatened to overwhelm me with a sense of how privileged the whole discussion of media archaeology can be, but then I got a little perspective. This threat occurred as I fiddled with my Windows Surface for the dozenth time to get the poem on my screen to an angle where I could read the top bit. Between my own frustrations with the technology I was using and the clever “POLE/PALE/PAWL/PEEL” word prism on one of the pages, I felt hyper-aware that the objects with which I was interacting were all part of a glaringly privileged vantage point.

In 24/7, Jonathan Crary identifies that the people “who cannot be integrated into the new requirements of markets,” which is to say those who are unemployed, impoverished, or living in developing countries, are condemned by capitalism (44). These individuals do not necessarily have the means to enjoy Between Page and Screen (the book website identifies these materials as a webcam and a browser). I’m not sure how these individuals would respond to the existence of the book as an exercise of modern technology, but I imagine them being affronted that time and resources are being spent toward enterprises like Between Page and Screen. But then again, I am in this Media Archaeology class, in this university – and I am writing this post on my gaming PC while sitting in a one bedroom apartment in Boulder. I am not in the most appropriate place to imagine their responses, one can accurately say.

With regard to the field of media archaeology, while it has been accused of not being concerned with the wider politics of humankind, our class discussion on the 21st spoke about the ways in which the field is subtly rooted in human politics because the field is rooted in human culture. On the 21st as well, Prof. Emerson pointed to a passage from McLuhan’s book – about the inherent good or evil of an object – and we arrived at the idea that there are certain objects that are ideologically loaded, regardless of a user’s intent. I wonder to what extent that is also true for media. Crary certainly does not seem too hopeful about media – entrenched as media is in the capitalist ventures of modernity, he claims that it deactivates its users (88). But even Crary ends his discussion in 24/7 with an appreciation of sleep and dreams, or more generally, private spaces less controlled by capitalism. In those spaces, Crary indicates, there is a glimmer of hope for the power dynamics established by capitalism to collapse (128). Whether capitalism at large is still in place or not, though, if one succeeds in divorcing technology/media from capitalism, then one opens spaces for the thriving of what would otherwise remain imagined technologies. If money were not the object – if raw consumption were not the object – then technologies/media could be produced that are more varied and more representative of human curiosity and desire. In this regard, in the excavation of human reality and possibility, I personally find the field worthwhile and grounded, and that is a conclusion to which I need to come if I am to continue to study media archaeology.

Erin Dodge


Crary, Jonathan. 24/7. New York: Verso, 2013.

Does Sleep *Really* Escape Capitalism?

Sunday, January 26th, 2014 by samanthalong88

I struggled with Jonathan Crary’s 24/7.  While his writing style certainly evoked emotion and interest through its tight, oftentimes poetic/polemic prose (and perhaps too tight—the footnotes/endnotes were far too scant or “just trust me on this”), I simply didn’t buy (pun intended?) his nightmarish vision (or should I say, his insomniac’s vision) of our sleepless future.  And I didn’t buy it at the level of his thesis statement, which—for the purposes of discussion—I’ll quote at length here:

In its profound uselessness and intrinsic passivity, with the incalculable losses it causes in production time, circulation, and consumption, sleep will always collide with the demands of a 24/7 universe…. Sleep is an uncompromising interruption of the theft of time from us by capitalism.  Most of the seemingly irreducible necessities of human life—hunger, thirst, sexual desire, and recently the need for friendship—have been remade into commodified or financialized forms.  Sleep poses the idea of a human need and interval of time that cannot be colonized and harnessed to a massive engine of profitability, and that remains an incongruous anomaly and site of crisis in the global present.  In spite of all the scientific research in this area, it frustrates and confounds any strategies to exploit or reshape it.  The stunning, inconceivable reality is that nothing of value can be extracted from it. (10-11)

What I can buy is this:  You certainly can’t work when you’re sleeping, so sleep in that sense is useless from an economic vantage point (and perhaps only tolerated right now due to its status as a human necessity).  And I can fully get behind the idea that we are increasingly plugged in all day, every day.  (The first thing I do when I wake up in the morning is check my emails on my phone while still in bed…and it is the last thing I do at night.)  People work first, second, and third shifts to keep up with the demands of contemporary capitalism…the 24/7.  For me, it’s scary to think about how we demand those unnatural nocturnal schedules—the second and third shifts—which disconnect the (typically) most economically disadvantaged from the dominant tides of daily life (we should demand an “unplugging” and a time for rest…but then again there’s the issue of police officers, firefighters, doctors, so welcome to modern life).

Therefore, it’s not entirely impossible to imagine ways in which humans might try to pare down our necessity for sleep in the future, and how—despite noble intentions—that process could be exploited (if, and only if, we ever can eliminate sleep).  But what time scale is this happening on?  While Crary is quick to open with the image of the sleepless soldier as being just around the corner, in this excerpt he conversely states how “in spite of all the scientific research in this area, [sleep] frustrates and confounds any strategies to exploit or reshape it” (11).  Which one is it, then?  Are we only a few decades away or (more realistically) centuries away when it comes to “conquering” sleep?  And while perhaps Crary’s point is that it doesn’t matter, that we need to act early, I can’t help but feel there are more pressing issues being caused by capitalism that are going to shape human life far sooner, such as climate change, nuclear war, and bioterrorism.

But Crary’s thesis soon takes an even more troubling turn for me.  He writes, “Most of the seemingly irreducible necessities of human life—hunger, thirst, sexual desire, and recently the need for friendship—have been remade into commodified or financialized forms.  Sleep poses the idea of a human need and interval of time that cannot be colonized and harnessed to a massive engine of profitability…”  When Crary states that hunger, thirst, sexual desire, etcetera have been commodified, I can understand that.  We buy food, we buy drinks, we can buy sex, you can “rentafriend”…okay.  Humans commodified hunger when we stopped hunting and gathering, though…is that really the sole result of capitalism?  And have we really commodified the human feelings themselves?  It seems that to make that argument you would have to say that, if the apocalypse happened and we could no longer purchase food, we would simply all die off.  And while I’m not saying that a majority of us wouldn’t die, the cause would not be that we had entirely lost the instinct to go out and hunt/gather our own food—we’d just suck at it and there would be too many of us for pre-modern systems to support.  Whether intentionally or not, Crary makes it sound like nothing about food or fucking is part of a natural bodily process anymore, entirely “remade” by capitalism despite “commodification” that happened long before capitalism.  And while capitalism can bring us food and the other “f” much faster than other means (and in that way, frees up time for working in a way that sleep cannot be freed up), we still have the option of not buying food and not paying for sex (or of not commodifying romantic love in the way of flowers, jewelry, expensive weddings, etcetera, etcetera).

Besides this, how has sleep not been commodified?  While Crary states that “nothing of value can be extracted from it,” I would think many a bed manufacturer, a real estate agent, and a pharmaceutical company hawking sleeping pills (not to mention countless other industries built around sleep) would be inclined to disagree.  Crary touches on this briefly when talking about capitalism “encroaching” upon sleep and requiring drugs like Ambien and Lunesta to deal with the stresses of the 24/7 (18), but he doesn’t talk about how much certain parts of our capitalistic machine would stand to lose if people no longer required sleep.  Beds would no longer be needed, houses would not have to be as big, sleeping aids would be unnecessary…and that’s millions, billions lost.  How, then, is sleep any less commodified than hunger, thirst, and sexual desire by our modern world?  If we are talking about the very act itself, that the body will sleep whether or not it has a bed—well, the body will also eventually eat and drink whether or not it has a McDonald’s (and in that way, those functions have not been fully commodified either).  I don’t understand how Crary can hold up this distinction, making sleep some kind of agitator to modern capitalism.

Above all else, though, what bothers me most about a work like this is that it offers so little in the way of solutions.  Crary does not seem to be pushing for a limitation or reevaluation of capitalism, as he states on the last page that “because capitalism cannot limit itself, the notion of preservation or conservation is a systemic impossibility” (128).  In that way, it seems the only true answer becomes an entire overthrow of our global capitalistic society.  (Simple enough, right?)  But what replaces it?  How do we meet the demands of billions without commodifying human need in some ways?  Maybe I’m just that heartless neoliberal who fails to see the people who don’t fit within this modern capitalistic system and die because of it (44); maybe I’m more willing to submit out of a fear of “falling behind or being deemed outdated” (46).  I’d like to think I’m more interested in finding tangible solutions that can work both in and against current power structures, small steps that can add up, as opposed to some dream of a revolution with no clear goals, no clear processes—at worst, nothing more than a lot of sound and fury.

On Crary’s 24/7

Sunday, January 26th, 2014 by sdileonardi

This post is hardly argumentative, but rather an attempt to map out the claims being made on pages 41 and 42 of Jonathan Crary’s 24/7. I think that any general comprehension of the book relies on at least a partial grasp of the two tenets of his argument outlined in these pages.

On the one hand, our contemporary context, especially regarding forms of technological consumption, inherits many of its defining characteristics from the industrial projects of modernization and capitalistic expansion occurring at the end of the nineteenth century. As I write, the hum of the washing machine signals the ominous presence of GE, which 130 years ago was called the Edison Electric Light Company. Giants such as Edison have proven that capitalism has also helped produce particular types of consumers, shaping and forming the nuanced aspects of society through an understanding of what drives a profit margin.

However, Crary wants to adumbrate an alternate kind of consumer production model that is inaugurated in the 1990s by Microsoft, Google, and others. Companies like these take a step beyond the production of consumers and actually succeed in producing subjectivities of technological users whose social ontologies are largely outlined and delineated by the roles these megalithic entities play. Crary posits that through this model “technological consumption coincides with and becomes indistinguishable from strategies and effects of power” (42). It is with this Foucauldian bent that Crary allows the subsequent chapters of his book to unfold. It is not that the mega-capitalistic model of Edison et al. has been replaced, but rather augmented. And while for centuries philosophers and writers have focused their attention on political figureheads in order to fully highlight the power of the sovereign, Crary represents a shift in focal point as corporations have demonstrated the ability to wield subject-producing power.

Curing writer’s block with a typewriter

Saturday, January 25th, 2014 by dparker90

After reading about Renée’s adventures with the Altair 8800b, I couldn’t help but reflect on my own experiences at the MAL last week. I’d spent Wednesday morning preoccupied by a travel grant application, trying to write a convincing argument about why I deserve to get paid to read old books in faraway libraries. After several hours writing, cutting, and pasting, I was left with an unsatisfying three sentences. As usual, I’d been paralyzed by the importance of the project and rendered incapable of putting words on the page. I left for the MAL in my frustration, hoping that playing with Google Glass might stimulate my creative faculties, or at least provide a much-needed distraction.

Like Angie, I found my experience with Glass slightly underwhelming, probably due to my inability to look at both the screen and the outside world at the same time. I think what’s supposed to make this gadget cool is its ability to superimpose the Google interface onto your direct line of sight so that you’re looking at the world through it, but I couldn’t help continuing to view them separately. The Glass blocked my vision rather than augmenting it, though perhaps with more use I’d adapt to this. After all, it took me two months to make the transition from Blackberry to iPhone keyboards without hitting multiple letters at once.

Craving something more analogue after my experience with Glass, I turned to the Olympia De Luxe Typewriter. I have to echo Renée here when I say that this machine really put up a fight. Assuming that loading paper would be self-explanatory, I didn’t check for instructions. I think my incorrect assumption is indicative of our standardized and commodified “user friendly” electronics, where using an iPad, iPhone, or MacBook for the first time feels instinctive. Not so with Ms. Olympia De Luxe, who continually resisted my efforts to unriddle her.

Finally managing to load paper, I set to typing. I began the experience with Parikka’s text in mind, aware that the possibilities of what I could create on the typewriter were limited by the technological constraints of the machine itself. This proved partially correct, as I was obviously unable to alter text once it was printed on the page. Putting ink on the page initially felt limiting in its tangibility and permanence – there was no going back after pressing a key. However, as I continued to write, I found that the finitude of ink on page had an advantage: I was forced to construct complete, meaningful sentences before putting them on paper.

Of course, compared to the seemingly endless possibilities for writing and editing offered by modern word processors, the typewriter was limited. Yet, knowing that what I put on the page would stay on the page, I began to think before writing each word, resulting in cohesive, thoughtful prose. Taking another stab at my grant proposal, I found that the slower pace of writing the Olympia imposed forced me to collect my thoughts before putting them to paper and improved the quality of my writing.

I don’t mean to suggest that it’s better to write on a typewriter, or that the finished product turns out any better with this technology, but I do think my experience with the Olympia says much about the limitations – or seeming lack thereof – of modern word processors. In my case, at least, the overwhelming possibilities of what can appear on the page sometimes prevent me from writing anything. Once something does get written, I can’t stop editing and rewriting until the inevitable deadline, and who’s to say that the resulting product is any better than it’d be on a typewriter? Paradoxically, even when technologies promise endless creative possibilities, we are subject to their limitations.