Monday, January 27th, 2014 by willm2

Google Glass is one part utopian cyborg interface and nine parts aggressive modification. Returning to a thread from last week’s discussion, Google Glass is explicitly intended to mold the user into a subject that not only sees the world differently, but interacts with it in a radically new way. Before the device can use us properly it must condition us to its mode of operation. Glass would like to extend data-mining and self-surveillance to our immediate physical world. Whereas smartphones and modern web browsers can only capture traces of our consumer identity (through cookies, targeted advertising, Amazon’s associative product recommendations, etc.), Google Glass literally wants to see through our eyes and walk in our shoes. Glass is not an information resource like the internet, or a tool for exchange, but a pure data-miner. If Eric Schmidt of Google wants to capitalize on the limited ‘eyeball-time’ of the consumer, Glass is a brazen attempt to literally hijack our eyeballs from us like the sandman of folklore. Whatever data Glass can glean from our physio-optical behavior can be processed and routed back to us with suggestions for improvement, creating a feedback loop of user-modification similar to the one e-commerce sites have employed for some time now. As a hypothetical example: Glass knows that you bike in such a way that much of your pedaling energy is wasted. Glass suggests either electronic or mechanical mods to your bike to remedy this. In a prosaic way your physical form has been shaped by the exchange of data with Google.

However, the user at this point in time is not ready to become a vector for such a device. Compare Glass to iOS devices, which seem to effortlessly meld with our minds and bodies, creating a sense that the device is a utilitarian extension of ourselves, no different than a walking stick or a pair of running shoes. Glass, on the other hand, is an interface disaster. Unlike other devices which can be figured out without any instruction manual, Glass is counter-intuitive. Neither voice commands nor manual manipulation lead to predictable results, at least not in my hour or so spent with the device. A video tutorial is necessary for achieving even basic functionality. It reminded me of my earliest childhood experiences with DOS (Dark Obscure System) and the feeling that my method of interacting with the world was being forcibly reshaped into something different, like a plowshare being beaten into a sword. Due to this tortuous attempt at turning the user into a Glass-subject, Glass becomes a glitch masquerading as a device. Through its frequent mistakes (did I just delete that?), navigational maroonings (how did I get here?), navigational imprisonment (how do I get out of here?), and near catastrophic misuse (did I really just send that to every friend in the Google+ network?), Glass calls attention to itself through error. If it had been a seamless experience like my first iPhone, perhaps I would have been swallowed up by the 24/7 continuity of hypercapitalism that Crary writes on. Instead, through systemic glitch, I could see clearly what Glass wanted from me, and how little I wanted it back.


2 comments on “GOOGLE GLITCH”

  1. Renee F says:

    I agree that Google is using Glass as a means for pointedly harvesting data from the user and I emphatically agree that the interface is frustrating at best, but I’m not sure we can only view this device as a mechanism with potential solely dedicated to information gathering.

    Consider glitch art. Is it possible that instead of creating a device with a utilitarian purpose, Google may develop Glass into the technological equivalent of “rose colored glasses”? Glitchy and awkward as the device is, I wonder if we can derive an aesthetic appreciation of our experience through Glass, though I admit that is likely not the intent of Google. Phrases like “uncovering the soul of the machine” have been thrown around glitch art. Is it possible that what seems to be stifling and flawed could actually be the machine broadening how we view and contextualize the world?


  2. Lori Emerson says:

    Great post, Will. This is just to register that you make a lot of smart points – especially about Google trying to come up with the ultimate data mining device. Thank goodness for the glitches of GG, but I can’t help thinking it won’t be long before they do learn how to create a “seamless”, “intuitive” device. That’s why we early users/adopters are called explorers – we’re basically doing product testing for Google. For free.
