When I was a little girl, I always wondered what my Teddy Ruxpin mechanical bear would say if there wasn’t a cassette tape commanding his interactions with me. I had the “Velveteen Rabbit” fantasy of my toys taking on a life of their own, with their own perceptions of the world. My childhood aspiration of turning my toys and stuffed animals into cyborgs was, in many ways, the beginning of my fascination with machines and robots. Between an unhealthy fixation on my iPhone’s photo capabilities and obsessively asking Siri for the closest gas station, I’ve begun to wonder whether my objects will become sentient after all.

The result of pointing AI*SCRY at a salad
A couple of months ago, I drove about 45 minutes to the city of Emeryville to meet Sam Kronick and Tara Shi, two members of the Oakland-based art and design collective Disk Cactus. The duo created the iOS app AI*SCRY (pronounced “eye-scry”), a name derived from artificial intelligence and the act of scrying, the practice of gazing into a translucent object, typically a crystal ball, for prescient visions. While scrying is often associated with fortunetelling or divination, it’s interesting to think of a smartphone as an object that can tell the future (e.g., weather forecasts, stock market information). A source of inspiration for the project was the research of computer scientist Andrej Karpathy and his blog post “The Unreasonable Effectiveness of Recurrent Neural Networks,” which explores how recurrent neural networks learn to generate text, including image captions.

The app works essentially like your smartphone camera: you hold up AI*SCRY as if you were about to snap a photo of an object. But unlike taking a photo, when an image of your immediate environment registers on the screen, lines of text slowly emerge. The words, generated by a neural network trained on the Microsoft COCO (Common Objects in Context) image dataset, present the viewer with a description of the objects the AI registers. According to Kronick and Shi, the app is especially adept at identifying scissors and rocks. But it is far from perfect: it read my red notebook and pen as “a cup of coffee and a banana on a table.” Yet AI*SCRY’s imperfect perception is key to the point of the project, which is to expose how far image recognition still has to go before it can describe our surroundings with any precision.
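For the curious, here is a rough sense of what a captioning pipeline of this kind looks like in code. This is a minimal sketch, not Kronick and Shi’s actual implementation: it assumes the Hugging Face transformers library, a publicly available ViT+GPT-2 captioning model trained on COCO, and a hypothetical image file name.

```python
# A minimal sketch of neural image captioning, in the spirit of AI*SCRY.
# This is NOT the app's actual backend. Assumptions: the Hugging Face
# "transformers" library, a public ViT+GPT-2 captioning model trained on
# COCO captions, and a hypothetical image file name.
# Install first: pip install transformers torch pillow

from transformers import pipeline

# Load a pre-trained image-captioning model (a stand-in for the
# COCO-trained network the article describes).
captioner = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning")

# Point it at a photo, much as you would point AI*SCRY's camera at a scene.
# "my_red_notebook.jpg" is a hypothetical file name.
result = captioner("my_red_notebook.jpg")
print(result[0]["generated_text"])
# A model like this can be confidently wrong, producing captions such as
# "a cup of coffee and a banana on a table" for a notebook and pen.
```

The telling detail is that a model like this always produces a fluent caption, whether or not it is right, and that confident misreading is exactly what the project puts on display.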

Read the rest here

An App Turns the Failures of Image Recognition into Whimsical Text
Hyperallergic
Online
2016