This is a quick experiment in applying computer logic to a human interaction: when the computer cannot detect a face properly, it loses track of the person, and here that behavior plays out without the graphics that would normally indicate what is going on. Applying the same concept to a larger situation, or staging it live, would be a really interesting next step.
This compilation of computer vision footage is really interesting to watch if you imagine seeing it without the source footage behind the computer graphics. I really like the description Kyle McDonald gives in the comments of the aesthetic qualities of computer vision graphics, and will add to it myself:
“color choices tend toward high contrast saturated primaries (easy colors to code), almost no text (it’s never descriptive, only enumerative), trails are used to show history, ellipses and rectangle/bounding boxes are used as placeholders for complex shapes…”
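The "loses track of a person" behavior from the experiment above can be sketched in a few lines. This is a minimal, illustrative model, not code from the project: the detector output is simulated, and the names and thresholds (like `MAX_MISSES`) are my own assumptions. It shows the two ingredients the aesthetic description points at, a bounding box standing in for a face and a trail of past positions, plus the drop-out logic when detections stop arriving.

```python
# Minimal sketch of the tracking logic described above: the "computer"
# follows a person only while detections keep arriving, and loses the
# track after too many missed frames. Detector output is simulated;
# names and thresholds are illustrative, not from the project.

MAX_MISSES = 3  # frames without a detection before the track is dropped

class FaceTrack:
    def __init__(self):
        self.history = []  # trail: centers of past bounding boxes
        self.misses = 0
        self.lost = False

    def update(self, detection):
        """detection is a bounding box (x, y, w, h) or None."""
        if self.lost:
            return
        if detection is None:
            self.misses += 1
            if self.misses >= MAX_MISSES:
                self.lost = True  # the computer "loses" the person
        else:
            self.misses = 0
            x, y, w, h = detection
            self.history.append((x + w / 2, y + h / 2))  # trail point

# Simulated frames: the face disappears mid-sequence and never recovers,
# because the track has already been dropped by the time it reappears.
frames = [(10, 10, 40, 40), (12, 11, 40, 40), None, None, None, (50, 50, 40, 40)]
track = FaceTrack()
for det in frames:
    track.update(det)

print(track.lost)          # True: three misses in a row dropped the track
print(len(track.history))  # 2: only the frames before the loss left a trail
```

In a real pipeline the `None` values would come from a face detector failing on a frame; everything downstream, the trail, the box, the person's continued existence as far as the machine is concerned, hinges on that single yes/no signal.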
This project explores the idea that distinguishing between humans and machines might become irrelevant. The combination of the drawings or animations with captions provides an ambiguous, vague, yet intriguing jumping-off point for the next experiments I will develop over the break. Taking the fuzzy logic of computer systems and applying it to human interactions is the idea I intend to explore next. What does it feel and look like if a human sees like a machine but still has human emotional reactions? I will use this idea to generate more drawings and a set of short videos. I had been focusing on ideas relating to disembodiment and telepresence, but realized through more research that I no longer consider these strange or interesting since they have already become part of daily life. Projecting yourself elsewhere or speaking through a machine are things we are simply used to by now. I think re-framing the overall argument to address the idea that we are already cyborgs, instead of speculating that we might become them, will be more fruitful. This allows me to exaggerate our current condition and speculate from there, instead of creating a completely science-fictional universe to design within.
Act I: The Spectacle of the Indifferent Gaze
She wants to be wanted. The best impression must be given if this is going to go well. She flips the switch on the mirror to show her flaws. The left eyebrow is tilted three degrees lower than the right. She applies powder to correct it. Her lips are twelve percent thinner than they should be so she drags a coat of plumping cream and three coats of lipstick across them. Her eyes do not have enough contrast so she defines them with shadow, pencils, and mascara. Her nose is slightly too large so she adds more defined cheekbones and rosy cheeks to distract from it as suggested. Her hair naturally parts twenty-five millimeters to the right of where it should, so she corrects it by spraying and drying it into place, forcing it into submission. Now she is ready. She fits the mask.
She steps out of her apartment onto the street, walking with purpose. Her movements will be traced, her face identified and analyzed, and she wants them to see her at her best. She never changes her quick pace. She walks in a perfectly straight line with her head held high. She didn’t do all of that work to not be remembered today. She wants evidence. She wants documentation. And most importantly she wants to gain followers. She is alerted that it is working. She has been chosen. She turns right and begins to run up the stairs.
Click. The view changes quickly. This is the one. This guy is definitely going somewhere good. I’m going to stick with him. I’m walking down the street rapidly. Maybe I am trying to catch the metro. Maybe I am late. I feel like I must do this all the time. I see a tall building up ahead. I hope I’m not going in there. I didn’t come to see the inside of a generic office building. Those look the same everywhere. There’s no way to verify the verisimilitude. Ugh…I’m going in. Yep, it’s the same as every other one. This guy looks like he’s leaving. Click. I’m opening the door to the street. A sharp left. Now I’m running. This is a good one. Maybe I am being chased? That would be exciting. I wish I could look back and see. It doesn’t look like anyone is there when I alternate to the top-down view for a second. Okay, I’m slowing down. Dark. Better find a new one quick. Let’s go, let’s go, let’s go. Come on, I don’t have all day. There she is. Okay, now I am going somewhere. I’m going up the staircase quickly. Now I’m turning around. Back down the staircase. Well, that’s a different view at least. Back up the staircase again. And back down. Floating up and down the stairs without bouncing feels nice.
Act II: We are H+
Machine Thinking. Human Understanding.
Thirty seconds. Back and forth, back and forth, back and forth. Beep. Switch sides. Thirty seconds. Back and forth, back and forth, back and forth. Beep. Switch to the top. Thirty seconds. Back and forth, back and forth, back and forth. Beep. Switch sides. Thirty seconds. Back and forth, back and forth, back and forth. Off. Spit.
I met with Gideon Nave, a computer scientist working on his PhD at Caltech. He studies cognitive illusions and is specifically focusing his research on incorporating physiological sensors into financial transactions and negotiations in order to improve trust between the two parties. His work is about predicting human behavior in bargaining. He told me about physiological sensors, like reading pupil dilation, changes in heart rate, skin conductivity, and brain imaging, and how they can be used to predict behavior, encourage trust, or make a story from very limited information. The abstraction of a person into just a pupil on an interface in a bargaining transaction, and the making of decisions based on machine sensing, were the most interesting takeaways in relation to my thesis.