This is a quick experiment that applies computer logic to a human interaction: when the computer can't detect a face properly, it loses track of the person, but here without any on-screen graphics indicating what is going on. Applying the same concept to a larger situation, or running it live, would be a really interesting next step.
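The lost-track behavior the experiment borrows could be sketched as a small state machine: the system keeps "tracking" a person until detection fails for several consecutive frames, then declares them lost. This is a minimal illustration of the concept, not the actual code behind the experiment; the `patience` threshold and the frame representation are assumptions.

```python
def track(frames, patience=3):
    """Simulate a tracker that loses a person after `patience`
    consecutive frames with no face detection.

    Each frame is True (face detected) or False (not detected).
    Returns the tracker's state for every frame.
    """
    misses = 0
    states = []
    for detected in frames:
        # A successful detection resets the miss counter;
        # a failure lets it accumulate toward "lost".
        misses = 0 if detected else misses + 1
        states.append("tracking" if misses < patience else "lost")
    return states

# A person turns away for four frames, then is re-detected.
frames = [True, True, False, False, False, False, True]
print(track(frames))
# → ['tracking', 'tracking', 'tracking', 'tracking', 'lost', 'lost', 'tracking']
```

The interesting part for the interaction is the gap between the third missed frame and re-detection: during that window the machine has silently given up on the person, with nothing in the interface to say so.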
I met with Gideon Nave, a computer scientist working on his PhD at Caltech. He studies cognitive illusions and is specifically focusing his research on incorporating physiological sensors into financial transactions and negotiations in order to improve trust between the two parties. His work is about predicting human behavior in bargaining. He told me about physiological sensors, like reading pupil dilation, changes in heart rate, skin conductivity, and brain imaging, and how they can be used to predict behavior, encourage trust, or make a story from very limited information. The abstraction of a person into just a pupil on an interface in a bargaining transaction, and making decisions based on machine sensing, were the most interesting takeaways in relation to my thesis.
When discussing the new book Collage Culture that he worked on with Chandler McWilliams, Brian Roettinger said they realized the only way to make something devoid of reference is to use software to generate it. So the human maintains some control but the machine creates it. This relates to what I was thinking about in terms of crafting a narrative automatically or in partnership with a machine. They used compositional rules as a starting point for generating an image and eliminated the selective input of the artist, but also brought the hand aesthetic into the machine aesthetic.
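Their approach of generating from compositional rules, with the human choosing only the starting conditions, could be sketched like this. The specific rules here (rule-of-thirds placement and a size-halving progression) are my own hypothetical stand-ins, not the rules they actually used; the only human input is the seed.

```python
import random

def generate_composition(seed, n=5, width=100, height=100):
    """Generate rectangle placements from fixed compositional rules.

    Hypothetical rules: each shape is centered on a rule-of-thirds
    line, and each successive shape is half the size of the last.
    The human supplies only the seed; no shape is hand-selected.
    """
    rng = random.Random(seed)  # seeded, so the output is repeatable
    thirds_x = [width // 3, 2 * width // 3]
    thirds_y = [height // 3, 2 * height // 3]
    shapes = []
    size = width // 2
    for _ in range(n):
        x = rng.choice(thirds_x)
        y = rng.choice(thirds_y)
        shapes.append((x, y, size))
        size = max(1, size // 2)  # halving rule drives the progression
    return shapes

print(generate_composition(seed=7))
```

Once the rules are fixed, every composition the system produces follows from them mechanically, which is what removes the artist's moment-to-moment selection while still carrying a recognizable hand in the rules themselves.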