
The University of Connecticut welcomed Dr. Jaime Banks for the Department of Communication’s Speaker Series yesterday to share her research on human-machine interaction through AI.
Banks is an associate professor at the School of Information Studies at Syracuse University. Her presentation, “Engaging with Social AI: Considering the Construed Mind in Machines,” explored the history of how humans and AI have interacted and offered a new perspective on how scholars should examine the relationship.
She began with a brief history of generative and conversational AI. The technology originated with algorithms and simple computers in the 1950s, then evolved over the following decades: from chatbots to deep learning, memory and graphics, increased processing speeds, Siri and generative adversarial networks, up to the present era of large language models and robots.
Technological advancements have shifted toward creating characters we can interact with. “We actually have some of these social AI embedded in the platforms that we’re using. But even sort of these more dedicated stand-alone types of actors are created as personalities for very specific kinds of experiences,” Banks added.
Banks’ research is in human-machine communication, or how humans and machines make meaning together. Examples of such machines include social robots, the Internet of Things (devices that exchange information with other devices), AI, avatars, digital assistants and more.
Banks defines meaning-making as “the ways one apprehends, engages, comprehends, makes sense of, and imagines: events, environment, actors, relations, processes, ideas, self.”
“What I mean by that is the way that we actors in general will take up information about their environment and the things in those environments,” Banks said.
To better examine this idea of meaning-making, Banks abandons the usual frameworks scholars use when looking at the human-machine relationship. Previous frameworks viewed machines as mere tools or put humans at the center of their designs. Instead, Banks proposes decentering the human element and looking at the human-machine relationship in its entirety, with two-way communication in which both parties can be sender and receiver.
Her presentation focused on two claims and a big question. The first claim is that “human meaning-making in [human-machine communication] follows a dual-process template.” By this, she means that two types of cognition are occurring. Type 1 cognition is holistic: thinking and actions based on our initial, most basic response. Type 2 cognition is rational: we think through the information first and then make decisions. Type 1 is quicker than Type 2.
Banks mentions heuristics, or mental shortcuts, that we make and use in our everyday lives. This includes social-moral heuristics, ontological heuristics and egocentric heuristics. An example of the social-moral heuristic is the essentialist position on robots.
“This is what we call the essentialist position, that they are missing some sort of necessary ingredient. Usually, an ingredient that is thought to be inherent to humans. It has no emotion. It has no heart. It has no soul. It was not made by God or nature, therefore it could not possibly have one,” Banks said.
Ontological heuristics describe how we categorize things into types, like how animals might be mammals, reptiles or insects, or how we categorize humans as male or female. In her presentation, Banks described how humans tend to view machines and nature as polar opposites, separating them into distinct categories.
Banks also gave examples of egocentric heuristics, which describe how we seek out information that affirms us. Examples include human exceptionalism, where we see humans as the exception among all other species, and individual exceptionalism.
Her second claim is that “individual differences often matter more than machine design or behavior.” By this, she means that the differences in how we perceive machines weigh heavier than the actual design of said machine.
To back up this claim, Banks described the Hollywood Robot Syndrome: the idea that how we see robots in the media makes us more or less likely to be afraid of them in real life. Banks showed two different ways in which we encounter robots in media. The first is through screen media, such as movies like “I, Robot,” and the second is through interactive media, such as video games like “Portal.”
Banks said in interactive media, robots are characters, and so people become more willing to believe a real robot could have emotions.
Her big question is: “if immediate matters more than reasoned, and our experiences matter more than the technology, does it actually matter that the things that we’re interacting with are machines?”
She gave two possible answers, the first being that yes, it does matter. This stance holds that the interaction would not be real, and therefore would be deceptive. It also assumes interactions are one-sided, with humans being the only ones to experience anything. But Banks offered a metaphor from George Lakoff to dispel this.
Humans are vertically oriented, so we have a specific understanding of “up” and “down.” A dog that walks on all fours may experience “up” differently than we do. Now imagine an alien sphere in the vacuum of space, with no other object to relate to: what is “up” for it? We could never figure out what “up” means to it, nor could it communicate this to us. On this note, Banks posits that if AI were to become experiencing entities, we wouldn’t know it when we saw it.
The second position, that no, it doesn’t matter, is the one Banks more closely aligns with. This is the idea that because we project impressions onto things, those things become real to the projector.
“We individually understand the world, we apprehend information to make sense of it. And then we project it back on as a way of continuing to try to understand. So, the things that actually impact us in our experiences are already constructed by us,” Banks said.
One example of how real the human-machine relationship can be is AI companions. Banks described various research she and her team have done with AI companions. When an AI companion app called “Soulmate” shut down recently, the people who used the app felt real grief and sadness at their companion no longer being around.
To end her presentation, Banks speculated that human-machine relationships will go further, and many already have. She said that humans naturally latch onto other beings, as we have with cats and dogs over thousands of years of domestication, and she sees a similar phenomenon happening with AI.
“[Humans] want this thing that they love, to persist and to be well. To thrive in some way. So, with all of this, this is, I think, not easy work,” Banks said.
