Currently, there are two general frames of thought about how we interact with computers. First is the media equation, conceived by Byron Reeves and Cliff Nass (1996), which later evolved into the computers as social actors (CASA) framework (see Nass & Moon, 2000). The media equation and CASA both suggest that we mindlessly process media and computer-based interactants. Evolutionarily, we're not designed to cope with this thing that is COMPUTER or TELEVISION, so the argument is that we process those stimuli in the same way we would if they weren't mediated. For example, we interpret a person on a television screen, or a partner in a text chat, the same way we would interpret any person. The media equation explains a vast array of phenomena, including why horror movies are scary and why we cuss out our laptops when they malfunction: because we respond to media in a very human-like way.
Another perspective on the matter, although conceived as being constrained to a certain context and domain, is Blascovich's (2002) model of social influence in virtual environments. Blascovich suggests that when a computer-mediated entity is believed to be controlled by a human, it is more influential than when it is believed to be controlled by a computer. The perception of human control, or agency, is therefore key to persuasion in virtual environments. Blascovich offers a second route, however, noting that behavioral realism is an important factor that interacts with our perception of agency. If we think we're interacting with a human, the representation doesn't necessarily have to be realistic. If we think we're interacting with a computer, however, it needs to behave in a realistic, human-like way for us to be affected the same way. Blascovich's model doesn't really tackle mindless or mindful processing, but it does provide a contrasting expectation to CASA in terms of how we may respond to computer-mediated representations.
Really, these questions come down to a type of Turing test. As more interactants (email spammers, robo-dialers, Twitter bots, NPCs, etc.) are controlled by algorithms, it becomes important to study the conditions under which people understand and respond to these entities as humans, when they conceive of them as computers, and what impact that has on communicative outcomes.
As virtual environment researchers, we wanted to test these contrasting predictions to see whether there were differences in how people respond to visual avatars (i.e., virtual representations controlled by humans) and agents (i.e., virtual representations controlled by computers). (Although you'll note the conspicuous absence of CASA in the paper, as a reviewer insisted that it didn't fit there…never mind that we discussed the project and our contrasting hypotheses with Cliff…*sigh*) So, we gathered every paper we could find on visual virtual representations that manipulated whether people thought they were interacting with a person or a computer. These included studies that examined differences in physiological responses, experiences such as presence, or persuasive outcomes (e.g., agreeing with a persuasive message delivered by the representation). One study, for example, measured whether people were more physiologically aroused when they believed they were playing a video game against a computer or a human. Another measured whether people performed a difficult task better if they thought a human or a computer was watching them.
What we found is that, on the whole, avatars elicit more influence than agents. These effects are more pronounced in desktop environments and with objective measures such as heart rate. We anticipate that immersive environments may wash out some effects of agency because of their higher levels of realism. Another finding was that agency made more of a difference when people were co-participating in a task with the representation, whether cooperating or competing. Perhaps having an outcome contingent on the other's performance made it more meaningful to have a person in that role rather than a computer. A final finding is that agency effects were greater when both conditions were actually controlled by a human than when both were actually controlled by a computer. So, there is something to be said for a perhaps subconscious Turing test, wherein people can somehow tell when they are interacting with computers even though they don't explicitly think about it.
What This Means for Research Design
Our findings have a lot of relevance to how we interact with computers and humans, but you can read more about that in the paper. What I want to draw attention to is the importance of these findings for research, as they may extend to any number of technological domains. Tech scholars often run experiments in which participants text chat with someone, interact with someone in a virtual environment, or play a video game against someone, and then they test some influential effect of that interaction. Our findings indicate it is imperative that you clarify who participants are interacting with, even if it seems obvious. Second, it is important that they believe it. If you are not clarifying this, or if your participants aren't buying into your manipulation, you are probably going to be stuck with weird variance in your data that you can't explain.
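To make that last point concrete, here is a minimal sketch in Python of one way to flag participants who didn't buy the agency manipulation and to see whether they are the source of that unexplained variance. The dataset, the column names (e.g., `believed_human`), and the numbers are all hypothetical illustrations, not anything from our meta-analysis:

```python
import pandas as pd

# Hypothetical data: one row per participant. Columns and values are made up.
df = pd.DataFrame({
    "condition": ["human", "human", "human", "computer", "computer", "computer"],
    "believed_human": [True, True, False, False, True, False],
    "outcome": [5.2, 4.8, 3.1, 2.9, 4.7, 2.6],  # some influence measure
})

# A participant "passes" the manipulation check when their belief matches
# the framing they were assigned.
df["manipulation_ok"] = (df["condition"] == "human") == df["believed_human"]

# Compare condition means with and without the failed cases; if they shift
# noticeably, the mismatched participants are adding variance you can't
# otherwise explain.
print(df.groupby("condition")["outcome"].agg(["mean", "std"]))
print(df.loc[df["manipulation_ok"]].groupby("condition")["outcome"].agg(["mean", "std"]))
```

Whether you then drop, model, or simply report those cases is a separate judgment call; the point is that you can't make it at all if you never assessed belief in the first place.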
The problem is that directly asking people what they thought isn't the best approach. As Reeves and Nass note in the media equation, if you ask someone outright whether they are treating a computer like a human, they'll look at you like you're nuts, but that doesn't mean they won't treat the computer like a human. Further, if you ask someone whether they thought they were interacting with a person or a computer, it may never have occurred to them that they weren't interacting with a person; now that you've introduced the idea, though, they'd feel dumb admitting they didn't know, so they're going to say "computer." Or you're going to get them reflecting on the task, and they will suddenly recognize that the mechanistic responses did seem an awful lot like a computer, so they will report "computer" even though they didn't recognize this at the time of the task. Thus, direct questions aren't the greatest way to parse this out.
My advice is to use a funneling technique, preferably in a verbal debriefing. You might start by asking what they thought the study was about and then, based on the design, ask relevant questions (e.g., ask about their feelings toward their text chat partner, or ask what they thought about the other player's style of play). One thing to note is the use of pronouns ("she was…," "he was…"), which indicates at least some acceptance of the interactant as human. Then, keep probing: "Do you think your partner/opponent/etc. was acting strange at any point, or did they do anything you wouldn't normally expect?" This question is broad enough that it shouldn't immediately point to the partner being a computer, but it might get them thinking in that direction. If they don't say anything about it being a computer, I'd say you can be pretty confident they bought the manipulation and believed they were interacting with a person. You can wrap up with more direct questions: "At any time, did you think you were interacting with a computer rather than a human?" The feedback you get will also be helpful in designing future studies or scripts to eliminate this variance.
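If it helps to see the funnel laid out as something you can adapt, here is a minimal sketch in Python. The exact wording of the probes and the `suspected_computer` helper are illustrative assumptions on my part, not part of the paper:

```python
# A funneled debriefing, ordered from broad to direct.
# The wording is illustrative; adapt the probes to your own design.
FUNNEL = [
    "What did you think the study was about?",
    "How did you feel about your partner during the task?",
    "Did your partner do anything strange or unexpected at any point?",
    "At any time, did you think you were interacting with a computer rather than a human?",
]

def suspected_computer(responses):
    """True if any answer *before* the final, direct question mentions a computer."""
    return any("computer" in answer.lower() for answer in responses[:-1])

# Example: this participant only concedes "computer" when asked directly,
# so we would count them as having bought the manipulation.
answers = [
    "Something about teamwork in online games, I guess.",
    "She seemed friendly, just a little slow to respond.",
    "Not really, nothing weird.",
    "Now that you mention it, maybe? I hadn't thought about it.",
]
print(suspected_computer(answers))  # False
```

The ordering is the whole trick: broad questions first, so a participant who volunteers "computer" early is flagged, while one who only agrees after the direct question is not.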
You can check out the paper through the link below:
Fox, J., Ahn, S. J., Janssen, J. H., Yeykelis, L., Segovia, K. Y., & Bailenson, J. N. (in press). A meta-analysis quantifying the effects of avatars and agents on social influence. Human-Computer Interaction. doi: 10.1080/07370024.2014.921494