Framing in Gaming: When Crime Acts like a Beast

Procedural rhetoric theory claims that a game's mechanics and rules make arguments to its players. However, this claim had not been empirically demonstrated. My colleague Barrett Anderson and I designed our own game to test whether players could detect the arguments in a game's mechanics, and whether those arguments had any impact on them.

Watch the 15-minute talk at FDG 2020, co-presented with Barrett Anderson (17:30-35:02)

Read the 11-page paper here

Making a Game

A screenshot from the “Literal Beast” version of the game. The grid on the left is a map of the city, with beasts roaming around causing mayhem. Actions the player can take are presented below the grid. The graph on the right side shows the city’s mayhem level over time, a measure of the player’s performance.

To test the predictions of procedural rhetoric, we decided to make our own game. This started with a board game I designed based on a classic metaphor paper that framed a city's crime problem as either a Virus or a Beast. After many iterations of collaborative testing and design, we had four versions of the game: "Crime as a Beast," "Crime as a Virus," "Literal Beast," and "Literal Virus." In "Crime as a Beast," the player chases down individual criminals and locks them away, implying that crime is a problem of individual criminals who need to be removed from society. In "Crime as a Virus," the player enacts social programs to decrease the likelihood of crime spreading, implying that crime is a more systemic problem that can be addressed with economic reforms. The "Literal Beast" and "Literal Virus" versions use the same mechanics as their respective crime games, but the player instead deals with literal beasts or viruses in a city. With these four versions, we could separately manipulate both theme (crime, beast, or virus) and mechanics (beast or virus).
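The contrast between the two mechanic sets can be sketched in a few lines of Python. This is a toy illustration, not the actual game's rules: the grid, the spread rule, and the mayhem measure here are all assumptions made for the example.

```python
import random

def beast_action(threats, target):
    """'Beast' mechanics: chase down and remove one individual
    threat (a beast, or a criminal) from the map."""
    return threats - {target}

def virus_action(protected, target):
    """'Virus' mechanics: a social program does not remove existing
    threats; it marks a cell so new ones cannot take root there."""
    return protected | {target}

def spread(threats, protected, rng, grid=6):
    """Hypothetical spread rule: each turn, every threat may seed a
    neighbouring cell unless that cell is protected."""
    new = set(threats)
    for (x, y) in threats:
        nx = (x + rng.choice([-1, 0, 1])) % grid
        ny = (y + rng.choice([-1, 0, 1])) % grid
        if (nx, ny) not in protected:
            new.add((nx, ny))
    return new

def mayhem(threats):
    """Stand-in for the performance graph: total mayhem, here simply
    the number of active threats on the map."""
    return len(threats)
```

The design point the sketch captures is that the two rule sets act on different levels: beast actions shrink the current threat set directly, while virus actions leave it untouched and instead constrain where `spread` can place future threats.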

The Study

We carried out an experiment with 110 participants. Experimental sessions were run by a team of undergraduate research assistants and high school interns, overseen by Barrett and myself. Participants played one of the literal versions of the game and one of the crime versions, then were asked whether they thought the game was making an argument and, if so, what that argument was. We also assessed opinions about crime policy before and after play, to measure any changes in opinion.

We collaboratively developed a qualitative coding scheme with our team of research assistants to categorize participants' free-text responses about what argument the game was making.

Questions and Findings

Was an argument perceived?

Participants overwhelmingly (83%) said that the game was making an argument. Scores on the rhetorical content scale, which we developed in a previous project, were comparable to those of other clearly rhetorical games.


What argument was perceived?

Responses were quite mixed. Around a third of participants thought the game made a pro-prevention argument (that social reform is more effective than harsh enforcement), while about a sixth saw the game as pro-enforcement.

Surprisingly, the perceived arguments did not differ between versions of the game.

Did the game influence attitudes or preferences?

Playing a particular version of the game did not seem to cause a reliable change in the participants’ attitudes towards crime or their preferences about crime policy (e.g. how city funds should be spent).


What did we learn from in-game actions?

Participants tended to shift their policy preferences to align with their in-game actions. For example, a player who spent in-game resources on after-school programs would rate those programs as more effective at reducing crime after playing the game than they did before playing it.