This article was originally published in Battles magazine #3, and was cited in Dr Sabin's book Simulating War.
Dr James Sterrett (of the US Army CGSC) has previously offered his thoughts on the topic, as well.
Games and Sims for Training and Learning
This is a favorite topic of mine, and the Origins War College has hosted several panels over the last few years devoted specifically to the basic topic of “What is a game and what is a sim, and what can we do with them?”
In the world of military training, games and simulations have developed over the years from map exercises to elaborate digital virtual reality exercises. In many cases, the tools developed for training have come from commercial products, or were later converted into commercial products, and thus the wargaming community has the opportunity to poke, prod, and play with tools comparable to those used to train soldiers and sailors around the globe. As the world has gone digital, many of these tools and toys have moved off tabletops and onto computer screens, but the underlying heart – the game engine – is still of great interest to gamers.
Mapping the conceptual terrain bounded by games-simulations-exercises is more than just an intellectual exercise, as it allows us all to establish a common conceptual framework and vocabulary for the future discussion of the utility of these games and sims. It also allows us to discuss with more specificity our exact likes and dislikes, and our level of comfort with the features, processes, and underlying mechanisms that make them go.
When describing the use of games for training and learning purposes, there are several concepts that must first be understood, and their meanings agreed upon, before the best use of games and simulations can be determined.
One of the first distinctions I try to draw is the difference between “training” and “learning.” While the exact definitions may be the subject of much long and arduous debate, for now, let us try to agree on the following: “Learning” will be described as “acquisition of a new skill” in which the learner possesses no expertise, and perhaps only passing familiarity. Most American readers would probably have to “learn” the sport of team handball; they may have stumbled across it in the Olympics, but likely have no idea how it is played. “Training” will be described as “practice of an existing skill.” Using another sports analogy, once someone has learned the basics of dribbling a soccer ball, increasing the complexity of the drill with cones or live opponents would be considered ‘training’ rather than ‘learning.’
Again, as with the tasks, it is important to draw distinctions between ‘games’ and ‘simulations’. Although there may be some overlap, games are not necessarily simulations, and simulations are not inherently games. This confusion is evident in how interchangeably the terms are used: even in the journal Simulation & Gaming, articles clearly use the words as synonyms.
The key determining factors in distinguishing between simulation and gaming, or determining their overlap, are the twin concepts of competition and abstractness. Games are inherently competitive. There are defined criteria for winning, and a measurable way of determining the winner. Simulations need not have a winner, and the endpoint may be redefined as needed to meet the needs of the training, but they have a high level of detail, abstracting as little as possible within the ‘interface’.
Thus a game must have a way of determining a score. A flight simulator on a computer may be used to simply explore a landscape, with no regard to record-keeping; in fact, military terrain-overview software shares many underpinnings with flight simulators. Alternatively, it may be used to stage a race between competitors. However, it is not a game until an agreed-upon criterion for determining a winner is established.
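To make that distinction concrete, here is a minimal sketch of the idea that a win criterion is what is layered *on top of* a simulation to make it a game. Every name here (`FlightSim`, `race_winner`, the timing data) is invented for illustration, not taken from any actual training software:

```python
class FlightSim:
    """A bare simulation: state is tracked, but nothing is 'won'."""
    def __init__(self):
        self.elapsed = {}  # pilot -> seconds taken to fly the course

    def record_finish(self, pilot, seconds):
        self.elapsed[pilot] = seconds

def race_winner(sim):
    """The agreed-upon criterion that turns the sim into a game:
    lowest elapsed time wins."""
    if not sim.elapsed:
        return None  # no competitors yet -- still just a simulation
    return min(sim.elapsed, key=sim.elapsed.get)

sim = FlightSim()
sim.record_finish("Alice", 312.4)
sim.record_finish("Bob", 298.7)
print(race_winner(sim))  # Bob wins on the lower time
```

Note that `FlightSim` itself is unchanged whether or not `race_winner` exists; the competitive overlay is entirely separate, which is the point of the paragraph above.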
Additionally, games need not attempt to model the actual behaviors of the task in order to be a competitive tool for learning. A quiz game could be used for learning new terminology, for example. No one expects the performance of an EMT in the field to include a quiz on the proper terminology for performing triage on an injured patient. But the use of a game in the learning of the terminology, outside the scope of the performance of the EMT duties, may increase the motivation of the learner, and increase the amount of enjoyment felt by the participants in the learning environment.
Theorists and designers as diverse as Costikyan, Prensky, and Asgari have all described detailed lists of the attributes of a game, but all three can agree on the basic tenet of ‘competition’ – you don’t have a game until there’s a way to keep score. Until you keep score, it’s just a toy.
Simulations are often far more complex than games, which may bear little, if any, resemblance to the actual task. In the commercial world of games, it is popular to describe almost anything as a ‘simulation’ as a way of demonstrating a product’s complexity or ‘realness,’ regardless of how accurately the game reflects reality.
Chess is a game of combat. However, it is a very abstract game, with artificial constraints placed on it for the sake of balancing gameplay. Does anyone even remotely familiar with military arts really believe that knights can only move in an L-shaped pattern? Similarly, the US Army uses a computer system known as JANUS for simulating combat to train battle staffs. JANUS includes algorithms to track ammunition and fuel consumption, account for terrain in the visibility of units and the speed at which they travel, the presence of smoke or fire on the battlefield and their effects on optics, and innumerable other behind-the-scenes calculations that accurately reflect the battlefield. Both Chess and JANUS purport to represent action on a battlefield, but one is a far more complex and accurate representation of reality than the other.
Plotted on the axes of realism & competition, games and sims overlap, but are not identical
Simulations seek to re-construct components of reality. The level of “reality” in these reconstructions is variable, however; certain abstractions must be made in order to make the simulation compact and usable. To return to the example of the flight simulator, above: the laws of physics are recreated by the software, but the actual cockpit is abstracted and the participant uses a computer keyboard to simulate the cockpit controls.
Thus, games are by nature competitive, but may be abstract or loaded with realistic complexity. Simulations are by nature complex, but need not be competitive. As noted, there exists little differentiation between these two concepts in the existing literature; ‘game’ and ‘simulation’ are used almost interchangeably throughout.
When represented visually, games and simulations exist on two different axes: competition, and realism. Games exist at one end of the “competition” axis, but may be very realistic, or very abstract. Simulations reside on one end of the “realism” axis, and may be competitive, or non-competitive. There also exists an ‘overlap’ area, where highly-realistic, competitive products are both games and simulations.
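As a toy illustration of that two-axis model, the four regions can be sketched as a simple classifier. The labels follow the article; the numeric realism scale and the 0.5 threshold are my own invention for the sake of a runnable example:

```python
def classify(competitive: bool, realism: float) -> str:
    """Place an activity on the competition/realism plane.

    realism is a subjective 0.0-1.0 estimate of how faithfully
    the activity models reality (threshold chosen arbitrarily).
    """
    high_realism = realism >= 0.5
    if competitive and high_realism:
        return "game-sim"    # the overlap: competitive AND realistic
    if competitive:
        return "game"        # competitive but abstract (e.g. chess)
    if high_realism:
        return "simulation"  # realistic but unscored (e.g. free-flight sim)
    return "toy"             # neither scored nor realistic

print(classify(competitive=True, realism=0.2))   # chess-like -> "game"
print(classify(competitive=False, realism=0.9))  # JANUS-style terrain walk -> "simulation"
```

The point of the sketch is simply that the two judgments are independent: adding scoring moves an activity along one axis without moving it on the other.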
And thus we have some tools/toys that qualify as both: game-sims that overlay a competitive set of rules over a complex re-creation of some facet of reality. In overlaying a competitive framework, one must consider the end-state at which the competition will be judged. While pure games have such end-states built in, simulations converted to games need to ensure they are well-defined beforehand, lest the participants become frustrated or confused, or worse, work toward the wrong goal.
It is important to note that the end-state criteria listed for simulations need not be a competitive comparison at the end. They certainly could be, but these comparisons are presented merely to show the parallels between the possible end-points for both games and simulations.
How do these match up with the training and learning of tasks?
Building on the research of Greitzer, Kuchar, and Huston (2007), their "zone of learning" falls nicely on this two-axis model of games and sims.
With low complexity and clearly defined correct/accurate outcomes, tasks at this stage are well suited to the use of games in learning them. Performance measures at this stage are often articulated as part of the task, enabling new learners to measure their own task performance. Overlaying a competitive framework on these performance measures is not a great conceptual leap.
As the complexity of tasks is ratcheted up, the outcomes become less cut-and-dried. While there are individual tasks embedded within the larger, more realistic/complex framework of group tasks or compound-individual tasks, the overall outcomes are colored more in shades of gray than black-and-white.
Training – the practice of known skills – thus moves into more complex terrain, and into tasks less well suited to a competitive framework. Moving along both axes, from low-complexity/competitive toward high-complexity/non-competitive, situates training in a more realistic environment.
When overlaid on the earlier visual representation of games and simulations, a clear directional movement emerges in which learning moves toward training. As this happens, participants are moved from abstract competitive games (such as quiz games measuring rote memorization) toward more realistic, less competitive simulations in which multiple paths to success may exist.
Evaluated in a realm where the process may be more important than the outcome, training may be artificially constrained by time or location, limiting how fully it can reflect the complexity of ‘reality.’ But training frequently involves the synthesis of many skills, in which the tasks being trained are complemented by other, essential, tasks that may not be the subject of the evaluations, but are nonetheless indispensable for the overall performance of the mission.
Within this rather simple set of definitions, we’ve actually seen some complex ideas evolve. Learning is an activity that is best started in small chunks – we see this in wargames that introduce only a few rules at a time (such as the excellent ASL Starter Kits). These types of simplified environments are perfect for a competitive exercise, as there are limited options with which to keep score.
However, as the complexity increases, the abstract activity moves more towards a simulation and the participant is now juggling many different resources and decisions. It is at this level of complexity that many professionals find themselves when training, rather than playing a game. However, it is also this level of detail and fidelity that attracts historical gamers, interested in immersing themselves in the rich conflicts modeled on their tabletops. And thus gamers find themselves in that uneasy gray area where their entertainment is someone else’s training tool, and their competition is more detailed than the base learning environment of someone exposed to new tasks. And hey – we do this for fun!
Some excerpted paragraphs from my dissertation draft that are relevant to the above. Cites may be found in the bibliography tab on GrogNews.
Perhaps most difficult in the realm of learning game design is matching the type of game, its complexity, and its scope with the espoused learning objectives of those responsible for the training. Game? Simulation? Some combination thereof? Narayanasamy et al. (2006) attempted to distinguish between them (table, left). Although focused on digital games, the authors clearly distinguish between games and simulations, with a middle ground between them. Plotted graphically, these distinctions might result in a two-axis model. Following Costikyan’s (2002) guidance, games are categorized as inherently competitive, and this is consistent in the above table, in which games are characterized as possessing a defined end-state. The other axis – realism – attests to the key ingredient of a simulation. This four-block model includes “simulations” spanning across the range of competitive to non-competitive activities, and “games” as perpetually competitive, but ranging from great abstraction to highly realistic.
Greitzer, Kuchar, and Huston (2007) proposed a similar two-axis model for identifying an optimal zone for learning and motivation (graphic, right). The two axes are “game difficulty” and “ability/level of learning”, and roughly correspond to the two axes above.
By: Brant
This isn't the first time I've seen graphs like this: http://giantbattlingrobots.blogspot.com/2008/12/fear-of-failing.html
The point being, it looks to me like the aspects of good training and good games are one and the same thing.
I'll try to expand on that after I'm back from a trip.