Edit: This is now part of a series of posts exclusively about the development of a procedural narrative system.
Part 1: Game Visions: A Roleplay-Inspired Procedural Narrative System
Part 2: Game Visions: Modeling Human Behavior and Awareness
This article is part two of a series covering interactive procedural narrative. Previously, we covered some elements that could be involved in the development of a robust, open-ended middleware application that can procedurally generate the building blocks of a narrative: sequences of associations between narrative entities. You can read the original article here.
As a quick review, here are the relevant terms…
- Narrative Entity: a Character, Item, Place, or piece of Lore, i.e. any “thing” that is relevant to the narrative in some way.
- Storyteller facade: decides how content is generated (including the preparation of in-game events). Acts like a dungeon master.
- Agent facade: responsible for the decision-making of a willed Narrative Entity. Acts like a role-player.
My first thought was the simplest: model relationships with a graph, with nodes as narrative entities and the lines between them as relationships. However, I saw several issues with this. It captures nothing about interactions, about how narrative entities engage in them, or about how those interactions affect the entities’ relationships.
In search of a more informative model, I found myself inspired by game engines such as Unity and Unreal Engine 4. Each of them sports an entity-component system whereby all game objects are composed of concrete behaviors that define them and the “type” of an object is interpreted logically based on the combination of its behaviors.
Under this model, I can’t create a “dragon” directly. I can create a “scale-armored”, “flying”, “fire-breathing”, “animalistic” entity that “periodically attacks villages” which I simply label as a dragon. Each of those attributes can be added or removed whenever the system needs, allowing for highly fluid and flexible actors. Relationship modeling could be improved by incorporating this design.
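To make the idea concrete, here is a minimal sketch of that composition style in Python. The trait names are the ones from the paragraph above; the `Entity` class and `looks_like_dragon` check are hypothetical illustrations, not an actual engine API.

```python
# A minimal sketch of trait composition: the "dragon" label is inferred
# from an entity's current traits, never declared as a fixed type.
class Entity:
    def __init__(self, *traits):
        self.traits = set(traits)

    def add(self, trait):
        self.traits.add(trait)

    def remove(self, trait):
        self.traits.discard(trait)

# The "type" is a logical interpretation of the trait combination.
def looks_like_dragon(entity):
    return {"scale-armored", "flying", "fire-breathing",
            "animalistic", "attacks-villages"}.issubset(entity.traits)

wyrm = Entity("scale-armored", "flying", "fire-breathing",
              "animalistic", "attacks-villages")
assert looks_like_dragon(wyrm)

wyrm.remove("flying")   # traits can change whenever the system needs
assert not looks_like_dragon(wyrm)
```

Because the label is derived rather than stored, adding or removing a single trait is enough to change what the entity logically “is.”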
What We Need To Model
Assuming interactions involve an owning “source” entity and a targeted “destination” entity, we must model…
- the entities that exist in a narrative.
- the relationships they have to each other.
- the role a given entity has in a given relationship.
- the possible interactions that entities in a given relationship could engage in.
- the probable interactions a given entity in a relationship would engage in.
- the degree to which an owning entity’s interaction meets, exceeds, or fails to meet the target entity’s expectations of the relationship (which in turn implies the effect an interaction will have on the relationship).
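The checklist above suggests some natural data shapes. Here is one hypothetical sketch; the field names and the `expectation_match` scoring are illustrative assumptions, not a fixed schema.

```python
from dataclasses import dataclass, field

# Hypothetical data shapes for the modeling checklist; names are illustrative.
@dataclass
class Interaction:
    source: str   # owning entity
    target: str   # targeted entity
    kind: str     # e.g. "gift", "insult"

@dataclass
class Role:
    name: str                                         # e.g. "father"
    expectations: dict = field(default_factory=dict)  # kind -> weight 0..1

@dataclass
class Relationship:
    roles: dict = field(default_factory=dict)    # entity name -> Role
    history: list = field(default_factory=list)  # past Interactions

def expectation_match(rel: Relationship, act: Interaction) -> float:
    """How strongly the target expected this interaction of the source's
    role: a high value meets expectations, 0.0 falls outside them."""
    role = rel.roles.get(act.source)
    return role.expectations.get(act.kind, 0.0) if role else 0.0
```

Comparing `expectation_match` across a relationship’s history is one way to quantify the “meets, exceeds, or fails to meet” point above.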
In addition, we want the same model to be a viable method of simulating non-interactive information such as an Agent’s awareness of its relationships to other entities. This will allow for a full social simulation, complete with the need for information-gathering and the presence of limited information, misinformation, and deception. As such, we must also model…
- an Agent’s awareness of a narrative entity.
- an Agent’s awareness of another Agent’s interactions.
- an Agent’s awareness of others’ perceptions, i.e. “Do I know that you know?”
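The third item, nested awareness, can be modeled as perception records that point either at an entity or at another perception. This is a hypothetical sketch of that recursion; the names are placeholders.

```python
from dataclasses import dataclass
from typing import Union

# A perception points at an entity, or at another perception,
# allowing chains like "A knows that B knows that A knows the secret".
@dataclass
class Perception:
    observer: str
    subject: Union[str, "Perception"]

secret = "the_heir_lives"
i_know = Perception("A", secret)                   # A knows the secret
b_knows_i_know = Perception("B", i_know)           # B knows that A knows
i_know_b_knows = Perception("A", b_knows_i_know)   # "Do I know that you know?"

def depth(p) -> int:
    """How many levels of awareness a perception chain contains."""
    return 1 + depth(p.subject) if isinstance(p, Perception) else 0

assert depth(i_know_b_knows) == 3
```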
Finally, it is possible that entities can be associated with one another indirectly via space or time. It will be important for an Agent to be able to identify associations of this sort if it is to draw accurate conclusions from perceptions.
Proposed Model: Interactions
In order to handle the additional complexity of these connections, let’s imagine a new logical representation. Suppose a graph models interactive connections not as lines, but as spheres whose poles sit at each pair of nodes’ locations. Each node represents a narrative entity. An interaction between narrative entities is a directed connection from a source node (the owner) to a destination node (the target) that runs along the surface of a given sphere layer. Interactions can be logically associated with a relationship, where each endpoint of the connection is logically associated with a role that matches the relationship (such as “father”-“son” or “master”-“apprentice”). Each role has expectations (anticipated interactions) associated with it, and to develop the quality of the relationship, expected interactions must be executed.

The combination of all minor relationships between two narrative entities establishes the major relationship, represented by the sphere itself. You can picture an application that groups or color-codes the arcs running along the sphere based on a suspected role or relationship perception.
Each sphere would have three layers and a core.
- Core: the data permanently associated with the relationship. Includes a history of interactions.
- Outer Layer: “Possible Interactions”
- The Storyteller pulls from a database of crowd-supplied interactions (assumed to be large), filters them for dramatic relevance to the relationship using the story’s history and the relationship’s Core, and populates this layer with that narrowed subset.
- This layer ensures that the only decisions that will be available to an Agent are decisions that lead to an interaction with something dramatically relevant.
- Middle Layer: “Expected Interactions”
- The Storyteller populates this layer by filtering interactions in the Outer Layer based on level of expectation in the relationship.
- Each interaction in this layer is associated with a float value from 0 to 1 indicating the degree to which it is expected.
- Inner Layer: “Probable Interactions”
- The Agent pulls from the Outer Layer and factors in the relationship’s Core along with the traits, personality, goals, and attributes of its associated Narrative Entity (usually a Character) to determine how it wants to update the status of the relationship. The more its own goals are furthered by maintaining a good relationship, the more likely it is to engage in the interactions associated with the relationship’s expectations (to preserve that relationship). Likewise, if it does not care for the relationship, it may treat those interactions as a lower priority, act against them, or disregard them entirely.
- The “quality” of a relationship can be calculated as how close the history of interactions matches each party’s expectations of the other (the Core mapped against the Middle Layer).
- The Inner Layer is used exclusively as a decision-making pool for future actions, and has no direct bearing on the Outer or Middle Layers.
- The Agent randomly selects an interaction from this layer, promoting the variability of the system while retaining the logical and relational consistency of the narrative.
- The Agent’s selected interactions from this layer are recorded in the Core to be used in filtering future possible and expected interactions.
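The layer pipeline above can be sketched end to end. Everything here is an assumption for illustration: the relevance filter, the expectation weights, the Agent’s “caring” factor, and the small floor that keeps unexpected options available.

```python
import random

# A minimal sketch of the Core + three-layer pipeline described above.
def build_outer(all_interactions, core, dramatically_relevant):
    # Storyteller: narrow the crowd-supplied pool to what matters here.
    return [i for i in all_interactions if dramatically_relevant(i, core)]

def build_middle(outer, expectation_weight):
    # Storyteller: attach a 0..1 expectation value to each possibility.
    return {i: expectation_weight(i) for i in outer}

def build_inner(outer, middle, cares: float):
    # Agent: the more it values the relationship, the more weight
    # expected interactions get; a small floor keeps every option open.
    return {i: middle.get(i, 0.0) * cares + 0.1 for i in outer}

def choose(inner):
    # Weighted random pick: variability with relational consistency.
    acts, weights = zip(*inner.items())
    return random.choices(acts, weights=weights, k=1)[0]

def quality(core_history, middle):
    # Relationship quality: how well past interactions (Core) match
    # the expectations recorded in the Middle Layer.
    if not core_history:
        return 0.0
    return sum(middle.get(i, 0.0) for i in core_history) / len(core_history)
```

A chosen interaction would then be appended to the Core’s history, feeding the next round of filtering.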
Interactions may also depend on other interactions. Dependent interactions can be illustrated in this model as pole-to-pole segments of the sphere stacked on top of one another, where lower-level interactions may need to be completed before higher-level ones. The degree of necessity can also vary: in one case it may be literally impossible for an interaction to take place before another interaction (can’t happen), while in other cases an Agent may simply be disinclined to engage in an interaction before another has occurred (not likely to happen).
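That hard-versus-soft distinction might look like this. The dependency names, the two strength levels, and the 0.25 discouragement factor are all assumptions for the sketch.

```python
# A sketch of variable-strength dependencies: hard prerequisites block an
# interaction outright, soft ones merely lower an Agent's inclination.
HARD = "cannot_happen_before"
SOFT = "unlikely_before"

def availability(interaction, deps, completed):
    """Return a 0..1 inclination multiplier given prerequisite state."""
    score = 1.0
    for prereq, strength in deps.get(interaction, []):
        if prereq not in completed:
            if strength == HARD:
                return 0.0          # literally impossible for now
            score *= 0.25           # assumed discouragement factor
    return score

deps = {"marry": [("court", HARD)], "confide": [("befriend", SOFT)]}
assert availability("marry", deps, completed=set()) == 0.0
assert availability("confide", deps, completed=set()) == 0.25
assert availability("confide", deps, completed={"befriend"}) == 1.0
```

The multiplier slots naturally into the Inner Layer weighting: a blocked interaction drops out of the decision pool, a discouraged one just becomes less probable.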
Proposed Model: Predictions
Because different types of interactions may be capable of fulfilling a given expectation, we can interpret an interaction as something with inputs, outputs, and tags that help connect it to an expectation. An interaction may (perhaps only optionally) require entities in a given state as inputs, and would output a resulting (usually changed) state in one or more other entities. Relationships, and the roles composing them, are therefore functionally just an aggregation of the historical and expected interactions associated with them.
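An interaction type under this interpretation might be shaped like the following. The state strings and tag vocabulary are placeholders; in the proposed system the tags would be crowd-supplied.

```python
from dataclasses import dataclass, field

# A sketch of an interaction as inputs, outputs, and tags.
@dataclass
class InteractionType:
    name: str
    inputs: dict = field(default_factory=dict)   # role -> required state
    outputs: dict = field(default_factory=dict)  # role -> resulting state
    tags: set = field(default_factory=set)       # hooks to expectations

rescue = InteractionType(
    name="push_out_of_danger",
    inputs={"target": "in_danger"},                            # precondition
    outputs={"target": "safe", "source": "exposed_to_danger"}, # state change
    tags={"protect", "safeguard", "selfless"},
)
```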
The logical assignment of roles and relationships assists in modeling Agent predictions. If a given Agent is to appear intelligent, it must be able to try to predict the actions of others. It would do so by analyzing perceived interactions and learning to associate them with given roles and relationships. The Agent would then use its native knowledge to examine its understanding of the expected interactions between entities in those roles. Ideally, it would accurately identify the subsequent interactions of another Agent. These predictions would additionally be colored by the Agent’s knowledge of the target Agent’s personality, traits, past, and so on.
For example, suppose Agent A executes the interaction of pushing Agent B out of the way of an oncoming car. This interaction could be tagged by users of our application with words like “protect”, “safeguard”, and “selfless”. One relationship archetype that someone could logically assign to this kind of interaction could be a Parent-Child relationship where Agent A has the role of Parent and Agent B that of Child. Assuming people have also matched tags to the Parent-Child relationship already, the Storyteller and other Agents may begin to predict the history and expectations of A’s and B’s relationship. Those predictions may prove wrong as additional information comes to light that conflicts with that logically-assigned relationship, or future information could enhance the probability that it is an accurate assessment. Agent C, observing the interaction, could then form predictions of A’s future interactions based on the probability that it will engage in behavior that falls in line with the supposed role’s expected future actions.
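One simple way to guess an archetype from a perceived interaction is tag overlap. This sketch uses Jaccard similarity; the archetypes and their tags stand in for crowd-supplied data.

```python
# A sketch of scoring relationship archetypes against a perceived
# interaction's tags using Jaccard overlap.
def archetype_scores(interaction_tags, archetypes):
    scores = {}
    for name, tags in archetypes.items():
        union = interaction_tags | tags
        scores[name] = len(interaction_tags & tags) / len(union) if union else 0.0
    return scores

archetypes = {
    "parent-child": {"protect", "nurture", "selfless", "teach"},
    "rivals":       {"compete", "undermine", "selfish"},
}
observed = {"protect", "safeguard", "selfless"}  # the push-from-the-car rescue
scores = archetype_scores(observed, archetypes)
assert scores["parent-child"] > scores["rivals"]
```

As the paragraph notes, these scores are provisional: each new perceived interaction can strengthen or undercut the guessed archetype.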
Proposed Model: Perceptions
Perceptions can be included in the model as straight-line connections from a narrative entity to either another narrative entity, an interaction between two narrative entities, or another perception line. In this way, we can indicate a narrative entity’s awareness of other entities, their interactions/potential relationships, and an observed entity’s perception of others.
It is also important that inter-perception connections can stack to around five layers deep. Mind games are an important aspect of simulating realistic human interaction. I’ll demonstrate this point with an example:
- There exists a secret, a type of Lore, since it is factual information within the narrative universe. I know that secret (a perception line from me, the Agent, to the other narrative entity, the secret). I may now take actions that others wouldn’t, since they aren’t privy to the information I have.
- My enemy, Agent B, is aware that I know the secret. This may prompt him to try to pry the information from me.
- My reaction to B’s attempts may be different based on whether or not I am aware he knows I hold the secret. If I know he knows I know, then I may be more cautious of handing out any related information if I don’t want to risk him learning the secret.
- If B in turn is aware that I’m on to him, he may change his tactics in attempting to acquire the information he seeks.
- I may also be self-aware enough that I realize my awareness of him could spook him, leading me to take actions that assume he might change tactics or come at me more directly, etc.
The degree to which an Agent plays these mind games could just be a function of the Agent’s insight (accuracy of predictions) and perceptiveness (breadth/depth of environmental understanding), attributes that are variable from Agent to Agent.
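That function could be as simple as the following sketch. The five-layer cap comes from the discussion above; the geometric-mean combination of insight and perceptiveness is an assumption.

```python
# A sketch of bounding mind-game depth by per-Agent attributes.
def reasoning_depth(insight: float, perceptiveness: float,
                    max_depth: int = 5) -> int:
    """Map 0..1 attributes to how many levels of 'I know that you
    know...' an Agent will consider."""
    return max(1, round(max_depth * (insight * perceptiveness) ** 0.5))

assert reasoning_depth(1.0, 1.0) == 5   # a master manipulator
assert reasoning_depth(0.1, 0.1) == 1   # takes things at face value
```

Keeping this per-Agent means two characters can witness the same exchange and walk away with very different layers of suspicion.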
For the last category of perceptions, the indirect associations between objects, we must ensure that any perception of a narrative entity or interaction is mapped within a temporal-spatial rendition of reality unique to each Agent. That is, an individual Agent must be aware of where and when perceptions were encountered in order to be able to tie together connections between seemingly unrelated elements of their reality.
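A per-Agent episodic record is one way to support this. In the sketch below, the `Episode` fields and the co-location window are hypothetical; the point is that keeping where and when alongside what lets an Agent surface candidate indirect associations.

```python
from dataclasses import dataclass

# A sketch of per-Agent episodic memory: each perception is tied to the
# place and time it was encountered.
@dataclass(frozen=True)
class Episode:
    subject: str    # entity or interaction perceived
    place: str
    time: float     # seconds on some shared clock

def co_located(memory, window=3600.0):
    """Pairs of perceptions from the same place within a time window:
    candidate indirect associations for the Agent to reason about."""
    pairs = []
    for i, a in enumerate(memory):
        for b in memory[i + 1:]:
            if a.place == b.place and abs(a.time - b.time) <= window:
                pairs.append((a.subject, b.subject))
    return pairs

memory = [Episode("hooded_stranger", "tavern", 100.0),
          Episode("stolen_amulet", "tavern", 900.0),
          Episode("stolen_amulet", "market", 90_000.0)]
assert co_located(memory) == [("hooded_stranger", "stolen_amulet")]
```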
For Agents to have a realistic sense of memory, each perception will need an associated “recall” coefficient that is a function of…
- how long it has been since the Agent perceived it.
- how strongly associated the Agent believes the perceived entity is to the Agent’s interests.
- If the Agents’ simulation of others’ interactions leads it to believe another Agent is highly relevant to an object of its concern (regardless of whether the target Agent is or is not, in fact, relevant), that should affect how accurately the Agent perceives and/or recalls details associated with that target Agent. Consider the example of an obsessed detective tracking the subtle details of an old cold case suspect.
- a generalized recognition attribute associated with the Agent directly (how good are they at remembering things in general?).
- (optionally) how close the perceived entity was to an Agent’s focal point of attention.
- (optionally) what method the Agent used to perceive it (sight, hearing, etc.).
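Combining those factors might look like the sketch below. The exponential decay, the interest-stretched half-life, and the simple multiplicative weighting are all assumptions, not tuned values.

```python
import math

# A sketch of a recall coefficient combining the factors listed above.
def recall(age_seconds: float,
           interest_relevance: float,    # 0..1, perceived tie to the Agent's concerns
           recognition: float,           # 0..1, general memory attribute
           focus: float = 1.0,           # 0..1, closeness to focal point
           modality: float = 1.0) -> float:  # 0..1, e.g. seen vs overheard
    # Interest stretches the half-life: the obsessed detective forgets slowly.
    half_life = 86_400 * (1 + 9 * interest_relevance)
    decay = math.exp(-age_seconds * math.log(2) / half_life)
    return decay * recognition * focus * modality

fresh = recall(age_seconds=60, interest_relevance=0.9, recognition=0.8)
stale = recall(age_seconds=10 * 86_400, interest_relevance=0.1, recognition=0.8)
assert fresh > stale
```

A perception whose coefficient falls below some threshold could be degraded or dropped from the Agent’s temporal-spatial map, which is exactly the limited-information behavior we want.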
Any model we use to simulate human relationships will best be served by analyzing the types of interactions that exist between humans and the potential effects of those interactions on relationships. It will also need to accurately simulate an Agent’s perception of its environment and the Agent’s formulation of predictions both regarding static surroundings and the behavior of other Agents.
The model I have proposed incorporates a combination of tri-layered sphere-connections to model interactions and traditional line-connections to model perceptions. Relationships and roles are assigned purely through logical interpretations of interactions, providing a highly robust and flexible presentation of the data linking narrative entities.
Hope you found this model useful and/or illuminating. Comments and suggestions are, as always, gladly accepted.