Godot: the Game-Changer for GameDevs

Edit 1: Updated Graphics Comparison section with before/after shots of Godot and accuracy corrections.

Edit 2: There was some confusion over whether the Deponia devs used Godot for their PS4 port. I had removed that content, but now they have officially confirmed that Godot WAS in fact used for the PS4 port. Links in the Publishing section.

Edit 3: I’ve received questions regarding the preference for a 2D renderer option, so I’ve added a paragraph explaining my motivations in the Graphics section.

Edit 4: Godot lead developer Juan Linietsky mentioned in a comment a few points of advancement present in the new 3D renderer. The Graphics section has been updated with a link to his comment.

Edit 5: I have personally confirmed that the GDNative C++ scripting on Windows 10 x64 is now fully functional in a VS2015 project. Updating the Scripting section.

Edit 6: I have received new information regarding the relative performance of future Godot scripting options. I have therefore updated the scripting section.

Edit 7: Unity Technologies announced that they are phasing out UnityScript. Updating the Scripting section. Also, making a correction to Unreal’s 2D renderer description in Graphics.

Edit 8: Unreal recently added C# as a scripting language. I’ve updated the Scripting section accordingly.

Edit 9: More “official” blog posts / tutorials have been made explaining other aspects of the Godot Engine, so I am including links to those where appropriate.

Edit 10: Adding a GM:S API diff link.

Edit 11: Including addendums to each section for GameMaker: Studio, because I frequently see people despairing that this article doesn’t include it in its comparisons.


I’ve been tinkering with game development for around 5 years now. I’ve worked with GameMaker: Studio (1.x, though I’ve seen 2.0), Unity, and Unreal Engine 4 along with some exploration of Phaser, Construct 2, and custom C++ engine development (SFML, SFGUI, EntityX). Through my journey, I’ve struggled to find an engine that has just the right qualities, including…

  • Graphics:
    • Dedicated 2D renderer
    • Dedicated 3D renderer
  • Accessibility:
    • Powerful scripting capabilities
    • Strong community
    • Good documentation
    • Preferred: visual scripting (for simpler designer/artist/writer tooling)
    • Preferred: Simple and intuitive scripting architecture
  • Publishing:
    • A large variety of cross-platform support (non-negotiable)
    • Continuously expanding/improving cross-platform support (non-negotiable)
  • Cost:
    • Low monetary cost
    • Indie-friendly licensing options
  • Customization:
    • Ease of extending the editor for custom tool creation
    • Capacity for powerful optimizations (likely via C++)
    • Preferred: open source
    • Preferred: ease of editing the engine itself for community bug fixes / enhancements

For each associated field, I’ll examine various features of the top engines and include a comparison with the new contender on the block: Godot Engine, a community-driven, free and open source engine that is beginning to expand its graphics rendering capabilities. Note that my comments on the Godot Engine will be in reference to the newest Godot 3.0 pre-alpha currently in development and nearing completion.

Initial Filtering Caveats

Outside of the big 3, i.e. GM:S, Unity and UE4, everything fails on the publishing criteria alone since any custom or web-based engine isn’t going to be easily or efficiently optimized for publishing to other platforms.

The new GitHub for Desktop app is an example of Electron (web browser dev-tools on right).

It is true that technologies such as Electron have been developed that ensure that it is possible to port HTML5 projects into desktop and mobile applications, but those will inherently suffer limitations in ways natively low-level engines will not. HTML5 games will always need to rely on 3rd party conversion tools in order to become available for more and more platforms. If we wish to avoid that limitation, then that leaves us with engines written in C++ that allow for scripting languages and optimizations of some sort.

GM:S’s scripting system is more oriented towards beginners and doesn’t have quite the same flexibility that C# (Unity) or Blueprint (UE4) has. In addition, GM:S has no capacity for extending the functionality of the engine or optimizing code. The latest version has a $100 buy-in (or a severely handicapped free version). Without a reasonable free-to-use option available in addition to all of the other issues, GM:S fails to meet our constraints. That leaves only Unity, Unreal, and Godot.

Edit: for people who decide to try Godot, someone has started a repository for collecting API differences between Godot and GM:S.

Edit: Due to comments I’ve received over the past several months, I will add sections covering GameMaker: Studio. Note that while I have first-hand experience with version 1, I’ve only watched some comparison videos for version 2, so it is possible that I may have missed something.

Graphics Comparisons

There is no doubt that Unreal Engine 4 is currently the reigning champion when it comes to graphical power with Unity coming in at a close second. In regards to 2D renderers, it should be noted that…

  1. Unity has no dedicated 2D renderer; selecting a 2D environment just locks the z-axis / orthographic view of the 3D cameras in the 3D environment.
  2. Unreal’s dedicated 2D renderer, Slate, can be leveraged through the UMG widget framework for use in the 3D environment. Therefore, in a project consisting solely of UMG content, you can more or less use Unreal to get the benefits of operating solely within a 2D renderer. However, all of Unreal’s official 2D tooling and features (collision, physics, etc.) are confined to the non-UMG content, so it’s not exactly legitimate “support” for 2D rendering.
  3. GameMaker: Studio has a dedicated 2D renderer. From what I understand, they have begun to simplify the workflow for 3D rendering as well, although it still sounds like a laborious process to me.
  4. Godot has a dedicated 2D renderer (what it started with in fact) and a dedicated 3D renderer with similar APIs and functionality for each.

For those who might wonder why a dedicated 2D renderer is even significant, the main reason is the ease of position computation. If you were to shift an object’s position in 2D, the positions (at least in Godot) are described purely in terms of pixels, so it’s a simple pair of addition operations (one for each axis). An analogous operation in a 3D renderer requires one to map from pixels to world units (a multiplication), calculate the new position in world coordinates (3 additions) and then convert from world space to screen space (a matrix multiplication, so several multiplications and additions). Things are just a lot more efficient if you have the option of working directly with a 2D renderer.
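To make the operation counts concrete, here’s a minimal Python sketch of the two cases. This is purely illustrative: the function names and the identity view-projection matrix are my own assumptions, not Godot’s actual API.

```python
def move_2d(position_px, delta_px):
    # Dedicated 2D renderer: positions are already in pixels,
    # so a move is just one addition per axis.
    x, y = position_px
    dx, dy = delta_px
    return (x + dx, y + dy)

def mat_vec(m, v):
    # 4x4 matrix times 4-vector: the "several multiplications
    # and additions" of a screen-space projection.
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def move_3d(position_px, delta_px, units_per_px, view_proj):
    # 3D renderer round trip: pixels -> world units (multiplications),
    # additions in world space, then a matrix multiply to project
    # back toward screen space.
    wx = position_px[0] * units_per_px + delta_px[0] * units_per_px
    wy = position_px[1] * units_per_px + delta_px[1] * units_per_px
    wz = 0.0  # z unchanged for a 2D-style move
    clip = mat_vec(view_proj, [wx, wy, wz, 1.0])
    return (clip[0] / units_per_px, clip[1] / units_per_px)

IDENTITY = [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]
print(move_2d((100, 50), (10, -5)))                 # (110, 45)
print(move_3d((100, 50), (10, -5), 1.0, IDENTITY))  # (110.0, 45.0)
```

Same resulting move, but the 3D path pays for conversions and a matrix multiply on every update.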

For the 3D rendering side, Unity and Unreal are top dogs in the industry, no doubt about it.

This is a few iterations old, but you get the idea of where they stand (very impressive)

I had some trouble finding sensible 3D examples for GameMaker Studio 2. This was one of 3 images I discovered. It appears to be decent, but not quite of the same caliber as the others.


Godot 2.x’s 3D renderer left something to be desired compared with Unity/UE4 (similar to GMS, although the workflow for the same quality of work in Godot 2.x appears to be much simpler than in GMS). The showcased marketing materials on their website look like this:

Godot 2.x 3D demonstration, from the godotengine.org 3D marketing content

With Godot 3.0, steps are being taken to bring Godot closer to the fold. Pretty soon, we may start to see marketing materials like this:

A test demonstration of the 3.0 pre-alpha’s power, shared in the Godot Facebook group.

I’d say that’s a grand improvement, and this graphical foundation lays the groundwork for an impressive future if the words of Godot’s lead developer, Juan Linietsky, are to be taken to heart. (Edit: Juan recently published an article that goes into MUCH further detail regarding how the 3D renderer works. He also recently updated the docs for the Godot shading language)

It remains to be seen exactly how far this advancement will go, but if this jump in progress is any indication, I see potential for Godot 3.0 to enter the same domain of quality as Unity and Unreal.

Publishing Comparisons

Unity is by far the leader in publishing platform diversity with Unreal coming in second and Godot coming in last.

Unity’s cross-platform support as of July 2017


Unreal Engine 4’s cross-platform support as of July 2017
Game Maker’s platforms as of March, 2018

(Note that Switch support for GMS is on its way soon.)

Godot’s publicly disclosable cross-platform support as of July 2017

Note that for Godot specifically, it also has the capacity to be integrated with console platforms since it is natively written in C++; all you need is the development kit. For legal reasons, however, Godot’s free and open source GitHub cannot include (and thereby publicize freely) the integrated source code for these proprietary kits. Developers who already own a devkit can provide ports, but legally, the non-profit that manages Godot Engine (more on that later) cannot be involved. Despite this setback, the PS4 port of the game Deponia was implemented in Godot.

In addition, Godot 3.0 has recently conformed to the OpenHMD API, integrating functionality for all VR platforms that rely on that standard (so that would include HTC Vive, Oculus Rift, and PSVR). The community is gradually adding VR support, documentation, demonstrations, and tutorials.


All in all, Unity is still the clear leader here, but both Unreal and Godot provide a wealth of options for prospective developers to publish to the most notable and widespread platforms related to game development. As such, this factor tends to be somewhat irrelevant unless one is targeting release on one of the engine-specific platforms.

Licensing Comparisons

Unity’s free license permits the developer to craft projects with little-to-no feature limitations (only Unity-powered services are restricted, not the engine itself), so long as the user’s total revenue from a singular game title does not exceed $100,000 (as of the time of writing). The license does limit the number of installations you can have, however, as they are linked to a “Personal Edition” of the software. If you end up exceeding the usage limits, then you must pay for a premium license: a $35/month subscription (to double the revenue limit) or a $125/month subscription (to remove the revenue limit).

Unreal Engine 4 likewise has a free license; however, its license has no restrictions whatsoever on the size of your team or the number of instances of the engine you are using (distinct from Unity). On the other hand, it has a revenue-sharing license in which 5% of all income over $3,000 per quarter is contributed to Epic Games.


The licensing between these two platforms therefore can be more or less beneficial depending on…

  1. How long you plan to spend developing (if using Unity professionally).
  2. How quickly you expect the revenue from your game to roll in (if it dips into UE4’s 5% cut trigger).
  3. How much total revenue you expect to make from a single game (Unity’s revenue cap per title).
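To see how those factors trade off, here’s a quick back-of-the-envelope Python sketch. It is my own simplification of the figures quoted above (real licensing terms have more nuance than a one-liner each), and the example revenue numbers are hypothetical.

```python
def unreal_royalty(quarterly_revenues):
    # Unreal: 5% of all income over $3,000 per quarter goes to Epic.
    return sum(max(0.0, q - 3_000) * 0.05 for q in quarterly_revenues)

def unity_subscription(months, tier_monthly_cost):
    # Unity: a flat monthly subscription ($35 or $125 at time of
    # writing) once you outgrow the free Personal tier.
    return months * tier_monthly_cost

# A hypothetical title earning $50,000/quarter for a year:
quarters = [50_000] * 4
print(unreal_royalty(quarters))     # 9400.0 paid to Epic
print(unity_subscription(12, 125))  # 1500 paid to Unity
```

Under these assumptions, the faster and larger your revenue, the more Unreal’s percentage cut costs relative to Unity’s flat subscription; for a game that never earns much, the royalty model costs nearly nothing.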

GameMaker: Studio is priced from the very beginning. It supplies a free trial with limited asset usage (a bit of an extreme nerf in my opinion) and then ever-increasing purchase rates to get licenses that can publish to more platforms, ranging from $39 (desktop only) all the way to $399 (desktop, mobile, web, and console).

Godot, as you might expect, has no restrictions in any capacity since it is a free and open source engine. It uses the MIT license, effectively stating that you may use it for whatever purposes you wish, personal or commercial, and have no obligation to share any resources or accumulated revenue with the engine developers in any way. You can create as many copies of the engine as you like for as many people as you like. The engine itself is developed through the generous free work of its contributors and through a Patreon administered by the Software Freedom Conservancy.

In this domain, Godot is the obvious winner. The trade-off comes in the form of the additional tooling and effort you as a developer must invest to develop and publish your game with Godot. This and more we shall cover in the examinations below.

Scripting Comparisons

Unity officially supports Mono-powered C#. With some tweaking, you could potentially use other .NET languages too (like F#). If you end up needing optimizations, you are restricted to the high-level language’s typical methods of speeding things up. It would be more convenient and vastly more efficient if one could directly develop C++ code callable from the engine, but alas, this is not the case. Unity also doesn’t have any native visual scripting tools, although several paid editor extensions for this have been developed by the community.

Unreal Engine 4 is gaining a stronger and stronger presence due to its tight integration of C++, the powerful, native Blueprint visual scripting language, and its recent addition of Mono C#. Blueprint is flexible, effective, and can be compiled down into somewhat optimized C++ code. Unreal C++ is an impressive concoction of its own that adds reflection and garbage collection features commonly associated with high-level languages like C#.

Unreal’s Blueprint visual scripting language

GameMaker: Studio has its own scripting language, GameMaker Language (GML). It’s simple enough for beginners to use. What’s also cool is an even more beginner-friendly drag-and-drop programming system that can auto-translate to GML (at least in version 2). In version 2, these are all mixed into a unified visual scripting framework that connects things (in version 1, they are all disparate windows).

GameMaker Studio’s GML

It is in this area that Godot especially shines compared with the others. Previous iterations of Godot have had directly implemented C++ and an in-house Python-like scripting language called GDScript. GDScript was adopted after the developers had already tried Python, Lua, and other scripting languages and found all of them lacking in efficiency when dealing with the unique architectural designs that the Godot Engine implements. As such, GDScript is uniquely tailored for Godot usability in the same way that Blueprints are for UE4.

Later on, a visual scripting system called VisualScript was implemented that could function equivalently to GDScript. Godot 3.0 is also including native support for Mono C# to cater to Unity enthusiasts.

Godot’s VisualScript visual scripting language

The power that truly sets Godot 3.0 apart however is its inclusion of a new C API for binding scripted properties, methods, and classes to code implemented in other languages. This API allows any native or bound language’s capabilities to be automatically integrated with every other native or bound language’s dynamically linked functionality. The dynamically linked libraries are registered as “GDNative” code that points to the bound languages’ code rather than as an in-engine script, effectively creating a foreign interface to Godot’s API. This means that properties, methods, and classes declared and implemented in one language can be used by any other language that has also been bound. Bindings of this sort have already been implemented for C++ (Windows, Mac, and Linux). Developers are also testing bindings for Python (already in beta), Nim, and D. Rust and JavaScript bindings are in the works as well, if I understand correctly.

In comparing these various scripting options, C# will likely have better performance than GDScript, but GDScript is more tightly integrated and easier to use. VisualScript will be the least performant of these, but arguably the easiest for non-programmers to use. If raw performance is the goal, then GDNative will be the most effective (since it is literally native code), but it is the least easy to use of the group, as you have to create a separate build of the dynamic library for each target platform.


The “loose integration” this enables will empower any Godot developer to leverage pre-existing libraries associated with any of the bound languages such as C++’s enhanced optimizations/data structures, any C# Unity plugins that are ported to Godot, pre-existing GDScript plugins, and the massive library of powerful statistical analysis and machine learning algorithms already implemented by data research scientists in Python. With every newly added language, users of Godot will not have to resign themselves to the daunting “language barrier” that haunts game development today. Instead, they’ll be able to create applications that take advantage of every conceivable library from every language they like.

Edit: C# was recently merged in, and someone ran a comparison of the performance between GDScript, C#/Mono, and GDNative C++. In addition, here is a post I made on Reddit that goes more in-depth into the relationship between the engine’s scripting languages.

Framework Comparisons

Unity and Unreal have very similar and highly analogous APIs when it comes to the basic assets developers work with. There are the loadable spaces in the game world (the Scene in Unity or Level in Unreal). They then have component systems and a discrete root entity that is used to handle logic referring to a collection of components (the GameObject in Unity or Actor in Unreal). Loadable spaces are organized as tree hierarchies of the discrete entities (Scene Hierarchy in Unity or World Outliner in Unreal) and the discrete entities can be saved into a generalizable format that can be duplicated or inherited from (the Prefab in Unity or Blueprint in Unreal).

If you want to extend the functionality of these discrete entities, you then must create scripts for them. In Unity this is done by adding a new MonoBehaviour component within the 1-dimensional list of components associated with a game object. You can add multiple scripts and each script can have its own properties that are exported to the editor’s property viewer (the “Inspector”).

Multiple scripts can be added to a single GameObject if desired, but no relationship is defined between the components.

In Unreal, a discrete entity has an Actor-level tree hierarchy showing its components. Scripts, however, are not components themselves (although scripts can extend components too), but rather things directly added to the Actor Blueprint as a whole. An individual function may be created from scratch or by extending/overloading an existing one. One can also create Blueprint scripts disassociated from any entity as an engine asset (called a Blueprint Function Library). The bad news is that Blueprints aren’t just a file you point to, i.e. you can’t add the same script file to different Blueprints like you can with Unity’s C# files.

Components have their own hierarchy, but are merely variables in the scripting organized by context (event/function/macro) in Actors.

In GameMaker: Studio, you create “rooms” in which to place your objects, sprites, tiles, backgrounds, etc. Version 2 has added the ability for rooms to inherit from one another, which is an interesting nuance compared to Unity/UE4’s methods, allowing you to sort of “Blueprint” your room layout and initialized properties. The objects you place in these rooms can then have scripted responses to in-game events. Objects can inherit from one another, but there is no notion of a component system. In order to construct any sort of composition, the “has” relationships need to be declared overtly by searching the room for the object you wish to own and then manually assigning that object id to a variable. It feels clunky to me personally, but it can make things easier for beginners who don’t want to concern themselves with the cleanliness of their code.

Objects in GM:S have a sprite, physics, a parent, and events to which they can attach scripts.

In Godot, things are simplified a great deal. Components, called “Nodes,” are similarly organized into discrete entities that can be saved, duplicated, inherited, and instanced; however, Godot sees no difference between the way a Prefab/Blueprint would organize its components and the way a Scene/Level would organize its entities. Instead, it unifies these concepts into a “scene” in its entirety, i.e. a Prefab/Blueprint is a GameObject/Actor is a Scene/Level; everything is just a gigantic set of instanceable and inheritable relationships between nodes. Scenes can be instanced within other scenes, so you might have one scene each for your bullet, your gun, your character, and your level (using them as you would a Prefab/Blueprint). Scripts to extend node functionality are attached 1-to-1 with nodes, and nodes can be cheaply added with the attached script either built in (saved into the scene file) or externally linked from a saved script file.
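The “everything is a scene of nodes” idea can be sketched with a toy tree structure. This is pure illustration in Python; Godot’s real Node and PackedScene classes are far richer, and all names here are my own.

```python
import copy

class Node:
    # A toy stand-in for Godot's Node: a named element of a tree.
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

    def instance(self):
        # "Instancing a scene" amounts to deep-copying its tree
        # so it can be grafted in as a subtree somewhere else.
        return copy.deepcopy(self)

    def paths(self, prefix=""):
        # Yield every node path in the tree, Godot-style.
        path = f"{prefix}/{self.name}"
        yield path
        for child in self.children:
            yield from child.paths(path)

# A "bullet" scene, and a "gun" scene that instances it...
bullet_scene = Node("Bullet", [Node("Sprite"), Node("CollisionShape")])
gun_scene = Node("Gun", [Node("Muzzle", [bullet_scene.instance()])])
# ...and a "level" scene that instances the gun twice. The mechanism
# is identical whether the thing placed is prefab-like or a level.
level = Node("Level", [gun_scene.instance(), gun_scene.instance()])

for p in level.paths():
    print(p)
```

The point of the sketch: there is one uniform operation (instancing a subtree) where Unity/Unreal distinguish between placing a Prefab/Blueprint and loading a Scene/Level.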


This section is more or less just to demonstrate that each engine has its own way of organizing game data and highlighting the relationships between elements of functionality. In my personal experience, I find Godot’s model much more intuitive to reason about and work with once preconceptions from other engines’ tropes are discarded, but to be honest, this is really just a matter of personal taste.

(Edit: for lack of another place to put this, I’m inserting here; Godot will soon be integrating the Bullet physics engine as an option you can toggle in the editor settings.)

Community and Documentation Comparisons

The most glaring divergence in quality between the engines is documentation. Unreal’s Blueprint and C++ documentation pale in comparison to the breadth and depth of Unity’s massive array of concepts, examples, and tutorials, built both by Unity Technologies and the large community. This is a damaging blow, but wouldn’t be so bad if Unreal’s documentation were at least adequate. Unfortunately, this is not the case: Blueprints have some diversity of tutorials and documentation (nothing like Unity’s though), especially from the user base, but Unreal C++’s documentation is abhorrently lacking. In-house tutorials are oftentimes several versions behind, and questions on the Q&A forums can take anywhere from a few days to weeks, months, or even over a year to get a proper response (several engine iterations later, when the same issue is still popping up).

The ironic curve-ball in the situation is that Unreal Engine 4 publishes its own source code to its licensed users. One could arguably reference the source code itself to teach oneself UE4’s API and best practices. Unfortunately, Unreal C++ tends to be a huge, intimidating beast with custom compilation rules that are not well documented, even in code comments, and code that is very difficult to follow due simply to the complexity of the content. A typical advantage of source-code-publishing projects is the capacity to spot a problem with the application, identify a fix, implement it, and submit a pull request, but the aforementioned complexity makes taking full advantage of UE4’s visible source code much more difficult for the average programmer (at least, in my experience and that of other programmers I’ve discussed it with).

GameMaker: Studio actually has very nice documentation for its usage, which is a testament to its high usability for beginners. You can easily jump between functions and topics, as most topics provide a list of related functions. Learning HOW to use a topic is therefore often intertwined with the documentation on what the topic is (very beginner friendly). This is probably one of the highlights of using GM:S in my eyes.


Godot Engine’s documentation is stronger than Unreal’s, but still a bit weaker than Unity’s. I would also say there are ways in which it is both better and worse than GameMaker: Studio’s (community-managed docs can change very quickly, but issues must first be reported and discussed, and someone has to actually do the work; proactiveness is key). A public document generator is used for the Godot website documentation, while an in-engine XML file is used to generate the contents of the Godot API documentation. As such, anybody can easily open up the associated files and add whatever information may be helpful to users (although changes are approved by the community through pull requests).

On the downside, this means the responsibility for learning the tools often falls on developers themselves. On the upside, the engine’s source code is beautifully written (and therefore very easy to understand), so teaching yourself isn’t really difficult when you have to do it; that is often unnecessary, however, as the already small community is filled with developers who have created very in-depth YouTube tutorials for many major tasks and elements of the engine.

You can receive fully informative answers to questions within a few hours on the Q&A website, Reddit page, or Facebook group (so it’s much more responsive than Unreal’s community). In this sense, the community is already active enough to start approaching the breadth and depth of Unity’s documentation and this level of detail is achieved with a minute fraction of the user base. If given the opportunity, a fully grown, matured, and active Godot community could easily create documentation approaching the likes of Unity’s professional work.


So, while Unity is currently still the winner here, it is also clear from Godot’s accessibility and elegance that, given a larger community, Godot could easily expand the dimensions of its documentation and tutorials to meet that community’s needs.

(Edit: note that the community has been doing weekly documentation sprints in anticipation of the 3.0 release. Even with API revisions between 2.1 and 3.0, the docs have already improved by roughly 20% over the previous version’s content in the past 5 weeks alone. If you are interested in assisting, please visit the Class API contribution guide to get involved and discuss your plans / progress in the #documentation Discord channel (Discord link).)

Extension Comparisons

Due to UE4’s code complexity and Unity’s closed-source nature, both engines suffer from the disease of needing to wait for the development teams to implement new feature requests, bug fixes, and enhancements.

UE4 supposedly exposes itself for editor extensions with its Slate UI, which can be coded in C++, but 1) Slate is incredibly hard to read and interpret, and 2) it requires C++ code just to extend the editor, as opposed to a simple scripting solution.

Unity does supply a strong API for creating editor extensions, though. Writing certain types of C# scripts with Attributes above methods and properties allows one to somewhat easily (with a little bit of learning) create new tools and windows for the Unity editor. The relative simplicity of developing editor extensions is a prime reason why the Unity Asset Store is so replete with high-quality options.

As far as I’m aware, GM:S provides some customization options for the UI itself, but doesn’t really provide any means of extending the editor (correct me if I’m wrong). They provide a means of making “extension packages” that let you bundle in scripting functionality and assets, but they don’t effectively give users the ability to modify how the editor works. It has an extension marketplace similar to other high-profile engines.

Godot has an even easier interface for creating editor extensions than Unity: adding the ‘tool’ keyword to the top line of a script tells that script to run at design time rather than at runtime. This instantly empowers developers to turn their scripts into tools: they need only apply their existing scripting knowledge to the design-time state of the scene hierarchy.


EditorPlugin scripts can also be written to edit the engine UI and create new in-engine types. The greatest boon is that all of the logic and UI of the scripting API is the exact same API that allows them to control the logic and UI of the engine itself, allowing these EditorPlugin scripts to operate using the same knowledge already accumulated during one’s ordinary development. These qualities together make creating tools in Godot unbelievably accessible.

In a completely unexpected but bewilderingly helpful feature, Godot also simplifies team-based / communal extension development: all engine assets can be saved as binary files (.scn, the standard option for Unreal and Unity) OR as text-based, VCS-friendly files (.tscn). Using the latter makes pull requests and git diffs trivially simple to analyze, so it comes highly recommended.

Another significant difference between Unity and Godot’s extension development is the cultural shift: when looking up something in the Unity Asset Store, you’ll often find a half-dozen or more plugins for the same feature with different APIs, feature depth/breadth, and price points.

Godot’s culture, on the other hand, is one of “free and open source tools, proprietary games”. Plugins on the Godot Asset Library must be published with an open license (most of them use MIT), readily available for community enhancements and bug fixes. There is usually only one plugin for any given feature, with a common, community-debated implementation that results in a common toolset for ALL developers working with that feature in the engine. This common foundation of developer knowledge and lack of any cost makes integrating and learning Godot plugins a joy.



Given a desire for high accessibility, a strong publishing and community foundation, minimal cost, powerful optimizations, and enhanced extensibility, I believe I’ve made the potential of Godot 3.0’s effect on the game industry quite clear. If offered a chance, it could become a new super-power in the world of top-tier game engines.

This article is the result of my working with the Godot 3.0 pre-alpha for approximately 3 months. I had never investigated it before, but was blown away by the engine when I first started working with it. I simply wished to convey my experience as a C++ programmer and my insight into what the future of Godot might hold. Hopefully you too will be willing to at least give it a try.

Who knows? You might find yourself falling in love all over again. I know I did.


Minecraftian Narrative: Part 7

Table of Contents

  1. What is “Minecraftian Narrative”?
  2. Is “Toki Pona” Suitable for Narrative Scripting?
  3. Interface and Gameplay Possibilities
  4. Toki Sona Implementation Quandaries
  5. Dramatica and Narrative AI
  6. Relationship and Perception Modeling
  7. Evolution of Toki Sona to “tokawaje”


Unlike previous iterations of this series, today we’ll be diving into the field of linguistics a bit more intensely. The reason for my lack of any new posts in a month and a half has been the result of my work on a completely new language which is now approaching an alpha state (at which point, I will theoretically be able to build a full parser for it). Today, I’ll be covering why I decided to invent a language, where it came from, how it is different, and how it all ties into the overall goal of narrative scripting.

Future posts will most certainly reference this language, so if you aren’t interested in the background and just want the TL;DR of the language features and relevance to narrative scripting, then feel free to skip on down to the conclusion where I will review everything.

Without further ado, let’s begin!

Issues With “Toki Sona”

Prior to this post, I had puffed up the possibilities of using a toki pona-derived language, hereafter referred to as toki sona. While I was quite excited about toki pona’s (tp’s) potential to combine concepts together and support a minimal vocabulary with simple pronunciation and an intuitive second-hand vocabulary (e.g. “water-enclosedConstruction” = bathroom), there were also a variety of issues that forced me to reconsider adapting it for narrative scripting.


First and foremost is the ambiguity within the language. Syntactic ambiguity makes it nearly impossible for an algorithm to easily understand what is being stated, and tp has several instances of this lack of clarity. For example, “mi moku moku” could mean “the hungry me eats” or “I hungrily eat” or even some new double-word emphasis that someone is experimenting with: “I really dove into eating [something]” or “I am really hungry”.  With the language unable to clearly distinguish modifiers and verbs from each other, identifying parts of speech and therefore the semantic intent of a word and its relationship to other words is needlessly complicated.
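To illustrate the problem for a machine, here is a toy sketch of my own (not any real tp parser): if every non-initial content word could be read as either a verb or a modifier, the candidate readings multiply with each added word.

```python
from itertools import product

def candidate_parses(words):
    """Enumerate role assignments when each non-initial word could be
    read as either a verb or a modifier."""
    roles = ("verb", "modifier")
    first, rest = words[0], words[1:]
    return [[(first, "noun")] + list(zip(rest, combo))
            for combo in product(roles, repeat=len(rest))]

# "mi moku moku": two ambiguous words yield four candidate readings
parses = candidate_parses(["mi", "moku", "moku"])
```

Each additional ambiguous word doubles the candidate set, which is exactly the kind of guesswork a narrative scripting parser cannot afford.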

The one remedy I thought of for this would be to add hyphens between nouns/verbs and their associated modifiers, not only allowing us to instantly disambiguate this syntactic confusion, but also simplifying and accelerating the computer’s parsing with a clear delimiter (a special symbol that separates two ideas). However, this solution is not audibly communicable during speech and therefore retains all of these issues in spoken dialogue, violating our needs. Using an audible delimiter would of course be completely impractical.

The other problem of ambiguity with the language is the intense level of semantic ambiguity present due to the restricted nature of the language’s vocabulary. The previously mentioned “bathroom” (“tomo telo”) could also be a bathhouse, a pool, an outhouse, a shower stall, or any number of other related things. Sometimes, the distinction is minor and unimportant (bathroom vs. outhouse), but other times that distinction may be the exact thing you wish to convey. What happens then if we specify “bathroom of outside”?


One possibility is the use of “ma” meaning “the land, the region, the outdoors, the earth”, but then we don’t know if this is an outdoor bathroom or if it is the only indoor bathroom in the region, or if it is just a giant pit in the earth that people use. The other possibility could be “poka” meaning “nearby” or “around”, but that is even more unclear as it specifies a request purely for an indoor bathroom of a given proximity.

As you can see, communicating specific concepts is not at all tp’s specialty. In fact, it goes against the very philosophy of the language: the culture supported by its creators and speakers is one that stresses the UNimportance of knowing such details.

If you ever happen to speak with a writer, however, they will tell you the importance of words, word choice, and the evocative nature of speech. They can make you feel different emotions and manipulate the thoughts of the reader purely through the style of speech and the nuanced meanings of the terminology they have used. If we are to support this capacity in a narrative scripting language, we cannot be allowed to build its foundation on a philosophy prejudiced against good writing.

Lost and Confused Signpost

The final issue, related to the philosophy, is the set of grammatical limitations imposed by tp’s syntax.

  1. Inter-sentence conjunctions like English’s FANBOYS (“I was tired, yet he droned on.”) are thankfully not entirely absent: tp has an “also” (“kin”) and a “but” (“taso”) that can be used to start the following sentence and relate two ideas. One can even use adverbial phrases (the only types of dependent clauses allowed) to assist in relating ideas. However, limitations are still present, and the reason for that is a mix of the philosophy and the (admirable) goal of keeping the vocabulary size compact.
  2. You cannot satisfactorily describe a single noun with adjectives of multiple, tiered details (“I like houses with toilet seats of gold and a green doorway”). This is a long-standing problem revolving around the “noun1 pi multi-word modifier” technique that converts a set of words into an adjective describing noun1. Users of tp have debated ways of combating this. One option I had considered, and which is mildly popular, was re-appropriating the “and” conjunction for nouns (“en”) as a way of connecting pi-phrases, but because you effectively need open and closed parentheses to accomplish the more complex forms of description, there isn’t really a clean way of handling this.

Between the grammatical limitations, the prejudice against expressive writing, and the ambiguity, toki pona, and any language closely related to it, will inevitably find itself wanting in viability for narrative scripting. Time to move on.

Lojban: Let’s Speak Logically!

In an attempt to solve the problems of toki pona, a friend recommended that I check out the “logical language”, Lojban (that ‘j’ is soft, as in “beige”).

Lojban is unlike any spoken language you have ever learned: it borrows much of its syntax from functional programming languages like Haskell. Every phrase/clause is made up of a single word indicating a set of relationships and the other words are all things that act as “parameters” by plugging in concepts for the relations.


For example, “cukta” is the word for book. If you simply use it on its own, it plugs “book” into the parameter of another word. However, that’s not all it means. In full, “cukta” means…

x1 is a book containing work x2 by author x3 for audience x4 preserved in medium x5

If you were to have a series of cukta’s following each other (with the proper particles separating them), filling every parameter slot, the full version would mean…

A book is a book about books that is written by a book for books by means of a book.

You can also specifically mark which “x” a given word is supposed to plug in as, without needing to use any of the other x’s, so the word cukta can also function as the word for “topic”, “author”, “audience”, and “medium” (all in relation to books). If that’s not conservation of vocabulary, I don’t know what is.

It should be noted that 5-parameter words in Lojban are far more rare than simpler ones with only 2 or 3 parameters. Still, it’s impressive that, using this technique, Lojban is able to communicate a large number of topics using a compressed vocabulary, and yet remain extremely explicit about the meaning of its words.
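To make the slot-filling idea concrete, here is a minimal sketch of how a Lojban-style predicate word could be modeled in code. The class, method names, and filled-in values are my own invention; only the slot labels for “cukta” come from the definition quoted above.

```python
class Brivla:
    """A predicate word: a named relation with numbered argument slots."""

    def __init__(self, name, slots):
        self.name = name
        self.slots = slots          # slot number -> role description
        self.args = {}              # slot number -> filled-in concept

    def fill(self, slot, value):
        """Plug a concept into one of the numbered parameter slots."""
        self.args[slot] = value
        return self

# Slot labels from the quoted definition; the filled values are invented.
cukta = Brivla("cukta", {
    1: "book", 2: "work contained", 3: "author", 4: "audience", 5: "medium",
})
cukta.fill(1, "this novel").fill(3, "J. Doe")
```

Any subset of slots can be filled, which is what lets a single word stand in for “book”, “author”, “audience”, and so on.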

Just as important to notice is how Lojban completely does away with the concept of a “noun”, “verb”, “object”, “preposition”, or anything of the sort. Concepts are simply reduced to a basic entity-relation-entity form: entity A has some relationship x? to entity B. This certainly makes things easier for the computer. And while one might think such a simpler system would also be easier to learn, it is so vastly different from the norm that people coming from more traditional languages will have a more difficult time understanding it, especially given the plurality of relationships that are possible with a single word.


Another strong advantage of Lojban is that it is structured to provide perfect syntactic clarity to a computer program and can be completely parsed by a computer in a single pass. In layman’s terms, the computer only needs to “read” the text one time to understand with 100% accuracy the “parts of speech” of every word in a sentence. There is no need for it to guess how a word is going to be syntactically interpreted.

In addition, Lojban imposes a strict morphological structure on its words to indicate their meaning. For example, each of the “root set” words like “cukta” has one of the two following patterns: CVCCV or CCVCV (C being “consonant” and V being “vowel”). This makes it much easier for the computer to pick out these words in contrast to other words such as particles, foreign words, etc. Every type of word in the language conforms to morphological standards of a similar sort. The end result is that Lojban parsers, i.e. “text readers”, are very fast in comparison to those for other languages.
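As a sketch of just how mechanical that shape check is, here is a matcher for the two root-word patterns, assuming Lojban’s standard consonant and vowel letters (ignoring ‘y’ and the apostrophe for simplicity):

```python
import re

# Lojban letter classes (ignoring the apostrophe and the vowel 'y' for simplicity)
C = "[bcdfgjklmnprstvxz]"   # consonants
V = "[aeiou]"               # vowels

# The two permitted root-word shapes: CVCCV and CCVCV
GISMU = re.compile(f"{C}{V}{C}{C}{V}|{C}{C}{V}{C}{V}")

def is_gismu(word: str) -> bool:
    """True if `word` matches one of the two Lojban root-word shapes."""
    return GISMU.fullmatch(word) is not None
```

A parser can run this test on every token to separate root words from particles and foreign words before doing any deeper analysis.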

One more great advantage of Lojban is that it has these terrifically powerful words called “attitudinal indicators” that allow one to communicate a complex emotion using words on a spectrum. For example, “iu” is “love”, but alternative suffixes give you “iucu’i” (“lack of love”, a neutral state) and “iunai” (“hate/fear”, the opposite state). You can even combine these terms to compose new emotions like “iu.iunai” (literally “love-hate”).
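The spectrum structure of these indicators is quite regular to process. A minimal sketch (the suffixes are Lojban’s; the position labels are my own shorthand):

```python
# Map attitudinal suffixes to a position on the indicator's spectrum.
# Suffixes are from Lojban; the position labels are my own shorthand.
SUFFIX_POSITIONS = {"cu'i": "neutral", "nai": "opposite"}

def parse_attitudinal(word):
    """Return (base indicator, position on its emotional spectrum)."""
    for suffix, position in SUFFIX_POSITIONS.items():
        if word.endswith(suffix):
            return word[:-len(suffix)], position
    return word, "positive"
```

So “iu”, “iucu’i”, and “iunai” all reduce to the single learned indicator “iu” plus a point on its spectrum.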


For all of these great elements though, Lojban has two aspects that make it abhorrent to use for the simple narrative scripting we are aiming for. The first is that it is simply too large a language: 1,350 words just for the “core” set that allows you to say reasonable sentences. While this is spectacularly small for a traditional language, in comparison to toki pona’s nicely compact 120, it is unacceptably massive. As game designers, we simply can’t expect people to devote the time needed to learn such a huge language within a reasonable play time.

The other damaging aspect is the sheer complexity of the language’s phonology and morphology. When someone wishes to invent a new word using the root terms, they essentially mash them together end-to-end. While this would be fine on its own, letters get switched around and part of one root is consumed by the end of the other, which is unfortunately very difficult to follow. For example…

skami = “x1 is a computer used for purpose x2”
pilno = “x1 uses/employs x2 [tool, apparatus, machine, agent, acting entity, material] for purpose x3.”
skami pilno => sampli = “computer user”

Because “skami pilno” was a commonly occurring phrase in Lojban usage, a new word with the “root word” morphology could be invented by combining the letters. This letter-mashing is obviously very difficult to do on the fly and effectively involves people learning an entirely new word for the concept.

All that to say that Lojban brings some spectacularly innovative concepts to the table, but due to its complex nature, fails to inspire any hope for an accessible scripting language for players.

tokawaje: The Spectral Language

We need some way of combining the computer-compatibility of Lojban with the elegance and simplicity of toki pona that omits as much ambiguity as possible, yet also allows the user to communicate as broadly and as specifically as needed using a minimal vocabulary.

Over the past month and a half, I’ve been developing just such a language, and it is called “tokawaje”. An overview of the language’s phonology, morphology, grammar, and vocabulary, along with some English and toki pona translations, can be found on my commentable Google Sheets page here (concepts on the Dictionary tab can be searched for with “abc:” where “abc” is the 3-letter root of the word). With grammar and morphology concepts derived from both Lojban and toki pona, and with a minimal vocabulary sized at 150 words, it approximates a toki pona-like simplicity with the potential depth of Lojban. While it is still in its early form, allow me to walk through the elements of tokawaje that capture the strengths of the other two despite avoiding their pitfalls.

Lojban has three advantages that improve its computer accessibility:

  1. The entity-relation-entity syntax for simpler parsing and syntactic analysis.
  2. Morphological and grammatical constraints: the word and grammar structure is directly linked to its meaning.
  3. The flexibility of meaning for every individual learned word: “cukta” means up to 5 different things.

toki pona has two advantages that improve its human accessibility:

  1. Words that are not present in the language can be approximated by composing existing words into new ones. This makes vocabulary much more intuitive.
  2. It is extremely easy to pronounce words due to its mouth-friendly word construction (every consonant must be followed by a single vowel).


“tokawaje” accomplishes this by…

  1. Using a similar, albeit heavily modified entity-relation-entity syntax.
  2. Having its own set of morphological constraints to indicate syntax.
  3. Using words that represent several things that are associated with one another on a spectrum.
  4. Relying on toki pona-like combinatoric techniques to compose new words as needed.
  5. Using a phonology and morphology focused on simple sound combinations that are easily pronounced. Words must match the pattern VCV(CV)*(CVCV)*.

Now, once more, but with much more detail:

1) Entity-Relation-Entity Syntax

Sentences are broken up into simple 1-to-1 relations that are established in a context. These contexts contain words that each require a grammar prefix to indicate their role in that context. After the prefix, each word then has some combination of concepts to make a full word. Concepts are each composed of some particle, tag, or root (some spectrum of topics) followed by a precision marker that indicates the exact meaning on that spectrum.

The existing roles are as follows (vowels pronounced as in Spanish):

  1. prefix ‘u’: a left-hand-side entity (lhs) similar to a subject.
  2. prefix ‘a’: a relation similar to a verb or preposition.
  3. prefix ‘e’: a right-hand-side entity (rhs) similar to an object.
  4. prefix ‘i’: a modifier for another word, similar to an adjective or adverb.
  5. prefix ‘o’: a vocative marker, i.e. an interjection meant to direct attention.

Sentences are composed of contexts. For example, “I am real to you,” is technically two contexts. One asserts that “I am real” while the other asserts that “my being real” is in “your” perspective. This nested-context syntax is at the heart of tokawaje.

These contexts are connected with each other using context particles:

  1. ‘xa’ (pronounced “cha”) meaning open a new context (every sentence silently starts with one of these).
  2. ‘xo’ meaning close the current context.
  3. ‘xi’ meaning close all open contexts back to the original layer.

(These also must each be prefixed with a corresponding grammar prefix)
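These three particles map naturally onto a parse stack. A minimal sketch, assuming contexts are plain nested dictionaries (the data shape is my own invention):

```python
def build_context_tree(particles):
    """Apply xa/xo/xi context particles to a stack of nested contexts,
    returning the root context and the final nesting depth."""
    root = {"children": []}
    stack = [root]                 # every sentence implicitly opens a context
    for p in particles:
        if p == "xa":              # 'xa': open a new nested context
            child = {"children": []}
            stack[-1]["children"].append(child)
            stack.append(child)
        elif p == "xo" and len(stack) > 1:
            stack.pop()            # 'xo': close the current context
        elif p == "xi":
            del stack[1:]          # 'xi': cascading close back to the root
    return root, len(stack)
```

A real parser would also attach the words of each context, but the push/pop/unwind behavior is the whole of what the three particles contribute.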

Examples of Concept Composition:

  1. ‘xa’ = an incomplete word composed of only a particle+precision.
  2. “uxa” = a full word with a concept composed of a grammar prefix and a particle+precision.
  3. “min” = root for pronouns, “mina” = “self”, full “umina” = “I”.
  4. “vel” = root for “veracity”, “vela”= “truth”, full “avela” = “is/are”.
  5. “sap” = root for object-aspects, “sapi” = “perspective”, full “asapi” = “from X’s perspective”.

Sample Breakdown:

“I am real to you” => “umina avela evela uxo asapi emino.”

  1. “umina” {u: subject, min/a: “pronoun=self”}
  2. “avela” {a: relation, vel/a: “veracity=true”}
  3. “evela” {e: object, vel/a: “veracity=true”}
  4. “uxo” {u: subject, xo: “context close”} // indicating the previous content was all a left-hand-side entity for an external context.
  5. “asapi” {a: relation, sap/i: “aspect=perspective”}
  6. “emino” {e: object, min/o: pronoun=you}


It’s no coincidence that the natural grammatical breakdown of a sentence looks very much like JSON data (web API anyone?). In reality, it would be closer to…

{ prefix: ‘u’, concepts: [ [“min”,”a”] ] }

…since the meanings would be stored locally between client and server devices.

This is DIFFERENT from Lojban in the sense that no single concept will encompass a variety of relations to other words, but it is SIMILAR in that the concept of a “subject”/”verb”/”object” structure isn’t technically there in reality. For example:

“umina anisa evelo” => “I -inside-> lie” => “I am inside a lie.”

In this case, “am inside” isn’t even a verb, but purely a relation simulating an English prepositional phrase where no “is” verb is technically present.

These contexts can also be used without a complete set of roles to create gerunds, adjective phrases, etc. For example, to create a gerund left-hand-side entity of “existing”, I might say:

“avela uxo avela evelo.” => “Existing is (itself) a falsehood.”


You might ask, “How do we tell the difference with something like [uxavela]? Might it be {u: subject, xav/e: something, la: something}?” Actually, no. The computer can immediately determine the proper interpretation because of tokawaje’s second Lojban incorporation:

2) Strict Morphological Constraints for Syntactic Roles

Consonants are split up into two groups: those reserved for particles, such as ‘x’ and those reserved for roots, such as ‘v’. The computer will always know the underlying structure of a word’s morphology and consequent syntax. Therefore, given the word “uxavela” we will know with 100% certainty that the division is u (has the V-form common to all prefixes), xa (CV-form of all particles), and vela (CVCV-form of all roots).
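Because the particle and root consonant sets are disjoint, this split can be done greedily in a single left-to-right pass. A sketch of my own, assuming the particle consonants listed in the next section (x, f, z, c, b) and treating every other consonant as a root consonant:

```python
# Assumed particle consonants (x, f, z, c, b); all others are root consonants.
PARTICLE_CONSONANTS = set("xfzcb")

def split_word(word):
    """Split a tokawaje word into its grammar prefix and a list of
    [consonant-part, precision-vowel] concept pairs."""
    prefix, rest = word[0], word[1:]
    concepts = []
    i = 0
    while i < len(rest):
        if rest[i] in PARTICLE_CONSONANTS:      # C-form particle + precision vowel
            concepts.append([rest[i], rest[i + 1]])
            i += 2
        else:                                   # CVC-form root + precision vowel
            concepts.append([rest[i:i + 3], rest[i + 3]])
            i += 4
    return {"prefix": prefix, "concepts": concepts}
```

The output intentionally mirrors the JSON-like shape shown earlier: “umina” yields a ‘u’ prefix with the single concept pair [“min”, “a”].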

Particles can be split up into two categories based on their usual placement in a word.

  1. Those that are usually the first concept in a word.
    1. ‘x’ = relating to contexts (as you have already seen previously)
      1. ‘xa’ = open
      2. ‘xo’ = close
      3. ‘xi’ = cascading close
      4. ‘xe’ = a literal grammar context (to talk about tokawaje IN tokawaje)
    2. ‘f’ = relating to irrelevant and/or non-tokawaje content
      1. ‘fa’ = name/foreign word with non-tokawaje morphology constraints
      2. ‘fo’ = name/foreign word with tokawaje morphology constraints
      3. ‘fe’ = filler word for something irrelevant
  2. Those that are usually AFTER a concept as a suffix (could be mid-word).
    1. ‘z’ = concept manipulation
      1. ‘za’ (zah) = shift meaning more towards the ‘a’ end of the spectrum
      2. ‘zo’ (zoh) = shift meaning more towards the ‘o’ end of the spectrum
      3. ‘zi’ (zee) = the source THING that assumes the left-hand-side of this relation.
        1. Ex. “uvelazi” => that which is something
        2. Shorthand for “ufe avela uxo”.
      4. ‘ze’ (zeh) = the object THING that assumes the right-hand-side of this relation.
        1. Ex. “uvelaze” => that which something is.
        2. Shorthand for “avela efe uxo”.
      5. ‘zu’ (as in “food”) = questioning suffix
      6. ‘zy’ (zai) = commanding suffix
      7. ‘zq’ (zow) = requesting suffix
    2. ‘c’ = tensing, pronounced “sh”
      1. ‘ca’ = future tense
      2. ‘ci’ = progressive tense
      3. ‘co’ = past tense
    3. ‘b’ = logical manipulation
      1. ‘be’ = not
      2. ‘ba’ = and
      3. ‘bi’ = to (actually, it is the “piping” functionality in programming, if you know about that)
      4. ‘bo’ = or
      5. ‘bu’ = xor

All other consonants in the language fall into the “root word” set. With these clear divisions, tokawaje will always know what role a concept has in manipulating the meaning of that word.

I’d also like to point out that informal, conversational uses of these two groups of particles may completely remove the distinction between them. For example, someone may simply say:

“uzq” => “Please.”

This would not actually impact the computer’s capacity to distinguish terms, though. I even plan to make my own parser assume that a lack of a grammar prefix implies an intended ‘u’ prefix (not that that’s encouraged).

3) Concepts in Tokawaje Exist on Spectra

Almost every word in the language has exactly 4 meanings. Only 3 non-root concept types use more than that: the grammar prefixes, the ‘z’-based word manipulators (as you’ve already seen), and the vowel-only general expressive noises / sound effects. This technique allows for vocabulary that is flexible, yet intuitive, despite its initial appearance of complexity.

4) Sounds and Structure are Designed for Clear, Flowing Speech

Every concept is restricted to a form that facilitates clear pronunciation and a consistent rhythm. Together, these elements ensure that the language is simple to learn phonetically.

Concepts have the form C (particles/tags) or CVC (roots) along with a vowel grammar prefix and a vowel precision suffix, resulting in a minimum word of VCV or VCVCV.

The rhythm to concepts emphasizes the middle CV: u-MI-na, a-VE-la, etc. Even with suffixes applied to words, this pattern never becomes unmanageable. The result is a nice, flowy-feeling language:

  1. uvelominacoze / avelominaco (“velomina” => a personal falsehood)
    1. u-VE-lo-MI-na-CO-ze (that which one lied to oneself about)
    2. a-VE-lo-MI-na-co (to lie to oneself in the past)


5) Tokawaje Employs Tiered Combinatorics to Invent New Concepts

The first concept always communicates the root “what” of a thing while the subsequent concepts add further description of the thing. This structure emulates toki pona’s noun-combining mechanics.

‘u’, ‘a’, and other non-‘i’ terms are primary descriptors and more closely adhere to WHAT a thing is. ‘i’ terms are secondary descriptors and approximate the additional properties of a thing BEYOND simply WHAT it is. Fundamentally, every concept follows these simple rules:

  1. Non-‘i’ words are more relevant to describing their role’s reality than ‘i’ words.
  2. However, individual words are described more strongly by their subsequent ‘i’ words than they are by other terms.
  3. Multiple non-‘i’ words will further describe that non-‘i’ term such that later non-‘i’ words act as descriptors for the next-left non-‘i’ word and its associated ‘i’ words.

Let’s say I have the following sentence (I’ll be using the filler particle “fe” with an artificially inserted number to reference more easily. Think of each of these as a root+precision CVCV form):

“ufe1fe2 ife3fe4 ife5 ufe6fe7 ife8 avela uxofe9 ife10 afe11”

This can be broken down in the following way:

  1. Any pairing of adjacent fe’s form a compound word in which the second fe is an adjective for the previous fe, but the two of them together form a single concept. For example, “ife3fe4”: fe4 is modifying fe3, but the two together form an adjective modifying the noun “ufe1fe2”.
  2. The subject is primarily described by “ufe1fe2” and secondarily by “ufe6fe7” since they are both prefixed with ‘u’, but one comes later. “ufe6fe7” is technically modifying “ufe1fe2”, even if “ufe1fe2” is also being more directly modified by the ‘i’-terms following it.
  3. Each of those ‘u’ terms is additionally modified by its adjacent ‘i’-term adjective modifiers.
  4. “ife5” is an adverb modifying “ife3fe4”.
  5. The “existence of” the “ufe1-8” entity is the u-term of the “afe11” relation.
  6. The entirety of that u-term has a primary adjective descriptor of “-fe9” and a secondary adjective descriptor of “ife10”.


Suppose the word for “dog” were “uhumosoviloja” (u/lhs,humo/beast,sovi/land,loja/loyal = “loyal land-beast”). How might you describe a disloyal dog then? You would use an ‘u’ for stating it is a dog (that identifying aspect) and an ‘i’ for the disloyalty (the added on description). The spectrum of loyalty (“loj”) would therefore show up twice.

“uhumosoviloja ilojo” => “disloyal dog”

For clarity purposes, you may even split up the “loja”, but that wouldn’t impact the meaning since “uloja” still has a higher priority than “ilojo”.

“uhumosovi uloja ilojo” => “disloyal dog” (equivalent)
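Rules like these lend themselves to a very small attachment pass. Here is a sketch of my own implementing just the ‘i’-word rule, where each ‘i’ word attaches to the nearest preceding non-‘i’ word:

```python
def attach_modifiers(words):
    """Group a word stream into heads with their trailing 'i'-modifiers."""
    heads = []
    for w in words:
        if w.startswith("i") and heads:
            heads[-1]["modifiers"].append(w)   # 'i' word modifies previous head
        else:
            heads.append({"head": w, "modifiers": []})
    return heads

# Both spellings of "disloyal dog" from the text above:
a = attach_modifiers(["uhumosoviloja", "ilojo"])
b = attach_modifiers(["uhumosovi", "uloja", "ilojo"])
```

In both cases “ilojo” ends up attached to the word carrying “loja”, which is why splitting the ‘u’ term apart does not change the meaning.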

Let’s say there were actual distinctions between words though. How about we take the noun phrase “big wood box room”? Here’s the necessary vocabulary:

“sysa” => “big/large/to be big/amount”
“lijavena” => “rigid thing of plants” => “wood/wooden/to be wood”
“tema” => “of or relating to cubes”
“kita” => “of or relating to rooms and/or enclosed spaces”

Now let’s see some adaptations:

  1. ukitasysa => an “atrium”, a “gym”, some space that, by definition, is large.
  2. ukita usysa => same thing.
  3. ukita isysa => a room that happens to be relatively big.
  4. ukita utema ulijavena usysa => cube room of large-wood.
  5. ukita utema ulijavena isysa => cube room of large-wood.
  6. ukita utema ilijavena usysa => room of wooden large-boxes.
  7. ukita itema ulijavena usysa => a cube-shaped room of large-wood.
  8. ikita utema ulijavena usysa => [something] related to rooms that is cube-shaped, wooden, and large.
  9. ukita utema ilijavena isysa => a room of large-wood cubes.
  10. ukita itema ilijavena usysa => the naturally large room associated with wooden cubes.
  11. ikita itema ulijavena usysa => [something] related to cube-shaped rooms that is a large-plant.
  12. ukita itema ilijavena isysa => the room of large-wood boxes.
  13. ikita itema ilijavena usysa => [something] related to plant-box rooms that is an amount. (an inventory of greenhouses or something?)
  14. ikita itema ilijavena isysa => [something] related to rooms of large-wood boxes.
  15. ukita usysa utema ilijavena => large room of wooden boxes.
  16. ukita utema usysa ilijavena => room of wood-amount boxes.
  17. ukita utemasysa ilijavena => room of wooden big-boxes.
  18. ukita usysa iba ulijavena itema => A big-room and a cube-related plant.


Some of these are a little crazy and some of them are amazingly precise. The point is, we are achieving this level of precision using a vocabulary closer to the scope of toki pona. I can guarantee that you would never have been able to say any of this in a language as vague as tp, nor will it ever try to approximate this level of clarity. I can likewise guarantee that Lojban will never have a minified version of itself available for video games. Good thing we don’t need one: we have an alternative.


As you can see, tokawaje combines the breadth, depth and computer-accessibility of Lojban with the simplicity, intuitiveness, and human-accessibility of toki pona.

For those of you wanting the TL;DR:

The invented language, tokawaje, is a spectrum-based language. Clarity of pronunciation, compactness of vocabulary (150 words), and combinatoric techniques to invent concepts all lend the language to great accessibility for new users of the language. On the other hand, a sophisticated morphology and grammar with clear constraints on word formations, sentence structure, and their associated syntax and semantics result in a language that is well-primed for speedy parsing in software applications.

More information on the language can be found on my commentable Google Sheets page here (concepts on the Dictionary tab can be searched for with “abc:” where “abc” is the 3-letter root of the word).

This is definitely the longest article I’ve written thus far, but it properly illuminates the faults with pre-existing languages and addresses the potential tokawaje has to change things for the better. Please also note that tokawaje is still in an early alpha stage and some of its details are liable to change at this time.

If you have any comments or suggestions, please let me know in the comments below or, if you have specific thoughts that come up while perusing the Google Sheet, feel free to comment on it directly.

Next time, I’ll likely be diving into the topic of writing a parser. Hope you’ve enjoyed it.


Next Article: Coming Soon!
Previous Article: Relationship and Perception Modeling

Minecraftian Narrative: Part 1

Table of Contents:

  1. What is “Minecraftian Narrative”?
  2. Is “Toki Pona” Suitable for Narrative Scripting?
  3. Interface and Gameplay Possibilities
  4. Toki Sona Implementation Quandaries
  5. Dramatica and Narrative AI
  6. Relationship and Perception Modeling
  7. Evolution of Toki Sona to “tokawaje”


Minecraft’s capacity for simple, direct, consumer-level editing of the 3D world has brought about revolutions in game design and has joyously permeated industries across the world: education, design, architecture, and a whole host of other fields have felt its influence. Unfortunately, Minecraft’s revolution stops at tangible representations; while it is excellent at modeling geometry and therefore the creation of concrete forms, it is not so great at enabling the same for ideological or abstract creations. That would be a task for some sort of new narrative scripting language.

What if we could take those same properties that made Minecraft so successful, its simplicity, its capacity to empower laymen for creation, and bring it into the realm of narrative development? What if the complex realm of natural language processing could be simplified to an extreme and made interactive such that even children could create characters, worlds, stories, and watch a computer bring them to life, nurture their growth, and develop them in real time? This series aims to suggest possibilities for just such a future and the remaining hurdles that must be dealt with.

Virtualization of Character

Emerging on the horizon is our imminent mastery of the Turing test, devised by Alan Turing. In it, a programmed machine anonymously speaks with a series of judges, who vote on whether they are conversing with a human or a machine. The machine passes the test if the large majority of the judges are mistaken and unable to tell the difference between a regular human and the designed machine. A chief example of this test in action is the creation of software applications designed to communicate online via chat rooms and social media.

A community’s tribute to the Chinese chatbot Xiaoice

The development of these “chatbots” has advanced considerably. A teenage-girl persona known as Xiaoice (Shao-ice) garnered over 663,000 live conversations in merely 72 hours. People were relying on her for support, companionship, and advice. She in turn responds dynamically and realistically, yet with her own personality, moving beyond the one-way static communication of traditional media.

Children of the 90’s and later are learning to bond emotionally with virtual characters in unprecedented ways with things like virtual pets, the evolving Toys-to-Life industry, and the Japanese personified, performing voice programs called “Vocaloids” with massively popular YouTube music videos, canonical relationships, and live concerts in New York with cheering fans.

A few of the Vocaloid performers growing famous around the world.

As the popularity of virtual characters increases and their commercial appeal grows, consumers’ desire to share memories, experiences, and relationships with them will also grow. Yet the crafting of these experiences will remain restricted to the educated and practiced experts of writing and design historically responsible for creating such performances.

But what if this need not be the case, for games or any other sort of media? What if characters, and the stories surrounding them were just as editable as the blocks of Minecraft? What if they were “Minecraftian”?

What Does “Minecraftian” Actually Mean?

Minecraft: a video game that revolutionized the way people interact with 3D design and architecture. Fundamentally, it’s a video game that acts as a liberating force, unleashing people’s creativity and bringing to life the architectural and artistic wonders locked within one’s mind. And the medium of this transformation? Blocks. A limitless, vast world whose most basic element is a cube of space that can be directly interacted with by any average player.

A sampling of the blocks players may encounter.

There are a limited number of types of blocks. Some are for grass. Some are for wood. Others represent various types of stone or fluid. Some are static and some are animated. And as one becomes familiar with the basic types of blocks, they can then “mine” blocks for materials to be used in “crafting” new blocks. They can consume them, transform them, fuse them, add them back into the world, and in so doing directly edit every detail of their environment.

Take a second to imagine the power that this game presents. It isn’t very complicated; there simply aren’t too many types of blocks to remember. You never start out dealing with things you can’t handle. You steadily advance your knowledge of blocks as you play, learning to become proficient in your renderings. Your main activities involve mining or placing objects in the world and combining materials in menus. The more you interact with the world, the more you discover how to alter things and the greater your understanding of the relationships between blocks becomes. You soon get to a point where the path to realizing your vision forms in your mind effortlessly because you’ve mastered the simple mechanics ever so quickly. Soon a masterpiece stands before you and the journey there was far easier than you ever could have imagined.

“King’s Landing”, the capital city from the TV series Game of Thrones, built with blocks by a regular Minecraft player.

Some important things to consider about Minecraft’s block-building mechanic:

  1. It’s easy to learn and requires little overall knowledge. There aren’t so many different types of materials that you couldn’t memorize a reasonable set in a few dedicated hours of play.
  2. The basic editable element of the world is highly visual and interactive. The “resolution” of things is drastically reduced, facilitating comprehension and precision with a tangible granularity.
  3. Abstraction and broad concepts are your friends, not fine details. The limited degree of detail ensures that players need not become experts to render their creative portrayals, as everyone is on an equal footing, complexity-wise.
  4. The “language” of creation is fully computerized. The medium is just as prone to manipulation by algorithms as it is to the manual fine-tuning of a patient and determined player.
  5. The data is easily hackable, mod-friendly, and adaptable. Everything exists on a simple grid, for which we have already accumulated a vast array of algorithms and know-how. Tinkerers and entrepreneurs everywhere can easily experiment and devise new ways of interacting with the system.

The Difficulties of Language

So how can we take the mechanics of Minecraft and the transformation it made to 3D design and bring them to narrative, character creation, and world-building? First step: identify our most fundamental element, our “block.” Perhaps words? After all, words – and the concepts associated with them – are what make up thoughts and ideas, right? We should just have everyone who plays our game learn the words that we use in our game/tool. Well, that certainly appears to be the solution, but unfortunately it isn’t quite so simple.

If you’ve ever tried to learn a second language, then you know it can be an arduous task, especially the further you go from your native tongue. Taxing hurdles pile up one after another: the varieties of grammar and syntax, the vast amounts of vocabulary. It can take years before one is competent enough to use a new language with any sort of astute precision. Compounding the issue is the practical question of whether the language can be comprehended, interpreted, and responded to by a computer system in real time, from multiple sources.


If our goal is to use a language that all of our players can be expected to engage with, then relying on any language as complex as English is still a disservice to the players of other cultures. It also loses much of its potential to appeal to younger audiences whose language skills may still be developing. What’s more, if we wish to make it as simple as possible for an average player to edit language contents to communicate with narrative tools and mechanics in-game, English and other common languages are too convoluted.

What we really need, if we intend to apply a Minecraftian design to our linguistic mechanics, is a revolutionary scripting language: something with a limited vocabulary that’s easy to learn; a simple, granular syntax where you can easily pick out the parts of a sentence and the meaning therein; and a grammar a computer could fully understand at speeds efficient enough for vast amounts of real-time processing. It wouldn’t have to be terribly detailed; just enough information for us to get a general idea of what was meant. Ideally, the language would be highly visual and easily editable to appeal to children on the consumer end and hackers on the entrepreneur end.

In comes Toki Pona.

The entire Toki Pona language, minus compound words.

Toki Pona is an artificially designed language with a total of 120 basic words and particles and a syntax of a measly 10 rules. It can be learned to fluency in a matter of weeks. Words can be clearly depicted as combinations of hieroglyphs that visually indicate their meaning. The visual, minimalist approach of the language makes it highly accessible, adaptable, computational, universal, and overall plausible as a candidate for narrative editing, though further examination will be necessary to truly determine its utility for such a task.

If we simply 1) teach a computer to understand a Toki Pona-inspired language, 2) make interactions with such a system simple, intuitive, and visual, and 3) properly introduce the use of this visual language to players, we can devise gameplay mechanics that allow players to easily interact with and/or (re)define the narrative details of their world. In the next article, I’ll go into detail on how such a system might work and how it could be used for revolutionizing game design and narrative on a fundamental scale.
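To illustrate step 1, here’s a toy sketch of how trivially a computer could pick apart a Toki Pona-style sentence. The vocabulary glosses and the drastically reduced grammar here are my own simplification, not the full language:

```python
# Toy parser for a "subject li verb e object" sentence shape.
# "li" separates subject from predicate; "e" marks the direct object.
VOCAB = {"jan": "person", "moku": "eat/food", "kili": "fruit", "suli": "big"}

def parse(sentence: str) -> dict:
    """Split a Toki Pona-style sentence into labeled parts."""
    words = sentence.split()
    li = words.index("li")
    subject, rest = words[:li], words[li + 1:]
    if "e" in rest:
        e = rest.index("e")
        verb, obj = rest[:e], rest[e + 1:]
    else:
        verb, obj = rest, []
    return {"subject": subject, "verb": verb, "object": obj}

print(parse("jan li moku e kili"))
# {'subject': ['jan'], 'verb': ['moku'], 'object': ['kili']}
```

A grammar this regular parses in a single pass, which is exactly the property we need for real-time processing from multiple sources.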

Please let me know what you think in the comments. I’m eager to hear people’s thoughts on the topic!

Next Article: Is “Toki Pona” Suitable for Narrative Scripting?

Game Visions: Modeling Human Behavior and Awareness

Edit: This is now part of a series of posts exclusively about the development of a procedural narrative system.
Part 1: Game Visions: A Roleplay-Inspired Procedural Narrative System
Part 2: Game Visions: Modeling Human Behavior and Awareness

This article is part two of a series covering interactive procedural narrative. Previously, we covered some elements that could be involved in the development of a robust, open-ended middleware application that can procedurally generate the building blocks of a narrative: sequences of associations between narrative entities. You can read the original article here.

As a quick review, here are the relevant terms…

  • Narrative Entity: a Character, Item, Place, piece of Lore, AKA “thing” that is relevant to the narrative in some way.
  • Storyteller facade: decides how content is generated (including the preparation of in-game events). Acts like a dungeon master.
  • Agent facade: responsible for the decision-making of a willed Narrative Entity. Acts like a role-player.

My first thought was the simplest: model relationships as a graph, with nodes as narrative entities and the edges between them as relationships. However, I saw several issues with this: it’s missing information about interactions, about how narrative entities engage in them, and about how those interactions affect the entities’ relationships.

In search of a more informative model, I found myself inspired by game engines such as Unity and Unreal Engine 4. Each of them sports an entity-component system whereby all game objects are composed of concrete behaviors that define them and the “type” of an object is interpreted logically based on the combination of its behaviors.

Under this model, I can’t create a “dragon” directly. I can create a “scale-armored”, “flying”, “fire-breathing”, “animalistic” entity that “periodically attacks villages” which I simply label as a dragon. Each of those attributes can be added or removed whenever the system needs, allowing for highly fluid and flexible actors. Relationship modeling could be improved by incorporating this design.
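A minimal sketch of that idea (the names are mine; this mirrors the spirit of entity-component design, not Unity’s or Unreal’s actual API):

```python
class Entity:
    """A narrative entity is just a labeled bag of behavior components."""
    def __init__(self, name):
        self.name = name
        self.components = set()

    def add(self, *components):
        self.components.update(components)
        return self

    def has(self, *components):
        return self.components.issuperset(components)

# No "Dragon" class exists; the label is inferred from behaviors,
# and any behavior can be added or removed whenever the system needs.
beast = Entity("Smaug").add("scale-armored", "flying", "fire-breathing",
                            "animalistic", "attacks-villages")
is_dragon = beast.has("scale-armored", "flying", "fire-breathing")
print(is_dragon)  # True
```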

What We Need To Model

Assuming interactions involve an owning “source” entity and a targeted “destination” entity, we must model…

  • the entities that exist in a narrative.
  • the relationships they have to each other.
  • the role a given entity has in a given relationship.
  • the possible interactions that entities in a given relationship could engage in.
  • the probable interactions a given entity in a relationship would engage in.
  • the degree to which an owning entity’s interaction meets, exceeds, or fails to meet the target entity’s expectations of the relationship (which in turn implies the effect an interaction will have on the relationship).
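To ground the bullet points above, here’s a minimal sketch of the data such a model might track. The field names and the “surprise” scoring are my own assumptions, not a finished schema:

```python
from dataclasses import dataclass, field

@dataclass
class Interaction:
    source: str   # the owning entity
    target: str   # the targeted entity
    kind: str     # e.g. "gift", "insult"

@dataclass
class Relationship:
    roles: dict                                    # entity -> role, e.g. {"A": "father", "B": "son"}
    possible: list = field(default_factory=list)   # interactions that could occur
    expected: dict = field(default_factory=dict)   # kind -> expectation in [0, 1]
    history: list = field(default_factory=list)    # interactions that did occur

    def surprise(self, interaction: Interaction) -> float:
        """How far an interaction departs from expectations (0 = fully expected)."""
        return 1.0 - self.expected.get(interaction.kind, 0.0)

r = Relationship(roles={"A": "father", "B": "son"}, expected={"gift": 0.8})
print(r.surprise(Interaction("A", "B", "insult")))  # 1.0
```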

In addition, we want the same model to be a viable method of simulating non-interactive information such as an Agent’s awareness of its relationships to other entities. This will allow for a full social simulation, complete with the need for information-gathering and the presence of limited information, misinformation, and deception. As such, we must also model…

  • an Agent’s awareness of a narrative entity.
  • an Agent’s awareness of another Agent’s interactions.
  • an Agent’s awareness of others’ perceptions, i.e. “Do I know that you know?”

Finally, it is possible that entities can be associated with one another indirectly via space or time. It will be important for an Agent to be able to identify associations of this sort if it is to draw accurate conclusions from perceptions.

Proposed Model: Interactions

In order to handle the additional complexity of these connections, let’s imagine a new logical representation. Suppose a graph models interactive connections not as lines, but as spheres where the poles of each sphere exist at each pair of nodes’ locations. Each node represents a narrative entity. Interactions between narrative entities involve a directed connection from a source node (the owner) to a destination node (the target) that runs along the surface of a given sphere layer. Interactions can be logically associated with a relationship where each endpoint of the connection is logically associated with a role that matches the relationship (such as “father”-“son” or “master”-“apprentice”). Each role has expectations (anticipated interactions) associated with it. To develop the quality of the relationship, expected actions must be executed. The combination of all minor relationships between two narrative entities establishes the major relationship, represented by the sphere. You can picture an application that can group or color-code the arcs running along the sphere based on a suspected role or relationship perception.

Each sphere would have three layers and a core.

  • Core: the data permanently associated with the relationship. Includes a history of interactions.
  • Outer Layer: “Possible Interactions”
    • The Storyteller pulls from a database of crowd-supplied interactions (assumed to be large), filters them based on dramatic relevance to the relationship using the story’s history and the relationship’s Core, and populates this layer with a highly narrowed subset of interactions from all of the possibilities.
    • This layer ensures that the only decisions that will be available to an Agent are decisions that lead to an interaction with something dramatically relevant.
  • Middle Layer: “Expected Interactions”
    • The Storyteller populates this layer by filtering interactions in the Outer Layer based on level of expectation in the relationship.
    • Each interaction in this layer is associated with a float value from 0 to 1 indicating the degree to which it is expected.
  • Inner Layer: “Probable Interactions”
    • The Agent pulls from the Outer Layer and factors in the relationship’s Core and the traits/personality/goals/attributes of the Agent’s associated Narrative Entity (usually a Character) to determine how it wants to update the status of the relationship. The more its own goals can be furthered by maintaining a good relationship, the more likely it is to engage in actions associated with the relationship’s expectations (to preserve a good relationship). Likewise, if it does not care for the relationship, it may decide those interactions are a lower priority and act against them or disregard them entirely.
    • The “quality” of a relationship can be calculated as how close the history of interactions matches each party’s expectations of the other (the Core mapped against the Middle Layer).
    • The Inner Layer is used exclusively as a decision-making pool for future actions, and has no direct bearing on the Outer or Middle Layers.
    • The Agent chooses an interaction from this layer at random, promoting the variability of the system whilst still retaining the logical and relational consistency of the narrative.
    • The Agent’s selected interactions from this layer are recorded in the Core to be used in filtering future possible and expected interactions.
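Putting the three layers together, the flow from Storyteller filtering to Agent selection might look like this. The filter predicates here are toy placeholders for the dramatic-relevance and personality logic described above:

```python
import random

def outer_layer(all_interactions, is_dramatically_relevant):
    # Storyteller: narrow the crowd-supplied database to a relevant subset.
    return [i for i in all_interactions if is_dramatically_relevant(i)]

def middle_layer(outer, expectation_of):
    # Storyteller: attach a 0..1 expectation value to each interaction.
    return {i: expectation_of(i) for i in outer}

def inner_layer(outer, agent_inclination):
    # Agent: keep only interactions it is actually inclined toward.
    return [i for i in outer if agent_inclination(i) > 0.5]

def choose(inner, core_history):
    # Agent: pick at random from the probable pool; record it in the Core
    # so future possible/expected layers can be filtered against it.
    pick = random.choice(inner)
    core_history.append(pick)
    return pick

pool = ["gift", "insult", "quest", "gossip"]
outer = outer_layer(pool, lambda i: i != "gossip")
expected = middle_layer(outer, lambda i: 0.9 if i == "gift" else 0.1)
probable = inner_layer(outer, lambda i: 0.8 if i == "gift" else 0.2)
history = []
print(choose(probable, history))  # gift
```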

Interactions may also require dependencies on other interactions. Dependent interactions can be illustrated in this model as pole-to-pole segments of the sphere that are stacked on top of each other. In this example, lower-level interactions may need to be completed before higher-level ones; however, the degree of necessity may also be variable. In one case, it may be literally impossible for an interaction to take place before another (can’t happen), whereas in other cases an Agent may simply be disinclined to engage in an interaction without another occurring first (not likely to happen).

Proposed Model: Predictions

Because different types of interactions may be capable of fulfilling an expectation for a relationship, we can interpret an interaction as something involving inputs, outputs, and tags that help connect it to an expectation. Interactions would (perhaps optionally) require entities in a given state as inputs and would output a resulting (usually changed) state in one or more other entities. Relationships, and the roles composing them, are therefore functionally just an aggregation of the historical and expected interactions associated with them.

The logical assignment of roles and relationships assists in the modeling of Agent predictions. If a given Agent is to appear intelligent, it must be able to predict the actions of others. It would do so by analyzing perceived interactions and learning to associate them with given roles and relationships. The Agent would then use its native knowledge to examine its understanding of the expected interactions between entities with the given roles. Hopefully, it would be able to accurately identify the subsequent interactions of another Agent. These predictions would additionally be colored by the Agent’s knowledge of the target Agent’s personality / traits / past, etc.

For example, suppose Agent A executes the interaction of pushing Agent B out of the way of an oncoming car. This interaction could be tagged by users of our application with words like “protect”, “safeguard”, and “selfless”. One relationship archetype that someone could logically assign to this kind of interaction could be a Parent-Child relationship where Agent A has the role of Parent and Agent B that of Child. Assuming people have also matched tags to the Parent-Child relationship already, the Storyteller and other Agents may begin to predict the history and expectations of A’s and B’s relationship. Those predictions may prove wrong as additional information comes to light that conflicts with that logically-assigned relationship, or future information could enhance the probability that it is an accurate assessment. Agent C, observing the interaction, could then form predictions of A’s future interactions based on the probability that it will engage in behavior that falls in line with the supposed role’s expected future actions.
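That tag-matching logic could be sketched like this (the role archetype table and the overlap scoring are illustrative guesses, not a real crowd-sourced database):

```python
# Hypothetical crowd-tagged role archetypes: role pair -> expected tags.
ROLE_TAGS = {("Parent", "Child"): {"protect", "safeguard", "nurture"}}

def predict_relationship(interaction_tags):
    """Score each role archetype by tag overlap with an observed interaction."""
    scores = {}
    for roles, tags in ROLE_TAGS.items():
        scores[roles] = len(interaction_tags & tags) / len(tags)
    return max(scores, key=scores.get), scores

# Tags users attached to "pushed B out of the way of an oncoming car":
observed = {"protect", "safeguard", "selfless"}
best, scores = predict_relationship(observed)
print(best)  # ('Parent', 'Child')
```

As more interactions are observed, the score for a suspected relationship would rise or fall, mirroring how predictions in the example above are confirmed or contradicted by new information.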

Proposed Model: Perceptions

Perceptions can be included in the model as straight-line connections from a narrative entity to either another narrative entity, an interaction between two narrative entities, or another perception line. In this way, we can indicate a narrative entity’s awareness of other entities, their interactions/potential relationships, and an observed entity’s perception of others.

It is also important that we stack inter-perception connections around five layers deep. Mind games are an important aspect of simulating realistic human interactions. I’ll demonstrate this point with an example:

  1. There exists a secret, a type of Lore, seeing as it’s factual information in the narrative universe. I know that secret (a perception line from me, the Agent, to the other narrative entity, the secret). I may now take actions that others wouldn’t, since they aren’t privy to the information I have.
  2. My enemy, Agent B, is aware that I know the secret. This may prompt him to try and pry the information from me.
  3. My reaction to B’s attempts may be different based on whether or not I am aware he knows I hold the secret. If I know he knows I know, then I may be more cautious of handing out any related information if I don’t want to risk him learning the secret.
  4. If B in turn is aware that I’m on to him, he may change his tactics in attempting to acquire the information he seeks.
  5. I may also be self-aware enough that I realize my awareness of him could spook him, leading me to take actions that assume he might change tactics or come at me more directly, etc.

The degree to which an Agent plays these mind games could just be a function of the Agent’s insight (accuracy of predictions) and perceptiveness (breadth/depth of environmental understanding), attributes that are variable from Agent to Agent.
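The mind-game layers above can be sketched as nested perception records, where depth counts how many “I know that you know” levels are stacked (the names are my own):

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Perception:
    observer: str
    observed: Union[str, "Perception"]  # an entity, or another perception

    def depth(self) -> int:
        # How many "I know that you know" levels are stacked.
        if isinstance(self.observed, Perception):
            return 1 + self.observed.depth()
        return 1

p1 = Perception("A", "secret")  # 1. A knows the secret
p2 = Perception("B", p1)        # 2. B knows that A knows
p3 = Perception("A", p2)        # 3. A knows that B knows that A knows
print(p3.depth())  # 3
```

Capping the recursion at roughly five levels keeps the simulation tractable while still covering the tactics in the example.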

For the last category of perceptions, the indirect associations between objects, we must ensure that any perception of a narrative entity or interaction is mapped within a temporal-spatial rendition of reality unique to each Agent. That is, an individual Agent must be aware of where and when perceptions were encountered in order to be able to tie together connections between seemingly unrelated elements of their reality.

For Agents to have a realistic sense of memory, each one will need a “recall” coefficient associated with its perceptions that is a function of…

  • how long it has been since the Agent perceived it.
  • how strongly associated the Agent believes the perceived entity is to the Agent’s interests.
    • If the Agent’s simulation of others’ interactions leads it to believe another Agent is highly relevant to an object of its concern (regardless of whether the target Agent actually is relevant), that should affect how accurately the Agent perceives and/or recalls details associated with that target Agent. Consider the example of an obsessed detective tracking the subtle details of an old cold-case suspect.
  • a generalized recognition attribute associated with the Agent directly (how good are they at remembering things in general?).
  • (optionally) how close the perceived entity was to an Agent’s focal point of attention.
  • (optionally) what method the Agent used to perceive it (sight, hearing, etc.)
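To make this concrete, here is one way the factors above might combine into a recall coefficient in [0, 1]. The exponential decay and the multiplicative blend are my own assumptions; only the inputs come from the list above:

```python
import math

def recall(elapsed, interest, memory, focus=1.0, modality=1.0, half_life=30.0):
    """Recall coefficient in [0, 1].
    elapsed: time since the perception (same units as half_life);
    interest, memory, focus, modality: factors in [0, 1]."""
    decay = math.exp(-elapsed * math.log(2) / half_life)  # time fading
    return decay * interest * memory * focus * modality

fresh = recall(elapsed=0, interest=0.9, memory=0.8)
stale = recall(elapsed=120, interest=0.9, memory=0.8)
print(fresh > stale)  # True
```

A multiplicative blend means any single factor at zero (no interest, no memory) wipes out recall entirely, which seems like a reasonable first approximation.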


Any model we use to simulate human relationships will best be served by analyzing the types of interactions that exist between humans and the potential effects of those interactions on relationships. It will also need to accurately simulate an Agent’s perception of its environment and the Agent’s formulation of predictions both regarding static surroundings and the behavior of other Agents.

The model I have proposed incorporates a combination of tri-layered sphere-connections to model interactions and traditional line-connections to model perceptions. Relationships and roles are assigned purely through logical interpretations of interactions, providing a highly robust and flexible presentation of the data linking narrative entities.

Hope you found this model useful and/or illuminating. Comments and suggestions are, as always, gladly accepted.

Game Visions: Ultrahaptics & the AR/VR Sensory Evolution

Developers will soon be able to write applications that enable a brain to learn custom information about any exposed perceptions, both real and virtual. Powering this functionality is a combination of technologies centered around the advent of Ultrahaptics and David Eagleman’s VEST concept. Ultrahaptics is a UK-based startup focused on using ultrasound technology to generate in-air tactile sensations. The VEST is a wearable technology that uses vibrations to physically feed digital information to one’s brain. Together, these technologies will revolutionize human-computer interactions completely.

We humans digitized physical data. We then integrated that digital data into almost every type of physical device. Now we are starting to introduce our digital products as physical elements of the world using holograms with technology such as Microsoft’s Hololens.

The next step is for information itself to become a physical entity by applying the motivations behind Eagleman’s VEST: feeding real-time data streams into interpreted sensory streams. Machine learning and data mining algorithms can nowadays derive valuable information from massive amounts of aggregate data. Computing these loads rapidly requires massive computing power, something not readily available on today’s smartphones. Many are shifting this load onto the Cloud, but another novel platform is available. After all, each of us travels around with a free supercomputer in our head. Our brains can easily learn to use any information we feed them. All it takes is the right delivery system.

The concept:

Eagleman’s prototype device:

What this really means is that any kind of blanket sensory information that could otherwise be interpreted by a machine learning algorithm can in fact be “learned” by our own bodies. Custom data feeds to pour into the sensory stream are all that is needed, and the sensory information itself need not be limited to vibrations.

Here’s an example of how people can gather data from video.

The sensory “streams” can be converted into other types of “streams” as needed, just like other types of computer data.

All manner of mixed reality technologies could facilitate the provision of custom sensory information. If people intend to “train” themselves in how to use a custom sense, it is likely that future applications will involve simulated environments that people interact with to practice a given sense, i.e. gamified training software.

The applications of this technology are more “limitless” than even the AR/VR excitement of 2016. While the technology is still a ways down the road, people should keep an eye out for the rise of tactile-feedback hardware, if only for the ramifications it has for bringing about a sensory evolution.


Game Visions: A Roleplay-Inspired Procedural Narrative System

Edit: This is now part of a series of posts exclusively about the development of a procedural narrative system.
Part 1: Game Visions: A Roleplay-Inspired Procedural Narrative System
Part 2: Game Visions: Modeling Human Behavior and Awareness

The game industry has come a long way exploring narrative systems in recent decades. The crown jewel, procedural narrative, however, has not yet been fully attained. Procedural narratives are generated on-the-spot by advanced algorithms, allowing for a level of content ordinarily impossible to produce. Several conversations are now taking place about how one might go about designing these narrative systems. After reviewing their contributions, I will present my own suggestion for an implementable solution.

Previous Contributors: Research

Ruth Aylett, Sandy Louchart, and Allan Weallans published an essay in 2011 called “Research in Interactive Drama Environments, Role-play, and Story-telling”. It summarizes attempts thus far at simulating procedural narrative and the problems therein:

  • Procedural narrative requires a structured, dramatic sequence of behaviors to render entertaining experiences whilst simultaneously requiring autonomous AI characters that act believably human, but without regard to whether they make dramatic, “story-relevant” decisions.
  • Procedural narrative requires an abundance of hard assets to present generated content to users (written or recorded lines of dialogue, art, animations, etc.)
  • Creating these experimental systems requires an interdisciplinary team of people to build the AI logic, compose the narrative, design art and animations, etc. Such combinations are typically found only in large companies, which can’t afford to gamble their investments on risky products.
  • Current suggestions for content creation include…
    • Building tools for users to create limited content for a given game.
    • Relying on machine learning to derive structured content from analysis of existing works.

Nathanael Chambers and Dan Jurafsky have demonstrated machine learning’s effectiveness at narrative induction (identifying a narrative’s protagonist, narrative history, & possible holes or extensions to the narrative) in their essay Unsupervised Learning of Narrative Event Chains.

Previous Contributors: Games

Ken Levine (@IGLevine on Twitter), writer for Bioshock and Bioshock Infinite, suggested the idea of “Narrative Legos” in his talk at the Game Developer Conference of 2014 (full video below). In it, he discussed his ideas for how to implement autonomous actors in a simulated narrative:

  • Characters are defined by an owned number of “Passions”, which…
    • represent character desires that determine AI behavior.
    • can be known, hidden, and/or discoverable.
    • are pulled from a procedural pool of potential Passions.
    • each consist of a linear spectrum of support or criticism of the player character’s actions.
    • include reward/punishment tiers at given levels of support/criticism.
  • The passion system is meant to generate zero-sum environments where the player must lose support (gain criticism) from one character to gain support from another.
  • In-game quests are either…
    • scripted to result from character passions (“You’ve pissed me off for the last time. Let’s fight!”). Quest = Fight
    • introduced via scripted narrative, but then impact a set of characters’ passions (“Zombies are invading. You’ve killed them, so I’d like to offer you my services.”). Quest = Zombies
  • Different characters will represent different factions.
    • Character A vs. Character B (Individual)
    • Elves vs. Dwarves vs. Orcs (Civilization | Group)
    • Servant of Chaos vs. Servant of Order (Ideology)
  • Passions are highly modular and integrated; therefore, DLC can be added “into” the game rather than “onto” the end of it.
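For clarity, here’s how I picture Levine’s Passion spectrum in code. The tier thresholds and the zero-sum coupling are illustrative guesses based on his talk, not his implementation:

```python
class Passion:
    def __init__(self, name, value=0.0):
        self.name = name
        self.value = value  # -1.0 (full criticism) .. +1.0 (full support)

    def shift(self, delta):
        # Clamp movement to the linear spectrum.
        self.value = max(-1.0, min(1.0, self.value + delta))

    def tier(self):
        # Reward/punishment tiers at given levels of support/criticism.
        if self.value >= 0.5:
            return "reward"
        if self.value <= -0.5:
            return "punishment"
        return "neutral"

# Zero-sum: gaining support from one faction costs support with its rival.
elves, orcs = Passion("elves"), Passion("orcs")
elves.shift(+0.6)
orcs.shift(-0.6)
print(elves.tier(), orcs.tier())  # reward punishment
```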

In response to this, Krystian Majewski (@krystman on Twitter) wrote an article on his blog Game Design Scrapbook called “Stepped on a Narrative Lego”. In it, he noted things he saw as problems with Levine’s design:

  • Modeling characters’ behavior on a linear spectrum renders human memory nonexistent. Lost favor can be regained through positive actions, regardless of the crime committed.
  • The reward-tier emphasis of the Passion system and its zero-sum game concept encourages a dynamic in which players “game” the narrative system purely to acquire in-game benefits. This trivializes narrative relationships and renders human emotion nonexistent.
  • One’s actions shouldn’t automatically generate favor in others: a player’s existing social relationship (or lack thereof) with a character should influence the effect of the player’s actions. “Buying doughnuts and coffee for your colleagues at the office is a way to live out that relationship [while doing so] for the anonymous cashier at the supermarket is weird, creepy, and inappropriate.”
  • Information simulation would be an important component of such a system: leveraging / manipulating misinformation, deception, and limited knowledge to determine actionable character options.
  • It isn’t interesting for characters to only react based on player actions. Characters should develop alongside the actions of other NPCs, reacting to and developing them in turn. This is far more interesting than the player’s interactions with the same characters.
  • Rather than attempting to tackle an interactive system right off the bat, developers should first prioritize developing a solitary environment to simplify the testing of the narrative system.
  • Any narrative system that functions will need to rely on…
    • functional human emotion (they exhibit and elicit emotional reactions and are relatable / believable).
    • simulation of social relationships and social histories (do/do not recognize you or your previous actions associated with them)

Developer Twisted Tree Games (@atwistedtree on Twitter) is hard at work on an indie game called Forest of Sleep that relies on pure imagery to communicate a procedurally generated story to players. It does not use explicit lines of dialogue, instead relying on “emergent narrative” whereby players craft the story with their own imaginations.

My Own Thoughts

Just as Majewski believes our primary focus should first be on crafting a non-interactive system, I believe it would be to the advantage of the game development community at large to first focus on designing a robust, free and open-source middleware application that could be integrated into a variety of games to supply narrative content and manage character data.

Consequently, the software would need to focus less on producing direct game assets and more on identifying concepts, relationships, and actions in the abstract sense; that way, developers would be able to match the character behavioral predictions and world scene-descriptions to whatever type of concrete asset they prefer. Developers from any background could then adapt the software to almost any kind of gameplay system.

Incorporating all of the above research, I propose a number of suggestions regarding design concepts that should bolster the fidelity of any product crafted towards the purpose of narrative generation.

Idea: Role-play

The most powerful form of interactive, narrative-infused, dynamic content generation is the table-top role-playing game. A storyteller describes a world while players collaborate to design creative solutions to the problems presented to them.

  • Players are limited only by their imagination, ingenuity, character abilities, and available technology.
  • The storyteller produces most content in-place: dialogue, scene descriptions, lore-crafting, etc.
  • The storyteller manages story tension behind-the-scenes.
    • Carefully designs suspenseful engagements and problems.
    • Ensures that players are able to survive to be proud of their achievements or review what went wrong in failure.

If we are to develop a procedural system inspired by roleplay, then we should have a similar distribution of responsibilities: a Storyteller fulfilling the role of the world, and the Agents who participate in that world independently of the Storyteller’s influence.

A Storyteller facade would largely be responsible for moving the narrative towards a targeted plot structure.

  • Full power to craft new content that will make progress towards the targeted structure.
  • Limited power to mutate the properties of existing content. In essence, the storyteller should be free to change any element of the story or the characters so long as unmodified characters are unable to perceive a difference in the resulting story-space/timeline (which implies new content is compatible with the relational requirements of the previous content).
    • Example: if it has been indicated to a plot-relevant character that a monarch has an heir, then assuming there is a dramatic reason for doing so, the storyteller can change the heir from being a boy to a girl if the gender of the character has yet to be published.
    • The Storyteller may need to have even more severe limitations over characters that are, in fact, controlled by live players.

An Agent facade would be responsible for…

  • gathering observations in a supplied environment (what can this character perceive as opposed to others?)
  • deriving a list of possible actions to take based on narrative relevance/personality/skillset/perceived technology/etc.
    • The narrative relevance bit would be the manner in which the Storyteller might have some measure of influence over a given character.
    • The Storyteller might also indirectly manipulate Agents by manipulating access to technologies/abilities/skills in an environment.
  • making a decision about what to do based on the character motivations and priority of relationships.
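The Agent facade’s observe, derive, decide loop might be sketched like this (the callback names and the weighting are assumptions, not a specification):

```python
# Sketch of one Agent facade step: observe -> derive options -> decide.
def agent_step(agent, environment):
    # 1. Gather observations: only what THIS character can perceive.
    observations = [o for o in environment if agent["can_perceive"](o)]

    # 2. Derive possible actions, weighted by narrative relevance
    #    (the Storyteller's lever) and by the character's personality.
    options = []
    for obs in observations:
        for action in agent["actions_for"](obs):
            weight = (agent["narrative_relevance"](action)
                      * agent["personality_fit"](action))
            options.append((weight, action))

    # 3. Decide: take the highest-weighted action, if any.
    return max(options)[1] if options else None

guard = {
    "can_perceive": lambda o: o != "hidden passage",
    "actions_for": lambda o: ["inspect " + o],
    "narrative_relevance": lambda a: 1.0,
    "personality_fit": lambda a: 0.5,
}
print(agent_step(guard, ["door", "hidden passage"]))  # inspect door
```

Note that the Storyteller influences the outcome only through the relevance weighting and through what the environment exposes, matching the indirect manipulation described above.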

Idea: Machine Learning

As suggested by Aylett et al., machine learning algorithms have the potential to drastically change the way game narrative is produced.

We could design algorithms to help…

  • AI Agents…
    • determine the plot-relevance of a potential action.
    • learn to make decisions that players perceive as human-like rather than random.
  • a Storyteller learn how to…
    • determine what constraints the environment should place on characters.
      • Example of a social constraint: generate an unsuspecting bad guy nearby to limit what the player can visibly do, creating tension.
      • Example of a technological constraint: add the detail that someone has tampered with a needed tool, creating tension. Perhaps accompanied by a perpetrator (a compatible, existing character or a manifested one).
    • determine what types of potential actions make sense in a given environment.
    • determine what new characters/items/places to introduce that are likely to pull characters towards a dramatic sequence.

Any solution we come up with should rely on a computer’s ability to improve itself over time, to make progress towards an ideal narrative portrayal indistinguishable from that of a human being.
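To make the "plot-relevance of a potential action" idea concrete, here is a toy sketch of a learner that improves with feedback. This is not an algorithm proposed by Aylett et al., just a hypothetical perceptron-style scorer; the feature names and training data are invented for illustration.

```python
def train_relevance_scorer(examples, epochs=200, lr=0.1):
    """Toy perceptron that learns which action features predict plot-relevance.
    `examples` is a list of (feature_dict, relevant_bool) pairs."""
    weights = {}
    for _ in range(epochs):
        for features, relevant in examples:
            pred = sum(weights.get(f, 0.0) * v for f, v in features.items()) > 0
            if pred != relevant:
                # Nudge weights toward (or away from) the misclassified features.
                sign = 1 if relevant else -1
                for f, v in features.items():
                    weights[f] = weights.get(f, 0.0) + sign * lr * v
    return weights

def is_relevant(weights, features):
    """Score a candidate action against the learned weights."""
    return sum(weights.get(f, 0.0) * v for f, v in features.items()) > 0
```

A real system would use far richer features and a proper ML framework, but the shape is the same: the Storyteller or Agent feeds back which actions turned out to be dramatically useful, and the scorer improves over time.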

Idea: Theatricality, Literature, and Sociology

As software developers begin putting together narrative systems, it is important that they keep writers and sociologists close by as expert consultants during the process.

Writers will be able to provide much-needed insight into…

  • strong forms of plot structure such as nested loops, in medias res, and the hero’s journey.
  • various types of narrative patterns, themes, and motifs (which a Storyteller could dynamically draw upon to generate content).

Likewise, sociologists can help to derive the types of variables needed (esp. for ML algorithms) to simulate realistic interactions between characters, including…

  • how to model relationships.
  • the influence different types of relationships have over different people’s decision-making.
  • the ways in which a given type of interaction affects the Agents involved.

Idea: Crowdsourcing

In order for developers to properly accumulate the data for a large-scale procedural narrative system, a public database of associations between concepts, relationships, and interactions will likely be necessary. To build these associations, it is best if we leverage the input of the global community.

We might have a given concept, like a “king”, that requires us to know about what many kings are like and what kinds of actions they are likely to engage in. We can add an interaction with the concept, such as “kill king” which may generate different related events based on how the king was killed, etc. Finally, by adding a relationship such as “vassal kills king”, we can identify what emotions are typically evoked and by whom, helping us determine how agents might respond to a vassal killing a king, etc.
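The king/vassal example above suggests a simple data model for crowdsourced associations. The sketch below is purely illustrative: the class name, vote-counting scheme, and triple of (relationship, interaction, concept) are assumptions about how such a database might be organized.

```python
from collections import defaultdict

class AssociationDB:
    """Crowdsourced associations: concepts, the interactions that apply to
    them, and the emotions a relationship-qualified interaction evokes."""
    def __init__(self):
        self.interactions = defaultdict(set)  # concept -> known interactions
        # (relationship, interaction, concept) -> {emotion: vote count}
        self.evocations = defaultdict(lambda: defaultdict(int))

    def add_interaction(self, concept, interaction):
        self.interactions[concept].add(interaction)

    def vote_emotion(self, relationship, interaction, concept, emotion):
        # One community member's contribution to the association.
        self.evocations[(relationship, interaction, concept)][emotion] += 1

    def typical_emotion(self, relationship, interaction, concept):
        # The consensus emotion, which agents could use to decide reactions.
        votes = self.evocations[(relationship, interaction, concept)]
        return max(votes, key=votes.get) if votes else None
```

Aggregating many small contributions like this is exactly what makes the public's involvement valuable: no single studio could enumerate the associations by hand.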

The vast number of possibilities for these logical associations makes compiling a database of this sort a futile mission for any single studio. With the public’s involvement, however, its creation is plausible.

A free side-benefit would be straightforward localization of the software to other parts of the world. If communities contribute narrative associations in their own languages, then the content produced by a middleware application, namely word-based descriptions of characters, items, places, interactions, and events, would already exist in the native tongue as needed.


The gaming community, and the indie community in particular, could benefit greatly from an open-source, large-scale, & robust solution to producing narrative content. To that end, we should pull from a variety of disciplines pertinent to writing realistic narrative in the first place, develop smart algorithms that can simulate narrative non-player agents, and work with our community to build the system.

Comments, criticisms, and suggestions are encouraged of course. As always, I look forward to future conversations with everyone.

Rhetorical Game Narrative


Edit: Courtesy of Tim Carter, I have added a section on authorship. Thanks Tim!


Games are brimming with narrative potential and we have done an adequate job of exploring narrative development within games. However, if we are to design games that have rhetorical value, that are designed from the ground up to persuade players of a given ideal, then we may want to re-examine our games’ content. How exactly should we maximize the influence of our games?

Why do this?

While games today invest far more heavily in narrative than those of previous decades, the field is still in an early stage of its lifetime. Only recently have studios like Bioware, Telltale Games, and Quantic Dream appeared, dedicated to crafting experiences filled with emotion and strong characters.

Art is about communicating with an audience, and these types of games are no different. When we begin to throw religion and faith into the mix, however, we all of a sudden run into a relative lack of narrative exploration. Mark Filipowich’s article on faith in games presents the problem quite nicely…

“Games don’t have anything to say about the purpose of existence or where humans fit in the universe. The existential questions and answers posed by religion and belief rarely make their way into games. Gods and churches are just obstacles or assets.”
– Mark Filipowich, Machine Gods: Religion in Games

Philosophical and ethical considerations are a necessary point of discussion in games if the medium is to evolve, but such an evolution requires a firm understanding of how best to make use of games’ core quality: interactivity. Assuming one wishes to craft an interactive experience of a rhetorical nature (that is, one which attempts to affect the players’ beliefs), then an analysis of how best to deliver such an experience is in order.

Note: The following types of games are beyond the scope of this article:

  • Games with no rhetorical narrative:
    • The narrative elements are more of a utilitarian nature and simply supply the context for the gameplay.
    • Asteroids, Super Meat Boy
  • Games with narrative of a primarily cinematic nature:
    • These games have no intention of relying on interactive content to deliver the narrative. This article therefore has no bearing on assisting their development.
    • Final Fantasy XIII

The Fundamental Principle:
It’s not just “Show, don’t tell.” It’s also sometimes “Do, don’t show.”

Everything that follows is more or less a manifestation of the following two-fold motivation:

  1. Allow the player the freedom to invest themselves in exploring an interactive world, and…
  2. Craft your world in such a way that the player’s interactions sculpt their perception of that world, instilling themes and philosophy by experiencing them in your virtual reality.

Don’t: Subvert the player’s sense of control.
Do: Deliver limited cinematic perspective.

Cinematics are about creators presenting something to the player. Games are about players discovering what the creators want to show them.

Cinematics seize interactive control, delivering a dramatic, yet passive experience. Games grant interactive freedom, delivering a dramatic and active experience.

Cinematics most definitely have their place in games, but due to their interruption of gameplay, are best used as pacing tools and gameplay transitions.

The Last of Us occasionally uses an innovative technique that hybridizes these intents: it allows the player to hold down or tap a button to toggle or trigger a cinematic camera view or action whenever something of interest is happening in the scene. Things you can do with this…

  • Interesting event? Hold down Y for a zoomed in, closer look.
    • Could be adapted to even trigger nearby characters to comment on the scene.
  • Nearby NPC invoking some plot-significant dialogue? Tap Y to toggle the camera to always keep the character in focus / the center of attention.
    • Player is still free to walk around, inspect the area, or even deactivate the camera focus should they want to.
  • An item of note in the scene with purely gameplay significance? Have a character comment on it, and allow the player to optionally have the camera home in on its location to help them find it.

Check out this masterwork of a scene from The Last Of Us (seriously, take notes). In particular, track the following…

  • suspenseful buildup.
  • NPC-assisted player direction.
  • the player directly triggering the progression of the narrative.
  • the story camera transitions that connect gameplay and cutscene.

Like The Last of Us, Mass Effect 3 also makes clever use of cinematic snippets that lean in from the player perspective and then lean back out. If brief enough, these are an elegant means of delivering cinematic effects without losing player engagement. Here’s an example:

Another terrific example where the camera view actually tracks the player’s avatar throughout the narrative sequence:

As narrative designers, we need to find these kinds of creative solutions to deliver cinematic elements without compromising player immersion.

Don’t: Believe player dialogue is everything.
Do: Rely on your environment to tell the story.

Part of a game’s strong point is the fact that players interact with a crafted environment: the level designer and/or narrative designer will explicitly arrange the details of the environment to be optimal in some regard.

This optimal design is frequently reserved for gameplay functionality, but is sometimes used to enhance the narrative through world-building elements instead. Narrative enhancements improve the sense of presence and immersion games depend on to evoke emotions in players. As such, they are of critical importance to our pathos arguments.

Consider the impressive narrative feats accomplished by games that have no interactive dialogue whatsoever. Journey, Gone Home, & Everybody’s Gone to Rapture are each highly praised for their narrative exploits, yet the bulk of the story is accumulated by exploring an environment, witnessing events in the world, and interacting with elements that merely inform the player’s understanding of past events.

Virtually any aspect of the game world can be leveraged to deliver narrative. The best case scenario is when you can devise world-interactions that not only teach the player about the world or evoke an emotion, but also teach them about a critical gameplay mechanic.

Chris Winters illustrates a great example in his discussion of Portal:

“[Valve] successfully managed to tie player emotions to an inanimate object: the Weighted Companion Cube. Gamers had to carry the Cube from room to room only to be told later on to incinerate it, which in turn spawned a slew of fan-made Weighted Companion Cube tributes and, later on, Valve’s very own plush toy…And the Weighted Companion Cube was there for a very important reason, fan obsession notwithstanding. With its demise, it taught the player a new gameplay mechanic that was instrumental in the final boss battle with GLaDOS.”

Mind you, this is just a random block the player interacts with that has a heart on it. It’s not a character or a fancy gameplay object. There are plenty of functionally identical blocks the player encounters. And yet, it holds a significant influence over the narrative of the game. Portal just wouldn’t have been the same without the companion cube.

Don’t: Attempt to tell the player what to think.
Do: Ask the player questions.

People predominantly play games to be entertained (edutainment notwithstanding). To craft a game for a traditional audience that argues for particular ideals, the key is to expand the player’s mind.

When people are playing for fun, they don’t want to be lectured to. They don’t play to hear someone argue a given perspective.

Interactions are a game’s strong suit. Therefore, the game should rely on interactions to persuade people. Interactions involve us presenting a situation and having the player respond.

Emphasizing interactions means our goal is to craft a world that begs for a response from the player. We need to ask the player questions.

What questions should we ask? Clearly, they should be questions that make players think twice about significant issues.

Keep in mind, the priority should never be to outright force the player to believe in our goal: merely to 1) break down their biases and resistances so that they 2) remain open-minded when exposed to our ideas.

What sorts of questions are useful? Why, quite a large variety in fact:

  • Questions regarding the subject of our rhetorical goal.
    • We can manipulate the narrative result of these decisions to suggest the results of player actions.
    • Be careful not to make these “narrative results” correspond to the success of the game. If a given choice is better for the player’s gameplay by default, the player will feel as if the game is trying to “make them” agree with it (read my article on Narrative-Gameplay Dissonance if interested in this topic).
  • Questions regarding related subjects that ease the player into the main topic in the first place.
    • Gotta start somewhere.
  • Questions that provide the player with doubt concerning the validity of common preconceptions or misconceptions they may believe in.
    • This is where breaking down resistances is key.
  • Questions that explore multiple perspectives concerning the main topic.
    • Different politics / cultures / religions / layers of society may examine the topic on different terms, in different contexts.
  • Questions that explore multiple facets of the theme itself.
    • What are the consequences or implications of our proposed idea? What does it mean if we are right? Wrong?

Also understand that these are not necessarily literal questions in dialogue. We are equating questions with interactions, as the player’s input is a response to a question we have posed. Therefore, gameplay tasks we give the player are, taken in narrative context, a form of questioning them.

Don’t: Make your point just through story.
Do: Evoke player emotion via gameplay dynamics and relatable characters.

An average narrative design includes a story alongside the gameplay to promote a given theme.

An exquisite narrative design builds the story and gameplay within one another from the beginning, integrating the two as a cohesive, reciprocating harmony.

An average narrative design includes characters that inform the player of a crucial problem they must solve and what they must do gameplay-wise to fix that problem.

An exquisite narrative design includes characters with their own goals, motivations, histories, and relations to others, and who become equally as aware of problems as the player; these characters in turn have their own evaluations of whether “it” is a problem, how “it” should be handled, and how “it” will affect themselves and others. Players then “answer” these gameplay questions by evaluating character relationships and the narrative consequences of the options before them.

Pathos-based arguments are our primary goal. Logos is useful too, usually delivered via character conversations. Ethos is also useful, delivered via our reputation/credibility (or our game characters’? Untested, but an intriguing idea…). However, the strongest weapon in a game’s arsenal lies in its ability to deliver enhanced pathos through a strong sense of presence.

Let’s do an example. We want to convince people that animals should be perceived on equal terms with humans. We wanna use games because we think we can make a strong appeal with that medium.

Here’s a terrible elevator pitch: “It’s an RPG where you solve puzzles and fight animal thieves to protect local wildlife who are under your care.” We’ve got some gameplay (puzzles and combat), and we’ve got a narrative (caring for animals). That’s pretty much it. It sounds like crap, and nobody would ever invest in the title (let alone play it).

Instead, let’s rely on a mechanic that delivers engaging dynamics intertwined with character relationships: “In an adventure game leveraging deep relationships akin to Mass Effect 2, a shy girl without any friends finds herself connecting to others in mysterious ways when she suddenly understands the speech of the neighborhood wildlife.”

Right off the bat, we know the player has an interesting narrative mechanic: they will need to interact with animals in order to develop relationships with the people in the story. This guarantees that the player will in turn develop relationships with the animals as well, something they may not have initially cared about narrative-wise, but are interested in gameplay-wise. Regardless of whether they care about animals, many people will care about two things: 1) trying to make friends in an uncomfortable environment is something many can relate to, and 2) the novelty of the puzzle-solving interactions. Together, these may draw in an audience.

The second step is to make clear that the player’s interactions with the wildlife will be personal. Not “I can ask a cat to spy on a hidden conversation for me”, but rather, “I can ask Mr. Tibbles to spy on a hidden conversation for me. He’s quite interested in playing the role of a spy, so I’ll play along with his fantasy.” The deeper narrative context has several opportunities associated with it.

  • The interaction may initially just be utilitarian: the player could just be using the character. But the result is that the player knows Mr. Tibbles trusts her to share in his fantasy. Mr. Tibbles now has expectations of the player. He is now “real” in that the player must consider his perspective.
  • The next time the player has the option of supporting Mr. Tibbles’ fantasy, there are several types of responses available, several “answers” to our question for the player.
    • Will they appeal to those expectations and continue to build his fantasy? Or will they squander the expectations and tell the cat that he’s just a cat? There are plenty more as well.
    • A question such as this may have little to do with whether the player is directly comparing the animal to a human, but we are at the very least having them treat the cat similarly in an elevated context (making progress towards our goal): considering a deep philosophical question in regards to Mr. Tibbles, the cat.
  • Subsequent interactions should lead to the player forming a bond with Mr. Tibbles (this relies on having interesting and relatable characters). The player cares about Mr. Tibbles’ relationship to them.
  • The player’s original goal was to connect with people and develop friends. Assuming the player has worked with Mr. Tibbles to help a classmate named Jenna from afar, we can later present the player with our primary question: Without the ability to do both, the player can choose to maintain their relationship with Mr. Tibbles or start a new relationship with Jenna in line with the original player goal.
    • Our design wouldn’t punish the player for the choice they make, but would simply acknowledge their choice and highlight the effect it has on the characters involved.
    • It wouldn’t be your job to make the player choose whom to favor. Instead, your job would be to make this a difficult and heart-wrenching decision: for them to consider Mr. Tibbles on equivalent terms with the human character, to recognize both as equally valuable relationships and members of the community.

By nature of your gameplay dynamics and emotional, relatable characters, you can manipulate your quests and narrative interactions to achieve your rhetorical goal effectively.

Don’t: Overlook the value of the people.
Do: Provide recognition for creators.

While not directly pertaining to game design, it is also important for narrative-focused games to highlight not just game designers, but also the other leads responsible for delivering the necessary immersion: writers, composers, lead artists, etc. Illuminating authorship matters both to gamers who would like to know the people responsible for crafting their cherished stories and to the creators who work so hard.

A rhetorical reason for doing so is that it increases the likelihood that gamers, and not just people in the industry, will begin identifying the work of particular individuals, bringing the power of ethos more fully into play in the realm of marketing games. In the end, it’s better for gamers’ experience, creators’ careers, designers’ rhetorical goals, and the industry’s future.


Games have the potential to host powerful conversations within society, and they are most influential when they leverage their interactive nature. We must therefore learn to focus our efforts on highly immersive interactions with narrative systems if we are to hone the art of narrative design. Some methods for doing so include relying on…

  • cinematic effects that don’t compromise player immersion
  • our virtual environment to enhance the player’s sense of presence in the world.
  • our gameplay options to pose narrative questions to the player, allowing them to consider the philosophical or ethical implications of their decisions.
  • narrative-inspired, engaging gameplay interactions to draw in players
  • relatable characters to evoke player emotion and incorporate a pathos argument for our cause.
  • recognition for individual creators closely attached to the game.

Hope you all enjoyed the article. Feedback of all kinds is much appreciated! I welcome future conversations on the topic.

Narrative-Gameplay Dissonance

The Problem

Many gamers have experienced the scenario where they must sacrifice their desire to roleplay in order to optimize their gameplay ability. Maybe you betray a friend with a previously benevolent character or miss out on checking out the scenery in a particular area, all just to get that new ability or character that you know you would like to have for future gameplay.

The key problem here is one of Narrative-Gameplay Dissonance. The immersion of the game is destroyed so that you will confront the realities that…

  1. the game presents challenges.
  2. it is in your best interest to optimize your character for those challenges.
  3. it may be better for you the player, not you the character, to choose one gameplay option over another despite the narrative baggage that comes with it.

What To Do…

One of the most important elements of any role-playing game is the sense of immersion players have. An experience can be poisoned if the game doesn’t have believability, consistency, and intrigue. As such, when a player plays a game that is advertised as having a strong narrative, there is an implied contract between the narrative designer and the player. The player agrees to invest their time and emotions in the characters and world. In return, designers craft an experience that promises to keep them immersed in that world, one worth living in. In the ideal case, the player never loses the sense that they are the character until something external jolts them out of flow.

To deal with the problem we are presented with, we must answer a fundamental question:

Do you want narrative and gameplay choices intertwined such that decisions in one domain preclude a player’s options in the other?

If you would prefer that players make narrative decisions for narrative reasons and gameplay decisions for gameplay reasons, then a new array of design constraints must be established.

  1. Narrative decisions should not…
    • impact the types of gameplay mechanics the player encounters.
    • impact the degree of difficulty.
    • impact the player’s access to equipment and/or abilities.
  2. Gameplay decisions should not…
    • impact the player’s access to characters/environments/equipment/abilities.
    • impact the direction of plot points, both minor and major.

Examples of these principles in action include The Witcher 2: Assassins of Kings and Shadowrun: Dragonfall.

In the Witcher 2, I can go down two entirely distinct narrative paths, and while the environments/quests I encounter may be different, I will still encounter…

  1. the same diversity/frequency of combat encounters and equipment drops.
  2. the same level of difficulty in the levels’ challenges.
  3. the same quality of equipment.

In Shadowrun, players can outline a particular knowledge base for their character (Gang, Street, Academic, etc.) that is independent of their role or abilities. You can be a spirit-summoning Shaman who knows about both street life and high society. Narrative options are thus tied to a narrative decision made at the start rather than to the gameplay decisions that determine which skills/abilities the character can acquire.


To be fair, there are a few caveats to these constraints; it can be perfectly reasonable for a roleplay decision to affect the game mechanics. An obvious example would simply be the goal of having gameplay that supports the narrative. While that is extremely important in its own right (in most ANY game), our concern is mainly with situations where following a narrative motivation begins to conflict with a gameplay motivation, or vice versa. Assuming that you wish to have a narrative integrated with gameplay at all, the following are exceptions to the previously stated goals.

If you wanted to pull a Dark Souls, you could implement a natural game-difficulty assignment based on the mechanics your character exploits. Dark Souls allows you to experience an “easy mode” in the form of playing as a mage. Investing in range-based skills with auto-refilling ammo fundamentally makes the game easier to beat than short-range skills that involve more risk. It is important to note, however, that the game itself is still very difficult to beat, even with a mage focus, so the premise of the series’ gameplay (“Prepare to Die”) remains in effect despite the handicap.

Another caveat scenario is when the player makes a decision at the very beginning of the game that impacts what portions of the game they can access or which equipment/abilities they can use. As an example, Star Wars: The Old Republic has drastically different content and skills available based on your initial class decision. You are essentially playing a different game, and while they may have similar mechanics, they nonetheless possess a prime independence. It is not as if choosing to be a Jedi in one playthrough somehow affects your options as a Smuggler the next go around. And even if it could, the only way to breach our implied contract would be to make it more economically advantageous for the player to start their SWTOR experience with a particular class.

There are two dangers inherent in this second scenario, though. Players may become frustrated if they can reasonably see two roles having access to the same content, but are limited by these initial role decisions. This applies in both a gameplay and a narrative sense. If one type of Jedi got a fancy lightsaber and the other a fancy cloak (perhaps because one type is offense-focused and the other defense-focused), then players may be frustrated by their inability to choose. In a narrative sense, if some Force-specific plotline were denied to one or the other Jedi type, players may be confused/annoyed (what if they wanted to play through that same storyline with a different class?).

The other danger is that if different “paths” converge into a central path, then players may also dislike facing a narrative decision that clearly favors one class over another in a practical sense, resulting in a decision becoming a mere calculation. Continuing the Star Wars example, if one class has a natural affinity for gaining influence over conversations and the other doesn’t, and if the player sees that they can very clearly get more control over the narrative using the first class, then that incentivizes them to NOT play the second class at all.


While you may not necessarily wish to implement them, here are some suggestions for particular cases that might help ensure that your gameplay and narrative decisions remain independent from each other.

Case 1: Multiple Allied or Playable Characters

Conduct your narrative design such that the skills associated with a character are not directly tied to their nature, but instead to some independent element that can be switched between characters. The goal here is to ensure that a player is able to maintain both a preferred narrative state and a preferred gameplay state when selecting skills or abilities for characters and/or selecting team members for their party.


Example: The skills associated with a character are based on weapon packs that can be swapped at will. The skills for a given character are completely determined by the equipment they carry. Because any character can then fill any combat role, story decisions are kept independent from gameplay decisions. Regardless of how the player wants to design their character or team, the narrative interaction remains firmly in their control.
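The weapon-pack idea is easy to model: skills live on the swappable pack, not on the character. The sketch below is a hypothetical illustration; the class names, pack names, and skill lists are all invented for the example.

```python
class WeaponPack:
    """A swappable equipment bundle that fully determines combat skills."""
    def __init__(self, name, skills):
        self.name = name
        self.skills = skills

class PartyMember:
    """Identity and story role live here; the combat role lives in the pack."""
    def __init__(self, name):
        self.name = name
        self.pack = None

    def equip(self, pack):
        self.pack = pack

    @property
    def skills(self):
        # Skills are derived entirely from equipment, never from identity,
        # so narrative choices about who joins the party stay gameplay-neutral.
        return self.pack.skills if self.pack else []
```

Swapping two characters' packs swaps their combat roles without touching any story state, which is exactly the independence this case is after.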

Case 2: Branching Storyline

Design your quests such that…

  1. gameplay-related artefacts (either awarded by quests or available within a particular branching path) can be found in all quest paths so that no path is followed solely for the sake of acquiring the artefact. Or at the very least, allow the player to acquire similarly useful artefacts so that the difference does not affect the player’s success rate of overcoming obstacles.
  2. level design is kept unique between branches, but those paths have comparable degrees of difficulty / gameplay diversity / etc.
  3. narrative differences are the primary distinctions you emphasize.


Example: I’ve been promised a reward by the mayor if I can solve the town’s troubles. A farmer and a merchant are both in need of assistance. I can choose which person to help first. With the farmer, I must protect his farm from bandits. With the merchant, I must identify who stole his merchandise. Who I help first will have ramifications later on. No matter what I do, I will encounter equally entertaining gameplay, the same amount of experience, and the same prize from the mayor. Even if I only had to help one of them, I should still be able to meet these conditions. My decision also impacts the future narrative, implying a shift in story and/or level design later on.
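The three quest-design rules above amount to a parity check across branches. A minimal sketch, with hypothetical class and field names, might look like this:

```python
class QuestBranch:
    def __init__(self, name, xp, reward_value, encounters):
        self.name = name
        self.xp = xp                      # experience awarded on completion
        self.reward_value = reward_value  # comparable worth of the prize
        self.encounters = encounters      # unique level design per branch

def branches_are_balanced(branches):
    # Gameplay payoffs must match across every branch (rule 1), while the
    # encounters themselves remain free to differ (rule 2); what's left to
    # distinguish the branches is purely narrative (rule 3).
    return (len({b.xp for b in branches}) == 1
            and len({b.reward_value for b in branches}) == 1)
```

A check like this could even run as a content-validation step in a quest editor, flagging branches that would tempt players to choose for loot rather than story.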

Case 3: Exclusive Skill-Based Narrative Manipulation

These would be cases where your character can exclusively invest in a stat or ability that gives them access to unique dialogue choices. In particular, if you can develop your character along particular “paths” of a tree (or some equivalent exclusive choice) and if the player must ultimately devote themselves to a given sub-tree of dialogue abilities, then there is the possibility that the player may lose the exact combination they long for.

Simply ensure that the decision of which super-dialogue-ability can be used is separated from the overall abilities of the character. That way, the player doesn’t have to compromise their desire to explore a particular path of the narrative simply because they wish to also use particular combat abilities associated with the same sub-set of skills. I would also suggest providing methods for each sub-tree of skills to grant abilities which eventually bring about the same or equivalently valuable conclusions to dialogue decisions.


Example: I can lie, intimidate, or mind control people based on my stats. If I wish to fight with melee weapons, then I really need high Strength. In other games, that might imply an inefficiency with mind control and an efficiency with intimidation (but I really wanna roleplay as a mind-hacking warrior). Also, there are certain parts of the game I want to experience that can only be reached by selecting mind-control-associated dialogue options. Thankfully, I actually do have this option. And even if I had the option of using intimidation or lying where mind control is also available, regardless of my decisions, my quest will be completed and I will receive the same type of rewards (albeit with possibly different narrative consequences due to my method).
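The key move in this case is that the dialogue specialization is a separate slot from the combat stats, and every valid approach resolves to the same reward type. A hypothetical sketch (names and values invented for illustration):

```python
class Hero:
    """Combat stats and dialogue specialization are independent choices."""
    def __init__(self, strength, dialogue_ability):
        self.strength = strength                  # drives melee combat only
        self.dialogue_ability = dialogue_ability  # "lie" | "intimidate" | "mind_control"

def resolve_dialogue(hero, scene_abilities):
    # Any applicable approach completes the quest with the same reward type;
    # only the narrative consequences of the chosen method differ.
    if hero.dialogue_ability in scene_abilities:
        method = hero.dialogue_ability
    else:
        method = "talk"  # plain fallback when the specialty doesn't apply
    return {"quest_complete": True, "reward": "standard", "method": method}
```

Because `strength` never enters `resolve_dialogue`, the mind-hacking warrior is fully supported: combat build and narrative path stay out of each other's way.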


If you are like me and you get annoyed when narrative and gameplay start backing each other into corners, then I hope you’ll be able to take advantage of these ideas. Throw in more ideas in the comments below if you have your own. Comments, criticisms, suggestions, all welcome in further discussion. Let me know what you think. Happy designing!