Explorations in AR, PCG & Game AI

Mixed reality games are games in which virtual graphical assets are added to a physical environment. This project explores the use of procedural content generation to enhance the gameplay experience in a prototype mixed reality game. Procedural content generation is used to design levels that make use of the affordances in the player’s physical environment.

To do this, the real environment surrounding the player is augmented with virtual characters, enemies, obstacles, rewards, and platforms. The virtual elements are selected and positioned to take advantage of the room's configuration. They are also tailored to adjust gameplay difficulty and to influence how the player moves their body in the real world.

This approach is intended to be a starting point for future mixed reality games that try to reason or make intelligent decisions about the physical environment that they are being played in. This research was done under the guidance of Prof. Mark Riedl in the Entertainment Intelligence Lab at Georgia Tech.

Demo Prototype

Initial Concept

With the launch of the Microsoft HoloLens, developers and researchers are creating new augmented reality experiences for their users. However, most demos so far make only limited use of the user's surroundings, rarely going beyond finding the nearest wall or flat surfaces to run on.

Unlike in VR, MR experiences can change significantly based on the configuration of the player's physical environment. Our game uses artificial intelligence to identify the playable surfaces in a space and then personalize a game to that space. We are also trying to make intelligent decisions about the game based on where it is being played (the kitchen vs. the living room). Thus, rooms with different arrangements of furniture could lead to different, and potentially more enjoyable, gameplay experiences.

Development: Spatial Mapping & Path Generation


The initial aim of the project was to recreate the popular Super Mario game in an augmented reality environment. We first map the player's environment into a Wavefront OBJ file (.obj) using Kinect Fusion, which gives us a 3D mesh of the surfaces in the room.
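As a rough illustration of this first step, the sketch below reads vertex positions out of such an OBJ file. The file name and helper function are hypothetical, and a real parser would also consume the face and normal data that the later stages rely on.

```python
# Minimal sketch: load vertex positions from a Kinect Fusion .obj export.
# "room_scan.obj" and load_obj_vertices are illustrative names, not the
# project's actual parser.

def load_obj_vertices(path):
    """Return a list of (x, y, z) vertex positions from a Wavefront OBJ file."""
    vertices = []
    with open(path) as f:
        for line in f:
            if line.startswith("v "):          # vertex position line
                _, x, y, z = line.split()[:4]
                vertices.append((float(x), float(y), float(z)))
    return vertices

room_vertices = load_obj_vertices("room_scan.obj")
print(f"Loaded {len(room_vertices)} vertices")
```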

We created an OBJ parser in Python that uses surface detection algorithms to identify playable surfaces (e.g., the tops of furniture) in the player's environment. The vertices are clustered and partitioned into a set of concave hulls using a union-find data structure. After noise is filtered out, each hull represents an individual surface in the scanned mesh. We then group these concave hulls into larger convex polygons using Delaunay triangulation.
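The snippet below is a simplified sketch of the clustering step: a union-find structure merges vertices that lie within a small distance of one another. The distance threshold and the naive O(n²) neighbour search are illustrative; the real pipeline additionally filters noise and fits concave hulls to each resulting cluster.

```python
import math

class UnionFind:
    """Disjoint-set structure used to merge nearby vertices into surface clusters."""
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, i):
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]  # path compression
            i = self.parent[i]
        return i

    def union(self, i, j):
        self.parent[self.find(i)] = self.find(j)

def cluster_vertices(vertices, eps=0.05):
    """Group vertices whose pairwise distance is below eps (metres).
    Naive O(n^2) sketch for clarity only."""
    uf = UnionFind(len(vertices))
    for i, a in enumerate(vertices):
        for j in range(i + 1, len(vertices)):
            if math.dist(a, vertices[j]) < eps:
                uf.union(i, j)
    clusters = {}
    for i in range(len(vertices)):
        clusters.setdefault(uf.find(i), []).append(vertices[i])
    return list(clusters.values())
```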

We have been able to construct a 3D model representing the player's environment and use surface detection to find playable furniture surfaces. We then use search-based procedural content generation to generate and evaluate candidate routes for the virtual player character across these surfaces and select an optimal path.

To calculate the path we currently use a hierarchical A* algorithm for demonstration purposes. The algorithm computes an inter-surface path between the concave surfaces at one level of abstraction, and an intra-surface path within the larger convex polygons at another. We pick two surfaces in the scene and plan a path between them using A*, with the distance between surfaces as the heuristic. A new plan can be made as new surfaces arrive from the detection pipeline. In the future we plan to replace this with a genetic algorithm that uses a player model as its fitness function.
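For illustration, here is a minimal, single-level A* sketch over a graph of detected surfaces, using straight-line distance between surface centroids as the heuristic. The graph structure, centroid lookup, and edge costs are assumptions for the sketch; the actual implementation runs at the two levels of abstraction described above.

```python
import heapq
import math

def euclidean(a, b):
    """Straight-line distance between surface centroids (the A* heuristic)."""
    return math.dist(a, b)

def a_star(graph, centroids, start, goal):
    """A* over a graph of surfaces.

    graph:     surface id -> iterable of (neighbour id, traversal cost)
    centroids: surface id -> (x, y, z) centroid used by the heuristic
    Returns the list of surface ids from start to goal, or None if unreachable.
    """
    open_set = [(euclidean(centroids[start], centroids[goal]), 0.0, start, [start])]
    best_g = {start: 0.0}
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        for neighbour, cost in graph.get(node, ()):
            new_g = g + cost
            if new_g < best_g.get(neighbour, float("inf")):
                best_g[neighbour] = new_g
                f = new_g + euclidean(centroids[neighbour], centroids[goal])
                heapq.heappush(open_set, (f, new_g, neighbour, path + [neighbour]))
    return None
```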

The game uses a set of rules to decide where to place virtual game elements (enemies, power-ups, rewards, etc.) and obstacles along the generated path, based on the semantics ascribed to the detected objects.
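A toy version of such a rule table might look like the following; the semantic labels and element names are illustrative placeholders, not the actual rule set used in the game.

```python
import random

# Illustrative mapping from surface semantics to candidate virtual elements.
PLACEMENT_RULES = {
    "table_top": ["reward", "power_up"],
    "sofa":      ["platform"],
    "floor_gap": ["obstacle"],
    "shelf":     ["enemy"],
}

def place_items(path_surfaces):
    """Assign a virtual element to each surface along the generated path.
    path_surfaces is a list of (surface_id, semantic_label) pairs
    produced by the earlier detection stages."""
    placements = []
    for surface_id, label in path_surfaces:
        options = PLACEMENT_RULES.get(label, ["reward"])  # default to a reward
        placements.append((surface_id, random.choice(options)))
    return placements
```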

Development: Unity

The application user interface was developed in Unity. Since we do not currently have access to a HoloLens, the Unity development was approached from a virtual reality perspective to test the procedurally generated levels, and the work was tested on an Oculus Rift. One issue we currently face is the Oculus's lack of natural gesture recognition and real-time perception of the environment, so we are integrating a Leap Motion controller into the Oculus prototype to add hand interaction. The Unity application renders the original 3D model of the room and overlays it with the generated track.

Technologies Used

  • Python – Surface Detection from Kinect OBJ file (GitHub)
  • Unity (C#) – Gameplay Frontend (GitHub Enterprise)
  • Django REST APIs – AI & PCG Server (GitHub Enterprise)

Next Steps

In the future we plan to continue working on player interaction with the mixed reality objects and to evaluate the gameplay experience with playtesters. The player data we acquire can then be used to model level difficulty and the challenges the player faces. Additionally, we plan to integrate a neural network that can identify the type of room the player is in. This will allow us to generate room-specific interactions and objects (for instance, enemies in a kitchen could be virtual knives, or a power-up in a living room could be a virtual book on a table).
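Once the room type is known, content selection could be as simple as a themed lookup like the sketch below, built only from the two examples mentioned above; the labels and asset names are placeholders for whatever the eventual classifier and asset library provide.

```python
# Hedged sketch of the planned room-aware theming: after a classifier labels
# the room, a lookup can swap in room-specific assets. Entries are limited to
# the examples given in the text; everything else falls back to a generic asset.
ROOM_THEMES = {
    "kitchen":     {"enemy": "virtual_knife"},
    "living_room": {"power_up": "virtual_book"},
}

def themed_asset(room_type, element):
    """Return a room-specific asset name, falling back to a generic one."""
    return ROOM_THEMES.get(room_type, {}).get(element, f"generic_{element}")

print(themed_asset("kitchen", "enemy"))         # -> virtual_knife
print(themed_asset("living_room", "power_up"))  # -> virtual_book
```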
