Dissertation/Applied Research Project

December 10, 2010

My final year Applied Research Project was entitled ‘Investigating a model of place for game AI and evaluating its performance as a method for authoring agent reactions’. It received an ‘A’ grade.

Download

Applied Research Project (PDF, 1.93 MB)

Abstract

This project examines whether concepts of place can be usefully applied in game AI, specifically in the context of authoring agent reactions. It reviews current spatial approaches to world representation in games and literature on place from other disciplines. A conceptual model of place for use in game AI is then proposed, influenced by the theories of implacement and affordances. A place-agnostic test case is presented, together with an analysis of two parallel implementations of it: a control implementation and an implementation derived from the proposed model of place. The implementations are then analyzed in objective terms, revealing that the place-based implementation performs worse but supports authoring better. The recommendation of this report is to use place-based representations judiciously, bearing in mind their additional overheads as well as their authoring benefits.

Critical Evaluation (abridged)

I am generally satisfied with the outcome of this project although I would have preferred (if time had allowed) to employ a more thorough methodology. As it was, I felt that the scope of the project was too limited, given the scope of the topic. I am pleased that a model of place was synthesized, but disappointed that the scope of the project prevented me from exercising it fully. If I could do it again I would focus more on layering behaviours rather than agent reactions.

There are many sources of information on concepts of place, across a wide range of disciplines, and their study has been very interesting. One point of difficulty was the low signal-to-noise ratio of the more philosophical writings, which might make this material inaccessible to game AI practitioners. In this respect I feel the industry could benefit from more literature reviews and summaries of thought in this area, and perhaps more multidisciplinary collaboration in the study of place. I am pleased with the literature review section of the project, which I feel provides a useful summary of thought in this area in fields which might not usually be considered by game AI practitioners.

There is definitely a movement in the game AI field towards more complex, semantic representations of spaces (driven, I believe, by the increasing complexity of agent behaviour) and concepts of place have been proposed as the way to achieve this. From my experiences throughout this project, I tend to agree. Implacement is the foundation of our experience of the world around us, and games have always striven to create the feeling of implacement in virtual environments. As graphics technology improves, virtual worlds have become more and more realistic, yet virtual agents often appear to behave unnaturally; unbelievably. I suspect that the reason this is so jarring to a human observer is because it makes it apparent that the agent itself is not implaced in its environment, a similar phenomenon to the ‘uncanny valley’ in robotics (Mori, 1970). I believe that applying concepts of place to agent architectures, for the purpose of implacing agents in their environments, is the most natural approach to overcoming this problem.

Overall I have found this to be a very worthwhile project. I feel well placed to apply the knowledge gained during the project to my game programming practice.


Videogame Middleware

November 10, 2010

The final year Videogame Middleware module comprised two distinct phases: first, the design and implementation of a middleware component; second, the assembly of a short Gamebryo game experience leveraging as many middleware components as possible.

Through this process I was able to gain insight into the challenges facing both middleware providers and integrators.

Build Phase

I chose to design and implement an input abstraction layer called HInput. The primary purpose of HInput is to decouple game/application code from specific input implementations. For example, rather than checking whether a gamepad button is down or a key has been pressed, we just query whether a named input was satisfied, without needing to know exactly how it was implemented. This extra layer of abstraction gives us the flexibility to:

  1. Change input schemes without code changes or recompilation.
  2. Dynamically switch between input schemes at runtime (e.g. we might choose to load a new scheme if we detect that a gamepad was connected). We can mix and match schemes, and different players can use different schemes. All this is completely transparent to the game code.
  3. Allow dynamic configuration of the system at runtime, using input metadata (which is how the HInput Editor works).

HInput also has various layers of complexity in its interface:

  1. The simplest way of interacting with the system is through the Editor application (written in C#).
  2. The InputManager singleton [HInput::GetManager()] provides everything you need to use an input configuration to provide input to your game.
  3. Advanced users can write custom Inputs, InputDeviceServices, and plugins that can be loaded dynamically at runtime, as well as custom GUIs using the InputMetaData stored in the system.
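The core decoupling idea can be sketched in a few lines: game code queries a logical input by name, and the binding to a concrete device check lives entirely on the other side of the interface. This is an illustrative sketch only, not HInput's actual API — the class and method names here are my own.

```cpp
#include <functional>
#include <string>
#include <unordered_map>
#include <utility>

// Hypothetical sketch of the named-input idea: bindings map a logical name
// (e.g. "Jump") to any device-level check, so swapping input schemes means
// re-binding, with no changes to the calling game code.
class NamedInputMap {
public:
    using Predicate = std::function<bool()>;

    // Bind a logical input name to a device-level check.
    void Bind(const std::string& name, Predicate check) {
        bindings_[name] = std::move(check);
    }

    // Game code asks only whether a named input was satisfied.
    bool IsSatisfied(const std::string& name) const {
        auto it = bindings_.find(name);
        return it != bindings_.end() && it->second();
    }

private:
    std::unordered_map<std::string, Predicate> bindings_;
};
```

A keyboard scheme might bind "Jump" to a key poll while a gamepad scheme binds it to a button poll; the `IsSatisfied("Jump")` call sites never change.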

Full source code and compiled distributions of HInput, complete with documentation, example projects, an editor application, and plugins for keyboard and gamepad, are available from Sourceforge. For an overview of the system, please read Getting Started with HInput.

Leverage Phase

In the leverage phase I chose to build a tongue-in-cheek action RPG style demo based on the Gamebryo ‘Kinslayer’ sample, called ‘Kinslayer-Ville’.

Conversations are all voiced at runtime using Microsoft SAPI. There is also voice recognition to choose conversation options. The Environmentz day/night cycle skydome is supplemented by CIELO clouds. Inventory is handled by Happy Sacks. LibCT provides the conversations and QuickQuest provides the quest log. Input is through my own HInput input abstraction layer. The opening cinematic is powered by Cinesigner, while FMOD Ex provides 3D positional sound.


Simple SAPI – Text to Speech and Speech Recognition

May 18, 2010

This is a simple C++ integration of basic Text to Speech (TTS) and Speech Recognition (SR) features provided by the Microsoft Speech API 5.3. SAPI itself is a very good and full-featured API, but a little arcane for first-time users. This code is intended to be an easy way to get basic functionality without much effort.

It provides a basic interface for TTS, and a dynamic ‘command and control’ grammar for SR (as opposed to the ‘dictation’ grammar which is easier to get working but fairly useless for games). You can specify a word or phrase at runtime and this code will create a grammar rule to listen for it.
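The command-and-control idea can be illustrated independently of SAPI: phrases are registered at runtime and recognized text is matched against them. In the real integration SAPI compiles each phrase into a grammar rule; the toy class below (names are mine, not SAPI's or Simple SAPI's) just shows the runtime-registration pattern.

```cpp
#include <algorithm>
#include <cctype>
#include <string>
#include <vector>

// Toy stand-in for a dynamic command-and-control grammar: the game registers
// words or phrases at runtime, and recognizer output is matched against them
// case-insensitively. Returns which command fired, if any.
class CommandGrammar {
public:
    void AddPhrase(const std::string& phrase) {
        phrases_.push_back(Lower(phrase));
    }

    // Index of the matched phrase, or -1 if nothing matched.
    int Match(const std::string& recognized) const {
        const std::string text = Lower(recognized);
        for (size_t i = 0; i < phrases_.size(); ++i)
            if (phrases_[i] == text) return static_cast<int>(i);
        return -1;
    }

private:
    static std::string Lower(std::string s) {
        std::transform(s.begin(), s.end(), s.begin(),
                       [](unsigned char c) { return static_cast<char>(std::tolower(c)); });
        return s;
    }

    std::vector<std::string> phrases_;
};
```

This is also why command-and-control suits games better than dictation: the recognizer only has to discriminate between a small, known set of phrases, which keeps recognition reliable.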

The download includes source code and a sample Visual Studio 2008 project.

Download Source (9KB)


Search for a Star

March 17, 2010

This was my entry for Round 2 of the ‘Search for a Star’ competition run by Aardvark Swift with Develop and Relentless. Of the 150 people who entered, I was one of 5 finalists who progressed to Round 3 (a panel interview), ultimately winning third place in the competition.

Develop Magazine, June 2010, p. 64


Before

After

Key Improvements

  • Fixed numerous deliberate bugs including buffer overruns.
  • General graphical improvements.
  • Gameplay improvements (enemy behaviour, player control, scoring).
  • Implemented support for DirectInput and XInput. Gamepad allows analogue control of rotation and thrust.
  • Implemented a gun base class and three guns: machine gun, bomb (falls under gravity) and splitter gun. The machine gun is weak but fast, the splitter is devastating at mid range but not so good at short or long range, and the bomb will take out anything it touches but is hard to use.
  • Used XAudio2 (encapsulated in a singleton manager) to implement music and sfx. Base game state automatically changes music. Each gun has a firing sfx and there is an explosion sound for enemy/player destruction. There is also an engine noise which changes pitch and volume with the throttle (right trigger on gamepad). There is a built in interpolation so that it works with the digital keyboard input too.
  • Implemented 3 different types of enemies, with differing sizes, colors and health.
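The engine-noise interpolation mentioned above can be sketched simply: a digital key gives a throttle target of 0 or 1, and the value eases toward it at a fixed rate each frame, so pitch and volume changes stay continuous just as they do with an analogue trigger. A minimal sketch with illustrative names and rates of my own choosing:

```cpp
// Smoothed throttle value in [0, 1]. An analogue trigger supplies a target
// directly; a digital key supplies 0 or 1 and the value ramps toward it,
// so the engine pitch/volume driven from it never jumps.
struct Throttle {
    float value = 0.0f;

    // target: desired throttle; rate: maximum change per second;
    // dt: frame time in seconds. Moves value toward target, clamping
    // at the target so it never overshoots.
    void Update(float target, float rate, float dt) {
        const float step = rate * dt;
        if (value < target)
            value = (target - value < step) ? target : value + step;
        else
            value = (value - target < step) ? target : value - step;
    }
};
```

With `rate = 2.0f`, a key press ramps the throttle from 0 to 1 over half a second instead of instantly.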

Download

Download executable (7.8MB)


AI Demo

January 7, 2010

This app (in C++ using GDI+ for the graphics) demonstrates some key game AI techniques, including:

  • Finite state machines
  • Steering (seek, flee, path follow)
  • A* graph search for pathfinding
  • Goal Oriented Action Planning, using the same A* graph search code used for pathfinding
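The reuse of one A* implementation for both pathfinding and GOAP works because A* only needs an abstract graph: a successor function and a heuristic, with no assumption that nodes are world positions. The sketch below is my own illustration of that idea, not the demo's code — nodes are plain ints, which could index either navigation waypoints or planner world-states.

```cpp
#include <algorithm>
#include <functional>
#include <queue>
#include <unordered_map>
#include <utility>
#include <vector>

struct Edge { int to; float cost; };

// Generic A* over an abstract graph. Because nothing here assumes a spatial
// grid, the same search can expand positions (pathfinding) or world states
// (GOAP). Returns the node sequence from start to goal, or empty if none.
std::vector<int> AStar(int start, int goal,
                       const std::unordered_map<int, std::vector<Edge>>& graph,
                       float (*heuristic)(int, int)) {
    using Entry = std::pair<float, int>;  // (f = g + h, node)
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> open;
    std::unordered_map<int, float> g;     // best known cost-so-far
    std::unordered_map<int, int> parent;

    g[start] = 0.0f;
    open.push({heuristic(start, goal), start});

    while (!open.empty()) {
        const int node = open.top().second;
        open.pop();
        if (node == goal) {
            std::vector<int> path;        // walk parents back to the start
            for (int n = goal; n != start; n = parent[n]) path.push_back(n);
            path.push_back(start);
            std::reverse(path.begin(), path.end());
            return path;
        }
        auto it = graph.find(node);
        if (it == graph.end()) continue;
        for (const Edge& e : it->second) {
            const float tentative = g[node] + e.cost;
            auto known = g.find(e.to);
            if (known == g.end() || tentative < known->second) {
                g[e.to] = tentative;
                parent[e.to] = node;
                open.push({tentative + heuristic(e.to, goal), e.to});
            }
        }
    }
    return {};  // goal unreachable
}
```

For pathfinding the heuristic would be a distance estimate; for GOAP it might count unsatisfied goal conditions. With a zero heuristic the same function degrades gracefully to Dijkstra's algorithm.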

The work was graded A- with these comments from the marking tutor (and noted game AI practitioner) Adam Russell:

Highly challenging extension topic (planning). Great documentation, with UML diagrams and discussion of reference texts. Full coverage of requirements, in a manner showing great evidence of orthogonality, abstraction and asynchronicity. Steering is solid…, pathfinding is really the highlight though as an abstract graph search powers both GOAP plans and an asynchronous pathfinder! Very very nice.

Download

Executable demo – AIDemo.zip (47.4KB)



Advanced 3D Graphics

January 7, 2010

A final year assignment at university consisting of three mini-projects: shader techniques, character animation and large scene rendering.


The Brett Butcher Prize for Outstanding Contribution to Graphics

December 9, 2009

Recently, the University of Derby awarded me The Brett Butcher Prize for Outstanding Contribution to Graphics, which was a lovely surprise.

