Exploring Human-Centric Interactions in Video Game Worlds with Machine Learning

Building interactive agents for video game worlds

Human behaviour is complex. Even a simple request like “Put the ball near the box” requires a deep understanding of both language and context. The meaning of a word like “near” can be hard to pin down: placing the ball on top of the box might make it technically nearest, but the speaker most likely wants the ball set down beside the box. To respond correctly to a request, a person must understand and judge the situation.
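To see why hand-coding this kind of judgement is hard, consider a minimal, purely illustrative sketch of a rule-based interpretation of “near” (the function name and threshold are hypothetical, not from the paper):

```python
import math

def naive_near(ball_pos, box_pos, threshold=1.0):
    """Hard-coded rule: 'near' means within a fixed distance threshold."""
    return math.dist(ball_pos, box_pos) <= threshold

# A ball beside the box counts as "near"...
print(naive_near((0.9, 0.0), (0.0, 0.0)))   # True
# ...but so does a ball sitting on top of the box, even though the
# speaker probably meant "beside it", and a ball just past the
# arbitrary threshold does not, however reasonable its placement.
print(naive_near((0.0, 0.5), (0.0, 0.0)))   # True
print(naive_near((1.1, 0.0), (0.0, 0.0)))   # False
```

A fixed threshold cannot capture intent, which is precisely the gap that data-driven learning of situated interactions aims to close.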

Researchers in artificial intelligence (AI) have long considered it close to impossible to hand-write computer code that captures the subtleties of situated interactions. Modern machine learning (ML), by contrast, focuses on learning these interactions from data. To explore such learning-based approaches, we created a research framework within a video game environment and used it to quickly build agents that can understand human instructions and safely carry out actions in open-ended situations.

Today we’re releasing a paper, along with a collection of videos, showing our first steps toward building video game AIs that can understand fuzzy human concepts – and can therefore begin to interact with people on their own terms.