WalkNet is a neural network-based agent controller that handles navigation while also modulating the agent's expression of affect and personal signature through its movements. WalkNet is part of my PhD research and is the second iteration of my agent system, extending the AffectNet project. WalkNet can drive autonomous agent movement, or be driven directly by a user or by a video game's AI.
Machine Learning Model
In the first version of the model, we train a single-layer Factored Conditional Restricted Boltzmann Machine (FCRBM) to learn, generate, and control movement.
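To illustrate the idea behind a factored conditional RBM, the following is a minimal NumPy sketch, not the WalkNet implementation: visible units hold the current pose frame, hidden units are binary features, and a context vector (e.g. past frames or style labels) conditions the model. The three-way visible-hidden-context interaction tensor is factored into three small matrices, and training uses one step of contrastive divergence (CD-1). All class and variable names here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class FCRBM:
    """Minimal factored conditional RBM sketch (illustrative only).

    The full three-way weight tensor W[i, j, k] over visible unit i,
    hidden unit j, and context unit k is factored as
    sum_f Wv[i, f] * Wh[j, f] * Wu[k, f], which keeps the parameter
    count linear in the number of factors.
    """

    def __init__(self, n_vis, n_hid, n_ctx, n_fac):
        s = 0.01
        self.Wv = s * rng.standard_normal((n_vis, n_fac))
        self.Wh = s * rng.standard_normal((n_hid, n_fac))
        self.Wu = s * rng.standard_normal((n_ctx, n_fac))
        self.bv = np.zeros(n_vis)
        self.bh = np.zeros(n_hid)

    def hidden_probs(self, v, u):
        # context-gated message from visibles to hiddens
        f = (v @ self.Wv) * (u @ self.Wu)          # (batch, n_fac)
        return sigmoid(f @ self.Wh.T + self.bh)

    def visible_mean(self, h, u):
        # Gaussian visible units with unit variance: return the mean
        f = (h @ self.Wh) * (u @ self.Wu)
        return f @ self.Wv.T + self.bv

    def cd1_step(self, v, u, lr=1e-3):
        """One CD-1 update; returns the batch reconstruction error."""
        b = v.shape[0]
        ph = self.hidden_probs(v, u)
        h = (rng.random(ph.shape) < ph).astype(float)
        v_neg = self.visible_mean(h, u)
        ph_neg = self.hidden_probs(v_neg, u)

        fu = u @ self.Wu
        fv, fv_neg = v @ self.Wv, v_neg @ self.Wv
        fh, fh_neg = ph @ self.Wh, ph_neg @ self.Wh

        # positive-phase minus negative-phase gradients, per factor matrix
        self.Wv += lr / b * (v.T @ (fh * fu) - v_neg.T @ (fh_neg * fu))
        self.Wh += lr / b * (ph.T @ (fv * fu) - ph_neg.T @ (fv_neg * fu))
        self.Wu += lr / b * (u.T @ (fv * fh) - u.T @ (fv_neg * fh_neg))
        self.bv += lr * (v - v_neg).mean(axis=0)
        self.bh += lr * (ph - ph_neg).mean(axis=0)
        return float(np.mean((v - v_neg) ** 2))

# toy usage: a few CD-1 steps on random pose/context data
model = FCRBM(n_vis=6, n_hid=8, n_ctx=12, n_fac=4)
v = rng.standard_normal((5, 6))
u = rng.standard_normal((5, 12))
err = model.cd1_step(v, u)
```

Conditioning generation on the context vector is what lets a single model be steered: changing `u` (e.g. a style or affect label) changes the distribution over generated poses without retraining.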
To improve the generalization abilities of the system, we are developing deeper networks in which each layer is responsible for controlling a different movement quality.
WalkNet takes a semi-supervised approach to learning agent navigation. The proposed agent model runs a perception-action loop, controlling the agent's movements based on its internal state and its perception of the environment.
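The perception-action loop can be sketched as a simple cycle of perceive, update internal state, and act. The following toy 1-D example is an assumption-laden illustration of that control structure, not WalkNet's actual agent model; the `arousal` variable stands in for an internal affect state, and all names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    """Toy perception-action loop (illustrative, not WalkNet's API)."""
    position: float = 0.0
    arousal: float = 0.0  # hypothetical internal affect variable

    def perceive(self, goal: float) -> float:
        # perception: signed distance from the agent to its goal
        return goal - self.position

    def update_state(self, percept: float) -> None:
        # internal state: arousal tracks how far the goal still is
        self.arousal = 0.9 * self.arousal + 0.1 * abs(percept)

    def act(self, percept: float) -> None:
        # action: step toward the goal; stride is modulated by arousal
        self.position += 0.5 * percept * (1.0 + 0.1 * self.arousal)

    def tick(self, goal: float) -> None:
        p = self.perceive(goal)
        self.update_state(p)
        self.act(p)

agent = Agent()
for _ in range(30):
    agent.tick(goal=10.0)
```

The point of the structure is that the same loop serves all three control modes mentioned above: an autonomous agent supplies its own goal, while a user or a game AI can inject the goal (or override `act`) from outside.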