Deep learning model for generating nonverbal social behaviors in robots
Researchers from the Electronics and Telecommunications Research Institute in Korea developed a deep learning-based model that could allow robots to generate engaging nonverbal social behaviors, such as hugging someone or shaking their hand. The model, presented in a paper on arXiv, can learn new, context-appropriate social behaviors by observing interactions between humans.
Woo-Ri Ko told TechXplore that deep learning techniques have produced remarkable results in areas such as computer vision and natural language understanding. The team set out to apply deep learning to social robotics, specifically by enabling robots to learn social behaviors on their own by observing human-human interactions. Their approach requires no prior knowledge of human behavior models, which are usually expensive and time-consuming to build by hand.
The artificial neural network (ANN)-based architecture developed by Ko and his colleagues combines the Seq2Seq (sequence-to-sequence) model introduced by Google researchers in 2014 with generative adversarial networks (GANs). The new architecture was trained on the AIR-Act2Act dataset, a collection of 5,000 human-human interactions recorded in 10 different scenarios.
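To illustrate how such a Seq2Seq-plus-GAN architecture fits together, the sketch below wires a toy encoder-decoder generator (which maps an observed human pose sequence to a robot behavior sequence) to a discriminator that scores sequences as real or generated. This is a minimal NumPy illustration of the data flow only, not the authors' implementation: the pose dimension, hidden size, sequence length, and all function names are assumptions chosen for clarity, and the weights are random rather than trained.

```python
import numpy as np

rng = np.random.default_rng(0)

POSE_DIM = 10  # hypothetical joint-angle dimension per frame
HIDDEN = 32    # hypothetical RNN hidden size
SEQ_LEN = 15   # hypothetical number of frames per sequence

def init_rnn(in_dim, hid):
    # Random, untrained weights for a simple tanh RNN cell
    return {"Wx": rng.normal(0, 0.1, (hid, in_dim)),
            "Wh": rng.normal(0, 0.1, (hid, hid)),
            "b": np.zeros(hid)}

def rnn_step(p, x, h):
    return np.tanh(p["Wx"] @ x + p["Wh"] @ h + p["b"])

# --- Generator: Seq2Seq encoder-decoder ---
enc = init_rnn(POSE_DIM, HIDDEN)
dec = init_rnn(POSE_DIM, HIDDEN)
W_out = rng.normal(0, 0.1, (POSE_DIM, HIDDEN))

def generate(human_seq):
    h = np.zeros(HIDDEN)
    for x in human_seq:          # encode the observed human poses
        h = rnn_step(enc, x, h)
    out, y = [], np.zeros(POSE_DIM)
    for _ in range(SEQ_LEN):     # decode robot poses autoregressively
        h = rnn_step(dec, y, h)
        y = W_out @ h
        out.append(y)
    return np.stack(out)

# --- Discriminator: scores a robot pose sequence as real vs. generated ---
disc = init_rnn(POSE_DIM, HIDDEN)
w_d = rng.normal(0, 0.1, HIDDEN)

def discriminate(robot_seq):
    h = np.zeros(HIDDEN)
    for y in robot_seq:
        h = rnn_step(disc, y, h)
    return 1.0 / (1.0 + np.exp(-(w_d @ h)))  # sigmoid probability

human_seq = rng.normal(size=(SEQ_LEN, POSE_DIM))
robot_seq = generate(human_seq)
score = discriminate(robot_seq)
print(robot_seq.shape)  # one generated robot pose per frame
```

In adversarial training, the discriminator would be optimized to separate dataset sequences from generated ones while the generator learns to fool it, pushing the generated behaviors toward the distribution of observed human responses.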