Robots Learn to Speak Body Language

We tend to think of robots as machines that simply receive signals of zeros (0) and ones (1) and respond to those signals. But robots can now learn from human behavior and improve (or reprogram) themselves. One new development in robotics is the ability of robots to read and speak body language.

If a friend tells you he feels relaxed but you can see that his fists are clenched, you might doubt his sincerity. You can do this because, as a human, you instinctively connect body posture with feelings. A robot, on the other hand, might simply take him at his word. Body language says a lot, but even with advances in computer vision and facial recognition technology, robots struggle to notice subtle body movements and can miss important social cues as a result.

Researchers at Carnegie Mellon University have developed a body-tracking system that could help solve this problem. Called "OpenPose", the system can track body movement, including the hands and face, in real time. It uses computer vision and machine learning to process video frames, and it can even track multiple people at the same time. This capability could ease human-robot interaction and open the way to more immersive virtual reality and more intuitive user interfaces.
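To make that concrete, here is a minimal sketch of what such a per-frame pipeline looks like: OpenCV reads frames from a camera, and a pose estimator returns keypoints for every person in view. The estimate_poses function is a hypothetical stand-in for a real detector like OpenPose, not part of any published API.

    import cv2  # OpenCV for video capture and drawing

    def estimate_poses(frame):
        """Hypothetical stand-in for a pose estimator such as OpenPose.

        Should return a list with one entry per detected person; each entry
        maps a body-part name to an (x, y, confidence) tuple in image
        coordinates. Replace with a real detector to get actual results.
        """
        return []

    capture = cv2.VideoCapture(0)  # a single ordinary webcam
    while capture.isOpened():
        ok, frame = capture.read()
        if not ok:
            break
        for person in estimate_poses(frame):          # multiple people per frame
            for part, (x, y, confidence) in person.items():
                if confidence > 0.3:                  # skip low-confidence joints
                    cv2.circle(frame, (int(x), int(y)), 3, (0, 255, 0), -1)
        cv2.imshow("poses", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    capture.release()
    cv2.destroyAllWindows()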

A striking feature of OpenPose is that it tracks not only a person's head, torso, and limbs, but also individual fingers. To make that possible, the researchers used CMU's Panoptic Studio, a dome lined with 500 cameras, to capture the body from many different angles and then used those images to build a dataset.

They then passed those images through what is called a "keypoint detector" to identify and label specific body parts. The software also learns to associate body parts with individual people, so that it knows, for example, that a particular person's hand will always be near his or her elbow. This is what allows it to track multiple people at once.
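OpenPose actually learns this association with "part affinity fields" between joints; the toy sketch below swaps that for simple nearest-elbow matching, just to illustrate how per-part detections get grouped into per-person skeletons.

    import math

    def group_hands_to_people(elbows, hands):
        """Simplified grouping: attach each hand to the nearest elbow.

        elbows, hands: lists of (x, y) detections from a keypoint detector.
        Returns a list of (elbow, hand) pairs, one per person.
        (OpenPose itself uses learned part affinity fields, not raw distance.)
        """
        pairs = []
        unused_hands = list(hands)
        for elbow in elbows:
            if not unused_hands:
                break
            # pick the hand closest to this elbow
            nearest = min(unused_hands, key=lambda h: math.dist(elbow, h))
            unused_hands.remove(nearest)
            pairs.append((elbow, nearest))
        return pairs

    # Example: two people in the frame
    elbows = [(120, 240), (400, 250)]
    hands = [(410, 310), (130, 300)]
    print(group_hands_to_people(elbows, hands))
    # -> [((120, 240), (130, 300)), ((400, 250), (410, 310))]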

The images captured in the dome were 2D. The researchers then triangulated the detected keypoints into 3D, helping their body-tracking algorithms understand how each pose appears from different angles. With all of this processed data, the system can work out the position of the whole hand in a given pose, even when some fingers are occluded.
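Here is a minimal sketch of that triangulation step using OpenCV's triangulatePoints, which lifts a keypoint seen by two calibrated cameras into 3D. The projection matrices below are made-up placeholders; the real Panoptic Studio pipeline combines far more views.

    import numpy as np
    import cv2

    # Placeholder 3x4 projection matrices for two calibrated cameras.
    # In the real setup these come from the dome's camera calibration.
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))]).astype(np.float64)
    P2 = np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])]).astype(np.float64)

    # The same wrist keypoint as seen (in normalized coords) by each camera.
    pts_cam1 = np.array([[0.40], [0.25]], dtype=np.float64)  # 2xN, N = 1 keypoint
    pts_cam2 = np.array([[0.15], [0.25]], dtype=np.float64)

    # Triangulate to homogeneous 3D coordinates (4xN), then dehomogenize.
    point_h = cv2.triangulatePoints(P1, P2, pts_cam1, pts_cam2)
    point_3d = (point_h[:3] / point_h[3]).ravel()
    print(point_3d)  # roughly [0.8, 0.5, 2.0] for these toy numbers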

Now that the system has been trained on this dataset, it can run with just a single camera and a laptop. The dome is no longer needed to track body positions, which makes the technology mobile and accessible. The researchers have released their code publicly to encourage people to experiment with it.
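For anyone who wants to try it, a hedged sketch of driving the published code (https://github.com/CMU-Perceptual-Computing-Lab/openpose) from its optional Python bindings on a single camera might look like this. The module and call names follow the project's Python examples but can vary between releases, so treat them as assumptions rather than a guaranteed recipe.

    import cv2
    import pyopenpose as op  # built from the OpenPose repository with Python support

    # Point at the downloaded models and enable the hand and face networks.
    params = {"model_folder": "models/", "hand": True, "face": True}
    wrapper = op.WrapperPython()
    wrapper.configure(params)
    wrapper.start()

    capture = cv2.VideoCapture(0)        # a single ordinary camera is enough
    ok, frame = capture.read()
    if ok:
        datum = op.Datum()
        datum.cvInputData = frame
        # Some releases expect a plain list [datum] here instead of VectorDatum.
        wrapper.emplaceAndPop(op.VectorDatum([datum]))
        print(datum.poseKeypoints)       # per-person body keypoints (x, y, confidence)
    capture.release()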

They say the technology can be applied to all kinds of interactions between people and machines. It could play an important role in virtual reality experiences, allowing finer detection of the user's physical movements without additional hardware such as gloves or motion sensors.


It would also make interacting with a household robot feel more natural. You could give your robot a spoken instruction and, by perceiving and interpreting your physical gestures, it could instantly understand what you mean. The robot might even learn to read emotions by observing body language. So when you are crying silently with your face in your hands because a robot took your job, it might be able to offer you a tissue.