Researchers at UC Berkeley have recently developed software that maps professional dance movements from one video onto another. This allows the subject in the target video to mirror the movements of the performer in the source video.
This “do as I do” method captures the movements from the source video and generates a pose stick figure. This stick figure is then mapped to the subject in the target video to replicate the dance moves there. Notably, the transfer is produced within a few minutes of the source subject’s performance.
According to the paper describing the software, “Everybody Dance Now”, the motion transfer between the videos is done via an end-to-end pixel-based pipeline. This pipeline is divided into three stages: pose detection from the source video, global pose normalization, and finally mapping from the normalized pose stick figure to the target subject.
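The three stages can be sketched roughly as below. This is a hypothetical illustration, not the authors’ actual code: the function names and the keypoint-based pose representation are assumptions, and the detection and rendering stages (which in the real system rely on a pretrained pose detector and a learned pose-to-image model) are left as stubs. Only the global pose normalization step, which rescales and shifts the source skeleton to fit the target’s proportions, is worked out.

```python
# Hypothetical sketch of the three-stage "do as I do" pipeline.
# Names and pose representation are illustrative assumptions.

from typing import List, Tuple

Pose = List[Tuple[float, float]]  # 2D keypoints as (x, y) pairs


def detect_pose(frame) -> Pose:
    """Stage 1: detect the pose in a source-video frame.
    Stubbed here; the real pipeline uses a pretrained pose detector."""
    raise NotImplementedError


def normalize_pose(source_pose: Pose,
                   source_height: float, target_height: float,
                   source_ankle_y: float, target_ankle_y: float) -> Pose:
    """Stage 2: global pose normalization. Scale the source skeleton so
    its height matches the target subject's, and translate vertically so
    the ankle positions line up between the two videos."""
    scale = target_height / source_height
    return [(x * scale, (y - source_ankle_y) * scale + target_ankle_y)
            for x, y in source_pose]


def render_target(normalized_pose: Pose):
    """Stage 3: map the normalized stick figure to an image of the
    target subject (a learned pose-to-image model in the real system)."""
    raise NotImplementedError
```

For example, a source skeleton 100 pixels tall with ankles at y=100, transferred to a target 200 pixels tall with ankles at y=250, is doubled in size and shifted so the feet stay planted in the target frame.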
To improve the quality of the result, the researchers added two components: one for temporal smoothness between frames and one for increased facial realism. Though the results are impressive, a few glitches are still noticeable in the final output, so work on the project remains.
Watch the video of the project here: https://www.youtube.com/watch?v=PCBTZh41Ris