Human Motion Diffusion as a Generative Prior
Authors: Yonatan Shafir, Guy Tevet, Roy Kapon, Amit H. Bermano
Abstract: In recent months, we have witnessed a leap forward as denoising diffusion models were introduced to motion generation. Yet, the main gap in this field remains the low availability of data. Furthermore, the expensive acquisition process of motion biases the already modest data towards short single-person sequences. With such a shortage, more elaborate generative tasks are left behind. In this paper, we show that this gap can be mitigated using a pre-trained diffusion-based model as a generative prior. We demonstrate that the prior is effective for fine-tuning, in a few-shot, and even in a zero-shot manner. For the zero-shot setting, we tackle the challenge of long-sequence generation. We introduce DoubleTake, an inference-time method with which we demonstrate up to 10-minute-long animations of prompted intervals and their meaningful, controlled transitions, using a prior that was trained for 10-second generations. For the few-shot setting, we consider two-person generation. Using two fixed priors and as few as a dozen training examples, we learn a slim communication block, ComMDM, to infuse interaction between the two resulting motions. Finally, using fine-tuning, we train the prior to semantically complete motions from a single prescribed joint. We then use our DiffusionBlending to blend a few such models into a single one that responds well to the combination of the individual control signals, enabling fine-grained joint- and trajectory-level control and editing. Using an off-the-shelf state-of-the-art (SOTA) motion diffusion model as a prior, we evaluate our approach for the three mentioned cases and show that we consistently outperform SOTA models that were designed and trained for those tasks.
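To make the long-sequence idea concrete, here is a minimal, self-contained sketch of DoubleTake-style stitching under stated assumptions: each prompted interval is sampled at the prior's native length, and adjacent intervals share a short transition window that is blended (and, in the paper, refined in a second denoising pass by the frozen prior). The prior below is a random stand-in, and all names (`sample_interval`, `blend_transitions`, the sequence/transition lengths) are illustrative, not the authors' implementation or API.

```python
# Illustrative sketch of stitching prompted intervals with shared transition
# windows, in the spirit of DoubleTake. The "prior" is a random stand-in;
# the real method re-denoises the transition frames with a frozen pretrained
# motion diffusion model so the handshake becomes semantically meaningful.

import numpy as np

SEQ_LEN = 120        # frames per prompted interval (assumed native prior length)
TRANSITION = 20      # overlapping "handshake" frames shared by adjacent intervals
N_FEATURES = 22 * 3  # toy motion representation: 22 joints x 3D positions


def sample_interval(prompt: str, seed: int) -> np.ndarray:
    """Stand-in for sampling one text-prompted interval from the frozen prior."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal((SEQ_LEN, N_FEATURES))


def blend_transitions(intervals: list) -> np.ndarray:
    """First take: stitch intervals so each shares a soft transition with the next."""
    out = [intervals[0]]
    for nxt in intervals[1:]:
        prev = out[-1]
        # linear cross-fade over the shared handshake window
        w = np.linspace(0.0, 1.0, TRANSITION)[:, None]
        handshake = (1.0 - w) * prev[-TRANSITION:] + w * nxt[:TRANSITION]
        out[-1] = np.concatenate([prev[:-TRANSITION], handshake], axis=0)
        out.append(nxt[TRANSITION:])
    return np.concatenate(out, axis=0)


if __name__ == "__main__":
    prompts = ["walk forward", "sit down", "wave hello"]
    intervals = [sample_interval(p, seed=i) for i, p in enumerate(prompts)]
    long_motion = blend_transitions(intervals)
    # A second "take" would re-denoise the handshake frames with the frozen prior.
    print(long_motion.shape)  # (3 * 120 - 2 * 20, 66) = (320, 66)
```

Repeating this over many prompted intervals is what lets a prior trained on 10-second clips produce the multi-minute animations described above; the second denoising pass is what distinguishes DoubleTake from naive cross-fading.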