Learning to dance: a graph convolutional adversarial network to generate realistic dance motions from audio.

dc.contributor.author: Ferreira, João Pedro Moreira
dc.contributor.author: Coutinho, Thiago Malta
dc.contributor.author: Gomes, Thiago Luange
dc.contributor.author: Silva Neto, José Francisco da
dc.contributor.author: Azevedo, Rafael Augusto Vieira de
dc.contributor.author: Martins, Renato José
dc.contributor.author: Nascimento, Erickson Rangel do
dc.date.accessioned: 2022-09-15T21:16:41Z
dc.date.available: 2022-09-15T21:16:41Z
dc.date.issued: 2021
dc.description.abstract: Synthesizing human motion through learning techniques is becoming an increasingly popular approach to alleviating the requirement of new data capture to produce animations. Learning to move naturally from music, i.e., to dance, is one of the more complex motions humans often perform effortlessly. Each dance movement is unique, yet such movements maintain the core characteristics of the dance style. Most approaches addressing this problem with classical convolutional and recursive neural models undergo training and variability issues due to the non-Euclidean geometry of the motion manifold structure. In this paper, we design a novel method based on graph convolutional networks to tackle the problem of automatic dance generation from audio information. Our method uses an adversarial learning scheme conditioned on the input music audio to create natural motions that preserve the key movements of different music styles. We evaluate our method with three quantitative metrics of generative methods and a user study. The results suggest that the proposed GCN model outperforms the state-of-the-art dance generation method conditioned on music in different experiments. Moreover, our graph-convolutional approach is simpler, easier to train, and capable of generating more realistic motion styles in both qualitative and quantitative metrics. It also presents visual movement perceptual quality comparable to real motion data. The dataset and project are publicly available at: https://www.verlab.dcc.ufmg.br/motion-analysis/cag2020.
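To make the abstract's core idea concrete, here is a minimal PyTorch sketch of a graph-convolutional generator conditioned on audio features and trained against a discriminator. This is not the authors' released implementation (see the project page above); the joint count, feature sizes, chain skeleton, and single-pose output (the paper generates full motion sequences) are all illustrative assumptions.

    import torch
    import torch.nn as nn

    N_JOINTS, AUDIO_DIM, NOISE_DIM = 25, 128, 32   # assumed sizes

    # Chain adjacency with self-loops as a stand-in for the kinematic tree,
    # row-normalized so each joint averages over its neighbours.
    A = torch.eye(N_JOINTS)
    for i in range(N_JOINTS - 1):
        A[i, i + 1] = A[i + 1, i] = 1.0
    A = A / A.sum(dim=1, keepdim=True)

    class GraphConv(nn.Module):
        """Mix joint features along skeleton edges, then transform channels."""
        def __init__(self, in_ch, out_ch, adj):
            super().__init__()
            self.register_buffer("adj", adj)
            self.lin = nn.Linear(in_ch, out_ch)

        def forward(self, x):                  # x: (batch, joints, channels)
            return self.lin(self.adj @ x)      # (N,N) @ (B,N,C) broadcasts over batch

    class Generator(nn.Module):
        """Maps (noise, audio features) to 2D joint coordinates for one pose."""
        def __init__(self, adj):
            super().__init__()
            self.inp = nn.Linear(NOISE_DIM + AUDIO_DIM, N_JOINTS * 16)
            self.gc1 = GraphConv(16, 32, adj)
            self.gc2 = GraphConv(32, 2, adj)   # 2 output channels: (x, y) per joint

        def forward(self, z, audio):
            h = self.inp(torch.cat([z, audio], dim=-1)).view(-1, N_JOINTS, 16)
            return self.gc2(torch.relu(self.gc1(h)))

    class Discriminator(nn.Module):
        """Scores how plausible a pose is for the conditioning audio."""
        def __init__(self, adj):
            super().__init__()
            self.gc = GraphConv(2, 16, adj)
            self.out = nn.Linear(N_JOINTS * 16 + AUDIO_DIM, 1)

        def forward(self, pose, audio):
            h = torch.relu(self.gc(pose)).flatten(1)
            return self.out(torch.cat([h, audio], dim=-1))

    G, D = Generator(A), Discriminator(A)
    z, audio = torch.randn(4, NOISE_DIM), torch.randn(4, AUDIO_DIM)
    fake = G(z, audio)        # (4, 25, 2) generated joint positions
    logit = D(fake, audio)    # (4, 1) adversarial score

Conditioning both networks on the same audio features is what ties the generated motion to the music style, which is the adversarial scheme the abstract describes at a high level.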
dc.identifier.citation: FERREIRA, J. P. et al. Learning to dance: a graph convolutional adversarial network to generate realistic dance motions from audio. Computers & Graphics-UK, v. 94, p. 11-21, 2021. Available at: <https://www.sciencedirect.com/science/article/pii/S0097849320301436?via%3Dihub>. Accessed: 29 Apr. 2022.
dc.identifier.doi: https://doi.org/10.1016/j.cag.2020.09.009
dc.identifier.issn: 0097-8493
dc.identifier.uri: http://www.repositorio.ufop.br/jspui/handle/123456789/15316
dc.identifier.uri2: https://www.sciencedirect.com/science/article/pii/S0097849320301436?via%3Dihub
dc.language.iso: en_US
dc.rights: restricted
dc.subject: Human motion generation
dc.subject: Sound and dance processing
dc.subject: Multimodal learning
dc.subject: Conditional adversarial nets
dc.subject: Graph convolutional neural networks
dc.title: Learning to dance: a graph convolutional adversarial network to generate realistic dance motions from audio.
dc.type: Article published in a journal

Files

Original bundle

Name: ARTIGO_LearningDanceGraph.pdf
Size: 3.26 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 1.71 KB
Format: Item-specific license agreed upon to submission