Browsing by Author "Gomes, Thiago Luange"
Now showing 1 - 3 of 3
Item: Efficiently computing the drainage network on massive terrains using external memory flooding process (2015)
Gomes, Thiago Luange; Magalhães, Salles Viana Gomes de; Andrade, Marcus Vinícius Alvim; Franklin, W. Randolph; Pena, Guilherme de Castro
We present EMFlow, a very efficient algorithm and its implementation, to compute the drainage network (i.e., the flow direction and flow accumulation) on huge terrains stored in external memory. Its utility lies in processing the large volume of newly available high-resolution terrestrial data, which internal-memory algorithms cannot handle efficiently. The flow direction is computed using an adaptation of our previous method RWFlood, which uses a flooding process to quickly remove internal depressions or basins. Flooding, proceeding inward from the outside of the terrain, works oppositely to the common method of computing downhill flow from the peaks. To reduce the number of I/O operations, EMFlow adopts a new strategy to subdivide the terrain into islands that are processed separately. The terrain cells are grouped into blocks that are stored in a special data structure managed as a cache memory. EMFlow's execution time was compared against the two most recent and most efficient published methods: TerraFlow and r.watershed.seg. It was, on average, 25 and 110 times faster than TerraFlow and r.watershed.seg, respectively, and EMFlow could process larger datasets. Processing a 50000 × 50000 terrain on a machine with 2 GB of internal memory took about 4500 seconds, compared to 87000 seconds for TerraFlow, while r.watershed.seg failed on terrains larger than 15000 × 15000. On very small terrains, say 1000 × 1000, EMFlow takes under a second, compared to 6 and 20 seconds for r.watershed.seg and TerraFlow, respectively.
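The flooding idea described in the abstract — seeding a rising water level at the terrain boundary and assigning each cell a drainage direction as the flood reaches it, which removes depressions as a side effect — can be illustrated with a minimal generic priority-flood sketch. This is not the authors' EMFlow or RWFlood code; the function name and grid representation are illustrative.

```python
import heapq

def flood_flow_directions(elev):
    """Generic priority-flood sketch: flood inward from the terrain boundary,
    assigning each cell a flow direction toward the neighbor it was flooded
    from. Boundary cells keep direction None (they drain off the terrain)."""
    rows, cols = len(elev), len(elev[0])
    direction = [[None] * cols for _ in range(rows)]
    visited = [[False] * cols for _ in range(rows)]
    pq = []  # min-heap of (water level, row, col)

    # Seed the flood with every boundary cell: water enters from outside.
    for r in range(rows):
        for c in range(cols):
            if r in (0, rows - 1) or c in (0, cols - 1):
                heapq.heappush(pq, (elev[r][c], r, c))
                visited[r][c] = True

    while pq:
        level, r, c = heapq.heappop(pq)
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not visited[nr][nc]:
                visited[nr][nc] = True
                # Water leaving (nr, nc) drains through (r, c).
                direction[nr][nc] = (r, c)
                # Raising the level to max(...) is what fills depressions:
                # a cell below the current water line floods at that line.
                heapq.heappush(pq, (max(level, elev[nr][nc]), nr, nc))
    return direction
```

The heap guarantees cells are flooded in order of the water level that first reaches them, so every interior cell, even one inside a closed basin, receives a direction leading out of the terrain; EMFlow's contribution is doing this out-of-core, on blocks of cells managed as a cache.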
So EMFlow could be a component of a future interactive system in which a user could modify the terrain and immediately see the new hydrography.

Item: Learning to dance: a graph convolutional adversarial network to generate realistic dance motions from audio (2021)
Ferreira, João Pedro Moreira; Coutinho, Thiago Malta; Gomes, Thiago Luange; Silva Neto, José Francisco da; Azevedo, Rafael Augusto Vieira de; Martins, Renato José; Nascimento, Erickson Rangel do
Synthesizing human motion through learning techniques is becoming an increasingly popular approach to alleviating the requirement of new data capture to produce animations. Learning to move naturally from music, i.e., to dance, is one of the more complex motions humans often perform effortlessly. Each dance movement is unique, yet such movements maintain the core characteristics of the dance style. Most approaches addressing this problem with classical convolutional and recursive neural models undergo training and variability issues due to the non-Euclidean geometry of the motion manifold structure. In this paper, we design a novel method based on graph convolutional networks to tackle the problem of automatic dance generation from audio information. Our method uses an adversarial learning scheme conditioned on the input music audio to create natural motions that preserve the key movements of different music styles. We evaluate our method with three quantitative metrics of generative methods and a user study. The results suggest that the proposed GCN model outperforms the state-of-the-art dance generation method conditioned on music in different experiments. Moreover, our graph-convolutional approach is simpler, easier to train, and capable of generating more realistic motion styles regarding qualitative and different quantitative metrics. It also presented visual movement perceptual quality comparable to real motion data.
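The graph-convolutional building block such a model stacks — propagating per-joint features along the skeleton's edges before a learned projection — can be sketched as follows. This is a toy NumPy illustration, not the paper's architecture; the chain skeleton, feature sizes, and function names are hypothetical.

```python
import numpy as np

def normalized_adjacency(edges, n):
    """Symmetrically normalized adjacency with self-loops:
    D^{-1/2} (A + I) D^{-1/2}, the standard GCN propagation matrix."""
    a = np.eye(n)
    for i, j in edges:
        a[i, j] = a[j, i] = 1.0
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a.sum(axis=1)))
    return d_inv_sqrt @ a @ d_inv_sqrt

def gcn_layer(x, a_norm, w):
    """One graph-convolution layer: average each joint's features with its
    skeletal neighbors (a_norm @ x), project (@ w), apply ReLU."""
    return np.maximum(a_norm @ x @ w, 0.0)

# Toy usage: a 5-joint chain skeleton with 3-D joint positions as features.
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
a_norm = normalized_adjacency(edges, 5)
rng = np.random.default_rng(0)
h = gcn_layer(rng.normal(size=(5, 3)), a_norm, rng.normal(size=(3, 8)))
```

Because the propagation matrix encodes the skeleton graph directly, the layer respects the non-Euclidean structure of motion data that the abstract argues plain convolutional and recursive models handle poorly.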
The dataset and project are publicly available at: https://www.verlab.dcc.ufmg.br/motion-analysis/cag2020.

Item: A shape-aware retargeting approach to transfer human motion and appearance in monocular videos (2021)
Gomes, Thiago Luange; Martins, Renato José; Ferreira, João Pedro Moreira; Azevedo, Rafael Augusto Vieira de; Torres, Guilherme Alvarenga; Nascimento, Erickson Rangel do
Transferring human motion and appearance between videos of human actors remains one of the key challenges in Computer Vision. Despite the advances of recent image-to-image translation approaches, there are several transfer contexts in which most end-to-end learning-based retargeting methods still perform poorly. Transferring human appearance from one actor to another is only ensured when a strict setup is followed, generally built around the specificities of each method's training regime. In this work, we propose a shape-aware approach based on a hybrid image-based rendering technique that exhibits competitive visual retargeting quality compared to state-of-the-art neural rendering approaches. The formulation leverages the user's body shape in the retargeting while considering physical constraints of the motion in 3D and in the 2D image domain. We also present a new video retargeting benchmark dataset composed of different videos with annotated human motions to evaluate the task of synthesizing people's videos, which can be used as a common basis to track progress in the field. The dataset and its evaluation protocols are designed to evaluate retargeting methods under more general and challenging conditions. Our method is validated in several experiments, comprising publicly available videos of actors with different shapes, motion types, and camera setups. The dataset and retargeting code are publicly available to the community at: https://www.verlab.dcc.ufmg.br/retargeting-motion.