Drag-A-Video:
Non-rigid Video Editing with Point-based Interaction

Yao Teng1 Enze Xie2 Yue Wu2 Haoyu Han3 Zhenguo Li2 Xihui Liu1

1 The University of Hong Kong 2 Huawei Noah's Ark Lab 3 Tsinghua University

Abstract

Video editing is a challenging task that requires manipulating videos along both the spatial and temporal dimensions. Existing video editing methods mainly focus on changing the appearance or style of objects in a video while keeping their structures unchanged; no existing method lets users interactively "drag" points of instances on the first frame so that they precisely reach target points while the remaining frames deform consistently. In this paper, we propose Drag-A-Video, a new diffusion-based method for interactive point-based video manipulation. Users click pairs of handle and target points, together with a mask, on the first frame of an input video. Our method transforms these inputs into point sets and propagates the sets across frames. To modify the video contents precisely, we employ a new video-level motion supervision to update the video features, and introduce latent offsets to realize this update at multiple denoising timesteps. We further propose a temporally consistent point tracking module that coordinates the movement of the points in the handle point sets. We demonstrate the effectiveness and flexibility of our method on various videos. Code will be released upon acceptance.
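
To make the interaction concrete, the snippet below sketches one way the first-frame clicks could be turned into point sets and carried across frames. It is a minimal illustration under our own assumptions: the square neighborhood, the identity propagation, and the names (make_point_set, propagate_point_sets) are hypothetical stand-ins, not the released implementation, and the abstract does not specify how the propagation is actually computed.

    import torch

    def make_point_set(point, radius=2):
        # Dilate one (x, y) click into a square set of points around it.
        # The square neighborhood is an assumption made for illustration.
        x, y = point
        offs = torch.arange(-radius, radius + 1, dtype=torch.float32)
        dx, dy = torch.meshgrid(offs, offs, indexing="ij")
        return torch.stack([x + dx.flatten(), y + dy.flatten()], dim=-1)  # (N, 2)

    def propagate_point_sets(first_frame_sets, num_frames):
        # The paper propagates first-frame point sets to every frame; the
        # identity copy below is only a placeholder for that propagation.
        return [[s.clone() for s in first_frame_sets] for _ in range(num_frames)]

    # Hypothetical user input on the first frame: one handle/target pair.
    handle_sets = [make_point_set(torch.tensor([120.0, 96.0]))]  # points to drag
    target_sets = [make_point_set(torch.tensor([180.0, 96.0]))]  # where they should go
    handles = propagate_point_sets(handle_sets, num_frames=8)
    targets = propagate_point_sets(target_sets, num_frames=8)

A user-supplied mask would, in the same spirit, be propagated along with the point sets so that the edit stays within the selected region.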


Method


Drag-A-Video is the first framework for video-level point-based manipulation.
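
To make the optimization concrete, here is a heavily simplified, self-contained sketch of the video-level motion supervision with latent offsets: a loss shared over all frames nudges the diffusion features one step from each handle point toward its target, and the gradients update only the learnable per-frame latent offsets. Everything below (the stand-in feature network, the L1 loss, the step size, and all names) is our assumption for illustration, not the paper's exact formulation; the full method also keeps unmasked regions fixed and applies this at multiple denoising timesteps.

    import torch
    import torch.nn.functional as F

    def sample_features(feat, points):
        # Bilinearly sample per-point features from a (C, H, W) map;
        # `points` holds (N, 2) pixel coordinates in (x, y) order.
        C, H, W = feat.shape
        grid = points.clone()
        grid[:, 0] = grid[:, 0] / (W - 1) * 2 - 1  # x -> [-1, 1]
        grid[:, 1] = grid[:, 1] / (H - 1) * 2 - 1  # y -> [-1, 1]
        out = F.grid_sample(feat[None], grid.view(1, -1, 1, 2), align_corners=True)
        return out.view(C, -1).t()  # (N, C)

    # Dummy stand-ins: in the real method, features come from the diffusion
    # UNet applied to (latent + offset) at several denoising timesteps.
    torch.manual_seed(0)
    num_frames, C, H, W = 4, 8, 32, 32
    latents = [torch.randn(C, H, W) for _ in range(num_frames)]
    offsets = [torch.zeros(C, H, W, requires_grad=True) for _ in range(num_frames)]
    feature_net = torch.nn.Conv2d(C, C, 3, padding=1).requires_grad_(False)

    handle = torch.full((9, 2), 10.0)  # hypothetical per-frame handle point set
    target = torch.full((9, 2), 20.0)  # hypothetical per-frame target point set

    for it in range(5):  # a few motion-supervision iterations
        loss = 0.0
        for lat, off in zip(latents, offsets):
            feat = feature_net((lat + off)[None])[0]
            direction = F.normalize(target - handle, dim=-1)
            # Features one unit toward the target should match the
            # (detached) features at the current handle points.
            cur = sample_features(feat, handle).detach()
            moved = sample_features(feat, handle + direction)
            loss = loss + F.l1_loss(moved, cur)
        loss.backward()
        with torch.no_grad():
            for off in offsets:
                off -= 0.01 * off.grad  # gradient step on the latent offsets
                off.grad = None

After each such update, a tracking step would re-estimate where the handle points have moved (for example, by nearest-neighbor search in the updated features), with Drag-A-Video's point tracking module coordinating that search across frames so the handle point sets stay temporally consistent.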


Results


BibTeX


    @article{teng2023drag,
        title={Drag-A-Video: Non-rigid Video Editing with Point-based Interaction},
        author={Teng, Yao and Xie, Enze and Wu, Yue and Han, Haoyu and Li, Zhenguo and Liu, Xihui},
        journal={arXiv preprint arXiv:2312.02936},
        year={2023}
    }