MotionStream: Real-Time Video Generation with Interactive Motion Controls

Adobe Research, Carnegie Mellon University, Seoul National University

Best viewed in Chrome. Please wait for the videos to load.

TL;DR: We present MotionStream, a streaming (real-time, long-duration) video generation system with motion controls, unlocking new possibilities for interactive content generation. More examples below!
⚡ Note: Our model runs causally in real time on a single NVIDIA H100 GPU (29 FPS, 0.4 s latency).
All video results shown here are raw screen captures without any post-processing.

Abstract

Current motion-conditioned video generation methods suffer from prohibitive latency (minutes per video) and non-causal processing that prevents real-time interaction. We present MotionStream, which enables sub-second latency and streaming generation at up to 29 FPS on a single GPU. Our approach begins by augmenting a text-to-video model with motion control; the resulting model generates high-quality videos that adhere to the global text prompt and local motion guidance, but cannot run inference on the fly. We therefore distill this bidirectional teacher into a causal student through the Self Forcing paradigm with a distribution matching loss, enabling real-time streaming inference. Generating videos over long, potentially infinite time horizons raises several challenges: (1) bridging the domain gap between training on finite-length videos and extrapolating to infinite horizons, (2) sustaining high quality and preventing error accumulation, and (3) maintaining fast inference without the growing computational cost of an ever-increasing context window. A key ingredient of our approach is a carefully designed sliding-window causal attention with a KV cache and attention sinks. By incorporating self-rollout with attention sinks and KV cache rolling during training, we faithfully simulate inference-time extrapolation with a fixed context window, enabling constant-speed generation of arbitrarily long videos. Our models achieve state-of-the-art results in motion following and video quality while being two orders of magnitude faster, uniquely enabling infinite-length streaming. With MotionStream, users can paint trajectories, control cameras, or transfer motion, and watch the results unfold in real time, delivering a truly interactive experience.
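To make the attention structure concrete, here is a minimal, illustrative sketch of the chunk-level pattern (the function name and defaults are ours; the released model presumably realizes this pattern through its KV cache rather than a dense mask): each chunk attends to the attention-sink chunk(s), its local causal window, and itself, never to future chunks.

import torch

def chunk_attention_mask(num_chunks: int, sink_chunks: int = 1, window_chunks: int = 1) -> torch.Tensor:
    # Boolean chunk-level mask: entry (q, k) is True if query chunk q may attend to key chunk k.
    mask = torch.zeros(num_chunks, num_chunks, dtype=torch.bool)
    for q in range(num_chunks):
        mask[q, :min(sink_chunks, q + 1)] = True         # attention sink(s): the clean first chunk(s)
        mask[q, max(0, q - window_chunks):q + 1] = True  # local causal window, including the chunk itself
    return mask

print(chunk_attention_mask(6))  # 6 chunks with 1 sink chunk and a 1-chunk local window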

Method Overview

Method Architecture

To build the teacher motion-controlled video model, we extract 2D tracks from the input video and encode them with a lightweight track head. These track embeddings are combined with the image, noisy video latents, and text embeddings as input to a bidirectional diffusion transformer trained with a flow-matching loss (top). We introduce joint motion-text guidance as the distillation target and train a few-step causal student model through Self Forcing-style DMD distillation with autoregressive rollout, a rolling KV cache, and an attention sink (bottom). By properly simulating inference-time extrapolation with the attention sink and rolling KV cache during training, our method generates long videos at constant throughput and latency.
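As a rough sketch of the cache bookkeeping implied above (class and method names are ours, not the released code), the rolling KV cache permanently keeps the sink chunk's keys and values and retains only the most recent chunks in a fixed-size window, so the attention context, and therefore per-chunk compute, stays constant as the video grows.

from collections import deque

class RollingKVCache:
    # Manages which chunks' key/value tensors are retained; chunk_kv stands for
    # whatever per-chunk KV tensors the attention layers produce.
    def __init__(self, sink_chunks: int = 1, window_chunks: int = 1):
        self.sink_chunks = sink_chunks
        self.sink = []                             # clean first chunk(s), never evicted
        self.window = deque(maxlen=window_chunks)  # rolling local window

    def append(self, chunk_kv):
        if len(self.sink) < self.sink_chunks:
            self.sink.append(chunk_kv)
        else:
            self.window.append(chunk_kv)           # evicts the oldest windowed chunk when full

    def context(self):
        return self.sink + list(self.window)       # what the next chunk attends to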

More Results and Details

Real-time Streaming Demo on a Single GPU: Through simple click-and-drag input, MotionStream enables real-time control of diverse scenarios, including both object motion and camera movement, across various grid configurations. Thanks to its autoregressive nature, users can also pause/resume (with the space key) and add static points or multiple moving tracks to specify the control more precisely. Benefiting from the constant attention context with the sink and rolling KV cache, users experience 29 FPS at 480p with our 1.3B model and 24 FPS at 720p with our 5B model, both with sub-second latency, on a single H100 GPU (we did not apply optimization techniques beyond mixed precision and Flash Attention 3). By always anchoring attention to the clean first chunk (and keeping only a minimal number of possibly drifted chunks in the attention context), the model often recovers quality even after disruptions, showcasing its resilience in long video streaming. Check out the long-video example in the Full Gallery, which generates 5,000 frames. Because of its streaming nature, our current demo is highly susceptible to network latency and instability (with several latency bottlenecks, your dragged grid point may land in the wrong place by the time it reaches the generation pipeline), and performance is best on local clusters. Note: our initial demos were recorded on a system with Flash Attention 2 (FA2), yielding ~25 FPS, while videos tagged FA3 are newly recorded with Flash Attention 3 (~30 FPS, lower latency) and an updated front-end. All streaming demo videos use the 1.3B model variant. In the demo, tracks are color-coded: green for online user-dragged motion, red for static points, and blue for pre-drawn paths that move multiple points simultaneously.
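Conceptually, the demo reduces to the loop sketched below, which generates one chunk at a time conditioned on whatever tracks the user has drawn so far. The names model.generate_chunk, get_user_tracks, display, and paused are placeholders for the actual demo components, not its real API.

import time

def interactive_stream(model, cache, first_frame, prompt,
                       get_user_tracks, display, paused, chunk_frames=3):
    # Hypothetical interactive loop: one chunk per iteration, conditioned on live user input.
    while True:
        if paused():                   # space key toggles pause; users can add tracks while paused
            time.sleep(0.01)
            continue
        tracks = get_user_tracks(chunk_frames)  # latest drags, static points, pre-drawn paths
        chunk = model.generate_chunk(first_frame, prompt, tracks, cache)
        display(chunk)                 # frames are shown as soon as the chunk is ready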
Camera Control: Using a monocular depth estimation model, we can lift an image into 3D and derive 2D motion trajectories by projecting each lifted point into the coordinates of a moving camera. For the LLFF evaluation, we obtain trajectories by interpolating between the input and target frames, as discussed in the main paper (top two examples). We can also perform pre-defined camera motions such as dolly zoom or arcing (bottom two examples). For benchmarking, we found that appending a static template such as "static scene, only camera motion, no object is moving" to the prompt is helpful.
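A minimal sketch of this lift-and-reproject step, assuming known (or estimated) camera intrinsics and treating the first camera as the world frame; the actual pipeline and its depth estimator may differ in details.

import numpy as np

def camera_tracks(depth, K, cam_to_world_per_frame, stride=16):
    # depth: (H, W) monocular depth map; K: (3, 3) intrinsics;
    # cam_to_world_per_frame: list of (4, 4) poses along the virtual camera path.
    # Returns (T, N, 2) pixel tracks for a static scene.
    H, W = depth.shape
    ys, xs = np.mgrid[0:H:stride, 0:W:stride]
    pix = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])       # (3, N) homogeneous pixels
    pts_cam0 = np.linalg.inv(K) @ pix * depth[ys, xs].ravel()        # lift to 3D in the first camera
    pts_world = np.vstack([pts_cam0, np.ones(pts_cam0.shape[1])])    # first camera taken as world frame

    tracks = []
    for T_c2w in cam_to_world_per_frame:
        pts_cam = np.linalg.inv(T_c2w) @ pts_world                   # world -> current camera
        uvw = K @ pts_cam[:3]
        tracks.append((uvw[:2] / uvw[2:3]).T)                        # perspective divide -> (N, 2)
    return np.stack(tracks)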
Gallery: four camera-control examples, each showing the Input Image, Motion Track, and Result.
Impact of Sparse Attention Parameters in Extrapolation Scenarios: The results demonstrate that keeping at least one sink chunk is crucial for stable long-video generation, while larger attention windows do not improve performance in motion-controlled scenarios. We found a chunk size of 3, with both the sink and the local window set to one chunk, to be optimal. Please refer to the main text and the limitation section for the advantages and disadvantages of a fixed attention context.
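A tiny helper (illustrative, not the training code) makes these settings concrete: it lists which cached chunks a given chunk attends to, and shows that with one sink chunk and a one-chunk window the context never exceeds two cached chunks, no matter how long generation runs.

def attended_chunks(current, sink, window):
    # Indices of cached chunks that chunk `current` attends to (besides itself).
    sinks = set(range(min(sink, current)))
    local = set(range(max(0, current - window), current))
    return sorted(sinks | local)

print(attended_chunks(100, sink=1, window=1))  # [0, 99]: constant two-chunk context
print(attended_chunks(100, sink=0, window=6))  # [94, 95, 96, 97, 98, 99]: larger window, but no sink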
Ablation videos (two sets of examples, three settings each): Chunk 3 / Sink 0 / Local window 1; Chunk 3 / Sink 0 / Local window 6; Chunk 3 / Sink 1 / Local window 1.
Qualitative Comparison with Other Baselines: We compare our method with recent motion-controlled video generation approaches on samples from the Sora demos. We denote our models as 1.3B-T (1.3B-parameter teacher), 1.3B-S (1.3B-parameter student/distilled), 5B-T (5B-parameter teacher), and 5B-S (5B-parameter student/distilled). Our models consistently show higher video quality and motion adherence than the baselines, with the student models additionally achieving real-time performance.
Gallery: three comparison examples, each showing the Motion Track alongside results from GWTF, DAS, ATI, and Ours (1.3B-T, 1.3B-S, 5B-T, 5B-S).
Motion Transfer: Given an initial image with similar structure, MotionStream can naturally transfer motion in a streaming fashion for arbitrarily long videos as long as the tracks maintain sufficient quality. It can also be combined with real-time trackers (e.g., facial keypoint or pose estimators) to enable online motion transfer.
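In pseudocode, online motion transfer is the same streaming loop as the interactive demo, except that the tracks come from a driving source (an offline point tracker, or a real-time facial-keypoint or pose estimator) rather than user drags. The names track_source.next_tracks and model.generate_chunk are hypothetical.

def motion_transfer(model, cache, target_image, prompt, track_source, display, chunk_frames=3):
    # Hypothetical sketch: drive a new initial image with tracks extracted from another video/stream.
    while True:
        tracks = track_source.next_tracks(chunk_frames)  # 2D tracks from the driving source
        if tracks is None:                               # driving source exhausted
            break
        chunk = model.generate_chunk(target_image, prompt, tracks, cache)
        display(chunk)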
Gallery: four motion-transfer examples, each showing the Source, Motion Track, and Result.
Guidance Ablation Study: High motion guidance leads to overly rigid translations due to strict track adherence and ignores text cues (the sample below is prompted with "rainbow appearing at the back"). Prompt guidance alone yields inferior quantitative metrics for motion reconstruction but enables flexible text-based control. Our proposed joint guidance strategy effectively balances the two.
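For reference, the sketch below shows one standard way to compose a text-guidance direction and a motion-guidance direction from three denoiser passes; the weights are illustrative, and the exact formulation of our joint guidance in the paper may differ.

def joint_guidance(denoiser, x_t, t, text, tracks, w_text=5.0, w_motion=3.0):
    # Three passes: unconditional, text-only, and text + motion.
    eps_uncond = denoiser(x_t, t, text=None, tracks=None)
    eps_text   = denoiser(x_t, t, text=text, tracks=None)
    eps_full   = denoiser(x_t, t, text=text, tracks=tracks)
    return (eps_uncond
            + w_text   * (eps_text - eps_uncond)    # follow the prompt
            + w_motion * (eps_full - eps_text))     # additionally follow the tracks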
Gallery: two examples, each showing the Motion Track with Motion Only, Joint Guidance, and Prompt Only results.
Failure Cases: Our model can produce artifacts when motion trajectories are extremely rapid or physically implausible, and it sometimes struggles to preserve source details in highly complex scenes. In the cat, Mona Lisa, and turtle examples below, the intent was to (1) bring the cat out of the box, (2) flip the book pages, and (3) make the turtle hatch out of the egg; the results show physically implausible movements due to both imperfect user-drawn drag motions (hand-drawn trajectories have limited accuracy) and the backbone model's limited generalization capacity. In the Diverse People example, we also observe that fine-grained human identities change and artifacts appear under rapid motion, some of which could be partially alleviated with an improved model backbone. For additional discussion, please see the Limitation and Future Work section in our paper's supplementary material.
Failure-case videos: Cat, Diverse People, Mona Lisa, Turtle.

BibTeX

@article{shin2025motionstream,
  title={MotionStream: Real-Time Video Generation with Interactive Motion Controls},
  author={Shin, Joonghyuk and Li, Zhengqi and Zhang, Richard and Zhu, Jun-Yan and Park, Jaesik and Shechtman, Eli and Huang, Xun},
  journal={arXiv preprint arXiv:2511.01266},
  year={2025}
}