basic
¶
Basic inference pipelines for fastvideo.
This package contains basic pipelines for video and image generation.
Modules¶
fastvideo.pipelines.basic.cosmos
¶
Modules¶
fastvideo.pipelines.basic.cosmos.cosmos2_5_pipeline
¶
Cosmos 2.5 pipeline entry (staged pipeline).
Classes¶
fastvideo.pipelines.basic.cosmos.cosmos2_5_pipeline.Cosmos2_5Pipeline
¶Cosmos2_5Pipeline(model_path: str, fastvideo_args: FastVideoArgs | TrainingArgs, required_config_modules: list[str] | None = None, loaded_modules: dict[str, Module] | None = None)
Bases: ComposedPipelineBase
Cosmos 2.5 video generation pipeline.
Source code in fastvideo/pipelines/composed_pipeline_base.py
Functions¶
fastvideo.pipelines.basic.cosmos.cosmos_pipeline
¶
Cosmos video diffusion pipeline implementation.
This module contains an implementation of the Cosmos video diffusion pipeline using the modular pipeline architecture.
Classes¶
fastvideo.pipelines.basic.cosmos.cosmos_pipeline.Cosmos2VideoToWorldPipeline
¶Cosmos2VideoToWorldPipeline(model_path: str, fastvideo_args: FastVideoArgs | TrainingArgs, required_config_modules: list[str] | None = None, loaded_modules: dict[str, Module] | None = None)
Bases: ComposedPipelineBase
Source code in fastvideo/pipelines/composed_pipeline_base.py
fastvideo.pipelines.basic.cosmos.cosmos_pipeline.Cosmos2VideoToWorldPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs)
Set up pipeline stages with proper dependency injection.
Source code in fastvideo/pipelines/basic/cosmos/cosmos_pipeline.py
Functions¶
fastvideo.pipelines.basic.gamecraft
¶
HunyuanGameCraft pipeline implementations.
Modules¶
fastvideo.pipelines.basic.gamecraft.gamecraft_pipeline
¶
HunyuanGameCraft video diffusion pipeline implementation.
This module implements the HunyuanGameCraft pipeline for camera/action-conditioned video generation with the modular pipeline architecture.
Classes¶
fastvideo.pipelines.basic.gamecraft.gamecraft_pipeline.HunyuanGameCraftPipeline
¶HunyuanGameCraftPipeline(model_path: str, fastvideo_args: FastVideoArgs | TrainingArgs, required_config_modules: list[str] | None = None, loaded_modules: dict[str, Module] | None = None)
Bases: ComposedPipelineBase
Pipeline for HunyuanGameCraft video generation.
This pipeline supports:
- Text-to-video generation with camera/action conditioning
- Autoregressive generation with history frames
- 33-channel input (16 latent + 16 gt_latent + 1 mask)
- CameraNet for encoding Plücker coordinates
Source code in fastvideo/pipelines/composed_pipeline_base.py
fastvideo.pipelines.basic.gamecraft.gamecraft_pipeline.HunyuanGameCraftPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs)
Set up pipeline stages with proper dependency injection.
Source code in fastvideo/pipelines/basic/gamecraft/gamecraft_pipeline.py
Functions¶
fastvideo.pipelines.basic.gen3c
¶
GEN3C is a 3D-informed world-consistent video generation model with precise camera control.
Classes¶
fastvideo.pipelines.basic.gen3c.Cache3DBase
¶
Cache3DBase(input_image: Tensor, input_depth: Tensor, input_w2c: Tensor, input_intrinsics: Tensor, input_mask: Tensor | None = None, input_format: list[str] | None = None, input_points: Tensor | None = None, weight_dtype: dtype = float32, is_depth: bool = True, device: str = 'cuda', filter_points_threshold: float = 1.0)
Base class for 3D cache management.
The cache maintains:
- input_image: RGB images stored in the cache
- input_points: 3D world coordinates for each pixel
- input_mask: Validity mask for each pixel
Initialize the 3D cache.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| input_image | Tensor | Input image tensor with varying dimensions | required |
| input_depth | Tensor | Depth map tensor | required |
| input_w2c | Tensor | World-to-camera transformation matrix | required |
| input_intrinsics | Tensor | Camera intrinsic matrix | required |
| input_mask | Tensor \| None | Optional validity mask | None |
| input_format | list[str] \| None | Dimension labels for input_image (e.g., ['B', 'C', 'H', 'W']) | None |
| input_points | Tensor \| None | Pre-computed 3D world points (alternative to depth) | None |
| weight_dtype | dtype | Data type for computations | float32 |
| is_depth | bool | If True, input_depth is z-depth; if False, it's distance | True |
| device | str | Computation device | 'cuda' |
| filter_points_threshold | float | Threshold for filtering unreliable depth | 1.0 |
Source code in fastvideo/pipelines/basic/gen3c/cache_3d.py
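A minimal construction sketch, assuming fastvideo is installed and using the package-level import path from this reference; the tensor shapes follow the ['B', 'C', 'H', 'W'] convention mentioned above, and the depth value and pinhole intrinsics are illustrative assumptions:

```python
import torch
from fastvideo.pipelines.basic.gen3c import Cache3DBase

b, h, w = 1, 480, 640
image = torch.rand(b, 3, h, w) * 2 - 1    # RGB frame in [-1, 1]
depth = torch.full((b, 1, h, w), 2.0)     # constant z-depth of 2.0
w2c = torch.eye(4).unsqueeze(0)           # identity world-to-camera
K = torch.tensor([[[500.0, 0.0, w / 2],   # assumed pinhole intrinsics
                   [0.0, 500.0, h / 2],
                   [0.0, 0.0, 1.0]]])

cache = Cache3DBase(
    input_image=image,
    input_depth=depth,
    input_w2c=w2c,
    input_intrinsics=K,
    input_format=["B", "C", "H", "W"],
    device="cpu",                         # the documented default is 'cuda'
)
```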
Functions¶
fastvideo.pipelines.basic.gen3c.Cache3DBase.render_cache
¶render_cache(target_w2cs: Tensor, target_intrinsics: Tensor, render_depth: bool = False, start_frame_idx: int = 0) -> tuple[Tensor, Tensor]
Render the cached 3D points from new camera viewpoints.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| target_w2cs | Tensor | (b, F_target, 4, 4) target camera transformations | required |
| target_intrinsics | Tensor | (b, F_target, 3, 3) target camera intrinsics | required |
| render_depth | bool | If True, return depth instead of RGB | False |
| start_frame_idx | int | Starting frame index in the cache | 0 |

Returns:

| Name | Type | Description |
|---|---|---|
| pixels | Tensor | (b, F_target, N, c, h, w) rendered images or depth |
| masks | Tensor | (b, F_target, N, 1, h, w) validity masks |
Source code in fastvideo/pipelines/basic/gen3c/cache_3d.py
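A hedged usage sketch: build a cache as in the constructor example above, then render it from F_target identity target poses (an illustrative assumption; real trajectories would vary per frame):

```python
import torch
from fastvideo.pipelines.basic.gen3c import Cache3DBase

b, h, w, f_target = 1, 480, 640, 8
K = torch.tensor([[[500.0, 0.0, w / 2], [0.0, 500.0, h / 2], [0.0, 0.0, 1.0]]])
cache = Cache3DBase(
    input_image=torch.rand(b, 3, h, w) * 2 - 1,
    input_depth=torch.full((b, 1, h, w), 2.0),
    input_w2c=torch.eye(4).unsqueeze(0),
    input_intrinsics=K,
    input_format=["B", "C", "H", "W"],
    device="cpu",
)

# Repeat the same pose/intrinsics over F_target frames for illustration.
target_w2cs = torch.eye(4).expand(b, f_target, 4, 4).contiguous()
target_intrinsics = K.expand(b, f_target, 3, 3).contiguous()
pixels, masks = cache.render_cache(target_w2cs, target_intrinsics)
# pixels: (b, F_target, N, c, h, w); masks: (b, F_target, N, 1, h, w)
```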
fastvideo.pipelines.basic.gen3c.Cache3DBase.update_cache
¶
fastvideo.pipelines.basic.gen3c.Cache3DBuffer
¶
Cache3DBuffer(frame_buffer_max: int = 2, noise_aug_strength: float = 0.0, generator: Generator | None = None, **kwargs)
Bases: Cache3DBase
3D cache with frame buffer support.
This class manages multiple frame buffers for temporal consistency and supports noise augmentation for training stability.
Initialize the buffered 3D cache.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| frame_buffer_max | int | Maximum number of frames to buffer | 2 |
| noise_aug_strength | float | Strength of noise augmentation per buffer | 0.0 |
| generator | Generator \| None | Random generator for reproducibility | None |
| **kwargs | | Arguments passed to Cache3DBase | {} |
Source code in fastvideo/pipelines/basic/gen3c/cache_3d.py
Functions¶
fastvideo.pipelines.basic.gen3c.Cache3DBuffer.render_cache
¶render_cache(target_w2cs: Tensor, target_intrinsics: Tensor, render_depth: bool = False, start_frame_idx: int = 0) -> tuple[Tensor, Tensor]
Render the cache with optional noise augmentation.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| target_w2cs | Tensor | (b, F_target, 4, 4) target camera transformations | required |
| target_intrinsics | Tensor | (b, F_target, 3, 3) target camera intrinsics | required |
| render_depth | bool | If True, return depth instead of RGB | False |
| start_frame_idx | int | Starting frame index (must be 0 for this class) | 0 |

Returns:

| Name | Type | Description |
|---|---|---|
| pixels | Tensor | (b, F_target, N, c, h, w) rendered images |
| masks | Tensor | (b, F_target, N, 1, h, w) validity masks |
Source code in fastvideo/pipelines/basic/gen3c/cache_3d.py
fastvideo.pipelines.basic.gen3c.Cache3DBuffer.update_cache
¶update_cache(new_image: Tensor, new_depth: Tensor, new_w2c: Tensor, new_mask: Tensor | None = None, new_intrinsics: Tensor | None = None)
Update the cache with a new frame.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| new_image | Tensor | (B, C, H, W) new RGB image | required |
| new_depth | Tensor | (B, 1, H, W) new depth map | required |
| new_w2c | Tensor | (B, 4, 4) new world-to-camera transformation | required |
| new_mask | Tensor \| None | Optional (B, 1, H, W) validity mask | None |
| new_intrinsics | Tensor \| None | (B, 3, 3) camera intrinsics (optional) | None |
Source code in fastvideo/pipelines/basic/gen3c/cache_3d.py
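A hedged sketch of an autoregressive update, assuming Cache3DBuffer forwards its **kwargs to the Cache3DBase constructor documented above; all tensor values and the small camera translation are synthetic placeholders:

```python
import torch
from fastvideo.pipelines.basic.gen3c import Cache3DBuffer

b, h, w = 1, 480, 640
K = torch.tensor([[[500.0, 0.0, w / 2], [0.0, 500.0, h / 2], [0.0, 0.0, 1.0]]])
buf = Cache3DBuffer(
    frame_buffer_max=2,
    input_image=torch.rand(b, 3, h, w) * 2 - 1,
    input_depth=torch.full((b, 1, h, w), 2.0),
    input_w2c=torch.eye(4).unsqueeze(0),
    input_intrinsics=K,
    input_format=["B", "C", "H", "W"],
    device="cpu",
)

# After generating a new frame, fold it back into the cache.
new_w2c = torch.eye(4).unsqueeze(0)
new_w2c[:, 2, 3] = 0.1                         # assumed small camera translation
buf.update_cache(
    new_image=torch.rand(b, 3, h, w) * 2 - 1,  # (B, C, H, W)
    new_depth=torch.full((b, 1, h, w), 2.1),   # (B, 1, H, W)
    new_w2c=new_w2c,                           # (B, 4, 4)
    new_intrinsics=K,                          # (B, 3, 3)
)
```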
fastvideo.pipelines.basic.gen3c.Gen3CConditioningStage
¶
Bases: PipelineStage
3D cache conditioning stage for GEN3C.
This stage performs the core GEN3C innovation:
1. Loads the input image
2. Predicts depth via MoGe
3. Initializes a 3D point cloud cache
4. Generates a camera trajectory
5. Renders warped frames from the cache at each target camera pose
6. Stores rendered warps on the batch for VAE encoding in the latent prep stage
Source code in fastvideo/pipelines/stages/gen3c_stages.py
Functions¶
fastvideo.pipelines.basic.gen3c.Gen3CConditioningStage.forward
¶forward(batch: ForwardBatch, fastvideo_args: FastVideoArgs) -> ForwardBatch
Run 3D cache conditioning pipeline.
Source code in fastvideo/pipelines/stages/gen3c_stages.py
fastvideo.pipelines.basic.gen3c.Gen3CDenoisingStage
¶
Bases: DenoisingStage
Denoising stage for GEN3C models.
This stage extends the base denoising stage with support for:
- condition_video_input_mask: Binary mask indicating conditioning frames
- condition_video_pose: VAE-encoded 3D cache buffers
- condition_video_augment_sigma: Noise augmentation sigma
Source code in fastvideo/pipelines/stages/gen3c_stages.py
fastvideo.pipelines.basic.gen3c.Gen3CLatentPreparationStage
¶
Bases: LatentPreparationStage
Latent preparation stage for GEN3C.
This stage prepares latents and encodes 3D cache buffers through the VAE. If rendered warped frames are available on the batch (from Gen3CConditioningStage), they are VAE-encoded to produce real conditioning. Otherwise, the stage falls back to zero conditioning.
Source code in fastvideo/pipelines/stages/gen3c_stages.py
Functions¶
fastvideo.pipelines.basic.gen3c.Gen3CLatentPreparationStage.encode_warped_frames
¶encode_warped_frames(condition_state: Tensor, condition_state_mask: Tensor, vae: Any, frame_buffer_max: int, dtype: dtype) -> Tensor
Encode rendered 3D cache buffers through VAE.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| condition_state | Tensor | (B, T, N, 3, H, W) rendered RGB images in [-1, 1]. | required |
| condition_state_mask | Tensor | (B, T, N, 1, H, W) rendered masks in [0, 1]. | required |
| vae | Any | VAE encoder. | required |
| frame_buffer_max | int | Maximum number of buffers. | required |
| dtype | dtype | Target dtype. | required |

Returns:

| Name | Type | Description |
|---|---|---|
| latent_condition | Tensor | (B, buffer_channels, T_latent, H_latent, W_latent) |
Source code in fastvideo/pipelines/stages/gen3c_stages.py
fastvideo.pipelines.basic.gen3c.Gen3CLatentPreparationStage.forward
¶forward(batch: ForwardBatch, fastvideo_args: FastVideoArgs) -> ForwardBatch
Prepare latents and encode 3D cache buffers.
Source code in fastvideo/pipelines/stages/gen3c_stages.py
fastvideo.pipelines.basic.gen3c.Gen3CPipeline
¶
Gen3CPipeline(model_path: str, fastvideo_args: FastVideoArgs | TrainingArgs, required_config_modules: list[str] | None = None, loaded_modules: dict[str, Module] | None = None)
Bases: ComposedPipelineBase
GEN3C Video Generation Pipeline.
This pipeline extends Cosmos with 3D cache support for camera-controlled video generation. When an input image is provided, it runs the full 3D cache conditioning pipeline (depth estimation -> point cloud -> camera trajectory -> forward warping -> VAE encoding).
Source code in fastvideo/pipelines/composed_pipeline_base.py
Functions¶
fastvideo.pipelines.basic.gen3c.Gen3CPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs)
Set up pipeline stages with proper dependency injection.
Source code in fastvideo/pipelines/basic/gen3c/gen3c_pipeline.py
Functions¶
fastvideo.pipelines.basic.gen3c.forward_warp
¶
forward_warp(frame1: Tensor, mask1: Tensor | None, depth1: Tensor | None, transformation1: Tensor | None, transformation2: Tensor, intrinsic1: Tensor | None, intrinsic2: Tensor | None, is_image: bool = True, is_depth: bool = True, render_depth: bool = False, world_points1: Tensor | None = None) -> tuple[Tensor, Tensor, Tensor | None, Tensor]
Forward warp frame1 to a new view defined by transformation2.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| frame1 | Tensor | (b, c, h, w) source frame in range [-1, 1] for images | required |
| mask1 | Tensor \| None | (b, 1, h, w) valid pixel mask | required |
| depth1 | Tensor \| None | (b, 1, h, w) depth map (required if world_points1 is None) | required |
| transformation1 | Tensor \| None | (b, 4, 4) source camera w2c (required if depth1 is provided) | required |
| transformation2 | Tensor | (b, 4, 4) target camera w2c | required |
| intrinsic1 | Tensor \| None | (b, 3, 3) source camera intrinsics | required |
| intrinsic2 | Tensor \| None | (b, 3, 3) target camera intrinsics | required |
| is_image | bool | If True, output will be clipped to (-1, 1) | True |
| is_depth | bool | If True, depth1 is z-depth; if False, it's distance | True |
| render_depth | bool | If True, also return the warped depth map | False |
| world_points1 | Tensor \| None | (b, h, w, 3) pre-computed world points (alternative to depth1) | None |

Returns:

| Name | Type | Description |
|---|---|---|
| warped_frame2 | Tensor | (b, c, h, w) warped frame |
| mask2 | Tensor | (b, 1, h, w) validity mask |
| warped_depth2 | Tensor \| None | (b, h, w) warped depth (if render_depth=True) |
| flow12 | Tensor | (b, 2, h, w) optical flow |
Source code in fastvideo/pipelines/basic/gen3c/cache_3d.py
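A minimal sketch of warping a single frame to a nearby viewpoint, using the signature above; the 0.1-unit sideways translation and the intrinsics values are illustrative assumptions:

```python
import torch
from fastvideo.pipelines.basic.gen3c import forward_warp

b, h, w = 1, 240, 320
frame1 = torch.rand(b, 3, h, w) * 2 - 1        # source frame in [-1, 1]
depth1 = torch.full((b, 1, h, w), 2.0)         # constant z-depth
w2c_src = torch.eye(4).unsqueeze(0)
w2c_tgt = torch.eye(4).unsqueeze(0)
w2c_tgt[:, 0, 3] = 0.1                         # assumed small sideways move
K = torch.tensor([[[300.0, 0.0, w / 2], [0.0, 300.0, h / 2], [0.0, 0.0, 1.0]]])

warped_frame2, mask2, warped_depth2, flow12 = forward_warp(
    frame1, None, depth1, w2c_src, w2c_tgt, K, K, render_depth=True)
# warped_frame2: (b, c, h, w); mask2: (b, 1, h, w); flow12: (b, 2, h, w)
```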
fastvideo.pipelines.basic.gen3c.generate_camera_trajectory
¶
generate_camera_trajectory(trajectory_type: str, initial_w2c: Tensor, initial_intrinsics: Tensor, num_frames: int, movement_distance: float, camera_rotation: str = 'center_facing', center_depth: float = 1.0, device: str = 'cuda') -> tuple[Tensor, Tensor]
Generate camera trajectory for GEN3C video generation.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| trajectory_type | str | One of "left", "right", "up", "down", "zoom_in", "zoom_out", "clockwise", "counterclockwise". | required |
| initial_w2c | Tensor | Initial world-to-camera matrix (4, 4). | required |
| initial_intrinsics | Tensor | Camera intrinsics matrix (3, 3). | required |
| num_frames | int | Number of frames in the trajectory. | required |
| movement_distance | float | Distance factor for camera movement. | required |
| camera_rotation | str | "center_facing", "no_rotation", or "trajectory_aligned". | 'center_facing' |
| center_depth | float | Depth of the scene center point. | 1.0 |
| device | str | Computation device. | 'cuda' |

Returns:

| Name | Type | Description |
|---|---|---|
| generated_w2cs | Tensor | (1, num_frames, 4, 4) world-to-camera matrices. |
| generated_intrinsics | Tensor | (1, num_frames, 3, 3) camera intrinsics. |
Source code in fastvideo/pipelines/basic/gen3c/camera_utils.py
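A hedged sketch generating a zoom-in trajectory from an identity starting pose; movement_distance=0.3 and the intrinsics are arbitrary illustrative values:

```python
import torch
from fastvideo.pipelines.basic.gen3c import generate_camera_trajectory

w2cs, intrinsics = generate_camera_trajectory(
    trajectory_type="zoom_in",
    initial_w2c=torch.eye(4),
    initial_intrinsics=torch.tensor([[500.0, 0.0, 320.0],
                                     [0.0, 500.0, 240.0],
                                     [0.0, 0.0, 1.0]]),
    num_frames=13,
    movement_distance=0.3,   # assumed value; the right scale depends on the scene
    device="cpu",
)
# w2cs: (1, 13, 4, 4); intrinsics: (1, 13, 3, 3)
```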
fastvideo.pipelines.basic.gen3c.project_points
¶
Project 3D world points to 2D pixel coordinates.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| world_points | Tensor | (b, h, w, 3) 3D world coordinates | required |
| w2c | Tensor | (b, 4, 4) world-to-camera transformation matrix | required |
| intrinsic | Tensor | (b, 3, 3) camera intrinsic matrix | required |

Returns:

| Name | Type | Description |
|---|---|---|
| projected_points | Tensor | (b, h, w, 3, 1) projected 2D coordinates (x, y, z) |
Source code in fastvideo/pipelines/basic/gen3c/cache_3d.py
fastvideo.pipelines.basic.gen3c.unproject_points
¶
unproject_points(depth: Tensor, w2c: Tensor, intrinsic: Tensor, is_depth: bool = True, mask: Tensor | None = None) -> Tensor
Unproject depth map to 3D world points.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| depth | Tensor | (b, 1, h, w) depth map | required |
| w2c | Tensor | (b, 4, 4) world-to-camera transformation matrix | required |
| intrinsic | Tensor | (b, 3, 3) camera intrinsic matrix | required |
| is_depth | bool | If True, depth is z-depth; if False, depth is distance to camera | True |
| mask | Tensor \| None | Optional (b, h, w) or (b, 1, h, w) mask for valid pixels | None |

Returns:

| Name | Type | Description |
|---|---|---|
| world_points | Tensor | (b, h, w, 3) 3D world coordinates |
Source code in fastvideo/pipelines/basic/gen3c/cache_3d.py
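A round-trip sketch tying unproject_points and project_points together: unprojecting a constant-depth map and reprojecting with the same camera should land each point back on its source pixel, up to numerical error. All values are synthetic:

```python
import torch
from fastvideo.pipelines.basic.gen3c import project_points, unproject_points

b, h, w = 1, 120, 160
depth = torch.full((b, 1, h, w), 2.0)
w2c = torch.eye(4).unsqueeze(0)
K = torch.tensor([[[200.0, 0.0, w / 2], [0.0, 200.0, h / 2], [0.0, 0.0, 1.0]]])

world_points = unproject_points(depth, w2c, K)      # (b, h, w, 3)
reprojected = project_points(world_points, w2c, K)  # (b, h, w, 3, 1): (x, y, z)
```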
Modules¶
fastvideo.pipelines.basic.gen3c.cache_3d
¶
This module implements the 3D cache system for GEN3C video generation with camera control. The cache maintains a point cloud representation of the scene, enabling:
- Unprojecting depth maps to 3D world points
- Forward warping rendered views to new camera poses
- Managing multiple frame buffers for temporal consistency
Classes¶
fastvideo.pipelines.basic.gen3c.cache_3d.Cache3DBase
¶Cache3DBase(input_image: Tensor, input_depth: Tensor, input_w2c: Tensor, input_intrinsics: Tensor, input_mask: Tensor | None = None, input_format: list[str] | None = None, input_points: Tensor | None = None, weight_dtype: dtype = float32, is_depth: bool = True, device: str = 'cuda', filter_points_threshold: float = 1.0)
Base class for 3D cache management.
The cache maintains:
- input_image: RGB images stored in the cache
- input_points: 3D world coordinates for each pixel
- input_mask: Validity mask for each pixel
Initialize the 3D cache.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| input_image | Tensor | Input image tensor with varying dimensions | required |
| input_depth | Tensor | Depth map tensor | required |
| input_w2c | Tensor | World-to-camera transformation matrix | required |
| input_intrinsics | Tensor | Camera intrinsic matrix | required |
| input_mask | Tensor \| None | Optional validity mask | None |
| input_format | list[str] \| None | Dimension labels for input_image (e.g., ['B', 'C', 'H', 'W']) | None |
| input_points | Tensor \| None | Pre-computed 3D world points (alternative to depth) | None |
| weight_dtype | dtype | Data type for computations | float32 |
| is_depth | bool | If True, input_depth is z-depth; if False, it's distance | True |
| device | str | Computation device | 'cuda' |
| filter_points_threshold | float | Threshold for filtering unreliable depth | 1.0 |
Source code in fastvideo/pipelines/basic/gen3c/cache_3d.py
fastvideo.pipelines.basic.gen3c.cache_3d.Cache3DBase.render_cache
¶render_cache(target_w2cs: Tensor, target_intrinsics: Tensor, render_depth: bool = False, start_frame_idx: int = 0) -> tuple[Tensor, Tensor]
Render the cached 3D points from new camera viewpoints.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| target_w2cs | Tensor | (b, F_target, 4, 4) target camera transformations | required |
| target_intrinsics | Tensor | (b, F_target, 3, 3) target camera intrinsics | required |
| render_depth | bool | If True, return depth instead of RGB | False |
| start_frame_idx | int | Starting frame index in the cache | 0 |

Returns:

| Name | Type | Description |
|---|---|---|
| pixels | Tensor | (b, F_target, N, c, h, w) rendered images or depth |
| masks | Tensor | (b, F_target, N, 1, h, w) validity masks |
Source code in fastvideo/pipelines/basic/gen3c/cache_3d.py
fastvideo.pipelines.basic.gen3c.cache_3d.Cache3DBase.update_cache
¶
fastvideo.pipelines.basic.gen3c.cache_3d.Cache3DBuffer
¶Cache3DBuffer(frame_buffer_max: int = 2, noise_aug_strength: float = 0.0, generator: Generator | None = None, **kwargs)
Bases: Cache3DBase
3D cache with frame buffer support.
This class manages multiple frame buffers for temporal consistency and supports noise augmentation for training stability.
Initialize the buffered 3D cache.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| frame_buffer_max | int | Maximum number of frames to buffer | 2 |
| noise_aug_strength | float | Strength of noise augmentation per buffer | 0.0 |
| generator | Generator \| None | Random generator for reproducibility | None |
| **kwargs | | Arguments passed to Cache3DBase | {} |
Source code in fastvideo/pipelines/basic/gen3c/cache_3d.py
fastvideo.pipelines.basic.gen3c.cache_3d.Cache3DBuffer.render_cache
¶render_cache(target_w2cs: Tensor, target_intrinsics: Tensor, render_depth: bool = False, start_frame_idx: int = 0) -> tuple[Tensor, Tensor]
Render the cache with optional noise augmentation.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| target_w2cs | Tensor | (b, F_target, 4, 4) target camera transformations | required |
| target_intrinsics | Tensor | (b, F_target, 3, 3) target camera intrinsics | required |
| render_depth | bool | If True, return depth instead of RGB | False |
| start_frame_idx | int | Starting frame index (must be 0 for this class) | 0 |

Returns:

| Name | Type | Description |
|---|---|---|
| pixels | Tensor | (b, F_target, N, c, h, w) rendered images |
| masks | Tensor | (b, F_target, N, 1, h, w) validity masks |
Source code in fastvideo/pipelines/basic/gen3c/cache_3d.py
fastvideo.pipelines.basic.gen3c.cache_3d.Cache3DBuffer.update_cache
¶update_cache(new_image: Tensor, new_depth: Tensor, new_w2c: Tensor, new_mask: Tensor | None = None, new_intrinsics: Tensor | None = None)
Update the cache with a new frame.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| new_image | Tensor | (B, C, H, W) new RGB image | required |
| new_depth | Tensor | (B, 1, H, W) new depth map | required |
| new_w2c | Tensor | (B, 4, 4) new world-to-camera transformation | required |
| new_mask | Tensor \| None | Optional (B, 1, H, W) validity mask | None |
| new_intrinsics | Tensor \| None | (B, 3, 3) camera intrinsics (optional) | None |
Source code in fastvideo/pipelines/basic/gen3c/cache_3d.py
Functions¶
fastvideo.pipelines.basic.gen3c.cache_3d.bilinear_splatting
¶bilinear_splatting(frame1: Tensor, mask1: Tensor | None, depth1: Tensor, flow12: Tensor, flow12_mask: Tensor | None = None, is_image: bool = False, depth_weight_scale: float = 50.0) -> tuple[Tensor, Tensor]
Bilinear splatting for forward warping.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| frame1 | Tensor | (b, c, h, w) source frame | required |
| mask1 | Tensor \| None | (b, 1, h, w) valid pixel mask (1 for known, 0 for unknown) | required |
| depth1 | Tensor | (b, 1, h, w) depth map | required |
| flow12 | Tensor | (b, 2, h, w) optical flow from frame1 to frame2 | required |
| flow12_mask | Tensor \| None | (b, 1, h, w) flow validity mask | None |
| is_image | bool | If True, output will be clipped to (-1, 1) range | False |
| depth_weight_scale | float | Scale factor for depth weighting | 50.0 |

Returns:

| Name | Type | Description |
|---|---|---|
| warped_frame2 | Tensor | (b, c, h, w) warped frame |
| mask2 | Tensor | (b, 1, h, w) validity mask for warped frame |
Source code in fastvideo/pipelines/basic/gen3c/cache_3d.py
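A minimal sketch splatting a frame under a constant rightward flow; the flow field and depth map are synthetic placeholders:

```python
import torch
from fastvideo.pipelines.basic.gen3c.cache_3d import bilinear_splatting

b, h, w = 1, 64, 64
frame1 = torch.rand(b, 3, h, w) * 2 - 1
depth1 = torch.ones(b, 1, h, w)
flow12 = torch.zeros(b, 2, h, w)
flow12[:, 0] = 3.0                 # shift every pixel 3 px to the right

warped_frame2, mask2 = bilinear_splatting(
    frame1, None, depth1, flow12, is_image=True)  # clip output to (-1, 1)
# mask2: (b, 1, h, w) validity mask for the warped frame
```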
fastvideo.pipelines.basic.gen3c.cache_3d.create_grid
¶Create a dense grid of (x, y) coordinates of shape (b, 2, h, w).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| b | int | Batch size | required |
| h | int | Height | required |
| w | int | Width | required |
| device | str | Device for tensor creation | 'cpu' |
| dtype | dtype | Data type for tensor | float32 |

Returns:

| Type | Description |
|---|---|
| Tensor | Grid tensor of shape (b, 2, h, w) |
Source code in fastvideo/pipelines/basic/gen3c/cache_3d.py
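A small sketch of the grid helper, assuming positional (b, h, w) arguments in the order listed in the parameter table:

```python
from fastvideo.pipelines.basic.gen3c.cache_3d import create_grid

grid = create_grid(2, 4, 6)        # shape (2, 2, 4, 6): (b, 2, h, w)
xs, ys = grid[:, 0], grid[:, 1]    # per-pixel x and y coordinates
```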
fastvideo.pipelines.basic.gen3c.cache_3d.forward_warp
¶forward_warp(frame1: Tensor, mask1: Tensor | None, depth1: Tensor | None, transformation1: Tensor | None, transformation2: Tensor, intrinsic1: Tensor | None, intrinsic2: Tensor | None, is_image: bool = True, is_depth: bool = True, render_depth: bool = False, world_points1: Tensor | None = None) -> tuple[Tensor, Tensor, Tensor | None, Tensor]
Forward warp frame1 to a new view defined by transformation2.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| frame1 | Tensor | (b, c, h, w) source frame in range [-1, 1] for images | required |
| mask1 | Tensor \| None | (b, 1, h, w) valid pixel mask | required |
| depth1 | Tensor \| None | (b, 1, h, w) depth map (required if world_points1 is None) | required |
| transformation1 | Tensor \| None | (b, 4, 4) source camera w2c (required if depth1 is provided) | required |
| transformation2 | Tensor | (b, 4, 4) target camera w2c | required |
| intrinsic1 | Tensor \| None | (b, 3, 3) source camera intrinsics | required |
| intrinsic2 | Tensor \| None | (b, 3, 3) target camera intrinsics | required |
| is_image | bool | If True, output will be clipped to (-1, 1) | True |
| is_depth | bool | If True, depth1 is z-depth; if False, it's distance | True |
| render_depth | bool | If True, also return the warped depth map | False |
| world_points1 | Tensor \| None | (b, h, w, 3) pre-computed world points (alternative to depth1) | None |

Returns:

| Name | Type | Description |
|---|---|---|
| warped_frame2 | Tensor | (b, c, h, w) warped frame |
| mask2 | Tensor | (b, 1, h, w) validity mask |
| warped_depth2 | Tensor \| None | (b, h, w) warped depth (if render_depth=True) |
| flow12 | Tensor | (b, 2, h, w) optical flow |
Source code in fastvideo/pipelines/basic/gen3c/cache_3d.py
fastvideo.pipelines.basic.gen3c.cache_3d.inverse_with_conversion
¶Compute matrix inverse with float32 conversion for numerical stability.
fastvideo.pipelines.basic.gen3c.cache_3d.project_points
¶Project 3D world points to 2D pixel coordinates.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| world_points | Tensor | (b, h, w, 3) 3D world coordinates | required |
| w2c | Tensor | (b, 4, 4) world-to-camera transformation matrix | required |
| intrinsic | Tensor | (b, 3, 3) camera intrinsic matrix | required |

Returns:

| Name | Type | Description |
|---|---|---|
| projected_points | Tensor | (b, h, w, 3, 1) projected 2D coordinates (x, y, z) |
Source code in fastvideo/pipelines/basic/gen3c/cache_3d.py
fastvideo.pipelines.basic.gen3c.cache_3d.reliable_depth_mask_range_batch
¶reliable_depth_mask_range_batch(depth: Tensor, window_size: int = 5, ratio_thresh: float = 0.05, eps: float = 1e-06) -> Tensor
Compute a mask for reliable depth values based on local variation.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| depth | Tensor | (b, h, w) or (b, 1, h, w) depth map | required |
| window_size | int | Size of the local window (must be odd) | 5 |
| ratio_thresh | float | Threshold for depth variation ratio | 0.05 |
| eps | float | Small epsilon for numerical stability | 1e-06 |

Returns:

| Name | Type | Description |
|---|---|---|
| reliable_mask | Tensor | Boolean mask where True indicates reliable depth |
Source code in fastvideo/pipelines/basic/gen3c/cache_3d.py
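A hedged sketch: a flat depth map with one inserted jump should come back mostly True, with False around the discontinuity. The depth values are synthetic:

```python
import torch
from fastvideo.pipelines.basic.gen3c.cache_3d import reliable_depth_mask_range_batch

depth = torch.ones(1, 1, 64, 64)
depth[:, :, 30:34, :] = 10.0       # sharp synthetic depth discontinuity

reliable = reliable_depth_mask_range_batch(depth, window_size=5, ratio_thresh=0.05)
# True where the local depth variation ratio stays under the threshold
```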
fastvideo.pipelines.basic.gen3c.cache_3d.unproject_points
¶unproject_points(depth: Tensor, w2c: Tensor, intrinsic: Tensor, is_depth: bool = True, mask: Tensor | None = None) -> Tensor
Unproject depth map to 3D world points.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| depth | Tensor | (b, 1, h, w) depth map | required |
| w2c | Tensor | (b, 4, 4) world-to-camera transformation matrix | required |
| intrinsic | Tensor | (b, 3, 3) camera intrinsic matrix | required |
| is_depth | bool | If True, depth is z-depth; if False, depth is distance to camera | True |
| mask | Tensor \| None | Optional (b, h, w) or (b, 1, h, w) mask for valid pixels | None |

Returns:

| Name | Type | Description |
|---|---|---|
| world_points | Tensor | (b, h, w, 3) 3D world coordinates |
Source code in fastvideo/pipelines/basic/gen3c/cache_3d.py
fastvideo.pipelines.basic.gen3c.camera_utils
¶
Camera trajectory generation utilities for GEN3C 3D cache conditioning.
Functions¶
fastvideo.pipelines.basic.gen3c.camera_utils.apply_transformation
¶Apply batch transformation to a matrix.
Source code in fastvideo/pipelines/basic/gen3c/camera_utils.py
fastvideo.pipelines.basic.gen3c.camera_utils.create_horizontal_trajectory
¶create_horizontal_trajectory(world_to_camera_matrix: Tensor, center_depth: float, positive: bool = True, n_steps: int = 13, distance: float = 0.1, device: str = 'cuda', axis: str = 'x', camera_rotation: str = 'center_facing') -> Tensor
Create a linear camera trajectory along a specified axis.
Source code in fastvideo/pipelines/basic/gen3c/camera_utils.py
fastvideo.pipelines.basic.gen3c.camera_utils.create_spiral_trajectory
¶create_spiral_trajectory(world_to_camera_matrix: Tensor, center_depth: float, radius_x: float = 0.03, radius_y: float = 0.02, radius_z: float = 0.0, positive: bool = True, camera_rotation: str = 'center_facing', n_steps: int = 13, device: str = 'cuda', start_from_zero: bool = True, num_circles: int = 1) -> Tensor
Create a spiral/circular camera trajectory.
Source code in fastvideo/pipelines/basic/gen3c/camera_utils.py
fastvideo.pipelines.basic.gen3c.camera_utils.generate_camera_trajectory
¶generate_camera_trajectory(trajectory_type: str, initial_w2c: Tensor, initial_intrinsics: Tensor, num_frames: int, movement_distance: float, camera_rotation: str = 'center_facing', center_depth: float = 1.0, device: str = 'cuda') -> tuple[Tensor, Tensor]
Generate camera trajectory for GEN3C video generation.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| trajectory_type | str | One of "left", "right", "up", "down", "zoom_in", "zoom_out", "clockwise", "counterclockwise". | required |
| initial_w2c | Tensor | Initial world-to-camera matrix (4, 4). | required |
| initial_intrinsics | Tensor | Camera intrinsics matrix (3, 3). | required |
| num_frames | int | Number of frames in the trajectory. | required |
| movement_distance | float | Distance factor for camera movement. | required |
| camera_rotation | str | "center_facing", "no_rotation", or "trajectory_aligned". | 'center_facing' |
| center_depth | float | Depth of the scene center point. | 1.0 |
| device | str | Computation device. | 'cuda' |

Returns:

| Name | Type | Description |
|---|---|---|
| generated_w2cs | Tensor | (1, num_frames, 4, 4) world-to-camera matrices. |
| generated_intrinsics | Tensor | (1, num_frames, 3, 3) camera intrinsics. |
Source code in fastvideo/pipelines/basic/gen3c/camera_utils.py
fastvideo.pipelines.basic.gen3c.camera_utils.look_at_matrix
¶look_at_matrix(camera_pos: Tensor, target: Tensor, invert_pos: bool = True) -> Tensor
Create a 4x4 look-at view matrix pointing camera toward target.
Source code in fastvideo/pipelines/basic/gen3c/camera_utils.py
fastvideo.pipelines.basic.gen3c.depth_estimation
¶
MoGe-based monocular depth estimation for GEN3C 3D cache conditioning.
Functions¶
fastvideo.pipelines.basic.gen3c.depth_estimation.load_moge_model
¶Load MoGe depth estimation model from HuggingFace.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| model_name | str | HuggingFace model identifier. | 'Ruicheng/moge-vitl' |
| device | str \| device | Device to load model on. | 'cuda' |

Returns:

| Type | Description |
|---|---|
| MoGeModel | Loaded MoGe model. |
Source code in fastvideo/pipelines/basic/gen3c/depth_estimation.py
fastvideo.pipelines.basic.gen3c.depth_estimation.predict_depth_from_path
¶predict_depth_from_path(image_path: str, target_h: int, target_w: int, device: device, moge_model: MoGeModel) -> tuple[Tensor, Tensor, Tensor, Tensor, Tensor]
Predict depth, intrinsics, and mask from an image file path.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| image_path | str | Path to input image (RGB or BGR, any format cv2 supports). | required |
| target_h | int | Target height for output tensors. | required |
| target_w | int | Target width for output tensors. | required |
| device | device | Computation device. | required |
| moge_model | MoGeModel | Loaded MoGe model. | required |

Returns:

| Name | Type | Description |
|---|---|---|
| image | Tensor | (1, 1, 3, target_h, target_w) image tensor in [-1, 1]. |
| depth | Tensor | (1, 1, 1, target_h, target_w) depth map. |
| mask | Tensor | (1, 1, 1, target_h, target_w) confidence mask. |
| w2c | Tensor | (1, 1, 4, 4) world-to-camera matrix (identity). |
| intrinsics | Tensor | (1, 1, 3, 3) camera intrinsics. |
Source code in fastvideo/pipelines/basic/gen3c/depth_estimation.py
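A usage sketch assuming a CUDA device, network access to download the default MoGe checkpoint, and a hypothetical input.png on disk; the target resolution is an assumption:

```python
import torch
from fastvideo.pipelines.basic.gen3c.depth_estimation import (
    load_moge_model,
    predict_depth_from_path,
)

moge = load_moge_model()             # defaults: 'Ruicheng/moge-vitl' on 'cuda'
image, depth, mask, w2c, intrinsics = predict_depth_from_path(
    image_path="input.png",          # hypothetical example file
    target_h=480,                    # assumed target resolution
    target_w=832,
    device=torch.device("cuda"),
    moge_model=moge,
)
# image: (1, 1, 3, 480, 832) in [-1, 1]; depth/mask: (1, 1, 1, 480, 832)
```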
fastvideo.pipelines.basic.gen3c.depth_estimation.predict_depth_from_tensor
¶predict_depth_from_tensor(image_tensor: Tensor, moge_model: MoGeModel) -> tuple[Tensor, Tensor]
Predict depth and mask from an image tensor (for autoregressive generation).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| image_tensor | Tensor | (C, H, W) image tensor in [0, 1] range. | required |
| moge_model | MoGeModel | Loaded MoGe model. | required |

Returns:

| Name | Type | Description |
|---|---|---|
| depth | Tensor | (1, 1, H, W) depth map. |
| mask | Tensor | (1, 1, H, W) confidence mask. |
Source code in fastvideo/pipelines/basic/gen3c/depth_estimation.py
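The tensor variant used during autoregressive generation; a hedged sketch with a random frame in [0, 1] standing in for a decoded video frame:

```python
import torch
from fastvideo.pipelines.basic.gen3c.depth_estimation import (
    load_moge_model,
    predict_depth_from_tensor,
)

moge = load_moge_model()
frame = torch.rand(3, 480, 832, device="cuda")   # (C, H, W) in [0, 1]
depth, mask = predict_depth_from_tensor(frame, moge)
# depth: (1, 1, 480, 832); mask: (1, 1, 480, 832)
```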
fastvideo.pipelines.basic.gen3c.gen3c_pipeline
¶
GEN3C video diffusion pipeline wiring.
Classes¶
fastvideo.pipelines.basic.gen3c.gen3c_pipeline.Gen3CPipeline
¶Gen3CPipeline(model_path: str, fastvideo_args: FastVideoArgs | TrainingArgs, required_config_modules: list[str] | None = None, loaded_modules: dict[str, Module] | None = None)
Bases: ComposedPipelineBase
GEN3C Video Generation Pipeline.
This pipeline extends Cosmos with 3D cache support for camera-controlled video generation. When an input image is provided, it runs the full 3D cache conditioning pipeline (depth estimation -> point cloud -> camera trajectory -> forward warping -> VAE encoding).
Source code in fastvideo/pipelines/composed_pipeline_base.py
fastvideo.pipelines.basic.gen3c.gen3c_pipeline.Gen3CPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs)
Set up pipeline stages with proper dependency injection.
Source code in fastvideo/pipelines/basic/gen3c/gen3c_pipeline.py
Functions¶
fastvideo.pipelines.basic.hunyuan
¶
Modules¶
fastvideo.pipelines.basic.hunyuan.hunyuan_pipeline
¶
Hunyuan video diffusion pipeline implementation.
This module contains an implementation of the Hunyuan video diffusion pipeline using the modular pipeline architecture.
Classes¶
fastvideo.pipelines.basic.hunyuan.hunyuan_pipeline.HunyuanVideoPipeline
¶HunyuanVideoPipeline(model_path: str, fastvideo_args: FastVideoArgs | TrainingArgs, required_config_modules: list[str] | None = None, loaded_modules: dict[str, Module] | None = None)
Bases: ComposedPipelineBase
Source code in fastvideo/pipelines/composed_pipeline_base.py
fastvideo.pipelines.basic.hunyuan.hunyuan_pipeline.HunyuanVideoPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs)
Set up pipeline stages with proper dependency injection.
Source code in fastvideo/pipelines/basic/hunyuan/hunyuan_pipeline.py
Functions¶
fastvideo.pipelines.basic.hunyuan15
¶
Modules¶
fastvideo.pipelines.basic.hunyuan15.hunyuan15_2sr_pipeline
¶
Hunyuan video diffusion pipeline implementation.
This module contains an implementation of the Hunyuan video diffusion pipeline using the modular pipeline architecture.
Classes¶
fastvideo.pipelines.basic.hunyuan15.hunyuan15_2sr_pipeline.HunyuanVideo152SRPipeline
¶HunyuanVideo152SRPipeline(model_path: str, fastvideo_args: FastVideoArgs | TrainingArgs, required_config_modules: list[str] | None = None, loaded_modules: dict[str, Module] | None = None)
Bases: ComposedPipelineBase
Source code in fastvideo/pipelines/composed_pipeline_base.py
fastvideo.pipelines.basic.hunyuan15.hunyuan15_2sr_pipeline.HunyuanVideo152SRPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs)
Set up pipeline stages with proper dependency injection.
Source code in fastvideo/pipelines/basic/hunyuan15/hunyuan15_2sr_pipeline.py
fastvideo.pipelines.basic.hunyuan15.hunyuan15_2sr_pipeline.HunyuanVideo152SRPipeline.forward
¶forward(batch: ForwardBatch, fastvideo_args: FastVideoArgs) -> ForwardBatch
Generate a video or image using the pipeline.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| batch | ForwardBatch | The batch to generate from. | required |
| fastvideo_args | FastVideoArgs | The inference arguments. | required |

Returns:

ForwardBatch: The batch with the generated video or image.
Source code in fastvideo/pipelines/basic/hunyuan15/hunyuan15_2sr_pipeline.py
Functions¶
fastvideo.pipelines.basic.hunyuan15.hunyuan15_i2v_pipeline
¶
Hunyuan video diffusion pipeline implementation.
This module contains an implementation of the Hunyuan video diffusion pipeline using the modular pipeline architecture.
Classes¶
fastvideo.pipelines.basic.hunyuan15.hunyuan15_i2v_pipeline.HunyuanVideo15ImageToVideoPipeline
¶HunyuanVideo15ImageToVideoPipeline(model_path: str, fastvideo_args: FastVideoArgs | TrainingArgs, required_config_modules: list[str] | None = None, loaded_modules: dict[str, Module] | None = None)
Bases: ComposedPipelineBase
Source code in fastvideo/pipelines/composed_pipeline_base.py
fastvideo.pipelines.basic.hunyuan15.hunyuan15_i2v_pipeline.HunyuanVideo15ImageToVideoPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs)
Set up pipeline stages with proper dependency injection.
Source code in fastvideo/pipelines/basic/hunyuan15/hunyuan15_i2v_pipeline.py
Functions¶
fastvideo.pipelines.basic.hunyuan15.hunyuan15_pipeline
¶
Hunyuan video diffusion pipeline implementation.
This module contains an implementation of the Hunyuan video diffusion pipeline using the modular pipeline architecture.
Classes¶
fastvideo.pipelines.basic.hunyuan15.hunyuan15_pipeline.HunyuanVideo15Pipeline
¶HunyuanVideo15Pipeline(model_path: str, fastvideo_args: FastVideoArgs | TrainingArgs, required_config_modules: list[str] | None = None, loaded_modules: dict[str, Module] | None = None)
Bases: ComposedPipelineBase
Source code in fastvideo/pipelines/composed_pipeline_base.py
fastvideo.pipelines.basic.hunyuan15.hunyuan15_pipeline.HunyuanVideo15Pipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs)
Set up pipeline stages with proper dependency injection.
Source code in fastvideo/pipelines/basic/hunyuan15/hunyuan15_pipeline.py
Functions¶
fastvideo.pipelines.basic.hunyuan15.hunyuan15_sr_pipeline
¶
Hunyuan video diffusion pipeline implementation.
This module contains an implementation of the Hunyuan video diffusion pipeline using the modular pipeline architecture.
Classes¶
fastvideo.pipelines.basic.hunyuan15.hunyuan15_sr_pipeline.HunyuanVideo15SRPipeline
¶HunyuanVideo15SRPipeline(model_path: str, fastvideo_args: FastVideoArgs | TrainingArgs, required_config_modules: list[str] | None = None, loaded_modules: dict[str, Module] | None = None)
Bases: ComposedPipelineBase
Source code in fastvideo/pipelines/composed_pipeline_base.py
fastvideo.pipelines.basic.hunyuan15.hunyuan15_sr_pipeline.HunyuanVideo15SRPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs)
Set up pipeline stages with proper dependency injection.
Source code in fastvideo/pipelines/basic/hunyuan15/hunyuan15_sr_pipeline.py
fastvideo.pipelines.basic.hunyuan15.hunyuan15_sr_pipeline.HunyuanVideo15SRPipeline.forward
¶forward(batch: ForwardBatch, fastvideo_args: FastVideoArgs) -> ForwardBatch
Generate a video or image using the pipeline.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| batch | ForwardBatch | The batch to generate from. | required |
| fastvideo_args | FastVideoArgs | The inference arguments. | required |

Returns:

ForwardBatch: The batch with the generated video or image.
Source code in fastvideo/pipelines/basic/hunyuan15/hunyuan15_sr_pipeline.py
Functions¶
fastvideo.pipelines.basic.hyworld
¶
Modules¶
fastvideo.pipelines.basic.hyworld.hyworld_pipeline
¶
HYWorld video diffusion pipeline implementation.
This module contains an implementation of the HYWorld video diffusion pipeline using the modular pipeline architecture with HYWorld-specific denoising stage for chunk-based video generation with context frame selection.
Classes¶
fastvideo.pipelines.basic.hyworld.hyworld_pipeline.HYWorldPipeline
¶HYWorldPipeline(model_path: str, fastvideo_args: FastVideoArgs | TrainingArgs, required_config_modules: list[str] | None = None, loaded_modules: dict[str, Module] | None = None)
Bases: ComposedPipelineBase
HYWorld video diffusion pipeline.
This pipeline implements chunk-based video generation with context frame selection for 3D-aware generation using HYWorldDenoisingStage.
Note: HYWorld only uses a single LLM-based text encoder, unlike SDXL-style dual encoder setups. The text_encoder_2/tokenizer_2 are not used.
Source code in fastvideo/pipelines/composed_pipeline_base.py
fastvideo.pipelines.basic.hyworld.hyworld_pipeline.HYWorldPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs)
Set up pipeline stages with HYWorld-specific denoising stage.
Source code in fastvideo/pipelines/basic/hyworld/hyworld_pipeline.py
Functions¶
fastvideo.pipelines.basic.lingbotworld
¶
fastvideo.pipelines.basic.longcat
¶
LongCat pipeline module.
Classes¶
fastvideo.pipelines.basic.longcat.LongCatImageToVideoPipeline
¶
Bases: LoRAPipeline, ComposedPipelineBase
LongCat Image-to-Video pipeline.
Generates video from a single input image using Tier 3 I2V conditioning:
- Per-frame timestep masking (timestep[:, 0] = 0)
- num_cond_latents parameter to transformer
- RoPE skipping for conditioning frames
- Selective denoising (skip first frame in scheduler)
Source code in fastvideo/pipelines/lora_pipeline.py
Functions¶
fastvideo.pipelines.basic.longcat.LongCatImageToVideoPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs)
Set up I2V-specific pipeline stages.
Source code in fastvideo/pipelines/basic/longcat/longcat_i2v_pipeline.py
fastvideo.pipelines.basic.longcat.LongCatImageToVideoPipeline.initialize_pipeline
¶initialize_pipeline(fastvideo_args: FastVideoArgs)
Initialize LongCat-specific components.
Source code in fastvideo/pipelines/basic/longcat/longcat_i2v_pipeline.py
fastvideo.pipelines.basic.longcat.LongCatPipeline
¶
Bases: LoRAPipeline, ComposedPipelineBase
LongCat video diffusion pipeline with LoRA support.
Source code in fastvideo/pipelines/lora_pipeline.py
Functions¶
fastvideo.pipelines.basic.longcat.LongCatPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs) -> None
Set up pipeline stages with proper dependency injection.
Source code in fastvideo/pipelines/basic/longcat/longcat_pipeline.py
fastvideo.pipelines.basic.longcat.LongCatPipeline.initialize_pipeline
¶initialize_pipeline(fastvideo_args: FastVideoArgs)
Initialize LongCat-specific components.
Source code in fastvideo/pipelines/basic/longcat/longcat_pipeline.py
fastvideo.pipelines.basic.longcat.LongCatVideoContinuationPipeline
¶
Bases: LoRAPipeline, ComposedPipelineBase
LongCat Video Continuation pipeline.
Generates video continuation from multiple conditioning frames using optional KV cache for 2-3x speedup.
Key features:
- Takes video input (13+ frames typically)
- Encodes conditioning frames via VAE
- Optionally pre-computes KV cache for conditioning
- Uses cached K/V during denoising for speedup
- Concatenates conditioning back after denoising
Source code in fastvideo/pipelines/lora_pipeline.py
Functions¶
fastvideo.pipelines.basic.longcat.LongCatVideoContinuationPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs)
Set up VC-specific pipeline stages.
Source code in fastvideo/pipelines/basic/longcat/longcat_vc_pipeline.py
fastvideo.pipelines.basic.longcat.LongCatVideoContinuationPipeline.initialize_pipeline
¶initialize_pipeline(fastvideo_args: FastVideoArgs)
Initialize LongCat-specific components.
Source code in fastvideo/pipelines/basic/longcat/longcat_vc_pipeline.py
Modules¶
fastvideo.pipelines.basic.longcat.longcat_i2v_pipeline
¶
LongCat Image-to-Video pipeline implementation.
This module implements I2V (Image-to-Video) generation for LongCat using Tier 3 conditioning with timestep masking, num_cond_latents support, and RoPE skipping.
Supports:
- Basic I2V (50 steps, guidance_scale=4.0)
- Distilled I2V with LoRA (16 steps, guidance_scale=1.0)
- Refinement I2V for 720p upscaling (with refinement LoRA + BSA)
Classes¶
fastvideo.pipelines.basic.longcat.longcat_i2v_pipeline.LongCatImageToVideoPipeline
¶
Bases: LoRAPipeline, ComposedPipelineBase
LongCat Image-to-Video pipeline.
Generates video from a single input image using Tier 3 I2V conditioning:
- Per-frame timestep masking (timestep[:, 0] = 0)
- num_cond_latents parameter to transformer
- RoPE skipping for conditioning frames
- Selective denoising (skip first frame in scheduler)
Source code in fastvideo/pipelines/lora_pipeline.py
fastvideo.pipelines.basic.longcat.longcat_i2v_pipeline.LongCatImageToVideoPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs)
Set up I2V-specific pipeline stages.
Source code in fastvideo/pipelines/basic/longcat/longcat_i2v_pipeline.py
fastvideo.pipelines.basic.longcat.longcat_i2v_pipeline.LongCatImageToVideoPipeline.initialize_pipeline
¶initialize_pipeline(fastvideo_args: FastVideoArgs)
Initialize LongCat-specific components.
Source code in fastvideo/pipelines/basic/longcat/longcat_i2v_pipeline.py
Functions¶
fastvideo.pipelines.basic.longcat.longcat_pipeline
¶
LongCat video diffusion pipeline implementation.
This module implements the LongCat video diffusion pipeline using FastVideo's modular pipeline architecture.
Classes¶
fastvideo.pipelines.basic.longcat.longcat_pipeline.LongCatPipeline
¶
Bases: LoRAPipeline, ComposedPipelineBase
LongCat video diffusion pipeline with LoRA support.
Source code in fastvideo/pipelines/lora_pipeline.py
fastvideo.pipelines.basic.longcat.longcat_pipeline.LongCatPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs) -> None
Set up pipeline stages with proper dependency injection.
Source code in fastvideo/pipelines/basic/longcat/longcat_pipeline.py
fastvideo.pipelines.basic.longcat.longcat_pipeline.LongCatPipeline.initialize_pipeline
¶initialize_pipeline(fastvideo_args: FastVideoArgs)
Initialize LongCat-specific components.
Source code in fastvideo/pipelines/basic/longcat/longcat_pipeline.py
Functions¶
fastvideo.pipelines.basic.longcat.longcat_vc_pipeline
¶
LongCat Video Continuation (VC) pipeline implementation.
This module implements VC (Video Continuation) generation for LongCat with KV cache optimization for 2-3x speedup.
Supports:
- Basic VC (50 steps, guidance_scale=4.0)
- Distilled VC with LoRA (16 steps, guidance_scale=1.0)
- KV cache for conditioning frames
Classes¶
fastvideo.pipelines.basic.longcat.longcat_vc_pipeline.LongCatVCLatentPreparationStage
¶LongCatVCLatentPreparationStage(scheduler, transformer, use_btchw_layout: bool = False)
Bases: LongCatI2VLatentPreparationStage
Prepare latents with video conditioning for first N frames.
Extends I2V latent preparation to handle video_latent (multiple frames) instead of image_latent (single frame).
Source code in fastvideo/pipelines/stages/latent_preparation.py
fastvideo.pipelines.basic.longcat.longcat_vc_pipeline.LongCatVCLatentPreparationStage.forward
¶Prepare latents with VC conditioning.
Source code in fastvideo/pipelines/basic/longcat/longcat_vc_pipeline.py
fastvideo.pipelines.basic.longcat.longcat_vc_pipeline.LongCatVideoContinuationPipeline
¶
Bases: LoRAPipeline, ComposedPipelineBase
LongCat Video Continuation pipeline.
Generates video continuation from multiple conditioning frames using optional KV cache for 2-3x speedup.
Key features:
- Takes video input (13+ frames typically)
- Encodes conditioning frames via VAE
- Optionally pre-computes KV cache for conditioning
- Uses cached K/V during denoising for speedup
- Concatenates conditioning back after denoising
Source code in fastvideo/pipelines/lora_pipeline.py
fastvideo.pipelines.basic.longcat.longcat_vc_pipeline.LongCatVideoContinuationPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs)
Set up VC-specific pipeline stages.
Source code in fastvideo/pipelines/basic/longcat/longcat_vc_pipeline.py
fastvideo.pipelines.basic.longcat.longcat_vc_pipeline.LongCatVideoContinuationPipeline.initialize_pipeline
¶initialize_pipeline(fastvideo_args: FastVideoArgs)
Initialize LongCat-specific components.
Source code in fastvideo/pipelines/basic/longcat/longcat_vc_pipeline.py
Functions¶
fastvideo.pipelines.basic.ltx2
¶
Modules¶
fastvideo.pipelines.basic.ltx2.continuation
¶
Typed continuation state for the LTX-2 streaming pipeline.
Segment N+1 conditions on segment N's trailing decoded frames and
denoised audio latents. The streaming runtime used to hold this state as
per-worker globals; lifting it into a typed, JSON-serializable object
lets clients snapshot, migrate, or round-trip it through an HTTP/RPC
boundary. The envelope ContinuationState(kind, payload) is the
shared public API; the typed class here owns the LTX-2 payload shape.
Serialization contract:
- Video frames → PNG bytes + base64, or a BlobStore id.
- Audio latents → a self-describing safetensors blob + base64, or a BlobStore id. safetensors preserves bfloat16, which a raw-numpy round-trip cannot.
- The returned payload is always a plain JSON-serializable dict.
Attributes¶
fastvideo.pipelines.basic.ltx2.continuation.DEFAULT_INLINE_THRESHOLD_BYTES
module-attribute
¶Tensors larger than this go to the blob store (if available). 2 MiB is below typical single-JSON-message limits (Dynamo: 4 MiB, Postgres TOAST: 1 GiB) and well above per-frame PNG payloads (~200 KiB at 512x512).
fastvideo.pipelines.basic.ltx2.continuation.LTX2_CONTINUATION_KIND
module-attribute
¶Public ContinuationState.kind for LTX-2 payloads.
fastvideo.pipelines.basic.ltx2.continuation.LTX2_CONTINUATION_SCHEMA_VERSION
module-attribute
¶Payload schema version carried inside payload.schema_version.
Classes¶
fastvideo.pipelines.basic.ltx2.continuation.LTX2ContinuationState
dataclass
¶LTX2ContinuationState(segment_index: int = 0, video_frames: list[ndarray] | None = None, video_frames_blob_id: str | None = None, video_conditioning_frame_idx: int = 0, video_conditioning_strength: float = 1.0, audio_latents: Tensor | None = None, audio_latents_blob_id: str | None = None, audio_sample_rate: int | None = None, audio_conditioning_num_frames: int = 0, audio_conditioning_strength: float = 1.0, video_position_offset_sec: float = 0.0, metadata: dict[str, Any] = dict())
Typed LTX-2 continuation state carried between streaming segments.
video_frames holds trailing decoded RGB frames (uint8 HxWx3) from
segment N for conditioning segment N+1 via the VAE encode path.
audio_latents is the cached denoised audio latent tensor of shape
[B, C, T, mel] that segment N+1 will copy into the overlap
region of its clean-latent conditioning.
Most fields map 1:1 onto the internal gpu_pool's per-worker state;
the only new concept is the *_blob_id fields, which allow large
tensors to live outside the JSON payload. See module docstring.
fastvideo.pipelines.basic.ltx2.continuation.LTX2ContinuationState.audio_conditioning_num_frames
class-attribute
instance-attribute
¶audio_conditioning_num_frames: int = 0
Number of trailing audio frames that carry over as clean context into segment N+1.
fastvideo.pipelines.basic.ltx2.continuation.LTX2ContinuationState.audio_conditioning_strength
class-attribute
instance-attribute
¶audio_conditioning_strength: float = 1.0
Clean-latent mask value applied to the overlap region; 0.0 keeps the cached audio entirely, 1.0 renoises from scratch.
fastvideo.pipelines.basic.ltx2.continuation.LTX2ContinuationState.audio_latents
class-attribute
instance-attribute
¶Denoised audio latent tensor of shape [B, C, T, mel].
None when the state is blob-backed or unset.
fastvideo.pipelines.basic.ltx2.continuation.LTX2ContinuationState.audio_latents_blob_id
class-attribute
instance-attribute
¶audio_latents_blob_id: str | None = None
Blob store id when audio latents live outside the payload.
fastvideo.pipelines.basic.ltx2.continuation.LTX2ContinuationState.audio_sample_rate
class-attribute
instance-attribute
¶audio_sample_rate: int | None = None
Sample rate for the audio side (e.g. 24000).
fastvideo.pipelines.basic.ltx2.continuation.LTX2ContinuationState.metadata
class-attribute
instance-attribute
¶Opaque metadata bag for forward-compat fields that don't need their own typed slot yet (e.g. custom knob experiments).
fastvideo.pipelines.basic.ltx2.continuation.LTX2ContinuationState.segment_index
class-attribute
instance-attribute
¶segment_index: int = 0
Index of the just-completed segment. Segment 0 has no history;
state returned after segment 0 carries segment_index=0 and the
caller uses segment_index + 1 as the next segment number.
fastvideo.pipelines.basic.ltx2.continuation.LTX2ContinuationState.video_conditioning_frame_idx
class-attribute
instance-attribute
¶video_conditioning_frame_idx: int = 0
Target frame index inside the next segment that the trailing
frames align with (matches the LTX-2 ltx2_video_conditions
tuple's frame_idx slot).
fastvideo.pipelines.basic.ltx2.continuation.LTX2ContinuationState.video_conditioning_strength
class-attribute
instance-attribute
¶video_conditioning_strength: float = 1.0
Conditioning strength in [0, 1]. Matches the ltx2_video_conditions tuple's strength slot.
fastvideo.pipelines.basic.ltx2.continuation.LTX2ContinuationState.video_frames
class-attribute
instance-attribute
¶video_frames: list[ndarray] | None = None
Trailing decoded frames, each an RGB uint8 np.ndarray shaped
(H, W, 3). None when the state is blob-backed or unset.
fastvideo.pipelines.basic.ltx2.continuation.LTX2ContinuationState.video_frames_blob_id
class-attribute
instance-attribute
¶video_frames_blob_id: str | None = None
Blob store id when the frames live outside the payload.
fastvideo.pipelines.basic.ltx2.continuation.LTX2ContinuationState.video_position_offset_sec
class-attribute
instance-attribute
¶video_position_offset_sec: float = 0.0
Seconds by which video RoPE is shifted forward so the audio
prefix can sit at t >= 0 when audio conditioning is longer than
video conditioning.
fastvideo.pipelines.basic.ltx2.continuation.LTX2ContinuationState.from_continuation_state
classmethod
¶from_continuation_state(state: ContinuationState, *, blob_store: BlobStore | None = None) -> LTX2ContinuationState
Rebuild a typed state from a public ContinuationState.
Raises ValueError when the kind doesn't match or the schema version is unsupported.
Source code in fastvideo/pipelines/basic/ltx2/continuation.py
fastvideo.pipelines.basic.ltx2.continuation.LTX2ContinuationState.to_continuation_state
¶to_continuation_state(*, blob_store: BlobStore | None = None, inline_threshold_bytes: int = DEFAULT_INLINE_THRESHOLD_BYTES) -> ContinuationState
Serialize into a public ContinuationState.
When blob_store is given, tensors larger than inline_threshold_bytes are stored via BlobStore.put and referenced by id; otherwise all data is base64-encoded inline. The payload is always a plain JSON-serializable dict.
Source code in fastvideo/pipelines/basic/ltx2/continuation.py
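As a usage illustration, the sketch below round-trips a continuation state between two streaming segments. The tensor contents are dummy stand-ins; the field names, shapes, and both conversion methods come from the entries above.

```python
import numpy as np
import torch

from fastvideo.pipelines.basic.ltx2.continuation import LTX2ContinuationState

# Dummy stand-ins for segment-0 outputs; shapes follow the field docs above.
tail_frames = [np.zeros((512, 768, 3), dtype=np.uint8) for _ in range(4)]
audio_lat = torch.zeros(1, 8, 64, 16)  # [B, C, T, mel], illustrative sizes

state = LTX2ContinuationState(
    segment_index=0,                    # the just-completed segment
    video_frames=tail_frames,
    audio_latents=audio_lat,
    audio_sample_rate=24000,
    audio_conditioning_num_frames=8,    # illustrative overlap length
)

# Without a blob store, all data is base64-encoded inline in the payload;
# pass blob_store= to offload tensors larger than inline_threshold_bytes.
public = state.to_continuation_state()

# Rebuild the typed state (raises ValueError on a kind/schema-version
# mismatch) and number the next segment from segment_index + 1.
restored = LTX2ContinuationState.from_continuation_state(public)
next_segment = restored.segment_index + 1
```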
Functions¶
fastvideo.pipelines.basic.ltx2.pipeline_configs
¶
Classes¶
fastvideo.pipelines.basic.ltx2.pipeline_configs.LTX2T2VConfig
dataclass
¶LTX2T2VConfig(model_path: str = '', pipeline_config_path: str | None = None, embedded_cfg_scale: float = 6.0, flow_shift: float | None = None, flow_shift_sr: float | None = None, disable_autocast: bool = False, is_causal: bool = False, dit_config: DiTConfig = LTX2VideoConfig(), dit_precision: str = 'bf16', upsampler_config: UpsamplerConfig = UpsamplerConfig(), upsampler_precision: str = 'fp32', vae_config: VAEConfig = LTX2VAEConfig(), vae_precision: str = 'bf16', vae_tiling: bool = True, vae_sp: bool = False, image_encoder_config: EncoderConfig = EncoderConfig(), image_encoder_precision: str = 'fp32', text_encoder_configs: tuple[EncoderConfig, ...] = (lambda: (LTX2GemmaConfig(),))(), text_encoder_precisions: tuple[str, ...] = (lambda: ('bf16',))(), preprocess_text_funcs: tuple[Callable[[str], str], ...] = (lambda: (preprocess_text,))(), postprocess_text_funcs: tuple[Callable[[BaseEncoderOutput], Tensor], ...] = (lambda: (ltx2_postprocess_text,))(), dmd_denoising_steps: list[int] | None = None, ti2v_task: bool = False, boundary_ratio: float | None = None, audio_decoder_config: ModelConfig = LTX2AudioDecoderConfig(), vocoder_config: ModelConfig = LTX2VocoderConfig(), audio_decoder_precision: str = 'bf16', vocoder_precision: str = 'bf16')
fastvideo.pipelines.basic.ltx2.stage_overrides
¶
Typed override surfaces for the LTX-2 two-stage refine flow.
preset_overrides.refine: init-time knobs (see LTX2RefinePresetOverride).
stage_overrides.refine: per-request knobs (see LTX2RefineStageOverride).
Asset paths live on fastvideo.api.schema.ComponentConfig (upsampler_weights and lora_path).
Classes¶
fastvideo.pipelines.basic.ltx2.stage_overrides.LTX2RefinePresetOverride
dataclass
¶Init-time refine wiring under preset_overrides.refine.
fastvideo.pipelines.basic.ltx2.stage_overrides.LTX2RefineStageOverride
dataclass
¶LTX2RefineStageOverride(num_inference_steps: int | None = None, guidance_scale: float | None = None, image_crf: int | None = None, video_position_offset_sec: float | None = None)
Per-request refine tuning under stage_overrides.refine.
Functions¶
fastvideo.pipelines.basic.ltx2.stage_overrides.refine_override_to_dict
¶refine_override_to_dict(override: LTX2RefinePresetOverride | LTX2RefineStageOverride) -> dict[str, Any]
Serialize a refine override, dropping None entries so only user-set fields reach preset_overrides.refine or stage_overrides.refine.
Source code in fastvideo/pipelines/basic/ltx2/stage_overrides.py
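For example, a per-request override serializes like this; only fields the user actually set survive into stage_overrides.refine:

```python
from fastvideo.pipelines.basic.ltx2.stage_overrides import (
    LTX2RefineStageOverride,
    refine_override_to_dict,
)

override = LTX2RefineStageOverride(num_inference_steps=8, image_crf=23)
refine = refine_override_to_dict(override)
# None-valued fields (guidance_scale, video_position_offset_sec) are dropped:
# {'num_inference_steps': 8, 'image_crf': 23}
payload = {"stage_overrides": {"refine": refine}}
```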
fastvideo.pipelines.basic.ltx2.stages
¶
LTX-2 family pipeline stages.
Classes¶
fastvideo.pipelines.basic.ltx2.stages.LTX2AudioDecodingStage
¶
Bases: PipelineStage
Decode LTX-2 audio latents into a waveform.
Source code in fastvideo/pipelines/basic/ltx2/stages/ltx2_audio_decoding.py
fastvideo.pipelines.basic.ltx2.stages.LTX2DenoisingStage
¶
Bases: PipelineStage
Run the LTX-2 denoising loop over the sigma schedule.
Source code in fastvideo/pipelines/basic/ltx2/stages/ltx2_denoising.py
fastvideo.pipelines.basic.ltx2.stages.LTX2LatentPreparationStage
¶
Bases: PipelineStage
Prepare initial LTX-2 latents without relying on a diffusers scheduler.
Source code in fastvideo/pipelines/basic/ltx2/stages/ltx2_latent_preparation.py
fastvideo.pipelines.basic.ltx2.stages.LTX2TextEncodingStage
¶
Bases: TextEncodingStage
LTX2 text encoding stage with sequence parallelism support.
When SP is enabled (sp_world_size > 1), only rank 0 runs the text encoder and broadcasts embeddings to other ranks. This avoids I/O contention from all ranks loading the Gemma model simultaneously, which can cause text encoding to take 100+ seconds instead of ~5 seconds.
Source code in fastvideo/pipelines/stages/text_encoding.py
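The broadcast pattern itself is generic. A minimal torch.distributed sketch of it follows; this is not the FastVideo implementation, and the fixed [B, L, D] layout, the bf16 dtype, and the function names are assumptions:

```python
import torch
import torch.distributed as dist

def encode_on_rank0_and_broadcast(prompt: str, text_encoder, device) -> torch.Tensor:
    # Assumes an initialized process group and a 3-D [B, L, D] embedding.
    if dist.get_rank() == 0:
        # Only rank 0 loads and runs the text encoder, avoiding I/O contention.
        emb = text_encoder(prompt).to(device=device, dtype=torch.bfloat16)
        shape = torch.tensor(emb.shape, device=device)
    else:
        shape = torch.empty(3, dtype=torch.long, device=device)
    dist.broadcast(shape, src=0)  # share the embedding shape first
    if dist.get_rank() != 0:
        emb = torch.empty(*shape.tolist(), device=device, dtype=torch.bfloat16)
    dist.broadcast(emb, src=0)    # then the embeddings themselves
    return emb
```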
Modules¶
fastvideo.pipelines.basic.ltx2.stages.ltx2_audio_decoding
¶Audio decoding stage for LTX-2 pipelines.
fastvideo.pipelines.basic.ltx2.stages.ltx2_audio_decoding.LTX2AudioDecodingStage
¶
Bases: PipelineStage
Decode LTX-2 audio latents into a waveform.
Source code in fastvideo/pipelines/basic/ltx2/stages/ltx2_audio_decoding.py
fastvideo.pipelines.basic.ltx2.stages.ltx2_denoising
¶LTX-2 denoising stage using the native sigma schedule.
fastvideo.pipelines.basic.ltx2.stages.ltx2_denoising.LTX2DenoisingStage
¶
Bases: PipelineStage
Run the LTX-2 denoising loop over the sigma schedule.
Source code in fastvideo/pipelines/basic/ltx2/stages/ltx2_denoising.py
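For orientation, an Euler-style loop over a sigma schedule looks like the generic flow-matching sketch below; this is schematic only, not the LTX2DenoisingStage internals, and the model call is a placeholder:

```python
import torch

def denoise_over_sigmas(model, latents: torch.Tensor,
                        sigmas: torch.Tensor, cond) -> torch.Tensor:
    # sigmas is a decreasing noise schedule, e.g. from 1.0 down to 0.0.
    for i in range(len(sigmas) - 1):
        sigma, sigma_next = sigmas[i], sigmas[i + 1]
        v = model(latents, sigma, cond)                 # predicted velocity
        latents = latents + (sigma_next - sigma) * v    # Euler step
    return latents
```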
fastvideo.pipelines.basic.ltx2.stages.ltx2_latent_preparation
¶Latent preparation stage for LTX-2 pipelines.
fastvideo.pipelines.basic.ltx2.stages.ltx2_latent_preparation.LTX2LatentPreparationStage
¶
Bases: PipelineStage
Prepare initial LTX-2 latents without relying on a diffusers scheduler.
Source code in fastvideo/pipelines/basic/ltx2/stages/ltx2_latent_preparation.py
fastvideo.pipelines.basic.ltx2.stages.ltx2_text_encoding
¶LTX2-specific text encoding stage with sequence parallelism broadcast support.
When running with sequence parallelism (SP), the Gemma text encoder is only executed on rank 0, and the embeddings are broadcast to all other ranks. This avoids I/O contention from all ranks loading the Gemma model simultaneously.
fastvideo.pipelines.basic.ltx2.stages.ltx2_text_encoding.LTX2TextEncodingStage
¶
Bases: TextEncodingStage
LTX2 text encoding stage with sequence parallelism support.
When SP is enabled (sp_world_size > 1), only rank 0 runs the text encoder and broadcasts embeddings to other ranks. This avoids I/O contention from all ranks loading the Gemma model simultaneously, which can cause text encoding to take 100+ seconds instead of ~5 seconds.
Source code in fastvideo/pipelines/stages/text_encoding.py
fastvideo.pipelines.basic.matrixgame
¶
fastvideo.pipelines.basic.sd35
¶
Modules¶
fastvideo.pipelines.basic.sd35.presets
¶
Stable Diffusion 3.5 model family pipeline presets.
Classes¶
fastvideo.pipelines.basic.sd35.sd35_pipeline
¶
Classes¶
fastvideo.pipelines.basic.sd35.sd35_pipeline.SD35Pipeline
¶SD35Pipeline(model_path: str, fastvideo_args: FastVideoArgs | TrainingArgs, required_config_modules: list[str] | None = None, loaded_modules: dict[str, Module] | None = None)
Bases: ComposedPipelineBase
Minimal SD3.5 Medium text-to-image pipeline (images are generated as single-frame videos, num_frames=1).
Source code in fastvideo/pipelines/composed_pipeline_base.py
fastvideo.pipelines.basic.sd35.sd35_pipeline.StableDiffusion3Pipeline
¶StableDiffusion3Pipeline(model_path: str, fastvideo_args: FastVideoArgs | TrainingArgs, required_config_modules: list[str] | None = None, loaded_modules: dict[str, Module] | None = None)
Bases: SD35Pipeline
Alias matching the _class_name in the SD3.5 diffusers model_index.json.
Source code in fastvideo/pipelines/composed_pipeline_base.py
Functions¶
fastvideo.pipelines.basic.turbodiffusion
¶
Classes¶
fastvideo.pipelines.basic.turbodiffusion.TurboDiffusionI2VPipeline
¶
Bases: LoRAPipeline, ComposedPipelineBase
TurboDiffusion I2V pipeline for 1-4 step image-to-video generation.
Uses RCM scheduler, SLA attention, and dual model switching for high-quality I2V generation.
Source code in fastvideo/pipelines/lora_pipeline.py
Functions¶
fastvideo.pipelines.basic.turbodiffusion.TurboDiffusionI2VPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs) -> None
Set up pipeline stages with proper dependency injection.
Source code in fastvideo/pipelines/basic/turbodiffusion/turbodiffusion_i2v_pipeline.py
fastvideo.pipelines.basic.turbodiffusion.TurboDiffusionPipeline
¶
Bases: LoRAPipeline, ComposedPipelineBase
TurboDiffusion video pipeline for 1-4 step generation.
Uses RCM scheduler and SLA attention for fast, high-quality video generation.
Source code in fastvideo/pipelines/lora_pipeline.py
Functions¶
fastvideo.pipelines.basic.turbodiffusion.TurboDiffusionPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs) -> None
Set up pipeline stages with proper dependency injection.
Source code in fastvideo/pipelines/basic/turbodiffusion/turbodiffusion_pipeline.py
Modules¶
fastvideo.pipelines.basic.turbodiffusion.presets
¶
TurboDiffusion model family pipeline presets.
Classes¶
fastvideo.pipelines.basic.turbodiffusion.turbodiffusion_i2v_pipeline
¶
TurboDiffusion I2V (Image-to-Video) Pipeline Implementation.
This module contains an implementation of the TurboDiffusion I2V pipeline for 1-4 step image-to-video generation using rCM (recurrent Consistency Model) sampling with SLA (Sparse-Linear Attention).
Key differences from T2V:
- Uses dual models (high/low noise) with boundary switching (sketched below)
- sigma_max=200 (vs 80 for T2V)
- Mask conditioning with the encoded first frame
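The boundary switch itself is simple; a hypothetical sketch, where the model handles and the boundary value are placeholders rather than the pipeline's internals:

```python
def pick_denoiser(sigma: float, boundary: float, high_noise_model, low_noise_model):
    # The high-noise expert handles the early, noisy part of the schedule;
    # the low-noise expert takes over once sigma falls below the boundary.
    return high_noise_model if sigma >= boundary else low_noise_model
```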
Classes¶
fastvideo.pipelines.basic.turbodiffusion.turbodiffusion_i2v_pipeline.TurboDiffusionI2VPipeline
¶
Bases: LoRAPipeline, ComposedPipelineBase
TurboDiffusion I2V pipeline for 1-4 step image-to-video generation.
Uses RCM scheduler, SLA attention, and dual model switching for high-quality I2V generation.
Source code in fastvideo/pipelines/lora_pipeline.py
fastvideo.pipelines.basic.turbodiffusion.turbodiffusion_i2v_pipeline.TurboDiffusionI2VPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs) -> None
Set up pipeline stages with proper dependency injection.
Source code in fastvideo/pipelines/basic/turbodiffusion/turbodiffusion_i2v_pipeline.py
Functions¶
fastvideo.pipelines.basic.turbodiffusion.turbodiffusion_pipeline
¶
TurboDiffusion Video Pipeline Implementation.
This module contains an implementation of the TurboDiffusion video diffusion pipeline for 1-4 step video generation using rCM (recurrent Consistency Model) sampling with SLA (Sparse-Linear Attention).
Classes¶
fastvideo.pipelines.basic.turbodiffusion.turbodiffusion_pipeline.TurboDiffusionPipeline
¶
Bases: LoRAPipeline, ComposedPipelineBase
TurboDiffusion video pipeline for 1-4 step generation.
Uses RCM scheduler and SLA attention for fast, high-quality video generation.
Source code in fastvideo/pipelines/lora_pipeline.py
fastvideo.pipelines.basic.turbodiffusion.turbodiffusion_pipeline.TurboDiffusionPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs) -> None
Set up pipeline stages with proper dependency injection.
Source code in fastvideo/pipelines/basic/turbodiffusion/turbodiffusion_pipeline.py
Functions¶
fastvideo.pipelines.basic.wan
¶
Modules¶
fastvideo.pipelines.basic.wan.presets
¶
Wan model family pipeline presets.
Each preset is a named inference configuration that declares the user-facing
stage topology, default sampling values, and which per-stage overrides
are allowed. Presets are registered explicitly from
fastvideo.registry._register_presets.
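Conceptually a preset bundles those three pieces. The dataclass below is a purely hypothetical illustration of that shape, not the actual preset class or registry API:

```python
from dataclasses import dataclass, field

@dataclass
class PresetSketch:  # hypothetical; mirrors the description above
    name: str                        # registry key
    stage_topology: list[str]        # user-facing stage order
    default_sampling: dict[str, float | int]
    allowed_overrides: dict[str, set[str]] = field(default_factory=dict)

wan_t2v_sketch = PresetSketch(
    name="wan-t2v",  # illustrative name, not a real registry key
    stage_topology=["text_encoding", "denoising", "decoding"],
    default_sampling={"num_inference_steps": 50, "guidance_scale": 5.0},
    allowed_overrides={"denoising": {"num_inference_steps"}},
)
```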
Classes¶
fastvideo.pipelines.basic.wan.wan_causal_dmd_pipeline
¶
Wan causal DMD pipeline implementation.
This module wires the causal DMD denoising stage into the modular pipeline.
Classes¶
fastvideo.pipelines.basic.wan.wan_causal_dmd_pipeline.WanCausalDMDPipeline
¶
Bases: LoRAPipeline, ComposedPipelineBase
Source code in fastvideo/pipelines/lora_pipeline.py
fastvideo.pipelines.basic.wan.wan_causal_dmd_pipeline.WanCausalDMDPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs) -> None
Set up pipeline stages with proper dependency injection.
Source code in fastvideo/pipelines/basic/wan/wan_causal_dmd_pipeline.py
Functions¶
fastvideo.pipelines.basic.wan.wan_causal_pipeline
¶
Wan causal pipeline with standard multi-step denoising.
Block-by-block causal inference with KV caching, using the full scheduler timestep schedule (40-50 steps) rather than the DMD few-step schedule.
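Schematically, block-by-block generation with a cache proceeds as below; every name here is a placeholder, not the WanCausalPipeline API:

```python
def generate_blocks(model, scheduler, init_noise_for_block, num_blocks: int):
    kv_cache = {}      # placeholder cache handle threaded through the model
    blocks = []
    for b in range(num_blocks):
        x = init_noise_for_block(b)
        for t in scheduler.timesteps:  # full 40-50 step schedule per block
            pred = model(x, t, kv_cache=kv_cache, block_index=b)
            x = scheduler.step(pred, t, x)
        blocks.append(x)  # finished blocks stay fixed; their KV entries
    return blocks         # condition every later block
```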
Classes¶
fastvideo.pipelines.basic.wan.wan_causal_pipeline.WanCausalPipeline
¶
Bases: LoRAPipeline, ComposedPipelineBase
Wan causal pipeline with standard multi-step denoising.
Source code in fastvideo/pipelines/lora_pipeline.py
Functions¶
fastvideo.pipelines.basic.wan.wan_dmd_pipeline
¶
Wan video diffusion pipeline implementation.
This module contains an implementation of the Wan video diffusion pipeline using the modular pipeline architecture.
Classes¶
fastvideo.pipelines.basic.wan.wan_dmd_pipeline.WanDMDPipeline
¶
Bases: LoRAPipeline, ComposedPipelineBase
Wan video diffusion pipeline with LoRA support.
Source code in fastvideo/pipelines/lora_pipeline.py
fastvideo.pipelines.basic.wan.wan_dmd_pipeline.WanDMDPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs) -> None
Set up pipeline stages with proper dependency injection.
Source code in fastvideo/pipelines/basic/wan/wan_dmd_pipeline.py
Functions¶
fastvideo.pipelines.basic.wan.wan_i2v_dmd_pipeline
¶
Wan video diffusion pipeline implementation.
This module contains an implementation of the Wan video diffusion pipeline using the modular pipeline architecture.
Classes¶
fastvideo.pipelines.basic.wan.wan_i2v_dmd_pipeline.WanImageToVideoDmdPipeline
¶
Bases: LoRAPipeline, ComposedPipelineBase
Source code in fastvideo/pipelines/lora_pipeline.py
fastvideo.pipelines.basic.wan.wan_i2v_dmd_pipeline.WanImageToVideoDmdPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs)
Set up pipeline stages with proper dependency injection.
Source code in fastvideo/pipelines/basic/wan/wan_i2v_dmd_pipeline.py
Functions¶
fastvideo.pipelines.basic.wan.wan_i2v_pipeline
¶
Wan video diffusion pipeline implementation.
This module contains an implementation of the Wan video diffusion pipeline using the modular pipeline architecture.
Classes¶
fastvideo.pipelines.basic.wan.wan_i2v_pipeline.WanImageToVideoPipeline
¶
Bases: LoRAPipeline, ComposedPipelineBase
Source code in fastvideo/pipelines/lora_pipeline.py
fastvideo.pipelines.basic.wan.wan_i2v_pipeline.WanImageToVideoPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs)
Set up pipeline stages with proper dependency injection.
Source code in fastvideo/pipelines/basic/wan/wan_i2v_pipeline.py
Functions¶
fastvideo.pipelines.basic.wan.wan_pipeline
¶
Wan video diffusion pipeline implementation.
This module contains an implementation of the Wan video diffusion pipeline using the modular pipeline architecture.
Classes¶
fastvideo.pipelines.basic.wan.wan_pipeline.WanPipeline
¶
Bases: LoRAPipeline, ComposedPipelineBase
Wan video diffusion pipeline with LoRA support.
Source code in fastvideo/pipelines/lora_pipeline.py
fastvideo.pipelines.basic.wan.wan_pipeline.WanPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs) -> None
Set up pipeline stages with proper dependency injection.
Source code in fastvideo/pipelines/basic/wan/wan_pipeline.py
Functions¶
fastvideo.pipelines.basic.wan.wan_v2v_pipeline
¶
Wan video-to-video diffusion pipeline implementation.
This module contains an implementation of the Wan video-to-video diffusion pipeline using the modular pipeline architecture.
Classes¶
fastvideo.pipelines.basic.wan.wan_v2v_pipeline.WanVideoToVideoPipeline
¶
Bases: LoRAPipeline, ComposedPipelineBase
Source code in fastvideo/pipelines/lora_pipeline.py
fastvideo.pipelines.basic.wan.wan_v2v_pipeline.WanVideoToVideoPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs)
Set up pipeline stages with proper dependency injection.