basic
¶
Basic inference pipelines for fastvideo.
This package contains basic pipelines for video and image generation.
Modules¶
fastvideo.pipelines.basic.cosmos
¶
Modules¶
fastvideo.pipelines.basic.cosmos.cosmos2_5_pipeline
¶
Cosmos 2.5 pipeline entry (staged pipeline).
Classes¶
fastvideo.pipelines.basic.cosmos.cosmos2_5_pipeline.Cosmos2_5Pipeline
¶Cosmos2_5Pipeline(model_path: str, fastvideo_args: FastVideoArgs | TrainingArgs, required_config_modules: list[str] | None = None, loaded_modules: dict[str, Module] | None = None)
Bases: ComposedPipelineBase
Cosmos 2.5 video generation pipeline.
Source code in fastvideo/pipelines/composed_pipeline_base.py
Functions¶
fastvideo.pipelines.basic.cosmos.cosmos_pipeline
¶
Cosmos video diffusion pipeline implementation.
This module contains an implementation of the Cosmos video diffusion pipeline using the modular pipeline architecture.
Classes¶
fastvideo.pipelines.basic.cosmos.cosmos_pipeline.Cosmos2VideoToWorldPipeline
¶Cosmos2VideoToWorldPipeline(model_path: str, fastvideo_args: FastVideoArgs | TrainingArgs, required_config_modules: list[str] | None = None, loaded_modules: dict[str, Module] | None = None)
Bases: ComposedPipelineBase
Source code in fastvideo/pipelines/composed_pipeline_base.py
fastvideo.pipelines.basic.cosmos.cosmos_pipeline.Cosmos2VideoToWorldPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs)
Set up pipeline stages with proper dependency injection.
Source code in fastvideo/pipelines/basic/cosmos/cosmos_pipeline.py
Functions¶
fastvideo.pipelines.basic.gamecraft
¶
HunyuanGameCraft pipeline implementations.
Modules¶
fastvideo.pipelines.basic.gamecraft.gamecraft_pipeline
¶
HunyuanGameCraft video diffusion pipeline implementation.
This module implements the HunyuanGameCraft pipeline for camera/action-conditioned video generation with the modular pipeline architecture.
Classes¶
fastvideo.pipelines.basic.gamecraft.gamecraft_pipeline.HunyuanGameCraftPipeline
¶HunyuanGameCraftPipeline(model_path: str, fastvideo_args: FastVideoArgs | TrainingArgs, required_config_modules: list[str] | None = None, loaded_modules: dict[str, Module] | None = None)
Bases: ComposedPipelineBase
Pipeline for HunyuanGameCraft video generation.
This pipeline supports:
- Text-to-video generation with camera/action conditioning
- Autoregressive generation with history frames
- 33-channel input (16 latent + 16 gt_latent + 1 mask)
- CameraNet for encoding Plücker coordinates
Source code in fastvideo/pipelines/composed_pipeline_base.py
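The 33-channel input layout above can be sketched with plain Python lists standing in for real latent tensors; this is illustrative only, as the real pipeline concatenates (C, T, H, W) tensors along the channel dimension:

```python
# Sketch of the 33-channel input: 16 noisy latent channels + 16 gt_latent
# (history) channels + 1 mask channel. Nested lists stand in for tensors.

def build_gamecraft_input(latent, gt_latent, mask):
    """Concatenate along the channel dimension (the outer list here)."""
    assert len(latent) == 16 and len(gt_latent) == 16 and len(mask) == 1
    return latent + gt_latent + mask

latent = [[0.0]] * 16     # hypothetical noisy latent channels
gt_latent = [[0.0]] * 16  # hypothetical history (gt_latent) channels
mask = [[1.0]]            # single conditioning-mask channel

x = build_gamecraft_input(latent, gt_latent, mask)
print(len(x))  # 33
```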
fastvideo.pipelines.basic.gamecraft.gamecraft_pipeline.HunyuanGameCraftPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs)
Set up pipeline stages with proper dependency injection.
Source code in fastvideo/pipelines/basic/gamecraft/gamecraft_pipeline.py
Functions¶
fastvideo.pipelines.basic.hunyuan
¶
Modules¶
fastvideo.pipelines.basic.hunyuan.hunyuan_pipeline
¶
Hunyuan video diffusion pipeline implementation.
This module contains an implementation of the Hunyuan video diffusion pipeline using the modular pipeline architecture.
Classes¶
fastvideo.pipelines.basic.hunyuan.hunyuan_pipeline.HunyuanVideoPipeline
¶HunyuanVideoPipeline(model_path: str, fastvideo_args: FastVideoArgs | TrainingArgs, required_config_modules: list[str] | None = None, loaded_modules: dict[str, Module] | None = None)
Bases: ComposedPipelineBase
Source code in fastvideo/pipelines/composed_pipeline_base.py
fastvideo.pipelines.basic.hunyuan.hunyuan_pipeline.HunyuanVideoPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs)
Set up pipeline stages with proper dependency injection.
Source code in fastvideo/pipelines/basic/hunyuan/hunyuan_pipeline.py
Functions¶
fastvideo.pipelines.basic.hunyuan15
¶
Modules¶
fastvideo.pipelines.basic.hunyuan15.hunyuan15_2sr_pipeline
¶
Hunyuan video diffusion pipeline implementation.
This module contains an implementation of the Hunyuan video diffusion pipeline using the modular pipeline architecture.
Classes¶
fastvideo.pipelines.basic.hunyuan15.hunyuan15_2sr_pipeline.HunyuanVideo152SRPipeline
¶HunyuanVideo152SRPipeline(model_path: str, fastvideo_args: FastVideoArgs | TrainingArgs, required_config_modules: list[str] | None = None, loaded_modules: dict[str, Module] | None = None)
Bases: ComposedPipelineBase
Source code in fastvideo/pipelines/composed_pipeline_base.py
fastvideo.pipelines.basic.hunyuan15.hunyuan15_2sr_pipeline.HunyuanVideo152SRPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs)
Set up pipeline stages with proper dependency injection.
Source code in fastvideo/pipelines/basic/hunyuan15/hunyuan15_2sr_pipeline.py
fastvideo.pipelines.basic.hunyuan15.hunyuan15_2sr_pipeline.HunyuanVideo152SRPipeline.forward
¶forward(batch: ForwardBatch, fastvideo_args: FastVideoArgs) -> ForwardBatch
Generate a video or image using the pipeline.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| batch | ForwardBatch | The batch to generate from. | required |
| fastvideo_args | FastVideoArgs | The inference arguments. | required |

Returns: ForwardBatch: The batch with the generated video or image.
Source code in fastvideo/pipelines/basic/hunyuan15/hunyuan15_2sr_pipeline.py
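The forward() contract above — a ForwardBatch goes in, and the same batch comes back carrying the generated output — can be illustrated with hypothetical stand-ins; FakeBatch below is not part of fastvideo:

```python
from dataclasses import dataclass, field

# Hypothetical stand-in (FakeBatch is NOT fastvideo's ForwardBatch), used only
# to illustrate the contract: forward() returns the batch it was given, now
# carrying the generated output.

@dataclass
class FakeBatch:
    prompt: str
    output: list = field(default_factory=list)

def forward(batch, fastvideo_args):
    # a real pipeline runs its stages here and writes results onto the batch
    batch.output = ["frame0"]
    return batch

batch = FakeBatch(prompt="a cat")
result = forward(batch, fastvideo_args=None)
print(result is batch, result.output)  # True ['frame0']
```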
Functions¶
fastvideo.pipelines.basic.hunyuan15.hunyuan15_i2v_pipeline
¶
Hunyuan video diffusion pipeline implementation.
This module contains an implementation of the Hunyuan video diffusion pipeline using the modular pipeline architecture.
Classes¶
fastvideo.pipelines.basic.hunyuan15.hunyuan15_i2v_pipeline.HunyuanVideo15ImageToVideoPipeline
¶HunyuanVideo15ImageToVideoPipeline(model_path: str, fastvideo_args: FastVideoArgs | TrainingArgs, required_config_modules: list[str] | None = None, loaded_modules: dict[str, Module] | None = None)
Bases: ComposedPipelineBase
Source code in fastvideo/pipelines/composed_pipeline_base.py
fastvideo.pipelines.basic.hunyuan15.hunyuan15_i2v_pipeline.HunyuanVideo15ImageToVideoPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs)
Set up pipeline stages with proper dependency injection.
Source code in fastvideo/pipelines/basic/hunyuan15/hunyuan15_i2v_pipeline.py
Functions¶
fastvideo.pipelines.basic.hunyuan15.hunyuan15_pipeline
¶
Hunyuan video diffusion pipeline implementation.
This module contains an implementation of the Hunyuan video diffusion pipeline using the modular pipeline architecture.
Classes¶
fastvideo.pipelines.basic.hunyuan15.hunyuan15_pipeline.HunyuanVideo15Pipeline
¶HunyuanVideo15Pipeline(model_path: str, fastvideo_args: FastVideoArgs | TrainingArgs, required_config_modules: list[str] | None = None, loaded_modules: dict[str, Module] | None = None)
Bases: ComposedPipelineBase
Source code in fastvideo/pipelines/composed_pipeline_base.py
fastvideo.pipelines.basic.hunyuan15.hunyuan15_pipeline.HunyuanVideo15Pipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs)
Set up pipeline stages with proper dependency injection.
Source code in fastvideo/pipelines/basic/hunyuan15/hunyuan15_pipeline.py
Functions¶
fastvideo.pipelines.basic.hunyuan15.hunyuan15_sr_pipeline
¶
Hunyuan video diffusion pipeline implementation.
This module contains an implementation of the Hunyuan video diffusion pipeline using the modular pipeline architecture.
Classes¶
fastvideo.pipelines.basic.hunyuan15.hunyuan15_sr_pipeline.HunyuanVideo15SRPipeline
¶HunyuanVideo15SRPipeline(model_path: str, fastvideo_args: FastVideoArgs | TrainingArgs, required_config_modules: list[str] | None = None, loaded_modules: dict[str, Module] | None = None)
Bases: ComposedPipelineBase
Source code in fastvideo/pipelines/composed_pipeline_base.py
fastvideo.pipelines.basic.hunyuan15.hunyuan15_sr_pipeline.HunyuanVideo15SRPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs)
Set up pipeline stages with proper dependency injection.
Source code in fastvideo/pipelines/basic/hunyuan15/hunyuan15_sr_pipeline.py
fastvideo.pipelines.basic.hunyuan15.hunyuan15_sr_pipeline.HunyuanVideo15SRPipeline.forward
¶forward(batch: ForwardBatch, fastvideo_args: FastVideoArgs) -> ForwardBatch
Generate a video or image using the pipeline.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| batch | ForwardBatch | The batch to generate from. | required |
| fastvideo_args | FastVideoArgs | The inference arguments. | required |

Returns: ForwardBatch: The batch with the generated video or image.
Source code in fastvideo/pipelines/basic/hunyuan15/hunyuan15_sr_pipeline.py
Functions¶
fastvideo.pipelines.basic.hyworld
¶
Modules¶
fastvideo.pipelines.basic.hyworld.hyworld_pipeline
¶
HYWorld video diffusion pipeline implementation.
This module contains an implementation of the HYWorld video diffusion pipeline using the modular pipeline architecture, with a HYWorld-specific denoising stage for chunk-based video generation with context-frame selection.
Classes¶
fastvideo.pipelines.basic.hyworld.hyworld_pipeline.HYWorldPipeline
¶HYWorldPipeline(model_path: str, fastvideo_args: FastVideoArgs | TrainingArgs, required_config_modules: list[str] | None = None, loaded_modules: dict[str, Module] | None = None)
Bases: ComposedPipelineBase
HYWorld video diffusion pipeline.
This pipeline implements chunk-based video generation with context frame selection for 3D-aware generation using HYWorldDenoisingStage.
Note: HYWorld only uses a single LLM-based text encoder, unlike SDXL-style dual encoder setups. The text_encoder_2/tokenizer_2 are not used.
Source code in fastvideo/pipelines/composed_pipeline_base.py
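A minimal sketch of chunk-based generation with context-frame selection; the chunk size and the "last-k" context policy are illustrative assumptions, and the real stage denoises latents rather than appending placeholder strings:

```python
# Chunk-based generation (schematic): each new chunk is denoised conditioned
# on frames selected from the video generated so far.

def select_context(history, k=2):
    """Pick the last k generated frames as context for the next chunk."""
    return history[-k:]

def generate(num_frames, chunk=4):
    history = []
    while len(history) < num_frames:
        ctx = select_context(history)  # context frames for this chunk
        # a real pipeline would run HYWorldDenoisingStage here, conditioned on ctx
        history.extend(f"f{len(history) + i}" for i in range(chunk))
    return history[:num_frames]

print(len(generate(10)))  # 10
```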
fastvideo.pipelines.basic.hyworld.hyworld_pipeline.HYWorldPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs)
Set up pipeline stages with HYWorld-specific denoising stage.
Source code in fastvideo/pipelines/basic/hyworld/hyworld_pipeline.py
Functions¶
fastvideo.pipelines.basic.lingbotworld
¶
fastvideo.pipelines.basic.longcat
¶
LongCat pipeline module.
Classes¶
fastvideo.pipelines.basic.longcat.LongCatImageToVideoPipeline
¶
Bases: LoRAPipeline, ComposedPipelineBase
LongCat Image-to-Video pipeline.
Generates video from a single input image using Tier 3 I2V conditioning:
- Per-frame timestep masking (timestep[:, 0] = 0)
- num_cond_latents parameter to transformer
- RoPE skipping for conditioning frames
- Selective denoising (skip first frame in scheduler)
Source code in fastvideo/pipelines/lora_pipeline.py
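The timestep-masking and selective-denoising tricks above can be sketched with flat lists standing in for per-frame tensors; num_cond_latents comes from the docstring, everything else is illustrative:

```python
# Per-frame timestep masking and selective denoising (illustrative only).

def mask_timesteps(timesteps, num_cond_latents=1):
    """timestep[:, :num_cond_latents] = 0 -- conditioning frames get t=0."""
    return [0] * num_cond_latents + timesteps[num_cond_latents:]

def scheduler_step(frames, num_cond_latents=1):
    """Selective denoising: conditioning frames pass through unchanged."""
    return frames[:num_cond_latents] + [f - 1 for f in frames[num_cond_latents:]]

print(mask_timesteps([999, 999, 999, 999]))  # [0, 999, 999, 999]
print(scheduler_step([5, 5, 5, 5]))          # [5, 4, 4, 4]
```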
Functions¶
fastvideo.pipelines.basic.longcat.LongCatImageToVideoPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs)
Set up I2V-specific pipeline stages.
Source code in fastvideo/pipelines/basic/longcat/longcat_i2v_pipeline.py
fastvideo.pipelines.basic.longcat.LongCatImageToVideoPipeline.initialize_pipeline
¶initialize_pipeline(fastvideo_args: FastVideoArgs)
Initialize LongCat-specific components.
Source code in fastvideo/pipelines/basic/longcat/longcat_i2v_pipeline.py
fastvideo.pipelines.basic.longcat.LongCatPipeline
¶
Bases: LoRAPipeline, ComposedPipelineBase
LongCat video diffusion pipeline with LoRA support.
Source code in fastvideo/pipelines/lora_pipeline.py
Functions¶
fastvideo.pipelines.basic.longcat.LongCatPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs) -> None
Set up pipeline stages with proper dependency injection.
Source code in fastvideo/pipelines/basic/longcat/longcat_pipeline.py
fastvideo.pipelines.basic.longcat.LongCatPipeline.initialize_pipeline
¶initialize_pipeline(fastvideo_args: FastVideoArgs)
Initialize LongCat-specific components.
Source code in fastvideo/pipelines/basic/longcat/longcat_pipeline.py
fastvideo.pipelines.basic.longcat.LongCatVideoContinuationPipeline
¶
Bases: LoRAPipeline, ComposedPipelineBase
LongCat Video Continuation pipeline.
Generates a video continuation from multiple conditioning frames, using an optional KV cache for a 2-3x speedup.
Key features:
- Takes video input (typically 13+ frames)
- Encodes conditioning frames via the VAE
- Optionally pre-computes a KV cache for the conditioning frames
- Uses the cached K/V during denoising for speedup
- Concatenates the conditioning frames back after denoising
Source code in fastvideo/pipelines/lora_pipeline.py
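The KV-cache optimization can be sketched with hypothetical stand-ins: attention keys/values for the conditioning frames are computed once up front and reused at every denoising step, instead of being recomputed each time:

```python
# Illustrative KV-cache sketch; none of these names are the real fastvideo API.

def compute_kv(frames):
    """Stand-in for the (expensive) K/V projection of a list of frames."""
    return [("k", f) for f in frames], [("v", f) for f in frames]

cond_frames = list(range(13))                 # typical: 13+ conditioning frames
cached_k, cached_v = compute_kv(cond_frames)  # computed once, before denoising

def denoise_step(new_frames):
    # only the frames being denoised need fresh K/V; cond K/V come from cache
    k, v = compute_kv(new_frames)
    return len(cached_k + k)  # attention spans cond frames + new frames

print(denoise_step(["n1", "n2"]))  # 15
```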
Functions¶
fastvideo.pipelines.basic.longcat.LongCatVideoContinuationPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs)
Set up VC-specific pipeline stages.
Source code in fastvideo/pipelines/basic/longcat/longcat_vc_pipeline.py
fastvideo.pipelines.basic.longcat.LongCatVideoContinuationPipeline.initialize_pipeline
¶initialize_pipeline(fastvideo_args: FastVideoArgs)
Initialize LongCat-specific components.
Source code in fastvideo/pipelines/basic/longcat/longcat_vc_pipeline.py
Modules¶
fastvideo.pipelines.basic.longcat.longcat_i2v_pipeline
¶
LongCat Image-to-Video pipeline implementation.
This module implements I2V (Image-to-Video) generation for LongCat using Tier 3 conditioning with timestep masking, num_cond_latents support, and RoPE skipping.
Supports:
- Basic I2V (50 steps, guidance_scale=4.0)
- Distilled I2V with LoRA (16 steps, guidance_scale=1.0)
- Refinement I2V for 720p upscaling (with refinement LoRA + BSA)
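The supported modes differ mainly in step count and guidance scale; as a hypothetical lookup table (the names are illustrative, and the refinement mode is omitted because its step count and guidance scale are not stated):

```python
# Values taken from the mode list above; structure is an illustrative sketch.
I2V_MODES = {
    "basic":     {"num_inference_steps": 50, "guidance_scale": 4.0},
    "distilled": {"num_inference_steps": 16, "guidance_scale": 1.0},  # + LoRA
}

def mode_config(name):
    return I2V_MODES[name]

print(mode_config("distilled"))  # {'num_inference_steps': 16, 'guidance_scale': 1.0}
```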
Classes¶
fastvideo.pipelines.basic.longcat.longcat_i2v_pipeline.LongCatImageToVideoPipeline
¶
Bases: LoRAPipeline, ComposedPipelineBase
LongCat Image-to-Video pipeline.
Generates video from a single input image using Tier 3 I2V conditioning:
- Per-frame timestep masking (timestep[:, 0] = 0)
- num_cond_latents parameter to transformer
- RoPE skipping for conditioning frames
- Selective denoising (skip first frame in scheduler)
Source code in fastvideo/pipelines/lora_pipeline.py
fastvideo.pipelines.basic.longcat.longcat_i2v_pipeline.LongCatImageToVideoPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs)
Set up I2V-specific pipeline stages.
Source code in fastvideo/pipelines/basic/longcat/longcat_i2v_pipeline.py
fastvideo.pipelines.basic.longcat.longcat_i2v_pipeline.LongCatImageToVideoPipeline.initialize_pipeline
¶initialize_pipeline(fastvideo_args: FastVideoArgs)
Initialize LongCat-specific components.
Source code in fastvideo/pipelines/basic/longcat/longcat_i2v_pipeline.py
Functions¶
fastvideo.pipelines.basic.longcat.longcat_pipeline
¶
LongCat video diffusion pipeline implementation.
This module implements the LongCat video diffusion pipeline using FastVideo's modular pipeline architecture.
Classes¶
fastvideo.pipelines.basic.longcat.longcat_pipeline.LongCatPipeline
¶
Bases: LoRAPipeline, ComposedPipelineBase
LongCat video diffusion pipeline with LoRA support.
Source code in fastvideo/pipelines/lora_pipeline.py
fastvideo.pipelines.basic.longcat.longcat_pipeline.LongCatPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs) -> None
Set up pipeline stages with proper dependency injection.
Source code in fastvideo/pipelines/basic/longcat/longcat_pipeline.py
fastvideo.pipelines.basic.longcat.longcat_pipeline.LongCatPipeline.initialize_pipeline
¶initialize_pipeline(fastvideo_args: FastVideoArgs)
Initialize LongCat-specific components.
Source code in fastvideo/pipelines/basic/longcat/longcat_pipeline.py
Functions¶
fastvideo.pipelines.basic.longcat.longcat_vc_pipeline
¶
LongCat Video Continuation (VC) pipeline implementation.
This module implements VC (Video Continuation) generation for LongCat with KV cache optimization for 2-3x speedup.
Supports:
- Basic VC (50 steps, guidance_scale=4.0)
- Distilled VC with LoRA (16 steps, guidance_scale=1.0)
- KV cache for conditioning frames
Classes¶
fastvideo.pipelines.basic.longcat.longcat_vc_pipeline.LongCatVCLatentPreparationStage
¶LongCatVCLatentPreparationStage(scheduler, transformer, use_btchw_layout: bool = False)
Bases: LongCatI2VLatentPreparationStage
Prepare latents with video conditioning for first N frames.
Extends I2V latent preparation to handle video_latent (multiple frames) instead of image_latent (single frame).
Source code in fastvideo/pipelines/stages/latent_preparation.py
fastvideo.pipelines.basic.longcat.longcat_vc_pipeline.LongCatVCLatentPreparationStage.forward
¶Prepare latents with VC conditioning.
Source code in fastvideo/pipelines/basic/longcat/longcat_vc_pipeline.py
fastvideo.pipelines.basic.longcat.longcat_vc_pipeline.LongCatVideoContinuationPipeline
¶
Bases: LoRAPipeline, ComposedPipelineBase
LongCat Video Continuation pipeline.
Generates a video continuation from multiple conditioning frames, using an optional KV cache for a 2-3x speedup.
Key features:
- Takes video input (typically 13+ frames)
- Encodes conditioning frames via the VAE
- Optionally pre-computes a KV cache for the conditioning frames
- Uses the cached K/V during denoising for speedup
- Concatenates the conditioning frames back after denoising
Source code in fastvideo/pipelines/lora_pipeline.py
fastvideo.pipelines.basic.longcat.longcat_vc_pipeline.LongCatVideoContinuationPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs)
Set up VC-specific pipeline stages.
Source code in fastvideo/pipelines/basic/longcat/longcat_vc_pipeline.py
fastvideo.pipelines.basic.longcat.longcat_vc_pipeline.LongCatVideoContinuationPipeline.initialize_pipeline
¶initialize_pipeline(fastvideo_args: FastVideoArgs)
Initialize LongCat-specific components.
Source code in fastvideo/pipelines/basic/longcat/longcat_vc_pipeline.py
Functions¶
fastvideo.pipelines.basic.ltx2
¶
fastvideo.pipelines.basic.matrixgame
¶
fastvideo.pipelines.basic.sd35
¶
Modules¶
fastvideo.pipelines.basic.sd35.sd35_pipeline
¶
Classes¶
fastvideo.pipelines.basic.sd35.sd35_pipeline.SD35Pipeline
¶SD35Pipeline(model_path: str, fastvideo_args: FastVideoArgs | TrainingArgs, required_config_modules: list[str] | None = None, loaded_modules: dict[str, Module] | None = None)
Bases: ComposedPipelineBase
Minimal SD3.5 Medium text-to-image pipeline (images are treated as single-frame videos, i.e. num_frames=1).
Source code in fastvideo/pipelines/composed_pipeline_base.py
fastvideo.pipelines.basic.sd35.sd35_pipeline.StableDiffusion3Pipeline
¶StableDiffusion3Pipeline(model_path: str, fastvideo_args: FastVideoArgs | TrainingArgs, required_config_modules: list[str] | None = None, loaded_modules: dict[str, Module] | None = None)
Bases: SD35Pipeline
Alias matching the _class_name in the SD3.5 diffusers model_index.json.
Source code in fastvideo/pipelines/composed_pipeline_base.py
Functions¶
fastvideo.pipelines.basic.turbodiffusion
¶
Classes¶
fastvideo.pipelines.basic.turbodiffusion.TurboDiffusionI2VPipeline
¶
Bases: LoRAPipeline, ComposedPipelineBase
TurboDiffusion I2V pipeline for 1-4 step image-to-video generation.
Uses RCM scheduler, SLA attention, and dual model switching for high-quality I2V generation.
Source code in fastvideo/pipelines/lora_pipeline.py
Functions¶
fastvideo.pipelines.basic.turbodiffusion.TurboDiffusionI2VPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs) -> None
Set up pipeline stages with proper dependency injection.
Source code in fastvideo/pipelines/basic/turbodiffusion/turbodiffusion_i2v_pipeline.py
fastvideo.pipelines.basic.turbodiffusion.TurboDiffusionPipeline
¶
Bases: LoRAPipeline, ComposedPipelineBase
TurboDiffusion video pipeline for 1-4 step generation.
Uses RCM scheduler and SLA attention for fast, high-quality video generation.
Source code in fastvideo/pipelines/lora_pipeline.py
Functions¶
fastvideo.pipelines.basic.turbodiffusion.TurboDiffusionPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs) -> None
Set up pipeline stages with proper dependency injection.
Source code in fastvideo/pipelines/basic/turbodiffusion/turbodiffusion_pipeline.py
Modules¶
fastvideo.pipelines.basic.turbodiffusion.turbodiffusion_i2v_pipeline
¶
TurboDiffusion I2V (Image-to-Video) Pipeline Implementation.
This module contains an implementation of the TurboDiffusion I2V pipeline for 1-4 step image-to-video generation using rCM (recurrent Consistency Model) sampling with SLA (Sparse-Linear Attention).
Key differences from T2V:
- Uses dual models (high/low noise) with boundary switching
- sigma_max=200 (vs 80 for T2V)
- Mask conditioning with an encoded first frame
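The dual-model boundary switching can be sketched as a simple sigma threshold; the boundary value below is a hypothetical placeholder, while sigma_max=200 comes from the notes above:

```python
# Boundary switching between the high-noise and low-noise models (schematic).

SIGMA_MAX = 200.0  # from the module notes (vs 80 for T2V)
BOUNDARY = 1.0     # hypothetical switch point, not the real value

def pick_model(sigma):
    """Route high noise levels to one model, low noise levels to the other."""
    return "high_noise_model" if sigma >= BOUNDARY else "low_noise_model"

print(pick_model(SIGMA_MAX))  # high_noise_model
print(pick_model(0.5))        # low_noise_model
```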
Classes¶
fastvideo.pipelines.basic.turbodiffusion.turbodiffusion_i2v_pipeline.TurboDiffusionI2VPipeline
¶
Bases: LoRAPipeline, ComposedPipelineBase
TurboDiffusion I2V pipeline for 1-4 step image-to-video generation.
Uses RCM scheduler, SLA attention, and dual model switching for high-quality I2V generation.
Source code in fastvideo/pipelines/lora_pipeline.py
fastvideo.pipelines.basic.turbodiffusion.turbodiffusion_i2v_pipeline.TurboDiffusionI2VPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs) -> None
Set up pipeline stages with proper dependency injection.
Source code in fastvideo/pipelines/basic/turbodiffusion/turbodiffusion_i2v_pipeline.py
Functions¶
fastvideo.pipelines.basic.turbodiffusion.turbodiffusion_pipeline
¶
TurboDiffusion Video Pipeline Implementation.
This module contains an implementation of the TurboDiffusion video diffusion pipeline for 1-4 step video generation using rCM (recurrent Consistency Model) sampling with SLA (Sparse-Linear Attention).
Classes¶
fastvideo.pipelines.basic.turbodiffusion.turbodiffusion_pipeline.TurboDiffusionPipeline
¶
Bases: LoRAPipeline, ComposedPipelineBase
TurboDiffusion video pipeline for 1-4 step generation.
Uses RCM scheduler and SLA attention for fast, high-quality video generation.
Source code in fastvideo/pipelines/lora_pipeline.py
fastvideo.pipelines.basic.turbodiffusion.turbodiffusion_pipeline.TurboDiffusionPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs) -> None
Set up pipeline stages with proper dependency injection.
Source code in fastvideo/pipelines/basic/turbodiffusion/turbodiffusion_pipeline.py
Functions¶
fastvideo.pipelines.basic.wan
¶
Modules¶
fastvideo.pipelines.basic.wan.wan_causal_dmd_pipeline
¶
Wan causal DMD pipeline implementation.
This module wires the causal DMD denoising stage into the modular pipeline.
Classes¶
fastvideo.pipelines.basic.wan.wan_causal_dmd_pipeline.WanCausalDMDPipeline
¶
Bases: LoRAPipeline, ComposedPipelineBase
Source code in fastvideo/pipelines/lora_pipeline.py
fastvideo.pipelines.basic.wan.wan_causal_dmd_pipeline.WanCausalDMDPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs) -> None
Set up pipeline stages with proper dependency injection.
Source code in fastvideo/pipelines/basic/wan/wan_causal_dmd_pipeline.py
Functions¶
fastvideo.pipelines.basic.wan.wan_causal_pipeline
¶
Wan causal pipeline with standard multi-step denoising.
Block-by-block causal inference with KV caching, using the full scheduler timestep schedule (40-50 steps) rather than DMD few-step sampling.
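A schematic of the block-by-block causal loop with a growing KV cache; block contents are placeholder strings, and the 40-step count matches the "full scheduler timestep schedule" note above:

```python
# Each finished block's keys/values are cached so later blocks can attend
# to them (illustrative sketch, not the real implementation).

def generate_blocks(num_blocks, steps_per_block=40):
    kv_cache = []  # K/V from all previously generated blocks
    video = []
    for b in range(num_blocks):
        block = [f"block{b}"]  # stand-in for a block of latent frames
        for _ in range(steps_per_block):
            pass               # denoise `block`, attending over kv_cache
        kv_cache.append(block) # cache this block's K/V for later blocks
        video.extend(block)
    return video

print(generate_blocks(3))  # ['block0', 'block1', 'block2']
```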
Classes¶
fastvideo.pipelines.basic.wan.wan_causal_pipeline.WanCausalPipeline
¶
Bases: LoRAPipeline, ComposedPipelineBase
Wan causal pipeline with standard multi-step denoising.
Source code in fastvideo/pipelines/lora_pipeline.py
Functions¶
fastvideo.pipelines.basic.wan.wan_dmd_pipeline
¶
Wan video diffusion pipeline implementation.
This module contains an implementation of the Wan video diffusion pipeline using the modular pipeline architecture.
Classes¶
fastvideo.pipelines.basic.wan.wan_dmd_pipeline.WanDMDPipeline
¶
Bases: LoRAPipeline, ComposedPipelineBase
Wan video diffusion pipeline with LoRA support.
Source code in fastvideo/pipelines/lora_pipeline.py
fastvideo.pipelines.basic.wan.wan_dmd_pipeline.WanDMDPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs) -> None
Set up pipeline stages with proper dependency injection.
Source code in fastvideo/pipelines/basic/wan/wan_dmd_pipeline.py
Functions¶
fastvideo.pipelines.basic.wan.wan_i2v_dmd_pipeline
¶
Wan video diffusion pipeline implementation.
This module contains an implementation of the Wan video diffusion pipeline using the modular pipeline architecture.
Classes¶
fastvideo.pipelines.basic.wan.wan_i2v_dmd_pipeline.WanImageToVideoDmdPipeline
¶
Bases: LoRAPipeline, ComposedPipelineBase
Source code in fastvideo/pipelines/lora_pipeline.py
fastvideo.pipelines.basic.wan.wan_i2v_dmd_pipeline.WanImageToVideoDmdPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs)
Set up pipeline stages with proper dependency injection.
Source code in fastvideo/pipelines/basic/wan/wan_i2v_dmd_pipeline.py
Functions¶
fastvideo.pipelines.basic.wan.wan_i2v_pipeline
¶
Wan video diffusion pipeline implementation.
This module contains an implementation of the Wan video diffusion pipeline using the modular pipeline architecture.
Classes¶
fastvideo.pipelines.basic.wan.wan_i2v_pipeline.WanImageToVideoPipeline
¶
Bases: LoRAPipeline, ComposedPipelineBase
Source code in fastvideo/pipelines/lora_pipeline.py
fastvideo.pipelines.basic.wan.wan_i2v_pipeline.WanImageToVideoPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs)
Set up pipeline stages with proper dependency injection.
Source code in fastvideo/pipelines/basic/wan/wan_i2v_pipeline.py
Functions¶
fastvideo.pipelines.basic.wan.wan_pipeline
¶
Wan video diffusion pipeline implementation.
This module contains an implementation of the Wan video diffusion pipeline using the modular pipeline architecture.
Classes¶
fastvideo.pipelines.basic.wan.wan_pipeline.WanPipeline
¶
Bases: LoRAPipeline, ComposedPipelineBase
Wan video diffusion pipeline with LoRA support.
Source code in fastvideo/pipelines/lora_pipeline.py
fastvideo.pipelines.basic.wan.wan_pipeline.WanPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs) -> None
Set up pipeline stages with proper dependency injection.
Source code in fastvideo/pipelines/basic/wan/wan_pipeline.py
Functions¶
fastvideo.pipelines.basic.wan.wan_v2v_pipeline
¶
Wan video-to-video diffusion pipeline implementation.
This module contains an implementation of the Wan video-to-video diffusion pipeline using the modular pipeline architecture.
Classes¶
fastvideo.pipelines.basic.wan.wan_v2v_pipeline.WanVideoToVideoPipeline
¶
Bases: LoRAPipeline, ComposedPipelineBase
Source code in fastvideo/pipelines/lora_pipeline.py
fastvideo.pipelines.basic.wan.wan_v2v_pipeline.WanVideoToVideoPipeline.create_pipeline_stages
¶create_pipeline_stages(fastvideo_args: FastVideoArgs)
Set up pipeline stages with proper dependency injection.