video_generator
¶
VideoGenerator module for FastVideo.
This module provides a consolidated interface for generating videos using diffusion models.
Classes¶
fastvideo.entrypoints.video_generator.VideoGenerator
¶
VideoGenerator(fastvideo_args: FastVideoArgs, executor_class: type[Executor], log_stats: bool, *, log_queue=None)
A unified class for generating videos using diffusion models.
This class provides a simple interface for video generation with rich customization options, similar to popular frameworks like HF Diffusers.
Initialize the video generator.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `fastvideo_args` | `FastVideoArgs` | The inference arguments | *required* |
| `executor_class` | `type[Executor]` | The executor class to use for inference | *required* |
| `log_stats` | `bool` | Whether to log statistics | *required* |
| `log_queue` | | Optional `multiprocessing.Queue` to forward worker logs to | `None` |
Source code in fastvideo/entrypoints/video_generator.py
Functions¶
fastvideo.entrypoints.video_generator.VideoGenerator.from_fastvideo_args
classmethod
¶
from_fastvideo_args(fastvideo_args: FastVideoArgs, *, log_queue=None) -> VideoGenerator
Create a video generator with the specified arguments.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `fastvideo_args` | `FastVideoArgs` | The inference arguments | *required* |
| `log_queue` | | Optional `multiprocessing.Queue` to forward worker logs to | `None` |

Returns:

| Type | Description |
|---|---|
| `VideoGenerator` | The created video generator |
Source code in fastvideo/entrypoints/video_generator.py
fastvideo.entrypoints.video_generator.VideoGenerator.from_pretrained
classmethod
¶
from_pretrained(model_path: str | GeneratorConfig | Mapping[str, Any] | None = None, **kwargs) -> VideoGenerator
Create a video generator from a pretrained model.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `model_path` | `str \| GeneratorConfig \| Mapping[str, Any] \| None` | Path or identifier for the pretrained model | `None` |
| `pipeline_config` | | Pipeline config to use for inference | *required* |
| `**kwargs` | | Additional arguments to customize model loading; set any `FastVideoArgs` or `PipelineConfig` attributes here. | `{}` |

Returns:

| Type | Description |
|---|---|
| `VideoGenerator` | The created video generator |
Priority level: Default pipeline config < User's pipeline config < User's kwargs
Stable convenience kwargs remain supported here for common engine and offload settings. Advanced model- or pipeline-specific options should move to VideoGenerator.from_config(...).
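The precedence rule above can be sketched with plain dictionaries. This is illustrative only: the real merge happens inside `from_pretrained`, and the option names below are hypothetical.

```python
def merge_config(defaults, user_config=None, **kwargs):
    """Apply the documented precedence: defaults < user's config < user's kwargs."""
    merged = dict(defaults)
    if user_config:
        merged.update(user_config)  # user's pipeline config overrides defaults
    merged.update(kwargs)           # explicit kwargs win over everything
    return merged

# Hypothetical option names, for illustration only.
defaults = {"num_inference_steps": 50, "guidance_scale": 7.5}
user_config = {"num_inference_steps": 30}
final = merge_config(defaults, user_config, guidance_scale=5.0)
# final == {"num_inference_steps": 30, "guidance_scale": 5.0}
```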
Source code in fastvideo/entrypoints/video_generator.py
fastvideo.entrypoints.video_generator.VideoGenerator.generate
¶
generate(request: GenerationRequest | Mapping[str, Any], *, log_queue=None) -> GenerationResult | list[GenerationResult]
Generate video or image outputs from a typed inference request.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `request` | `GenerationRequest \| Mapping[str, Any]` | A `GenerationRequest`, or a mapping of the equivalent fields | *required* |
| `log_queue` | | Optional `multiprocessing.Queue` to forward worker logs to during this request. | `None` |

Returns:

| Type | Description |
|---|---|
| `GenerationResult \| list[GenerationResult]` | A `GenerationResult` for a single prompt, or a list of `GenerationResult` objects when the request expands to multiple prompts. |
Source code in fastvideo/entrypoints/video_generator.py
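The single-result-versus-list contract can be illustrated with a stub. The real `generate` runs the diffusion pipeline; the `prompt` key and the dict-shaped results here are stand-ins for `GenerationRequest` and `GenerationResult`.

```python
def generate_stub(request):
    """Mimic generate()'s return shape: one result for a single prompt,
    a list of results when the request expands to multiple prompts."""
    prompts = request.get("prompt")
    if isinstance(prompts, str):
        return {"prompt": prompts}           # single GenerationResult-like dict
    return [{"prompt": p} for p in prompts]  # list for batch requests

single = generate_stub({"prompt": "a cat surfing"})
batch = generate_stub({"prompt": ["a cat", "a dog"]})
```

Callers that accept either shape can normalize with `results if isinstance(results, list) else [results]`.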
fastvideo.entrypoints.video_generator.VideoGenerator.generate_video
¶
generate_video(prompt: str | None = None, sampling_param: SamplingParam | None = None, mouse_cond: Tensor | None = None, keyboard_cond: Tensor | None = None, grid_sizes: tuple[int, int, int] | list[int] | Tensor | None = None, **kwargs) -> dict[str, Any] | list[dict[str, Any]]
Generate a video based on the given prompt.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `prompt` | `str \| None` | The prompt to use for generation (optional if prompt_txt is provided) | `None` |
| `negative_prompt` | | The negative prompt to use (overrides the one in fastvideo_args) | *required* |
| `output_path` | | Path to save the video (overrides the one in fastvideo_args) | *required* |
| `prompt_path` | | Path to prompt file | *required* |
| `save_video` | | Whether to save the video to disk | *required* |
| `return_frames` | | Whether to include raw frames in the result dict | *required* |
| `num_inference_steps` | | Number of denoising steps (overrides fastvideo_args) | *required* |
| `guidance_scale` | | Classifier-free guidance scale (overrides fastvideo_args) | *required* |
| `num_frames` | | Number of frames to generate (overrides fastvideo_args) | *required* |
| `height` | | Height of the generated video (overrides fastvideo_args) | *required* |
| `width` | | Width of the generated video (overrides fastvideo_args) | *required* |
| `fps` | | Frames per second for the saved video (overrides fastvideo_args) | *required* |
| `seed` | | Random seed for generation (overrides fastvideo_args) | *required* |
| `callback` | | Callback function called after each step | *required* |
| `callback_steps` | | Number of steps between each callback | *required* |

Returns:

| Type | Description |
|---|---|
| `dict[str, Any] \| list[dict[str, Any]]` | A metadata dictionary for single-prompt generation, or a list of metadata dictionaries for prompt-file batch generation. |
Source code in fastvideo/entrypoints/video_generator.py
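The `callback`/`callback_steps` contract can be sketched with a toy denoising loop. This is a sketch, not FastVideo's implementation: the real loop runs inside the pipeline, and the single-argument callback signature here is an assumption.

```python
def run_steps(num_inference_steps, callback=None, callback_steps=1):
    """Invoke callback every `callback_steps` denoising steps (illustrative)."""
    for step in range(1, num_inference_steps + 1):
        # ... one denoising step would happen here ...
        if callback is not None and step % callback_steps == 0:
            callback(step)

seen = []
run_steps(num_inference_steps=10, callback=seen.append, callback_steps=5)
# seen == [5, 10]
```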
fastvideo.entrypoints.video_generator.VideoGenerator.shutdown
¶
fastvideo.entrypoints.video_generator.VideoGenerator.unmerge_lora_weights
¶
Use unmerged weights for inference to produce videos that align with validation videos generated during training.