entrypoints
¶
Modules¶
fastvideo.entrypoints.cli
¶
Modules¶
fastvideo.entrypoints.cli.bench
¶
Runs benchmark against a running FastVideo OpenAI-compatible server.
Example usage
fastvideo bench --dataset vbench --num-prompts 20 --port 8000
fastvideo.entrypoints.cli.bench_serving
¶
Benchmark online serving for diffusion models (Image/Video Generation).
Example usage:
# launch a server and benchmark on it
# T2V, T2I, or any other multimodal generation model
fastvideo serve --config serve.yaml
# benchmark it; make sure the port matches the server's port
fastvideo bench --dataset vbench --num-prompts 20 --port 8000
fastvideo.entrypoints.cli.cli_types
¶
Classes¶
fastvideo.entrypoints.cli.cli_types.CLISubcommand
¶Base class for CLI subcommands
fastvideo.entrypoints.cli.cli_types.CLISubcommand.subparser_init
¶subparser_init(subparsers: _SubParsersAction) -> FlexibleArgumentParser
fastvideo.entrypoints.cli.generate
¶
Classes¶
fastvideo.entrypoints.cli.generate.GenerateSubcommand
¶
Bases: CLISubcommand
The generate subcommand for the FastVideo CLI
Source code in fastvideo/entrypoints/cli/generate.py
fastvideo.entrypoints.cli.generate.GenerateSubcommand.validate
¶validate(args: Namespace) -> None
Validate the arguments for this command
Source code in fastvideo/entrypoints/cli/generate.py
Functions¶
fastvideo.entrypoints.cli.main
¶
Classes¶
Functions¶
fastvideo.entrypoints.cli.main.cmd_init
¶cmd_init() -> list[CLISubcommand]
Initialize all commands from separate modules
fastvideo.entrypoints.cli.serve
¶
fastvideo.entrypoints.cli.utils
¶
Functions¶
fastvideo.entrypoints.cli.utils.launch_distributed
¶Launch a distributed job with the given arguments
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| num_gpus | int | Number of GPUs to use | required |
| args | list[str] | Arguments to pass to v1_fastvideo_inference.py (defaults to sys.argv[1:]) | required |
| master_port | int \| None | Port for the master process (default: random) | None |
Source code in fastvideo/entrypoints/cli/utils.py
fastvideo.entrypoints.openai
¶
Modules¶
fastvideo.entrypoints.openai.api_server
¶
Classes¶
Functions¶
fastvideo.entrypoints.openai.api_server.create_app
¶create_app(fastvideo_args: FastVideoArgs, output_dir: str = DEFAULT_OUTPUT_DIR, default_request: GenerationRequest | None = None) -> FastAPI
Build the FastAPI application with all routers mounted
Source code in fastvideo/entrypoints/openai/api_server.py
fastvideo.entrypoints.openai.api_server.lifespan
async
¶lifespan(app: FastAPI) -> AsyncIterator[None]
Load model on startup, clean up on shutdown
Source code in fastvideo/entrypoints/openai/api_server.py
fastvideo.entrypoints.openai.api_server.run_server
¶run_server(fastvideo_args: FastVideoArgs, host: str = DEFAULT_HOST, port: int = DEFAULT_PORT, output_dir: str = DEFAULT_OUTPUT_DIR, default_request: GenerationRequest | None = None)
Create the app and run it with uvicorn
Source code in fastvideo/entrypoints/openai/api_server.py
fastvideo.entrypoints.openai.common_api
¶
Classes¶
fastvideo.entrypoints.openai.common_api.ModelCard
¶
Bases: BaseModel
OpenAI-compatible model card
Functions¶
fastvideo.entrypoints.openai.common_api.available_models
async
¶Show available models
Source code in fastvideo/entrypoints/openai/common_api.py
fastvideo.entrypoints.openai.common_api.model_info
async
¶
fastvideo.entrypoints.openai.common_api.retrieve_model
async
¶retrieve_model(model: str)
Retrieve a model by name
Source code in fastvideo/entrypoints/openai/common_api.py
fastvideo.entrypoints.openai.protocol
¶
fastvideo.entrypoints.openai.state
¶
Global server state shared across API modules.
Keeping state in a dedicated module prevents the classic 'main vs package
module' duplication that occurs when api_server.py is run with python -m.
All modules that need the generator or server args should import from here.
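The "keep mutable globals in one dedicated module" pattern described above can be sketched as follows. This is an illustrative, simplified version: the real `set_state` also takes `fastvideo_args`, `output_dir`, and a default request, and the module names here carry no other FastVideo behavior.

```python
# Sketch of the shared-state-module pattern: every importer sees the same
# module object, so state set at startup is visible everywhere. Simplified
# relative to the real fastvideo.entrypoints.openai.state API.
from typing import Any, Optional

_GENERATOR: Optional[Any] = None  # set once during server startup


def set_state(generator: Any) -> None:
    global _GENERATOR
    _GENERATOR = generator


def get_generator() -> Any:
    if _GENERATOR is None:
        raise RuntimeError("server state not initialized; call set_state() at startup")
    return _GENERATOR
```

Because every handler imports these accessors from the same module, there is no risk of the `__main__` copy and the package copy each holding their own generator.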
Classes¶
Functions¶
fastvideo.entrypoints.openai.state.clear_state
¶
fastvideo.entrypoints.openai.state.get_default_request
¶
fastvideo.entrypoints.openai.state.get_generator
¶get_generator() -> VideoGenerator
Return the global VideoGenerator instance (set during startup).
fastvideo.entrypoints.openai.state.get_server_args
¶get_server_args() -> FastVideoArgs
Return the global FastVideoArgs (set during startup).
fastvideo.entrypoints.openai.state.set_state
¶set_state(generator: VideoGenerator, fastvideo_args: FastVideoArgs, output_dir: str, default_request: GenerationRequest | None = None) -> None
Set all server state at once (called from lifespan).
Source code in fastvideo/entrypoints/openai/state.py
fastvideo.entrypoints.openai.stores
¶
Classes¶
fastvideo.entrypoints.openai.stores.AsyncDictStore
¶A small async-safe in-memory key-value store for dict items.
This encapsulates the usual pattern of a module-level dict guarded by an asyncio.Lock and provides simple CRUD methods that are safe to call concurrently from FastAPI request handlers and background tasks.
Source code in fastvideo/entrypoints/openai/stores.py
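The pattern AsyncDictStore encapsulates can be sketched in a few lines. Method names below are illustrative, not the real AsyncDictStore API:

```python
import asyncio
from typing import Any

# A dict guarded by an asyncio.Lock, with CRUD methods that are safe to
# call concurrently from request handlers and background tasks.
class DictStore:
    def __init__(self) -> None:
        self._items: dict[str, dict[str, Any]] = {}
        self._lock = asyncio.Lock()

    async def put(self, key: str, item: dict[str, Any]) -> None:
        async with self._lock:
            self._items[key] = item

    async def get(self, key: str) -> "dict[str, Any] | None":
        async with self._lock:
            return self._items.get(key)

    async def delete(self, key: str) -> None:
        async with self._lock:
            self._items.pop(key, None)


async def demo() -> "dict[str, Any] | None":
    store = DictStore()
    await store.put("job-1", {"status": "queued"})
    return await store.get("job-1")


print(asyncio.run(demo()))
```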
fastvideo.entrypoints.openai.utils
¶
Functions¶
fastvideo.entrypoints.openai.utils.choose_image_ext
¶Pick a file extension for image outputs
Source code in fastvideo/entrypoints/openai/utils.py
fastvideo.entrypoints.openai.utils.merge_image_input_list
¶Merge multiple image input sources into a single flat list
Source code in fastvideo/entrypoints/openai/utils.py
fastvideo.entrypoints.openai.utils.parse_size
¶Parse a 'WIDTHxHEIGHT' string into (width, height)
Source code in fastvideo/entrypoints/openai/utils.py
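A minimal re-implementation of the 'WIDTHxHEIGHT' parsing that parse_size performs might look like this (the real helper lives in fastvideo/entrypoints/openai/utils.py and may differ in error handling):

```python
# Parse a 'WIDTHxHEIGHT' string such as "1024x576" into (1024, 576),
# rejecting anything that is not two integers joined by a single 'x'.
def parse_size(size: str) -> tuple[int, int]:
    width_s, sep, height_s = size.partition("x")
    if sep != "x" or not width_s.isdigit() or not height_s.isdigit():
        raise ValueError(f"expected 'WIDTHxHEIGHT', got {size!r}")
    return int(width_s), int(height_s)


print(parse_size("1024x576"))
```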
fastvideo.entrypoints.openai.utils.save_image_to_path
async
¶Save an uploaded file or download from URL to target_path
Source code in fastvideo/entrypoints/openai/utils.py
fastvideo.entrypoints.streaming
¶
Classes¶
fastvideo.entrypoints.streaming.BlobStore
¶
Bases: ABC
Opaque byte-blob storage keyed by id.
A :class:ContinuationState payload can reference large tensors
stored in a :class:BlobStore rather than inlining them, so the
JSON payload stays small when the state travels over the wire.
fastvideo.entrypoints.streaming.FragmentedMP4Chunk
dataclass
¶
A single fMP4 byte chunk emitted by :class:FragmentedMP4Encoder.
kind identifies whether the chunk is the init segment (must be
fed into the client's SourceBuffer first) or a media fragment.
fastvideo.entrypoints.streaming.FragmentedMP4Encoder
¶
FragmentedMP4Encoder(*, width: int, height: int, fps: int, segment_idx: int, stream_id: str | None = None, ffmpeg_path: str = 'ffmpeg', preset: str = 'ultrafast', pixel_format_out: str = 'yuv420p', extra_args: list[str] | None = None)
Stream RGB frames in, fMP4 chunks out.
One encoder covers one segment. The server creates a new encoder at each ltx2_segment_start boundary so each segment becomes one media fragment the client can append independently.
Example::

    encoder = FragmentedMP4Encoder(width=1024, height=576, fps=24,
                                   segment_idx=0)
    async with encoder:
        async for chunk in encoder.encode(frames):
            await websocket.send_bytes(chunk.data)
Source code in fastvideo/entrypoints/streaming/stream.py
Functions¶
fastvideo.entrypoints.streaming.FragmentedMP4Encoder.encode
async
¶encode(frames: list[ndarray] | AsyncIterator[ndarray]) -> AsyncIterator[FragmentedMP4Chunk]
Feed frames into ffmpeg and yield fMP4 chunks as they appear.
Source code in fastvideo/entrypoints/streaming/stream.py
fastvideo.entrypoints.streaming.InMemoryBlobStore
¶
Bases: BlobStore
Thread-safe in-memory :class:BlobStore for single-process servers.
No eviction policy — callers are responsible for calling
:meth:drop when a blob's owning state is replaced or a session
ends. A redis- or filesystem-backed :class:BlobStore should
replace this when the streaming server lands as a real service
(PR 7.5+).
Source code in fastvideo/entrypoints/streaming/session_store.py
fastvideo.entrypoints.streaming.InMemorySessionStore
¶
Bases: SessionStore
Thread-safe in-memory :class:SessionStore.
Default implementation used by single-process deployments; a future Redis-backed store can be dropped in without changes to the server.
No eviction / TTL / bounded capacity — sessions only leave via
:meth:drop. The live streaming server (PR 7.5+) is responsible
for bounding growth and for dropping any :class:BlobStore blobs
referenced by a state when that state is replaced or a session
ends; this class does not know about blobs.
Source code in fastvideo/entrypoints/streaming/session_store.py
fastvideo.entrypoints.streaming.Session
dataclass
¶
Session(id: str = (lambda: hex)(), state: SessionState = INITIALIZING, created_at: float = monotonic(), last_activity: float = monotonic(), client_id: str | None = None, preset: str | None = None, preset_label: str | None = None, curated_prompts: list[str] = list(), segment_idx: int = 0, enhancement_enabled: bool = False, auto_extension_enabled: bool = False, loop_generation_enabled: bool = False, single_clip_mode: bool = False, generation_paused: bool = False, stream_mode: str = 'av_fmp4', gpu_id: int | None = None, continuation_state: ContinuationState | None = None, metadata: dict[str, Any] = dict())
Functions¶
fastvideo.entrypoints.streaming.Session.transition
¶transition(target: SessionState) -> None
Move to target if the edge is allowed.
Raises :class:InvalidSessionTransition on illegal moves. The
self-loop on ACTIVE is legal so the server can re-assert
ACTIVE on segment completion without special casing.
Source code in fastvideo/entrypoints/streaming/session.py
fastvideo.entrypoints.streaming.SessionManager
¶
Registers sessions and enforces per-server session limits.
Source code in fastvideo/entrypoints/streaming/session.py
Functions¶
fastvideo.entrypoints.streaming.SessionManager.reap_timed_out
¶Return the ids of sessions that have exceeded the idle timeout.
The caller is responsible for actually closing them — this
method only identifies dead sessions so the server can emit
session_timeout frames before dropping the WebSocket.
TODO: unused until a background driver calls it. Per-connection idle enforcement currently happens via asyncio.wait_for on receive_json; this helper catches sessions stuck before any receive (e.g. future QUEUED state) and is expected to be wired into the GPU-pool reaper.
Source code in fastvideo/entrypoints/streaming/session.py
fastvideo.entrypoints.streaming.SessionState
¶
Bases: Enum
State-machine positions for a streaming session.
Transitions are server-owned. See
docs/design/server_contracts/streaming.md for the full diagram.
fastvideo.entrypoints.streaming.SessionStore
¶
Bases: ABC
Keyed store for per-session continuation state.
Implementations own the session-id → state mapping. The streaming
server calls :meth:store after each segment and :meth:snapshot
when a client explicitly asks for an exportable state handle.
Functions¶
fastvideo.entrypoints.streaming.SessionStore.hydrate
abstractmethod
¶Install state as the starting point for a session.
When session_id is None the store allocates a fresh id
(UUID4); when provided the store uses it verbatim, overwriting
any prior state at that id.
Source code in fastvideo/entrypoints/streaming/session_store.py
fastvideo.entrypoints.streaming.SessionStore.snapshot
abstractmethod
¶snapshot(session_id: str) -> ContinuationState | None
Functions¶
fastvideo.entrypoints.streaming.build_app
¶
build_app(serve_config: ServeConfig, generator: _GeneratorProto, *, session_store: SessionStore | None = None) -> FastAPI
Build the FastAPI app used by :func:run_server.
Exposed so tests can drive the WebSocket endpoint in-process via
starlette.testclient.TestClient(app).websocket_connect(...).
Source code in fastvideo/entrypoints/streaming/server.py
fastvideo.entrypoints.streaming.run_server
¶
run_server(serve_config: ServeConfig, *, generator: _GeneratorProto | None = None) -> None
Launch the streaming server.
Boots a :class:fastvideo.VideoGenerator from
serve_config.generator unless generator is provided, then
serves build_app(...) via uvicorn.
Source code in fastvideo/entrypoints/streaming/server.py
Modules¶
fastvideo.entrypoints.streaming.protocol
¶
JSON WebSocket protocol schemas for the streaming server.
Every control message shares the envelope {"type": <str>, ...}.
Pydantic models live here so the server can parse / validate incoming
frames and emit well-typed outgoing frames without hand-rolled dicts.
The message catalogue matches the contract in
docs/design/server_contracts/streaming.md; additions must land in
both places in the same PR.
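The envelope convention can be illustrated with a stdlib-only parser. The real server validates frames with the Pydantic models in this module; the message type names below are placeholders, not the actual catalogue:

```python
import json

# Every control frame shares the envelope {"type": <str>, ...}; unknown
# types are rejected so the server can answer with code="invalid_message".
KNOWN_TYPES = {"session_init", "segment_prompt", "snapshot_state"}  # illustrative names


def parse_client_message(raw: str) -> dict:
    frame = json.loads(raw)
    if frame.get("type") not in KNOWN_TYPES:
        raise ValueError(f"invalid_message: unknown type {frame.get('type')!r}")
    return frame


frame = parse_client_message('{"type": "segment_prompt", "prompt": "a red fox"}')
print(frame["prompt"])
```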
Classes¶
fastvideo.entrypoints.streaming.protocol.ContinuationStateSnapshot
¶
fastvideo.entrypoints.streaming.protocol.MediaInit
¶
Bases: BaseModel
Descriptor for the fMP4 initialization segment that follows.
fastvideo.entrypoints.streaming.protocol.SegmentPromptSource
¶
Bases: BaseModel
Request a new segment using a specific prompt.
fastvideo.entrypoints.streaming.protocol.SessionInitV2
¶
Bases: BaseModel
Opening frame the client sends after the WebSocket handshake.
fastvideo.entrypoints.streaming.protocol.SnapshotState
¶
Bases: BaseModel
Request the current ContinuationState for export.
Functions¶
fastvideo.entrypoints.streaming.protocol.parse_client_message
¶Parse an incoming WebSocket dict into a typed client message.
Unknown type values raise :class:pydantic.ValidationError; the
server handler turns that into an error frame with
code="invalid_message".
Source code in fastvideo/entrypoints/streaming/protocol.py
fastvideo.entrypoints.streaming.server
¶
Single-generator FastAPI + WebSocket streaming server.
Classes¶
Functions¶
fastvideo.entrypoints.streaming.server.build_app
¶build_app(serve_config: ServeConfig, generator: _GeneratorProto, *, session_store: SessionStore | None = None) -> FastAPI
Build the FastAPI app used by :func:run_server.
Exposed so tests can drive the WebSocket endpoint in-process via
starlette.testclient.TestClient(app).websocket_connect(...).
Source code in fastvideo/entrypoints/streaming/server.py
fastvideo.entrypoints.streaming.server.run_server
¶run_server(serve_config: ServeConfig, *, generator: _GeneratorProto | None = None) -> None
Launch the streaming server.
Boots a :class:fastvideo.VideoGenerator from
serve_config.generator unless generator is provided, then
serves build_app(...) via uvicorn.
Source code in fastvideo/entrypoints/streaming/server.py
fastvideo.entrypoints.streaming.session
¶
Per-connection session lifecycle for the streaming server.
Each WebSocket opens exactly one :class:Session. :class:SessionManager
enforces the generation_segment_cap and session_timeout_seconds
budgets from :class:fastvideo.api.StreamingConfig.
Classes¶
fastvideo.entrypoints.streaming.session.InvalidSessionTransition
¶
Bases: RuntimeError
Raised when a session is asked to transition along an illegal edge.
fastvideo.entrypoints.streaming.session.Session
dataclass
¶Session(id: str = (lambda: hex)(), state: SessionState = INITIALIZING, created_at: float = monotonic(), last_activity: float = monotonic(), client_id: str | None = None, preset: str | None = None, preset_label: str | None = None, curated_prompts: list[str] = list(), segment_idx: int = 0, enhancement_enabled: bool = False, auto_extension_enabled: bool = False, loop_generation_enabled: bool = False, single_clip_mode: bool = False, generation_paused: bool = False, stream_mode: str = 'av_fmp4', gpu_id: int | None = None, continuation_state: ContinuationState | None = None, metadata: dict[str, Any] = dict())
fastvideo.entrypoints.streaming.session.Session.transition
¶transition(target: SessionState) -> None
Move to target if the edge is allowed.
Raises :class:InvalidSessionTransition on illegal moves. The
self-loop on ACTIVE is legal so the server can re-assert
ACTIVE on segment completion without special casing.
Source code in fastvideo/entrypoints/streaming/session.py
fastvideo.entrypoints.streaming.session.SessionManager
¶Registers sessions and enforces per-server session limits.
Source code in fastvideo/entrypoints/streaming/session.py
fastvideo.entrypoints.streaming.session.SessionManager.reap_timed_out
¶Return the ids of sessions that have exceeded the idle timeout.
The caller is responsible for actually closing them — this
method only identifies dead sessions so the server can emit
session_timeout frames before dropping the WebSocket.
TODO: unused until a background driver calls it. Per-connection idle enforcement currently happens via asyncio.wait_for on receive_json; this helper catches sessions stuck before any receive (e.g. future QUEUED state) and is expected to be wired into the GPU-pool reaper.
Source code in fastvideo/entrypoints/streaming/session.py
fastvideo.entrypoints.streaming.session.SessionRejected
¶
Bases: RuntimeError
Raised when session creation fails (queue full, auth, etc.).
fastvideo.entrypoints.streaming.session_init_image
¶
Persist the initial-image blob attached to a streaming session.
Classes¶
fastvideo.entrypoints.streaming.session_init_image.SessionInitImage
dataclass
¶Location of the persisted init image.
Callers pass path to InputConfig.image_path; display_name
is only used for logs.
Functions¶
fastvideo.entrypoints.streaming.session_init_image.persist_session_init_image
¶persist_session_init_image(payload: Any, *, output_dir: str | None = None) -> SessionInitImage | None
Decode a client init-image blob and persist it to disk.
payload shape (matches the internal UI protocol)::
{
"mime": "image/png",
"name": "ref.png",
"data": "<base64 bytes>",
}
Returns None when payload is falsy (no init image). Raises
:class:ValueError on schema / size / decode errors so the caller
can surface a user-facing error frame.
Source code in fastvideo/entrypoints/streaming/session_init_image.py
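The decode-and-persist flow, using the payload shape from the docstring, can be sketched as below. This shows only the happy path; the real helper also enforces schema, size, and mime checks, and its exact signature differs:

```python
import base64
import os
import tempfile

# Decode a {"mime", "name", "data"} init-image payload and write the bytes
# to disk, returning the persisted path. Happy-path sketch only.
def persist_init_image(payload: dict, output_dir: str) -> str:
    if not payload:
        raise ValueError("empty payload")
    data = base64.b64decode(payload["data"], validate=True)
    path = os.path.join(output_dir, payload.get("name", "init_image.png"))
    with open(path, "wb") as f:
        f.write(data)
    return path


with tempfile.TemporaryDirectory() as d:
    payload = {"mime": "image/png", "name": "ref.png",
               "data": base64.b64encode(b"\x89PNG...").decode()}
    print(persist_init_image(payload, d))
```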
fastvideo.entrypoints.streaming.session_store
¶
Session state store for the FastVideo streaming server.
The streaming server keeps continuation state (decoded frames + audio latents from the previous segment) server-side so the client doesn't re-upload multi-megabyte tensors each WebSocket message. Two operations are needed:
- snapshot(session_id) -> ContinuationState — serialize the current state so it can be exported (e.g. over HTTP) or migrated to a different server.
- hydrate(state) -> session_id — load a previously serialized state into a new session (for resume-after-disconnect flows).
The store is an ABC with an :class:InMemorySessionStore default; Redis
or other backends can drop in without touching the pipeline.
Large tensor payloads (video frames, audio latents) are kept out of the
JSON payload via an accompanying :class:BlobStore. Both stores share a
process today; they are separate types so that a future implementation
can put blobs on S3 while keeping session metadata in Redis.
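A minimal thread-safe blob store along the lines of InMemoryBlobStore can be sketched as follows (method names are illustrative, not the real BlobStore ABC):

```python
import threading
import uuid

# Opaque byte blobs keyed by id, guarded by a lock. No eviction: callers
# must drop() a blob when its owning state is replaced or a session ends.
class BlobStore:
    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}
        self._lock = threading.Lock()

    def put(self, blob: bytes) -> str:
        blob_id = uuid.uuid4().hex
        with self._lock:
            self._blobs[blob_id] = blob
        return blob_id

    def get(self, blob_id: str) -> "bytes | None":
        with self._lock:
            return self._blobs.get(blob_id)

    def drop(self, blob_id: str) -> None:
        with self._lock:
            self._blobs.pop(blob_id, None)
```

Keeping blob ids (rather than tensor bytes) in the ContinuationState JSON is what keeps the payload small on the wire.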
Classes¶
fastvideo.entrypoints.streaming.session_store.BlobStore
¶
Bases: ABC
Opaque byte-blob storage keyed by id.
A :class:ContinuationState payload can reference large tensors
stored in a :class:BlobStore rather than inlining them, so the
JSON payload stays small when the state travels over the wire.
fastvideo.entrypoints.streaming.session_store.InMemoryBlobStore
¶
Bases: BlobStore
Thread-safe in-memory :class:BlobStore for single-process servers.
No eviction policy — callers are responsible for calling
:meth:drop when a blob's owning state is replaced or a session
ends. A redis- or filesystem-backed :class:BlobStore should
replace this when the streaming server lands as a real service
(PR 7.5+).
Source code in fastvideo/entrypoints/streaming/session_store.py
fastvideo.entrypoints.streaming.session_store.InMemorySessionStore
¶
Bases: SessionStore
Thread-safe in-memory :class:SessionStore.
Default implementation used by single-process deployments; a future Redis-backed store can be dropped in without changes to the server.
No eviction / TTL / bounded capacity — sessions only leave via
:meth:drop. The live streaming server (PR 7.5+) is responsible
for bounding growth and for dropping any :class:BlobStore blobs
referenced by a state when that state is replaced or a session
ends; this class does not know about blobs.
Source code in fastvideo/entrypoints/streaming/session_store.py
fastvideo.entrypoints.streaming.session_store.SessionStore
¶
Bases: ABC
Keyed store for per-session continuation state.
Implementations own the session-id → state mapping. The streaming
server calls :meth:store after each segment and :meth:snapshot
when a client explicitly asks for an exportable state handle.
fastvideo.entrypoints.streaming.session_store.SessionStore.drop
abstractmethod
¶drop(session_id: str) -> None
fastvideo.entrypoints.streaming.session_store.SessionStore.hydrate
abstractmethod
¶Install state as the starting point for a session.
When session_id is None the store allocates a fresh id
(UUID4); when provided the store uses it verbatim, overwriting
any prior state at that id.
Source code in fastvideo/entrypoints/streaming/session_store.py
fastvideo.entrypoints.streaming.session_store.SessionStore.snapshot
abstractmethod
¶snapshot(session_id: str) -> ContinuationState | None
fastvideo.entrypoints.streaming.stream
¶
fMP4 stream encoder used by the streaming server.
The client's Media Source Extensions player needs a continuous fMP4
byte stream: first an initialization segment (ftyp + moov),
then one or more media segments (moof + mdat). We pipe raw
RGB frames into an ffmpeg subprocess configured for fragmented output
via -movflags empty_moov+default_base_moof+frag_keyframe+faststart
and stream the bytes back out.
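The ffmpeg invocation the docstring describes can be sketched as an argv builder. Flag order and the input/codec options besides -movflags are illustrative assumptions; the real command lives in FragmentedMP4Encoder:

```python
# Build an ffmpeg command line for fragmented MP4: raw RGB frames on stdin,
# fMP4 bytes (init segment, then moof+mdat fragments) on stdout.
def fmp4_ffmpeg_argv(width: int, height: int, fps: int) -> list[str]:
    return [
        "ffmpeg",
        "-f", "rawvideo", "-pix_fmt", "rgb24",          # raw RGB frames in
        "-s", f"{width}x{height}", "-r", str(fps),
        "-i", "pipe:0",
        "-c:v", "libx264", "-preset", "ultrafast",
        "-pix_fmt", "yuv420p",
        # fragmented output, as stated in the module docstring
        "-movflags", "empty_moov+default_base_moof+frag_keyframe+faststart",
        "-f", "mp4", "pipe:1",                          # fMP4 out on stdout
    ]


print(" ".join(fmp4_ffmpeg_argv(1024, 576, 24)))
```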
Classes¶
fastvideo.entrypoints.streaming.stream.FragmentedMP4Chunk
dataclass
¶A single fMP4 byte chunk emitted by :class:FragmentedMP4Encoder.
kind identifies whether the chunk is the init segment (must be
fed into the client's SourceBuffer first) or a media fragment.
fastvideo.entrypoints.streaming.stream.FragmentedMP4Encoder
¶FragmentedMP4Encoder(*, width: int, height: int, fps: int, segment_idx: int, stream_id: str | None = None, ffmpeg_path: str = 'ffmpeg', preset: str = 'ultrafast', pixel_format_out: str = 'yuv420p', extra_args: list[str] | None = None)
Stream RGB frames in, fMP4 chunks out.
One encoder covers one segment. The server creates a new encoder at each ltx2_segment_start boundary so each segment becomes one media fragment the client can append independently.
Example::

    encoder = FragmentedMP4Encoder(width=1024, height=576, fps=24,
                                   segment_idx=0)
    async with encoder:
        async for chunk in encoder.encode(frames):
            await websocket.send_bytes(chunk.data)
Source code in fastvideo/entrypoints/streaming/stream.py
fastvideo.entrypoints.streaming.stream.FragmentedMP4Encoder.encode
async
¶encode(frames: list[ndarray] | AsyncIterator[ndarray]) -> AsyncIterator[FragmentedMP4Chunk]
Feed frames into ffmpeg and yield fMP4 chunks as they appear.
Source code in fastvideo/entrypoints/streaming/stream.py
fastvideo.entrypoints.streaming_generator
¶
Classes¶
fastvideo.entrypoints.streaming_generator.StreamingVideoGenerator
¶
StreamingVideoGenerator(fastvideo_args: FastVideoArgs, executor_class: type[Executor], log_stats: bool, use_queue_mode: bool = True)
Bases: VideoGenerator
This class extends VideoGenerator with streaming capabilities, allowing incremental video generation with step-by-step control.
Source code in fastvideo/entrypoints/streaming_generator.py
Functions¶
fastvideo.entrypoints.video_generator
¶
VideoGenerator module for FastVideo.
This module provides a consolidated interface for generating videos using diffusion models.
Classes¶
fastvideo.entrypoints.video_generator.VideoGenerator
¶
VideoGenerator(fastvideo_args: FastVideoArgs, executor_class: type[Executor], log_stats: bool, *, log_queue=None)
A unified class for generating videos using diffusion models.
This class provides a simple interface for video generation with rich customization options, similar to popular frameworks like HF Diffusers.
Initialize the video generator.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| fastvideo_args | FastVideoArgs | The inference arguments | required |
| executor_class | type[Executor] | The executor class to use for inference | required |
| log_stats | bool | Whether to log statistics | required |
| log_queue | | Optional multiprocessing.Queue to forward worker logs to | None |
Source code in fastvideo/entrypoints/video_generator.py
Functions¶
fastvideo.entrypoints.video_generator.VideoGenerator.from_fastvideo_args
classmethod
¶from_fastvideo_args(fastvideo_args: FastVideoArgs, *, log_queue=None) -> VideoGenerator
Create a video generator with the specified arguments.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| fastvideo_args | FastVideoArgs | The inference arguments | required |
| log_queue | | Optional multiprocessing.Queue to forward worker logs to | None |

Returns:
| Type | Description |
|---|---|
| VideoGenerator | The created video generator |
Source code in fastvideo/entrypoints/video_generator.py
fastvideo.entrypoints.video_generator.VideoGenerator.from_pretrained
classmethod
¶from_pretrained(model_path: str | GeneratorConfig | Mapping[str, Any] | None = None, **kwargs) -> VideoGenerator
Create a video generator from a pretrained model.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| model_path | str \| GeneratorConfig \| Mapping[str, Any] \| None | Path or identifier for the pretrained model | None |
| pipeline_config | | Pipeline config to use for inference | required |
| **kwargs | | Additional arguments to customize model loading; set any FastVideoArgs or PipelineConfig attributes here. | {} |

Returns:
| Type | Description |
|---|---|
| VideoGenerator | The created video generator |
Priority level: Default pipeline config < User's pipeline config < User's kwargs
Stable convenience kwargs remain supported here for common engine and offload settings. Advanced model- or pipeline-specific options should move to VideoGenerator.from_config(...).
Source code in fastvideo/entrypoints/video_generator.py
fastvideo.entrypoints.video_generator.VideoGenerator.generate
¶generate(request: GenerationRequest | Mapping[str, Any], *, log_queue=None) -> GenerationResult | list[GenerationResult]
Generate video or image outputs from a typed inference request.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| request | GenerationRequest \| Mapping[str, Any] | A GenerationRequest (or a mapping of its fields) describing the generation | required |
| log_queue | | Optional multiprocessing.Queue to forward worker logs to during this request. | None |

Returns:
| Type | Description |
|---|---|
| GenerationResult \| list[GenerationResult] | A GenerationResult for a single prompt, or a list of results for batched prompts. |
Source code in fastvideo/entrypoints/video_generator.py
fastvideo.entrypoints.video_generator.VideoGenerator.generate_video
¶generate_video(prompt: str | None = None, sampling_param: SamplingParam | None = None, mouse_cond: Tensor | None = None, keyboard_cond: Tensor | None = None, grid_sizes: tuple[int, int, int] | list[int] | Tensor | None = None, **kwargs) -> dict[str, Any] | list[dict[str, Any]]
Generate a video based on the given prompt.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| prompt | str \| None | The prompt to use for generation (optional if prompt_txt is provided) | None |
| negative_prompt | | The negative prompt to use (overrides the one in fastvideo_args) | required |
| output_path | | Path to save the video (overrides the one in fastvideo_args) | required |
| prompt_path | | Path to prompt file | required |
| save_video | | Whether to save the video to disk | required |
| return_frames | | Whether to include raw frames in the result dict | required |
| num_inference_steps | | Number of denoising steps (overrides fastvideo_args) | required |
| guidance_scale | | Classifier-free guidance scale (overrides fastvideo_args) | required |
| num_frames | | Number of frames to generate (overrides fastvideo_args) | required |
| height | | Height of generated video (overrides fastvideo_args) | required |
| width | | Width of generated video (overrides fastvideo_args) | required |
| fps | | Frames per second for saved video (overrides fastvideo_args) | required |
| seed | | Random seed for generation (overrides fastvideo_args) | required |
| callback | | Callback function called after each step | required |
| callback_steps | | Number of steps between each callback | required |

Returns:
| Type | Description |
|---|---|
| dict[str, Any] \| list[dict[str, Any]] | A metadata dictionary for single-prompt generation, or a list of metadata dictionaries for prompt-file batch generation. |
Source code in fastvideo/entrypoints/video_generator.py
fastvideo.entrypoints.video_generator.VideoGenerator.shutdown
¶
fastvideo.entrypoints.video_generator.VideoGenerator.unmerge_lora_weights
¶Use unmerged weights for inference to produce videos that align with validation videos generated during training.