Generate cinematic video, images, and animated characters on your own Tenstorrent hardware. No cloud. No rate limits. Full creative control.
A focused three-phase loop for building a local AI media library — from a blank prompt to a polished collection you can enjoy on any screen.
Write a prompt — or let the three-tier generator do it for you. Submit to Wan2.2, Mochi, SkyReels, FLUX, or AnimateDiff. Server management, queuing, and live progress are built in.
Every generation lands in a persistent gallery with full metadata. Hover to preview. Star favorites. Export or share. A growing archive of everything your hardware has ever made.
Switch to TT-TV — a lean-back cinematic mode that plays your library as a continuous, looping experience. Your models. Your prompts. Your channel.
A full-screen cinematic viewer for your generated library. No algorithm decides what you see — just your own creations, playing on loop.
Switch between text-to-video, image generation, image-to-video, and character animation from a single interface. Each model has a dedicated server managed by the app.
14B-parameter cinematic video model. The flagship experience for long, detailed prompts.
High-motion, expressive video generation. Great for character and action-heavy prompts.
Fast diffusion transformer at 540p. Animates still images with physics-respecting motion.
State-of-the-art text-to-image. Rich detail, accurate text rendering, photorealistic output.
Video-to-video character animation. Give any character a motion — or replace a person in a clip.
A built-in prompt engine with no cloud dependency. Generates cinematic, specific, and evocative prompts for every model type — instantly.
Samples from deep, curated word banks — subjects, settings, lighting, camera moves, mood, style — and assembles them into a structured slug. Fast, deterministic, always available. No model required.
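As a sketch of how that tier can work (the bank contents, names, and function below are illustrative assumptions, not the app's actual data), sampling one entry per bank and joining the picks in a fixed order yields a structured slug; seeding the RNG makes it deterministic:

```python
import random

# Illustrative word banks -- the real app's banks are deeper and curated;
# these entries and category names are assumptions for the sketch.
WORD_BANKS = {
    "subject":  ["a lighthouse keeper", "a clockwork fox"],
    "setting":  ["on a storm-lashed cliff", "in a flooded cathedral"],
    "lighting": ["lit by flickering sodium lamps", "under cold moonlight"],
    "camera":   ["slow dolly-in", "orbiting crane shot"],
    "mood":     ["melancholic", "quietly triumphant"],
    "style":    ["shot on 35mm film", "soft painterly palette"],
}
BANK_ORDER = ["subject", "setting", "lighting", "camera", "mood", "style"]

def build_slug(seed: int) -> str:
    """One pick per bank, joined in a fixed order; a fixed seed repeats exactly."""
    rng = random.Random(seed)
    return ", ".join(rng.choice(WORD_BANKS[bank]) for bank in BANK_ORDER)

print(build_slug(42))  # same seed, same slug, every run
```

No model and no network involved, which is what makes a tier like this always available.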
Trains on a seed corpus of tagged prompts and generates novel recombinations at the sentence level. Produces unexpected register collisions — a 1970s Betamax in an Escher staircase, a Muppet at a Manhattan diner at 2am.
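A minimal sketch of sentence-level recombination (the seed prompts below are invented stand-ins for the tagged corpus): split each seed prompt into clauses, pool them, and draw a handful into one new line; the register collisions fall out of sampling clauses independently of their source prompt.

```python
import random

# Invented seed prompts standing in for the app's tagged corpus.
SEED_PROMPTS = [
    "a 1970s Betamax player hums softly, dust motes drifting, shot in amber light",
    "an impossible Escher staircase loops forever, figures climbing in silhouette",
    "a Muppet nurses coffee at a Manhattan diner at 2am, neon bleeding through rain",
]

def recombine(rng: random.Random, clauses: int = 3) -> str:
    # Pool every comma-separated clause from every seed prompt...
    pool = [c.strip() for p in SEED_PROMPTS for c in p.split(",")]
    # ...then sample a few distinct clauses into one new sentence.
    return ", ".join(rng.sample(pool, clauses))

print(recombine(random.Random(3)))
```

A real implementation would weight samples by tag and filter incoherent draws; the collision effect itself needs nothing more than this.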
Qwen3-0.6B on CPU (port 8001) takes the raw slug and makes it flow naturally — without re-selecting or hallucinating new elements. Adds rhythm and precision without creative drift.
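The wire format of that polish step isn't documented here; assuming the port-8001 server exposes an OpenAI-compatible chat endpoint (an assumption, as are the URL path and model name below), the request is just the raw slug plus a system instruction that forbids inventing new elements:

```python
import json

# Assumed endpoint shape: OpenAI-compatible chat completions on the local
# polish server. The path and model identifier are guesses for illustration.
POLISH_URL = "http://localhost:8001/v1/chat/completions"

def build_polish_request(slug: str) -> dict:
    """Ask the model to smooth the slug's flow without adding elements."""
    return {
        "model": "qwen3-0.6b",
        "messages": [
            {"role": "system",
             "content": ("Rewrite this prompt so it flows naturally. "
                         "Do not add, remove, or replace any elements.")},
            {"role": "user", "content": slug},
        ],
        "temperature": 0.3,  # low temperature keeps creative drift down
    }

req = build_polish_request("a clockwork fox, flooded cathedral, slow dolly-in")
print(json.dumps(req, indent=2))
```

POST that JSON to POLISH_URL with any HTTP client and use the returned text as the final prompt.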
Drive any character with a reference motion — a wave, a nod, a shaka, a Vulcan salute. Hover any clip to see it play. The reference motion clips were themselves generated with Wan2.2-T2V.
Ubuntu 24.04 with Tenstorrent hardware? Grab the .deb.
Mac or any other Linux machine? Clone and run directly.
Or download directly from the Releases page.
Install gh with sudo apt install gh if needed.
Installs the app and launchers. Docker may also be installed via recommended packages; otherwise, install and start Docker manually before first use.
Generated videos and images are saved to ~/.local/share/tt-video-gen/ and automatically linked into ~/Videos/tt-local-generator/ for easy browsing.
tt-local-gen-download-model --repo Wan-AI/Wan2.2-T2V-A14B-Diffusers
Or point at a remote server: ./tt-gen --server http://your-tt-machine:8000