AI Video Model
Seedance 2.0 for Reference-Led Video Workflows and Audio-Capable Motion
Use Seedance 2.0 when reference material is part of the job: a still to animate, a clip to remix, or a direction that needs more input control than a simple prompt run.
Text to video, image to video, and video to video
Starter credits for new accounts
Daily check-in rewards
Audio-capable workflows supported

Seedance 2.0 supports text to video, image to video, and video to video in AuraTuner's main video studio. The current setup covers 4- to 15-second workflows, multimodal reference handling, and audio-capable generation paths.
Use It When
You are animating a still, remixing a reference clip, testing continuity, or building campaign mockups where source material should influence the output.
Do Not Start Here If
You only have a simple prompt and want the lowest-cost way to test basic motion. Start with Kling, then move to Seedance 2.0 once reference control becomes the real question.
First Test
Use a source image or clip that represents the hardest part of the job. The first test should prove reference following, not just prompt interpretation.