Seedance 2.0 + Video Reference = Fixes Continuity Issues
In this tutorial we will see how to fix continuity issues in Seedance 2.0 when creating longer scenes by combining multiple clips, using video references inside Higgsfield. A video reference gives Seedance a lot of context about the preceding scene, so it can generate the next one without continuity issues. We will also take a brief look at Higgsfield Canvas. Here’s the video:
Video Summary
This video demonstrates how Seedance 2.0, accessed via the Higgsfield platform, uses a new “Video Reference” feature to solve one of the biggest challenges in AI filmmaking: continuity issues [00:00:47].
Key Highlights:
- The Problem: AI video models often struggle to maintain consistency in lighting, characters, and sound across different clips, making it difficult to create longer, cohesive scenes [00:01:00].
- The Solution: By using a previously generated clip as a video reference for the next generation, Seedance 2.0 can understand the context, color grading, and style of the preceding scene [00:03:21].
- Workflow:
  - The creator first generates an initial 15-second clip using a text prompt and image reference [00:01:21].
  - For the next scene, the first video is uploaded as a reference, ensuring the AI “knows” it is a continuation of the same story [00:03:52].
- Post-Production: While the transitions are much more consistent, the creator notes that the AI often starts the new clip with the ending frame of the reference. He recommends trimming these overlapping frames in editing software like Premiere Pro for a seamless result [00:07:44].
- Higgsfield Canvas: The video mentions a new node-based structure called “Canvas” designed for more efficient filmmaking, though the creator notes some current limitations with linking video references within that system [00:08:14].
Overall, the video shows that video referencing shifts the user’s focus from fighting technical inconsistencies to refining the creative output and quality of the final result [00:06:39].
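The creator does the frame trimming in Premiere Pro, but the same cut can be scripted. Below is a minimal ffmpeg sketch with hypothetical filenames (`clip2.mp4` stands in for the second generated clip) and an assumed overlap of roughly half a second; the placeholder-clip step just makes the example self-contained:

```shell
# Generate a 3-second placeholder clip so the example can be run as-is
# (in practice, clip2.mp4 would be the second clip exported from Higgsfield).
ffmpeg -y -f lavfi -i testsrc=duration=3:size=320x240:rate=24 clip2.mp4

# Trim the first 0.5 s, i.e. the frames duplicated from the reference clip's
# ending. Placing -ss after -i decodes and discards frames, so the cut is
# frame-accurate; stream copy (-c copy) would only cut on keyframes.
ffmpeg -y -i clip2.mp4 -ss 0.5 -c:v libx264 trimmed_clip2.mp4
```

The exact overlap varies per generation, so it is worth scrubbing the clip first to find where the duplicated frames end before picking the `-ss` value.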

