It looks like 2025 is finally going to be the year when AI enters the world of video. We already made a post a couple of weeks ago about two promising models (Wan 2.1 and SkyReels), but Tencent decided to treat us to another best-in-class model for image to video. In this guide, we’ll go over how to use the native ComfyUI setup for Hunyuan Image2Video with RIFLEx. This addition allows us to generate videos that are longer than the usual 129 frames (5 seconds) the model was trained on.
Results comparison with and without RIFLEx
Before we get into the details of the workflow, these are the results we got when generating a 193-frame video, with and without RIFLEx. (720p version)
As you can see, RIFLEx helps prevent the video from going into a loop when going over the recommended 129-frame limit. It also makes it less likely that the model generates artefacts in longer videos.
Quick start
If you are looking for a quick start with no installation, you can deploy the template we used in this tutorial here. We recommend using an H100 for optimal performance, as this is a heavy workflow. After two or three minutes, you should be able to access the workflow via the ComfyUI interface, or via the API.
To open the ComfyUI interface, click on the ComfyUI link. Please note that the initial loading time can take a few minutes. Once the interface is ready, drop in this workflow to get started.
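If you prefer to drive the deployment from code, the snippet below is a minimal sketch of queuing a job through ComfyUI’s standard /prompt endpoint. The host address and the workflow file name (a workflow exported with “Save (API Format)”) are placeholders for your own setup.

```python
import json
import urllib.request

COMFY_URL = "http://127.0.0.1:8188"  # placeholder: point this at your ComfyUI instance

# Workflow exported from ComfyUI with "Save (API Format)"; the file name is an example.
with open("hunyuan_i2v_riflex_api.json") as f:
    workflow = json.load(f)

# Queue the job; the response contains the prompt_id of the queued generation.
payload = json.dumps({"prompt": workflow}).encode("utf-8")
request = urllib.request.Request(
    f"{COMFY_URL}/prompt",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    print(json.loads(response.read()))
```

Once the job finishes, the video lands in ComfyUI’s output folder, and you can also retrieve it through the /history and /view endpoints.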
Key workflow parameters
In this section, we’ll go over the important parameters to get the most out of this workflow.
Load Image
This is the image the workflow will use to generate the video’s first frame.
Text Encode Hunyuan Video
This is your usual text prompt. Here, we were using the Flat Color LoRA from Civit AI and had a prompt tailored to work with that LoRA. In general, you can stick to a short description of what you are targeting, followed by a few aesthetic tags.
Hunyuan Image To Video
This node lets you set the width and height of the video the workflow will generate. The length is the number of frames. With RIFLEx, you can go up to 257 frames, whereas without it, it is usually recommended to stay under 129.
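If you are queuing jobs through the API, these parameters live in the node’s inputs in the exported workflow JSON. The sketch below assumes the node was exported with id "54" and the input names width, height and length; check your own export for the real values.

```python
# Continuing with the workflow dict loaded in the quick-start snippet.
# Node id "54" is an assumption: find the Hunyuan Image To Video node in your export.
i2v_inputs = workflow["54"]["inputs"]
i2v_inputs["width"] = 720
i2v_inputs["height"] = 720
i2v_inputs["length"] = 193  # up to 257 frames with RIFLEx; stay under 129 without it
```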
Lora Loader Model Only
We’ve added the option to add LoRAs to the workflow. You can select LoRAs from the ComfyUI loras folder here (for people using the template, here is a guide on how to add new LoRAs). The strength is a measure of how much the LoRA will impact the generation. At 0 the LoRA will be ignored, and at 1 it will be at its maximum strength.
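The same values can be set in the exported workflow JSON. In the sketch below, the node id "61" and the LoRA file name are placeholders, and the input names are the ones this node usually exposes; double-check them against your export.

```python
# Continuing with the same workflow dict. Node id "61" and the file name are placeholders.
lora_inputs = workflow["61"]["inputs"]
lora_inputs["lora_name"] = "flat_color_style.safetensors"  # a file from ComfyUI/models/loras
lora_inputs["strength_model"] = 0.8  # 0 ignores the LoRA, 1 applies it at full strength
```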
Random Noise
Without going into the technical details, the seed is the random part of GenAI. Set it to randomize if you want the workflow to produce a completely new output every time. You can also fix it if you find a seed that works well with your other parameters; this will make the results more consistent.
One thing that’s good to keep in mind is that the seed has a big impact on the quality of the current generation of video models. If you are not happy with the results, changing the seed might solve the problem.
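When running the workflow through the API, the seed is just another node input, so you can randomize or pin it yourself before queuing the job. The node id "25" below is a placeholder, and the noise_seed input name is the one the standard Random Noise node uses; verify both in your export.

```python
import random

# Continuing with the same workflow dict. Node id "25" is a placeholder.
noise_inputs = workflow["25"]["inputs"]

# Either draw a fresh seed for every run...
noise_inputs["noise_seed"] = random.randint(0, 2**32 - 1)
# ...or pin a seed you liked to keep results consistent:
# noise_inputs["noise_seed"] = 123456789
```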
Video Combine
This node combines all the generated frames into a video. The key parameters are the frame rate and the video format. Sometimes, Hunyuan generates very slow or very fast videos. Adjusting the frame rate is a good way to control the speed without having to regenerate from scratch.
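The frame rate and format can be changed the same way from the API. In the sketch below, the node id "72" and the format string are placeholders; the input names follow the usual Video Combine node, so check them against your own export.

```python
# Continuing with the same workflow dict. Node id "72" is a placeholder.
combine_inputs = workflow["72"]["inputs"]
combine_inputs["frame_rate"] = 24             # raise or lower this to speed up or slow down the video
combine_inputs["format"] = "video/h264-mp4"   # assumed format string; use one your node lists
```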
And that’s all you need to get great results with Hunyuan Video. If you want updates on new templates and guides, you can follow us on Twitter/X or join our Discord.
You can also add your own workflow to ViewComfy cloud and access it via the ComfyUI interface or an API.