I wasn’t really impressed with the first SkyReels release, but this new text-to-video version is incredible. It works perfectly as a drop-in replacement for Wan 2.1, with no workflow changes needed when using kijai’s nodes.
I was worried setup would be as complicated as the original version, but it turned out to be super simple. I just swapped in the model file (the 720p version that kijai compressed down to 15 GB) in my existing t2v workflow and hit generate. No other modifications needed at all.
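For anyone unsure what the swap actually involves: it’s just dropping the new checkpoint into the folder your existing Wan 2.1 loader already reads from. A minimal sketch, assuming a default ComfyUI layout; the paths and the filename here are my assumptions, so check kijai’s actual release for the real names:

```shell
# All paths and the filename below are assumptions -- adjust to your install.
COMFYUI="${COMFYUI:-$HOME/ComfyUI}"           # root of your ComfyUI checkout
MODEL_DIR="$COMFYUI/models/diffusion_models"  # folder the WanVideo loader scans
MODEL_FILE="skyreels_t2v_720p.safetensors"    # hypothetical filename

mkdir -p "$MODEL_DIR"
# Move the downloaded checkpoint into place, if it's sitting in the cwd:
if [ -f "$MODEL_FILE" ]; then
  mv "$MODEL_FILE" "$MODEL_DIR/"
fi
echo "Models available to the loader:"
ls "$MODEL_DIR"
```

After that, reopen the existing Wan 2.1 t2v workflow and select the new file in the model loader node; no graph changes should be needed.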
The quality improvement is really noticeable compared to what I was getting before. The character generation especially looks way better - much more attractive and realistic results. I wish I could post examples but they’re probably too spicy for this community.
Anyone else testing this model? What are your thoughts on the quality compared to other t2v options?
Switched three days ago and wow, the rendering stability is night and day. Used to get these nasty artifacts where textures would completely fall apart mid-sequence, but this version keeps everything intact through the whole clip. Camera movement is way better too - no more jittery nonsense when panning or zooming. Runs buttery smooth now. Installation was dead simple, just dropped it in. Processing times are about the same, maybe a hair faster on complex stuff. Best part? Color grading actually stays consistent between frames, which cuts down my post work big time.
Been using it for about a week and honestly can’t go back to the old version. The face consistency is incredible - characters actually look like the same person throughout the whole clip instead of morphing into different people every few frames. It also handles lighting changes way more smoothly than 2.1 did.
Just migrated from Wan 2.1 last weekend - night and day difference. Been batch testing different aspect ratios and this thing crushes vertical content compared to everything else I’ve tried.
What surprised me was how well it handles abstract prompts. Threw some really weird conceptual stuff at it and actually got results that made sense. Temporal coherence is solid too - no more objects randomly teleporting around frames.
Memory management is way cleaner. I can run longer sequences without hitting those VRAM walls that used to kill my workflows. The 15 GB compressed version runs smoothly on my setup with no noticeable quality loss for most stuff.
Still testing edge cases but it’s been rock solid. Definitely staying in my main pipeline.
Downloaded it yesterday after seeing those Twitter samples. Motion consistency is way better than before - noticeably less flickering, and it stays coherent even on longer clips. What really surprised me was how it handles complex prompts without falling apart. Running it on a 4090, generation times aren’t bad, though I had to tweak VRAM allocation a bit. The compressed version works fine, but the full model’s definitely sharper if you’ve got the storage. Compared to other T2V models I’ve tried lately, this one hits a sweet spot between quality and actually being usable. And yeah, the workflow integration is huge - I’m so tired of rebuilding pipelines every time something new drops.