Processing large datasets often gets interrupted. I want workflows that save progress every 15 minutes and resume from the last checkpoint. Can setTimeout timers trigger periodic saves while maintaining the processing context? How are others implementing recovery points in long-running analyses?
Use scheduled save blocks paired with snapshot triggers. Our ML team processes billion-row datasets this way: automatic saves every X minutes and after every 10k processed records. You can resume from any point via the workflow history.
Implementing this requires combining time-based and event-based triggers. I create two: 1) a 15-minute interval timer and 2) a row counter; whichever fires first initiates the save. It's critical to test the resume path, since restoring the execution context sometimes needs custom serialization.