My findings after reviewing over 2000 n8n automation workflows

I decided to dive deep into understanding how people build n8n workflows, so I grabbed 2,050 public workflows and ran them through some analysis tools. I used AI to help me go through all the JSON files and create a comprehensive breakdown of what I found.
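For anyone who wants to run a similar pass over their own exports, the core of the analysis can be sketched in a few lines of Python. This is a minimal sketch, assuming each file is a standard n8n export with a "nodes" list, and that the Error Trigger's type string is `n8n-nodes-base.errorTrigger` (n8n's usual naming convention; verify against your own files):

```python
import json
from collections import Counter
from pathlib import Path

def summarize(workflows):
    """Tally node types and error-handling coverage across parsed n8n workflow dicts."""
    node_counts = Counter()
    with_error_trigger = 0
    for wf in workflows:
        types = [node.get("type", "") for node in wf.get("nodes", [])]
        node_counts.update(types)
        # Assumed type string for n8n's Error Trigger node
        if "n8n-nodes-base.errorTrigger" in types:
            with_error_trigger += 1
    return {"total": len(workflows),
            "with_error_trigger": with_error_trigger,
            "node_counts": node_counts}

def load_workflows(folder):
    """Parse every exported workflow JSON file in a folder."""
    return [json.loads(p.read_text()) for p in Path(folder).glob("*.json")]
```

From the summary dict you can derive most of the headline numbers below (totals, averages, error-handling percentage).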

The results were pretty eye-opening. Here are the main takeaways from analyzing these workflows:

What I Discovered

Major Problem Areas

Error Handling Crisis: Almost all workflows (97%) have no error handling at all. When something breaks, the workflow fails silently and you just won’t know about it.

Security Issues: I found 320 public webhooks with no authentication and 152 workflows making calls over plain HTTP instead of HTTPS.

Efficiency Problems: About 7% of workflows have unused nodes just sitting there, and many make API calls inside loops, which kills performance.
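The webhook and HTTPS checks are mechanical enough to script. A sketch, assuming n8n's usual node type names (`n8n-nodes-base.webhook`, `n8n-nodes-base.httpRequest`) and that a webhook's `authentication` parameter defaults to "none" when absent — both worth double-checking against a real export:

```python
def audit_security(workflow):
    """Flag unauthenticated webhooks and plain-HTTP requests in one workflow dict."""
    issues = []
    for node in workflow.get("nodes", []):
        params = node.get("parameters", {})
        name = node.get("name", "?")
        # Assumption: webhooks with no "authentication" parameter are wide open
        if node.get("type") == "n8n-nodes-base.webhook":
            if params.get("authentication", "none") == "none":
                issues.append(f"webhook '{name}' has no authentication")
        # HTTP Request nodes with a plain-HTTP URL send data unencrypted
        if node.get("type") == "n8n-nodes-base.httpRequest":
            if params.get("url", "").startswith("http://"):
                issues.append(f"node '{name}' calls {params['url']} over plain HTTP")
    return issues
```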

Key Numbers

  • Total workflows analyzed: 2,050
  • Total nodes found: 29,363
  • Average nodes per workflow: 14.3
  • AI/ML workflows: 34.7% of all workflows
  • Workflows with proper error handling: Only 3%

Common Patterns I Noticed

Most popular node combinations:

  • Set → HTTP Request (used 379 times)
  • HTTP Request → HTTP Request (350 times)
  • If → Set (267 times)

The Sticky Note node is used most frequently of all (7,024 appearances), which shows people are at least trying to document their workflows.
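The pair counts above fall out of each workflow's "connections" map, which keys source node names to their downstream targets. A minimal counting sketch — the nesting (`"main"` holding one list per output slot) matches what I saw in exported files, but yours may differ:

```python
from collections import Counter

def count_node_pairs(workflow):
    """Count (source type -> target type) edges in one n8n workflow dict."""
    type_by_name = {n["name"]: n["type"] for n in workflow.get("nodes", [])}
    pairs = Counter()
    for source_name, outputs in workflow.get("connections", {}).items():
        # "main" holds one list per output slot; each entry lists target nodes
        for branch in outputs.get("main", []):
            for target in branch or []:
                src = type_by_name.get(source_name, source_name)
                dst = type_by_name.get(target["node"], target["node"])
                pairs[(src, dst)] += 1
    return pairs
```

Summing these Counters across all 2,050 workflows gives the combination rankings listed above.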

Recommendations for Better Workflows

  1. Add Error Triggers: Every workflow should have one connected to notifications
  2. Secure Your Webhooks: Always use authentication
  3. Use HTTPS Only: Never send data over plain HTTP
  4. Clean Up: Remove unused nodes and optimize your flows
  5. Document Everything: Use those sticky notes to explain what your workflow does
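Point 1 is the highest-leverage fix. The sketch below shows the rough shape of a catch-all error workflow — an Error Trigger wired to a Slack notification. The node type strings follow n8n's naming convention, and the Slack parameters are illustrative placeholders, not a drop-in config:

```python
import json

# Hypothetical minimal error-handler workflow: set it as the error workflow
# for your other workflows so failures land in a channel instead of vanishing.
error_handler = {
    "name": "Global error handler",
    "nodes": [
        {"name": "Error Trigger", "type": "n8n-nodes-base.errorTrigger",
         "typeVersion": 1, "position": [250, 300], "parameters": {}},
        {"name": "Notify Slack", "type": "n8n-nodes-base.slack",
         "typeVersion": 1, "position": [450, 300],
         # Placeholder parameters -- check the Slack node's real schema
         "parameters": {"channel": "#alerts", "text": "A workflow failed"}},
    ],
    "connections": {
        "Error Trigger": {"main": [[{"node": "Notify Slack", "type": "main", "index": 0}]]}
    },
}

print(json.dumps(error_handler, indent=2))
```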

AI Workflow Insights

With over a third of all workflows being AI-related, some interesting patterns emerged:

  • 346 workflows use agent-based approaches
  • 267 use multiple AI models
  • 201 have memory systems built in
  • Surprisingly, none use vector databases for RAG patterns

Security Checklist

Based on the vulnerabilities I found, here’s what every workflow should have:

  • No hardcoded API keys
  • Authentication on all webhooks
  • HTTPS for all external calls
  • Proper credential management
  • Error messages that don’t leak sensitive info
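The "no hardcoded API keys" item is easy to spot-check with regex heuristics over the raw workflow JSON. These patterns are illustrative — they'll produce both misses and false positives, so treat hits as prompts to investigate rather than verdicts:

```python
import re

SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),   # OpenAI-style secret keys
    re.compile(r"AKIA[0-9A-Z]{16}"),      # AWS access key IDs
    re.compile(r'(?i)"(?:api[_-]?key|token)"\s*:\s*"[^"]{16,}"'),  # generic JSON fields
]

def find_hardcoded_secrets(workflow_json_text):
    """Return every substring of the raw JSON that matches a secret heuristic."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(workflow_json_text))
    return hits
```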

Performance Tips

  • Batch your API calls instead of looping
  • Remove unused nodes
  • Use parallel processing where possible
  • Keep workflow execution time under 10 seconds where possible
  • Cache API responses when you can
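The first tip — batching instead of looping — plus basic rate limiting can be sketched like this. `fetch_batch` stands in for a hypothetical endpoint that accepts multiple IDs per request:

```python
import time

def fetch_in_batches(ids, fetch_batch, batch_size=50, delay=0.5):
    """Turn N single-item API calls into ceil(N / batch_size) batched calls,
    pausing between batches as crude rate limiting."""
    results = []
    for start in range(0, len(ids), batch_size):
        chunk = ids[start:start + batch_size]
        results.extend(fetch_batch(chunk))  # one request for the whole chunk
        if start + batch_size < len(ids):
            time.sleep(delay)  # don't hammer the third-party API
    return results
```

The same idea translates to n8n directly: a Split In Batches (Loop Over Items) node plus a short Wait between iterations does the throttling without custom code.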

Quick Wins

If you want to improve your workflows right now:

  1. Add an Error Trigger node to catch failures
  2. Enable authentication on any public webhooks
  3. Switch HTTP calls to HTTPS
  4. Remove any unused nodes
  5. Add some documentation with Sticky Notes

This analysis really opened my eyes to how much room there is for improvement in workflow design. Most people are building functional workflows but missing crucial elements like error handling and security.

Has anyone else done similar analysis on their workflows? Would love to hear what patterns you’ve noticed in your own automation setups.

Great analysis! The node usage stats really show where people are in their automation journey. Set → HTTP Request being the top combo tells me most folks are still doing basic data prep before API calls - they haven’t found the more advanced nodes yet. I see this all the time in enterprise teams who stick with what works instead of exploring better options.

The AI workflow numbers actually worry me from a cost angle. 700+ AI workflows can get crazy expensive if they’re not optimized, especially when people chain models together when they don’t need to.

Your security findings missed something big though - credential reuse is probably worse than hardcoded keys. I’ve seen teams create one ‘master’ credential for everything, which is a disaster waiting to happen.

The missing vector databases for RAG makes sense. Most n8n users probably just use simple text matching or external services instead of building it natively. I’d love to know how many of those 2,050 workflows are actually running in production vs just abandoned experiments.

This research matches what I’ve seen managing hundreds of n8n instances across different teams. The error handling stat doesn’t surprise me - I’ve watched countless workflows fail silently because nobody added basic monitoring.

You missed one pattern though: people overuse webhook triggers when scheduled ones work better. They expose unnecessary endpoints because webhooks feel more ‘real-time’ even when it’s not needed.

The AI workflow percentage is interesting but I bet most are just experiments, not production. The workflows that actually run consistently in business are boring stuff - data syncing, notifications, basic API integrations.

For performance, the biggest problem isn’t API calls in loops. It’s workflows without proper rate limiting. Teams deploy something that works in testing then completely overwhelm third-party APIs with real data volumes. Simple delay nodes prevent tons of headaches.

Wow, that’s a lot of data! The 97% no error handling thing is scary but not surprising tbh. I’ve been building workflows for 2 years and only started adding error triggers after getting burned hard by silent failures. One thing I noticed - your sticky notes stat shows people are documenting, but in my experience those notes are usually just “TODO: fix this later” lol. Also curious about the no vector db usage for RAG - maybe people just don’t know how to implement them in n8n? The webhook security issue is huge. I’ve seen so many demos where people just paste raw webhook URLs in Slack channels.