Hey everyone, I’m looking for some advice on our Jira setup. We’ve got a two-node cluster running, and I’m wondering if there’s a clever way to do a full reindex without taking the whole system offline.
We usually do this after version updates, mostly for security reasons. The tricky part is that we can’t easily take a node out of the load balancer ourselves, because another team handles that, so the whole process is a real pain and slows everything down.
I know Jira can do background reindexing while it’s up and running, but I’m worried about potential hiccups. Has anyone figured out a trick to keep Jira available during a full reindex without messing with the load balancer? Any tips would be super helpful!
Hey RunningTiger, I’ve been in your shoes. Background reindexing is actually pretty solid in my experience. We run a 3-node cluster and do it regularly without issues. Just keep an eye on system resources and maybe schedule it during off-peak hours. If you’re worried, you could always test on a staging environment first. Good luck!
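One small tip on top of that: you don’t have to kick the reindex off from the admin UI every time. Jira Server/Data Center exposes a reindex REST resource you can script against. Here’s a minimal sketch — the base URL and credentials are placeholders, and you should double-check the endpoint and the `type` values against the REST docs for your Jira version before relying on it:

```python
import base64
import urllib.request

JIRA_BASE = "https://jira.example.com"  # placeholder -- your own instance


def reindex_request(base_url: str, user: str, password: str,
                    reindex_type: str = "BACKGROUND") -> urllib.request.Request:
    """Build (but don't send) a POST that starts a reindex of the given type.

    The reindex resource takes a `type` query parameter (e.g. FOREGROUND or
    BACKGROUND) -- verify the accepted values for your version.
    """
    url = f"{base_url}/rest/api/2/reindex?type={reindex_type}"
    req = urllib.request.Request(url, method="POST")
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    return req


# Placeholder credentials; inspect the URL before actually firing it.
req = reindex_request(JIRA_BASE, "admin", "secret")
print(req.full_url)
# Sending it is one line once you trust the request:
#   urllib.request.urlopen(req)
```

Building the request separately from sending it makes it easy to dry-run the script and eyeball the URL before anything touches production.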
I’ve dealt with this exact situation in my previous role. Here’s what worked for us: we leveraged Jira’s built-in background reindexing feature, but with a twist. We scheduled it during our lowest traffic period and temporarily beefed up our server resources to handle the extra load.
To mitigate risks, we first ran a full backup, then started the reindex on one node. We closely monitored performance and user experience. If any issues cropped up, we were ready to pause and reassess.
One key thing we learned: clear communication with users about potential slowdowns was crucial. We also found that breaking the reindex into smaller chunks over a few days, rather than one massive operation, helped maintain stability.
It’s not a perfect solution, but it allowed us to keep Jira running while still getting that crucial reindex done. Just be prepared for some potential performance hits during the process.
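For the monitoring part, we eventually scripted the watching instead of sitting on the admin page. The same REST resource reports progress via GET; here’s a rough sketch of a polling loop. The field name `currentProgress` is from memory of the versions we ran, so treat it as an assumption and check your version’s actual response:

```python
import json
import time
import urllib.request


def reindex_done(payload: str) -> bool:
    """Interpret one progress response; True once the reindex reports 100%.

    Assumes the response JSON carries a percentage field named
    currentProgress -- verify against your Jira version.
    """
    info = json.loads(payload)
    return info.get("currentProgress", 0) >= 100


def wait_for_reindex(base_url: str, poll_seconds: int = 30) -> None:
    """Poll GET /rest/api/2/reindex until the running reindex finishes.

    (In practice you'd attach the same auth header as when triggering it.)
    """
    while True:
        with urllib.request.urlopen(f"{base_url}/rest/api/2/reindex") as resp:
            if reindex_done(resp.read().decode()):
                return
        time.sleep(poll_seconds)
```

Keeping the response-parsing in its own small function means you can sanity-check it against a captured JSON payload without hitting the server.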
I’ve encountered this challenge before. While background reindexing is generally reliable, it can be resource-intensive.

One approach we’ve successfully implemented is staggering the reindex across nodes: start with one node, monitor its progress and impact, then move to the next. This method maintains cluster availability while minimizing risk. Additionally, consider optimizing your indices beforehand to reduce reindex time, and if possible, temporarily increase server resources during the process.

Always have a rollback plan ready, just in case. And remember, thorough testing in a staging environment that mirrors production is crucial before attempting this in your live system.