Multiple Jenkins builds updating same Jira ticket fixVersions causes version loss

I’m running into a race condition when multiple Jenkins builds try to update the same Jira ticket at once. Each build should add its own fixVersion to the ticket, but sometimes versions get overwritten or lost entirely.

node {
    stage('Update Ticket') {
        // Look up the ticket(s) to tag with the new version
        def ticketQuery = jiraJqlSearch jql: "project = ${projectKey} AND issuekey = ${ticketId}"
        def ticketList = ticketQuery.data.issues
        for (def j = 0; j < ticketList.size(); j++) {
            // Read the ticket's current fixVersions
            def ticketData = jiraGetIssue idOrKey: ticketList[j].key
            // Create the release version in the project
            def releaseVersion = jiraNewVersion version: [name: "${versionName}",
                                                          project: "${projectKey}"]
            // Append the new version to the list we just read, then write the whole list back
            def currentVersions = ticketData.data.fields.fixVersions << releaseVersion.data
            def updatedTicket = [fields: [fixVersions: currentVersions]]
            def result = jiraEditIssue idOrKey: ticketList[j].key, issue: updatedTicket
        }
    }
}

The problem happens when multiple pipeline jobs run simultaneously. For example, if Build A adds version 5.2, Build B adds version 6.1, and Build C adds version 7.0 to the same ticket, only some of the versions actually stick. The ticket history shows versions being added and then immediately removed.

I think this is happening because each build reads the current fixVersions list, modifies it, then writes it back. But if another build updates the ticket between the read and write operations, the changes get lost.
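
To make that concrete, here’s roughly how two overlapping builds lose an update (hypothetical interleaving, not actual output):

// Build A reads fixVersions: []
// Build B reads fixVersions: []          <- B reads before A has written
// Build A writes fixVersions: [5.2]
// Build B writes fixVersions: [6.1]      <- B's full-list write wipes out A's 5.2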

Has anyone dealt with this kind of concurrent update problem before? I need a way to safely append fixVersions without losing data when multiple builds run at the same time. Any suggestions for making this more robust?

This exact thing burned me last year when parallel builds kept stomping on each other’s version updates. You nailed it - it’s definitely that read-modify-write race condition. Here’s what saved us: a retry mechanism with exponential backoff. When the JIRA update fails or we detect a version conflict, we wait a random interval, re-read the ticket state, and retry. We also check whether our version is already present before updating, which cuts down on pointless API calls. Another trick that helped: batch version updates when you can. Instead of three separate builds each adding one version, queue them up and let one process handle multiple additions in a single atomic operation. The retry approach is much easier to implement if you can’t change your build orchestration.
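
Here’s a rough sketch of that retry loop using the same JIRA Pipeline Steps calls from the question. addFixVersionWithRetry is a made-up name, the retry count and backoff numbers are arbitrary, and it assumes the project version already exists:

// Retry with exponential backoff plus random jitter; re-read the ticket on every attempt
def addFixVersionWithRetry(String issueKey, String versionName, int maxRetries = 5) {
    for (int attempt = 0; attempt < maxRetries; attempt++) {
        // Re-read so we merge against the latest fixVersions, not a stale copy
        def issue = jiraGetIssue idOrKey: issueKey
        def versions = issue.data.fields.fixVersions

        // Skip the write entirely if our version is already on the ticket
        if (versions.any { it.name == versionName }) {
            return
        }

        versions << [name: versionName]
        def result = jiraEditIssue idOrKey: issueKey,
                                   issue: [fields: [fixVersions: versions]],
                                   failOnError: false
        if (result.successful) {
            return
        }

        // Back off before the next attempt: 1, 2, 4, 8... seconds plus up to 5s of jitter
        sleep time: (2 ** attempt) + new Random().nextInt(5), unit: 'SECONDS'
    }
    error "Could not add fixVersion ${versionName} to ${issueKey} after ${maxRetries} attempts"
}

The window between the re-read and the write is smaller but not zero, so this pairs well with the locking or conflict-detection ideas in the other replies.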

We fixed this with a basic lock built on Jenkins build parameters. Set up a shared parameter that works like a semaphore: builds check whether another build is already updating JIRA tickets for the project, and if it’s locked they wait 30-60 seconds and try again. It’s a crude solution, but it beats dealing with API conflicts.
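
If you don’t want to hand-roll the semaphore, the Lockable Resources plugin’s lock step gets you the same serialization - only one build holds the resource at a time and the rest queue. Rough sketch, the resource name is arbitrary:

node {
    stage('Update Ticket') {
        // Builds queue here until the resource is free, so the Jira read-modify-write
        // below runs one build at a time per project
        lock(resource: "jira-fixversions-${projectKey}") {
            // ... original jiraGetIssue / jiraEditIssue logic from the question goes here ...
        }
    }
}

This only works if every job that edits those tickets takes the same lock, and it serializes your builds at that stage, so expect some queueing.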

Had the same issue with our deployment pipeline and found a much better approach than trying to coordinate concurrent updates. Use JIRA’s REST API with PUT requests that include version checks: grab the issue’s current version number from the response metadata, then include that version when you update. If someone else changed the ticket between your fetch and update, JIRA rejects the edit with a 409 conflict; just re-fetch and try again. The key difference is that you don’t try to prevent the race condition - you detect it and handle it gracefully. Way more reliable than hoping the timing works out. One more thing: create the version objects outside the update call if they don’t exist yet, otherwise you’ll hit another race condition there.
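
On that last point, here’s a rough sketch of creating the project version idempotently before any ticket edit references it. It uses the JIRA Pipeline Steps plugin from the question and assumes your plugin version has a jiraGetProjectVersions step; ensureVersionExists is just a name I made up:

// Call this once before editing any tickets that will reference the version
def ensureVersionExists(String projectKey, String versionName) {
    // Check the project's existing versions first to avoid a pointless create call
    def existing = jiraGetProjectVersions idOrKey: projectKey
    if (existing.data.any { it.name == versionName }) {
        return
    }
    // A parallel build may still beat us to the create; don't fail this build over it
    jiraNewVersion version: [name: versionName, project: projectKey],
                   failOnError: false
}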
