My Jenkins pipeline runs a script that uses Jira pipeline steps (jiraJqlSearch, jiraGetIssue, jiraNewVersion, jiraEditIssue) to update the fix versions on an issue. The code essentially does the following:
node {
    stage('JIRA') {
        // Find the issue(s) matching the key
        def searchResults = jiraJqlSearch jql: "project = ${jiraProjectKey} AND issuekey = ${issueKey}"
        def issues = searchResults.data.issues
        for (int index = 0; index < issues.size(); index++) {
            // Read the issue's current fields, including its existing fixVersions
            def retrievedIssue = jiraGetIssue idOrKey: issues[index].key
            // Create the new version in the project
            def createdVersion = jiraNewVersion version: [name: "${newVersion}", project: "${jiraProjectKey}"]
            // Append the new version to the list that was just read, then write the whole list back
            def updatedFixVersions = retrievedIssue.data.fields.fixVersions << createdVersion.data
            def issueUpdate = [fields: [fixVersions: updatedFixVersions]]
            def response = jiraEditIssue idOrKey: issues[index].key, issue: issueUpdate
        }
    }
}
On its own the script works: it finds the issue and appends the new version. The problem appears when multiple concurrent builds update the same Jira issue with different versions: some builds overwrite others' changes, and previously appended versions are lost. For instance:
- Job 1 - build 1 updates ${issueKey} with fixVersion 1.1
- Job 2 - build 1 updates ${issueKey} with fixVersion 2.1
- Job 3 - build 1 updates ${issueKey} with fixVersion 3.1
Eventually, the final state of ${issueKey} may show fixVersions 1.1 and 3.1, with 2.1 missing even from the issue's activity history. I believe this is a dirty-write (lost-update) problem in either Jira or the Jenkins plugin: each build reads the current fixVersions, appends its own version, and writes the complete list back, so concurrent builds can overwrite each other's updates. Could someone suggest modifications to the pipeline script so that independent executions append to fixVersions without losing data? I'm also working on a standalone job to reproduce the issue so I can test potential solutions; a sketch of what I have in mind follows. Thank you for your help.
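For reference, this is a minimal sketch of the standalone reproduction job, assuming the same Jira pipeline steps as above. The parameter names (REPRO_ISSUE_KEY, REPRO_PROJECT_KEY, VERSION_NAME) and the sleep used to widen the race window are only placeholders I made up for testing, not part of the real job:

properties([
    parameters([
        string(name: 'REPRO_ISSUE_KEY', defaultValue: 'TEST-1', description: 'Issue to update'),
        string(name: 'REPRO_PROJECT_KEY', defaultValue: 'TEST', description: 'Jira project key'),
        string(name: 'VERSION_NAME', defaultValue: '1.0', description: 'fixVersion to append')
    ])
])

node {
    stage('Reproduce dirty write') {
        // Same read-modify-write sequence as the real job
        def retrievedIssue = jiraGetIssue idOrKey: params.REPRO_ISSUE_KEY
        def createdVersion = jiraNewVersion version: [name: params.VERSION_NAME, project: params.REPRO_PROJECT_KEY]
        // Artificial pause so concurrent builds overlap between the read and the write
        sleep time: 30, unit: 'SECONDS'
        // Append to the list read above and write the whole list back
        def updatedFixVersions = retrievedIssue.data.fields.fixVersions << createdVersion.data
        jiraEditIssue idOrKey: params.REPRO_ISSUE_KEY, issue: [fields: [fixVersions: updatedFixVersions]]
    }
}

The idea is that running two or three builds of this job at the same time, each with a different VERSION_NAME, should show the same symptom: one of the appended versions disappears from fixVersions.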