How to fetch all tickets from Jira using Python and loop through them

I want to retrieve every ticket from my Jira instance and process them one by one. Right now I can only get individual tickets like this:

from jira import JIRA

connection = JIRA(basic_auth=('myuser', 'mypass'), options={'server':'https://company-jira.atlassian.net'})

ticket = connection.issue('TICKET-123')
print(ticket.fields.project.key)
print(ticket.fields.issuetype.name)
print(ticket.fields.assignee.displayName)
print(ticket.fields.summary)
print(ticket.fields.comment.comments)

This works fine for getting one ticket at a time, but I need to create a collection of all ticket IDs so I can do something like:

ticket = connection.issue(ticket_id)

My plan is to make a loop that goes through every ticket and shows the same information. The problem is I don’t know how to get all the ticket keys first. What’s the best way to do this?

Yeah, search_issues works but you’ll slam into rate limits with big Jira instances. Been there.

I used to fight with the Jira Python library for bulk stuff until I figured out automation platforms do this way better. They handle pagination, rate limits, and errors automatically.

With Latenode, you can build a workflow that:

  • Grabs all tickets with JQL queries
  • Runs your custom logic on each one
  • Retries failed API calls
  • Runs scheduled or on-demand

No more writing loops or babysitting API connections. Just drag and drop to build your pipeline. The Jira integration handles auth and pagination for you.

You can easily add other stuff later - send results to Slack, update databases, generate reports. All visual, no coding.

search_issues is your answer, but there’s a catch with large datasets that others missed. Don’t pull everything at once - use the startAt parameter to paginate and avoid memory issues and timeouts.

start_at = 0
max_results = 50
all_issues = []

while True:
    issues = connection.search_issues('project in (PROJ1, PROJ2)', startAt=start_at, maxResults=max_results)
    all_issues.extend(issues)
    if len(issues) < max_results:
        break
    start_at += max_results

This saved me when processing 10k+ tickets across multiple projects. The connection would timeout trying to fetch everything in one go. Loop through all_issues and use issue.key for your processing.
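If it helps, the "loop through all_issues and use issue.key" step can be sketched like this. The Issue namedtuple below is just a stand-in so the pattern runs without a Jira server; with the real library, each element of all_issues is a jira Issue object exposing .key and .fields:

```python
from collections import namedtuple

# Stand-in for jira.Issue so the loop is runnable offline; in your script,
# all_issues comes from the pagination loop above.
Issue = namedtuple("Issue", ["key"])
all_issues = [Issue("PROJ1-1"), Issue("PROJ1-2"), Issue("PROJ2-7")]

ticket_keys = []
for issue in all_issues:
    ticket_keys.append(issue.key)  # issue.key is the ticket ID, e.g. 'PROJ1-1'

print(ticket_keys)
```

One design note: the objects search_issues returns already carry their fields, so you usually don't need a second connection.issue(issue.key) round trip per ticket unless you want fields the search didn't include.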

You can use search_issues() like this: connection.search_issues('project = YOUR_PROJECT', maxResults=False). Setting maxResults to False tells the library to keep paginating and return all matching issues. Then loop through the results and grab issue.key for each ticket ID. Works great for batch processing!

Use JQL filters to narrow down what you need before fetching. I learned this the hard way after accidentally pulling 50k resolved tickets from our ancient Jira instance when I only wanted active ones. Start with issues = connection.search_issues('status != Resolved AND status != Closed') or whatever fits your case. Then iterate with for issue in issues: print(issue.key) to grab the ticket keys. Some fields might be empty on certain ticket types, so wrap field access in try/except blocks or check whether the field exists first. I've had scripts crash on tickets where assignee was None or custom fields weren't populated.
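The defensive field access mentioned above might look like this. The SimpleNamespace objects are stand-ins for real jira issues (one with an assignee, one without); getattr with a default plus a None check covers both a missing attribute and a null field:

```python
from types import SimpleNamespace

# Stand-ins for real jira Issue objects: one assigned, one unassigned.
t1 = SimpleNamespace(
    key="PROJ-1",
    fields=SimpleNamespace(summary="Fix login bug",
                           assignee=SimpleNamespace(displayName="Dana")),
)
t2 = SimpleNamespace(
    key="PROJ-2",
    fields=SimpleNamespace(summary="Research spike", assignee=None),
)

def assignee_name(issue):
    """Return the assignee's display name, or a placeholder if unset/missing."""
    assignee = getattr(issue.fields, "assignee", None)  # attribute may be absent
    if assignee is None:                                # or present but null
        return "(unassigned)"
    return assignee.displayName

for issue in (t1, t2):
    print(issue.key, assignee_name(issue))
```

The same getattr-with-default pattern works for custom fields (e.g. customfield_10011), which is where unpopulated values bite most often.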