How to concatenate multiple image URLs in Airtable field when processing simultaneous requests with backend update delays

I’m building a messaging bot that handles multiple image uploads from users at the same time. The problem is that when users send several images together, my server receives all the requests at once, but each database field update takes a moment to complete.

Because of this timing, instead of appending the new image URLs to the ones already stored in the field, my code keeps replacing the whole field with just the latest URL.

Here’s my current code structure:

from flask import Flask, request

app = Flask(__name__)

# db is the Airtable wrapper used elsewhere in the app (not shown here)

@app.route('/handler', methods=['POST'])
def process_request():
    payload = request.json
    user_id = payload['userId']
    sender_name = payload['userName']

    if payload['messageType'] == 'image':
        fresh_image_url = payload.get('imageData')
        print("New URL: ", fresh_image_url)

        # Read the record id and whatever is currently in the stored_images field
        record_id, current_urls = db.fetch_field(user_id, "stored_images")
        print(record_id, current_urls)

        # Start from an empty list when the field has never been set, so the
        # first URL doesn't end up behind a leading blank line
        url_list = current_urls.split("\n") if current_urls else []
        url_list.append(fresh_image_url)

        combined_urls = "\n".join(url_list)
        print(combined_urls)

        # Write the whole field back (this read-modify-write is where the overwrite happens)
        db.save_image_urls(record_id, combined_urls)

    return payload

The issue is that only the final URL gets saved in my database instead of all the URLs being combined together. How can I fix this so that all image URLs get properly appended even when multiple requests arrive simultaneously and there are backend processing delays?

The issue you’re encountering stems from a classic race condition, which is common when handling concurrent database operations. In your case, several requests read the same value before any of the writes complete, so each one appends its URL to the same stale string and the last write wins. I faced a similar problem while developing a real-time analytics dashboard and resolved it with file-based locking via Python’s fcntl module: each request takes an exclusive lock on a small lock file before reading the URLs, holds it through the write, and releases it only after the update is saved, so only one process runs the read-modify-write at a time.
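Roughly what that looks like adapted to your handler — a minimal sketch, assuming your existing db.fetch_field / db.save_image_urls helpers and a POSIX system where fcntl is available; the lock file path is made up:

import fcntl

def append_with_file_lock(user_id, fresh_image_url):
    # Hold an exclusive lock for the whole read-modify-write, not just the read,
    # so concurrent requests for the same user run one at a time.
    lock_path = f"/tmp/stored_images_{user_id}.lock"   # hypothetical lock file location
    with open(lock_path, "w") as lock_file:
        fcntl.flock(lock_file, fcntl.LOCK_EX)   # blocks until the previous holder releases it
        try:
            record_id, current_urls = db.fetch_field(user_id, "stored_images")
            url_list = current_urls.split("\n") if current_urls else []
            url_list.append(fresh_image_url)
            db.save_image_urls(record_id, "\n".join(url_list))
        finally:
            fcntl.flock(lock_file, fcntl.LOCK_UN)   # also released when the file is closed

Note that flock works across worker processes on the same machine, but not across multiple servers.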

Alternatively, a more robust solution is to handle this at the database level with atomic operations. Rather than a read-modify-write cycle on one field, consider saving each image URL as its own record along with a timestamp and the user_id, and reconstructing the combined string only when you need to display it; since every request just inserts a new record, there is nothing for concurrent requests to overwrite. If you’re tied to your current schema and can’t make these changes, introducing a small random delay before database reads combined with retry logic can mitigate the issue, albeit inelegantly. That approach can work acceptably for lower-traffic applications.
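A rough sketch of the separate-record idea — the db.create_record and db.list_records helpers here are hypothetical names for whatever insert/query methods your wrapper exposes, not part of your current code:

import time

def save_image_record(user_id, image_url):
    # Each upload only inserts a new row, so there is no shared field to overwrite.
    db.create_record("image_urls", {            # hypothetical helper
        "user_id": user_id,
        "image_url": image_url,
        "uploaded_at": time.time(),
    })

def combined_urls_for(user_id):
    # Rebuild the newline-joined string only when it is needed for display.
    rows = db.list_records("image_urls", filter={"user_id": user_id})   # hypothetical helper
    return "\n".join(row["image_url"] for row in sorted(rows, key=lambda r: r["uploaded_at"]))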

Yeah, this happens because your fetch and update operations aren’t atomic. I hit the exact same issue building a collaborative doc editor where multiple users uploaded attachments at once. Here’s what fixed it for me: use a queue. I set up Redis (or just an in-memory queue if you’re on one server), pushed each image upload into it instead of processing it directly, and had a worker handle the uploads one by one. That way the database reads and writes happen in order, so nothing gets overwritten. No Redis? Just put a mutex around your database operations with threading.Lock() to serialize access to each user’s stored_images field. Bottom line: stop concurrent access to the same database field instead of trying to patch up the timing after the fact.
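For the threading.Lock() route, something along these lines — a sketch that assumes your existing db helpers and a single-process server; with multiple gunicorn/uwsgi workers you’d need the Redis queue or a cross-process lock instead:

import threading
from collections import defaultdict

# One lock per user so different users don't block each other.
user_locks = defaultdict(threading.Lock)

def append_with_thread_lock(user_id, fresh_image_url):
    with user_locks[user_id]:
        # The read and the write now happen back to back, with no other
        # request for the same user interleaved between them.
        record_id, current_urls = db.fetch_field(user_id, "stored_images")
        url_list = current_urls.split("\n") if current_urls else []
        url_list.append(fresh_image_url)
        db.save_image_urls(record_id, "\n".join(url_list))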