I’m building a messaging bot that processes multiple image uploads sent by users at the same time. The bot uses a messaging API to receive the images and stores their URLs in a database table.
The problem I’m facing is that when users send several images together, my server receives all the webhook requests at once, while each database update takes some time to complete. This causes a race condition: every request reads the same initial value from the database before any of the updates land, so each one overwrites the previous result instead of appending to it, and only the last image URL ends up saved.
What’s the best way to modify this code so that all image URLs get properly appended to the database field, even when dealing with concurrent requests and database update delays?
Database locking works but it’s messy. I’d use a message queue instead - Redis or RabbitMQ will serialize your image processing. Just queue all the requests and handle them one by one. No more race conditions, no database lock headaches. Did this for a chat app and it’s way cleaner than trying to manage concurrent writes. You can even batch images from the same user if they come in quickly. Scales better and you won’t be fighting transaction isolation levels.
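Here’s a minimal in-process sketch of the idea, using the stdlib `queue.Queue` and a dict as stand-ins for the broker (Redis/RabbitMQ) and the database row — names like `image_queue` and `stored` are made up for illustration:

```python
import queue
import threading

image_queue = queue.Queue()
stored = {}  # stand-in for the database: user_id -> list of URLs

def worker():
    # Single consumer: updates are applied one at a time, so the
    # read-modify-write on `stored` can never interleave.
    while True:
        item = image_queue.get()
        if item is None:  # sentinel to shut the worker down
            break
        user_id, url = item
        stored.setdefault(user_id, []).append(url)

t = threading.Thread(target=worker)
t.start()

# Simulate "simultaneous" webhook requests: each one just enqueues work.
for i in range(5):
    image_queue.put(("user-1", f"https://cdn.example.com/img{i}.png"))

image_queue.put(None)
t.join()
# stored["user-1"] now holds all five URLs, in order
```

With a real broker, each webhook handler would publish and return immediately, and one worker process would consume.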
Had this exact problem at my last company with a file sharing service. The queue approach works, but you can fix this without adding new infrastructure.
Drop the string concatenation completely. Use a JSON column and leverage database atomic operations:
```python
# Instead of concatenating strings, append to a JSON array atomically.
# Note: initialize stored_images to '[]' rather than NULL -- MySQL's
# JSON functions propagate NULL, so appending to a NULL column yields NULL.
db.execute(
    "UPDATE user_images SET stored_images = JSON_ARRAY_APPEND(stored_images, '$', %s) "
    "WHERE user_id = %s",
    [incoming_url, user_id],
)
```
This is atomic at the database level - no race conditions. If your database doesn’t support JSON operations, use a separate images table with foreign keys.
Honestly though, the real issue is your architecture. You’re trying to maintain state in a stateless endpoint. Consider using a temporary staging table where each request inserts its URL immediately, then have a background job periodically consolidate them into your main table.
This way your endpoint just does INSERT operations (naturally concurrent safe) and you handle aggregation separately.
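A minimal sketch of that staging pattern, using SQLite for illustration (the table and function names here are made up, and a real deployment would run `consolidate()` from a scheduler rather than inline):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staging_images (user_id TEXT, url TEXT)")
conn.execute("CREATE TABLE main_table (user_id TEXT PRIMARY KEY, stored_images TEXT)")

def handle_upload(user_id, incoming_url):
    # The endpoint only INSERTs -- no read-modify-write, so concurrent
    # requests cannot clobber each other.
    conn.execute("INSERT INTO staging_images (user_id, url) VALUES (?, ?)",
                 (user_id, incoming_url))
    conn.commit()

def consolidate():
    # Background job: fold staged rows into the main table, then clear them.
    rows = conn.execute("SELECT user_id, url FROM staging_images").fetchall()
    by_user = {}
    for user_id, url in rows:
        by_user.setdefault(user_id, []).append(url)
    for user_id, urls in by_user.items():
        row = conn.execute("SELECT stored_images FROM main_table WHERE user_id = ?",
                           (user_id,)).fetchone()
        existing = json.loads(row[0]) if row else []
        conn.execute(
            "INSERT INTO main_table (user_id, stored_images) VALUES (?, ?) "
            "ON CONFLICT(user_id) DO UPDATE SET stored_images = excluded.stored_images",
            (user_id, json.dumps(existing + urls)),
        )
    conn.execute("DELETE FROM staging_images")
    conn.commit()
```

The trade-off is that `stored_images` lags behind reality by one consolidation interval, which is usually fine for a chat bot.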
you could also try optimistic locking with version numbers. just add a version column to your table and bump it up each time you update. if the version’s different between when you read and write, retry the whole thing. way simpler than queues but still handles race conditions cleanly without blocking other requests.
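rough sketch of that retry loop (sqlite stand-in, table and column names made up):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user_images (user_id TEXT PRIMARY KEY, "
             "stored_images TEXT, version INTEGER)")
conn.execute("INSERT INTO user_images VALUES ('u1', '[]', 0)")
conn.commit()

def append_url(user_id, url, max_retries=5):
    for _ in range(max_retries):
        row = conn.execute(
            "SELECT stored_images, version FROM user_images WHERE user_id = ?",
            (user_id,)).fetchone()
        images, version = json.loads(row[0]), row[1]
        images.append(url)
        # The WHERE clause only matches if nobody bumped the version since
        # our read; rowcount == 0 means we lost the race, so retry.
        cur = conn.execute(
            "UPDATE user_images SET stored_images = ?, version = ? "
            "WHERE user_id = ? AND version = ?",
            (json.dumps(images), version + 1, user_id, version))
        conn.commit()
        if cur.rowcount == 1:
            return True
    return False
```

under heavy contention this burns retries, but for a handful of images per message it's cheap.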
I hit this exact issue building a file upload service. Skip application-level handling and use database-level concurrency control instead. Database transactions with row-level locking worked for me, but honestly a queue-based approach is better here. Don’t update the same field simultaneously - insert each image URL as its own row with timestamp and user_id, then query when you need them. No more race conditions since you’re not doing read-modify-write on the same field. If you’re stuck with concatenation, wrap your fetch and save in a transaction with proper isolation levels.
try using transactions with SELECT FOR UPDATE to lock the row during updates. this prevents other requests from reading stale data until you’re done. most sql databases support this and it’ll fix your race condition issue.
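rough sketch of that (sqlite has no SELECT ... FOR UPDATE, so BEGIN IMMEDIATE stands in for the row lock here; in postgres/mysql you'd append FOR UPDATE to the SELECT inside a transaction):

```python
import json
import sqlite3

# isolation_level=None -> autocommit, so we control transactions explicitly
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE user_images (user_id TEXT PRIMARY KEY, stored_images TEXT)")
conn.execute("INSERT INTO user_images VALUES ('u1', '[]')")

def append_url(user_id, url):
    # Take the write lock up front so no other writer can read stale data
    # between our SELECT and UPDATE.
    conn.execute("BEGIN IMMEDIATE")
    try:
        # In Postgres/MySQL this SELECT would end with "FOR UPDATE".
        row = conn.execute(
            "SELECT stored_images FROM user_images WHERE user_id = ?",
            (user_id,)).fetchone()
        images = json.loads(row[0])
        images.append(url)
        conn.execute("UPDATE user_images SET stored_images = ? WHERE user_id = ?",
                     (json.dumps(images), user_id))
        conn.execute("COMMIT")
    except Exception:
        conn.execute("ROLLBACK")
        raise
```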