I’m working on a file upload system where I want to stream files directly to Google Drive without storing them locally first. My setup includes uppy for the frontend and tusdotnet for the backend server. I tried creating a custom ITusStore implementation but ran into issues.
public async Task<string> InitializeFileUploadAsync(long fileSize, string metaData, CancellationToken token)
{
    var sessionId = Guid.NewGuid().ToString();

    // Set up Google Drive file properties
    var driveFile = new Google.Apis.Drive.v3.Data.File
    {
        Name = $"upload_session_{sessionId}",
        Parents = new[] { "target_directory_id" } // Your folder ID here
    };

    // Initialize resumable upload
    var createRequest = _googleDriveService.Files.Create(driveFile, null, "application/octet-stream");
    createRequest.Fields = "id";
    createRequest.ChunkSize = 8 * 1024 * 1024; // 8MB chunks

    // Get the resumable session URL
    var sessionUri = createRequest.InitiateSessionAsync(token).Result;

    _activeSessions[sessionId] = new DriveUploadSession
    {
        SessionUri = sessionUri,
        BytesUploaded = 0
    };

    return sessionId;
}
Has anyone successfully implemented direct Google Drive uploads with tus protocol? What am I missing in my approach?
Custom ITusStore implementations for Google Drive are a nightmare. You’re basically building a bridge between two APIs that hate each other. I’ve tried this - it gets messy fast.
Skip the tusdotnet and Google Drive API wrestling match. Use webhooks instead. When files hit your upload endpoint, automatically stream them to Google Drive through a proper automation platform.
I’ve built systems like this where files go straight to cloud storage without touching local drives. The trick is having solid automation that handles API calls, retries, and errors for you. Way cleaner than maintaining custom ITusStore code.
Keep uppy on the frontend, but ditch the complex tus setup. You’ll get a reliable file processing pipeline that actually works.
You shouldn't use .Result in async code, it can lead to deadlocks! Switch it to await for InitiateSessionAsync. Also, keep an eye on chunk boundaries when streaming to the Google Drive API, it helps avoid issues.
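For reference, the awaited version would look something like this (same createRequest and token as in your snippet):

// Await the session initiation instead of blocking on .Result, which avoids the classic sync-over-async deadlock.
var sessionUri = await createRequest.InitiateSessionAsync(token);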
Been there! Your DriveUploadSession class needs more metadata - file position, chunk hashes, and error counters. Google Drive gets pissy without proper Content-Range headers on each chunk. Also make sure your _googleDriveService has retry logic built in - network hiccups will kill the whole upload.
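Something in this direction (the extra fields are illustrative placeholders, not from any library - adapt them to your own model):

public class DriveUploadSession
{
    public string SessionUri { get; set; }          // Drive's resumable session URI
    public long BytesUploaded { get; set; }         // file position: bytes Drive has confirmed so far
    public Dictionary<long, string> ChunkHashes { get; set; } = new Dictionary<long, string>(); // offset -> hash, for integrity checks
    public int ConsecutiveErrors { get; set; }      // error counter to drive retry/abort decisions
}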
The main problem is that tus expects file offset tracking on your server, but Google Drive handles this internally with resumable upload sessions. Your InitializeFileUploadAsync method starts the Google Drive session fine, but you're missing how to map tus chunks to Drive's resumable protocol.

I solved this by storing the Google Drive session URI with the expected file offsets in my ITusStore. When tus sends a chunk, I check the offset matches what Drive expects, then use HttpClient to PUT directly to the stored session URI. The tricky bit is handling partial failures - if Drive rejects a chunk, you need to query its upload status and sync back with tus.

Also make sure your chunk sizes match. Drive works best with 256KB multiples, so set up tusdotnet the same way. Without proper offset sync between both protocols, you'll get corruption or failed uploads.
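The status query is just an empty PUT to the session URI with "Content-Range: bytes */<total>"; Drive answers 308 with a Range header telling you the last byte it stored. Roughly like this (sessionUri, totalSize, httpClient and token stand in for whatever your store holds):

// Ask Drive how much of the upload it has actually received.
var statusRequest = new HttpRequestMessage(HttpMethod.Put, sessionUri)
{
    Content = new ByteArrayContent(Array.Empty<byte>())
};
statusRequest.Content.Headers.ContentRange = new ContentRangeHeaderValue(totalSize); // renders "bytes */<total>"

var statusResponse = await httpClient.SendAsync(statusRequest, token);

long nextExpectedByte = 0;
if ((int)statusResponse.StatusCode == 308 &&
    statusResponse.Headers.TryGetValues("Range", out var ranges))
{
    // Range comes back as "bytes=0-<lastByteStored>".
    var lastByte = long.Parse(ranges.First().Split('-').Last());
    nextExpectedByte = lastByte + 1;
}
// Compare nextExpectedByte with the tus offset and resend from there if they disagree.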
Man, all these custom ITusStore implementations are overkill. I work with file upload systems constantly and mapping between tus protocol and Google Drive’s resumable uploads means months debugging weird edge cases.
You’re forcing two different protocols to talk when they handle chunks and sessions completely differently. Google Drive wants specific chunk boundaries, tus does its own thing, and you’re translating between them.
Skip the custom bridging code. Upload to a temp endpoint, then trigger an automated workflow that streams directly to Google Drive. You get direct streaming without maintaining fragile custom ITusStore code.
I’ve watched teams spend weeks making tus and Google Drive work together, then it breaks when Google tweaks their API. Automation handles retry logic, error management, and API quirks automatically.
Keep uppy for UX, but automate the Google Drive integration. Way more reliable than custom protocol bridges.
You are attempting to use the tus protocol with Google Drive’s resumable upload API, and you’re encountering issues due to the incompatibility in how chunk sizes and offsets are handled between the two systems. Your InitializeFileUploadAsync method correctly initiates the Google Drive upload session, but it’s missing the crucial step of synchronizing chunk processing between the tus protocol’s expectations and Google Drive’s resumable upload mechanism. This leads to potential corruption or failed uploads.
Understanding the “Why” (The Root Cause):
The core issue lies in the mismatch between how tus and Google Drive manage uploads. tus expects the server to track file offsets and handle chunks independently. Google Drive, on the other hand, manages this internally within its resumable upload sessions. Directly streaming tus chunks to Google Drive without a buffer and careful offset management results in inconsistencies. Google Drive may reject chunks because they don’t align with its expected chunk boundaries or offsets, leading to upload failures. Additionally, Google Drive’s resumable sessions have a time limit (approximately one week), requiring additional session management on your side.
Step-by-Step Guide:
Implement a Buffering Mechanism: Modify your ITusStore implementation to incorporate a buffering system. Instead of directly streaming tus chunks to Google Drive, accumulate chunks until they reach Google Drive’s preferred chunk size (a multiple of 256KB is recommended). This ensures alignment with Google Drive’s expectations. You’ll need to carefully track byte offsets to correctly assemble and send these larger chunks.
Maintain Accurate Offset Tracking: Implement precise offset tracking within your DriveUploadSession class. This class must store the current byte offset (position) for the upload, ensuring consistency between the tus client’s reported offset and Google Drive’s internal tracking.
Use HttpClient for Direct Upload: Use HttpClient to make PUT requests directly to the Google Drive resumable upload session URI. This URI is obtained during session initialization (InitiateSessionAsync). For each buffered chunk, construct the appropriate Content-Range header to specify the byte range being uploaded.
Handle Partial Failures: Implement robust error handling to manage situations where Google Drive rejects a chunk. This might involve querying the upload status to determine the point of failure and resuming from the correct offset.
Implement Session Cleanup: Implement a mechanism to clean up abandoned upload sessions after a certain period (e.g., one week), since Google Drive expires resumable session URIs. This prevents the accumulation of stale sessions (a small cleanup sketch follows the example code below).
//Example adjustments (Illustrative, requires adaptation to your existing code)
public class DriveUploadSession
{
    public string SessionUri { get; set; }
    public long BytesUploaded { get; set; }
    public long TotalFileSize { get; set; } // total upload size, needed for the Content-Range header
    public List<byte[]> BufferedChunks { get; set; } = new List<byte[]>();
}
public async Task<string> UploadChunkAsync(string sessionId, byte[] chunk, long offset, CancellationToken token)
{
    var session = _activeSessions[sessionId];
    session.BufferedChunks.Add(chunk);

    // Check if the buffer has reached Google Drive's chunk size
    if (session.BufferedChunks.Sum(c => c.Length) >= 8 * 1024 * 1024) // Example: 8MB (a multiple of 256KB)
    {
        // Combine the buffered chunks into a single byte array
        var combinedChunk = session.BufferedChunks.SelectMany(x => x).ToArray();

        // The combined data starts where Drive left off (BytesUploaded), not at the offset
        // of the latest tus chunk, so pass the session's confirmed position.
        await UploadBufferedChunkAsync(sessionId, combinedChunk, session.BytesUploaded, token);
        session.BufferedChunks.Clear();
    }

    // Note: when the final tus chunk arrives, flush whatever is still buffered even if it is
    // smaller than 8MB, otherwise the tail of the file never reaches Drive.
    return sessionId;
}
private async Task UploadBufferedChunkAsync(string sessionId, byte[] chunk, long offset, CancellationToken token)
{
    var session = _activeSessions[sessionId];

    // Prefer a single long-lived HttpClient (injected or static) over creating one per chunk.
    using (var client = new HttpClient())
    {
        var content = new ByteArrayContent(chunk);

        // Content-Range is "bytes <first>-<last>/<total>"; the total must be the whole file's
        // size, not the size of this chunk.
        content.Headers.ContentRange = new ContentRangeHeaderValue(offset, offset + chunk.Length - 1, session.TotalFileSize);

        var response = await client.PutAsync(session.SessionUri, content, token);

        // Drive answers 308 for an accepted intermediate chunk and 200/201 once the upload completes.
        if (!response.IsSuccessStatusCode && (int)response.StatusCode != 308)
        {
            // Handle errors and partial failures: query Google Drive's upload status if necessary,
            // retry with exponential backoff, and log errors appropriately for debugging.
            return; // don't advance the offset for a rejected chunk
        }

        session.BytesUploaded += chunk.Length;
    }
}
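For step 5 (session cleanup), something like the following could run on a timer. It assumes the session class also records when the upload started (a StartedUtc property, not shown above) and that _activeSessions is a plain dictionary (use TryRemove instead if it is a ConcurrentDictionary):

// Drop sessions older than Drive's roughly one-week resumable-session lifetime.
// StartedUtc is an assumed extra property on DriveUploadSession.
var cutoff = DateTime.UtcNow - TimeSpan.FromDays(7);
foreach (var stale in _activeSessions.Where(kvp => kvp.Value.StartedUtc < cutoff).ToList())
{
    // Drive expires the session URI on its own; we just stop tracking it locally.
    _activeSessions.Remove(stale.Key);
}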
Common Pitfalls & What to Check Next:
Chunk Size Mismatch: Ensure your tus client’s chunk size is a multiple of 256KB to match Google Drive’s preferences. Mismatched chunk sizes are a major source of upload failures.
Offset Calculation Errors: Double-check your offset calculations. Incorrect offsets will lead to chunk corruption or rejections.
Insufficient Error Handling: Implement comprehensive error handling and retry logic with exponential backoff to address transient network issues (a small retry sketch follows this list).
Session Expiration: Implement Google Drive session cleanup to avoid issues with expired sessions.
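On the retry point, a bare-bones exponential backoff loop might look like this (illustrative only; chunk, offset, session, client and token are assumed to exist as in the example code above):

// Retry a rejected chunk a few times, doubling the wait between attempts.
const int maxAttempts = 4;
for (var attempt = 1; attempt <= maxAttempts; attempt++)
{
    var content = new ByteArrayContent(chunk);
    content.Headers.ContentRange = new ContentRangeHeaderValue(offset, offset + chunk.Length - 1, session.TotalFileSize);

    var response = await client.PutAsync(session.SessionUri, content, token);
    if (response.IsSuccessStatusCode || (int)response.StatusCode == 308)
    {
        break; // chunk accepted (308 for an intermediate chunk, 200/201 when the upload completes)
    }

    if (attempt == maxAttempts)
    {
        throw new HttpRequestException($"Chunk upload failed after {maxAttempts} attempts: {response.StatusCode}");
    }

    // Back off: 2s, 4s, 8s...
    await Task.Delay(TimeSpan.FromSeconds(Math.Pow(2, attempt)), token);
}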
Still running into issues? Share your (sanitized) config files, the exact command you ran, and any other relevant details. The community is here to help!