Handling payload backlog in Nearby Connections API

I’m working with an IoT device that sends sensor measurements to an Android phone using Nearby Connections. The device transmits temperature readings every 50ms (20 times per second).

The issue I’m facing relates to how the API handles multiple payloads:

When sendPayload() is called multiple times, the API guarantees ordered delivery by queuing each payload until the previous one has finished transmitting.

In my situation, because the link speed fluctuates, payloads get queued faster than they can be transmitted. The data arrives on the phone with steadily increasing delay, and the payload queue keeps growing.

How can I solve this? I don’t actually need ordered delivery for my use case. One approach I’ve considered is an acknowledgment scheme: each payload waits for a delivery confirmation before the next one is sent.
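What I had in mind is roughly this (plain Java sketch; `Transport` is a hypothetical stand-in for `sendPayload()`, and the ack would be a small payload coming back the other way):

```java
/**
 * Sketch of an ack-gated sender: at most one payload is ever in flight,
 * and a newer reading overwrites the pending slot instead of queueing,
 * so stale data gets dropped rather than delayed.
 */
class AckGatedSender {
    /** Hypothetical stand-in for Nearby's sendPayload(). */
    interface Transport { void send(byte[] payload); }

    private final Transport transport;
    private byte[] pending;      // newest reading waiting for the link
    private boolean inFlight;    // true while a payload awaits its ack

    AckGatedSender(Transport transport) { this.transport = transport; }

    /** Called for every new sensor reading (e.g. every 50 ms). */
    synchronized void submit(byte[] reading) {
        if (inFlight) {
            pending = reading;       // overwrite stale data, don't queue
        } else {
            inFlight = true;
            transport.send(reading);
        }
    }

    /** Called when the receiver acknowledges the last payload. */
    synchronized void onAck() {
        if (pending != null) {
            byte[] next = pending;
            pending = null;
            transport.send(next);    // only the freshest reading goes out
        } else {
            inFlight = false;
        }
    }
}
```

The nice property is that the backlog can never exceed one reading — anything stale is overwritten rather than delayed.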

Any suggestions would be helpful.

EDIT:

Switching to the STREAM payload type solved my problem. When the InputStream has accumulated multiple sensor readings (each reading is a fixed 28 bytes: temperature, humidity, etc.), I use skip() to jump directly to the most recent one.
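In case it helps someone, the skip-to-latest logic looks roughly like this (plain java.io; the 28-byte record size is from my sensor format, and available() only counts bytes already buffered locally, so this is a best-effort skip):

```java
import java.io.IOException;
import java.io.InputStream;

class LatestReadingReader {
    static final int READING_SIZE = 28; // temperature, humidity, etc.

    /**
     * Skips all but the last complete reading currently buffered in the
     * stream, then reads that reading. Assumes readings are fixed-size
     * and packed back-to-back with no framing between them.
     */
    static byte[] readLatest(InputStream in) throws IOException {
        int buffered = in.available();
        int complete = buffered / READING_SIZE;
        if (complete > 1) {
            long stale = (long) (complete - 1) * READING_SIZE;
            long skipped = 0;
            while (skipped < stale) {        // skip() may skip fewer bytes
                long n = in.skip(stale - skipped);
                if (n <= 0) break;
                skipped += n;
            }
        }
        byte[] reading = new byte[READING_SIZE];
        int off = 0;
        while (off < READING_SIZE) {         // read() may return fewer bytes
            int n = in.read(reading, off, READING_SIZE - off);
            if (n < 0) throw new IOException("stream ended mid-reading");
            off += n;
        }
        return reading;
    }
}
```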

Nice work with the STREAM approach. I hit a similar issue building a real-time monitoring system for industrial sensors. What worked for me was batching readings on the IoT device before sending them. Instead of individual 28-byte packets every 50ms, I’d collect 10-15 readings and send one bigger payload every 500-750ms. Cut down the queue buildup big time while keeping latency acceptable for most monitoring stuff. The receiving end just parses the batched data. Might be worth trying if you want to cut network overhead even more, though sounds like STREAM is doing the job for you.
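A minimal sketch of the batch format I used — just a count prefix over fixed-size records (names are illustrative; the 28-byte record size is from the question):

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

class ReadingBatcher {
    static final int READING_SIZE = 28;

    /** Packs N fixed-size readings into one payload: [count][r0][r1]... */
    static byte[] pack(List<byte[]> readings) {
        ByteBuffer buf = ByteBuffer.allocate(4 + readings.size() * READING_SIZE);
        buf.putInt(readings.size());
        for (byte[] r : readings) buf.put(r, 0, READING_SIZE);
        return buf.array();
    }

    /** Unpacks a batched payload back into individual readings. */
    static List<byte[]> unpack(byte[] payload) {
        ByteBuffer buf = ByteBuffer.wrap(payload);
        int count = buf.getInt();
        List<byte[]> readings = new ArrayList<>(count);
        for (int i = 0; i < count; i++) {
            byte[] r = new byte[READING_SIZE];
            buf.get(r);
            readings.add(r);
        }
        return readings;
    }
}
```

One payload of ~15 readings is a few hundred bytes instead of 15 separate queue entries, which is where the queue-buildup win comes from.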

Nice work with STREAM payloads. I hit the same issue building a heart rate monitor that pulled real-time data from wearables. That ordered delivery guarantee in Nearby Connections kills you with high-frequency streams. Your skip() trick is perfect for grabbing the latest readings. Since temperature monitoring doesn’t need ultra-low latency, it’s also worth looking at the API’s low-power options - they trade throughput for battery, which tends to ease queueing pressure. Also, watch the onPayloadTransferUpdate() callbacks. They’ll tell you when the queue’s backing up so you can throttle transmission rates on the sender side.
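Here’s roughly what that sender-side throttle looks like, stripped of the Android bits (in the real app you’d call trySend() before sendPayload() and onTransferDone() from onPayloadTransferUpdate() once a transfer finishes; the names here are illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

/**
 * Sketch of sender-side backpressure: track how many payloads have been
 * handed to the API but not yet finished transferring, and drop new
 * readings once the backlog passes a threshold instead of queueing them.
 */
class BackpressureGate {
    private final int maxInFlight;
    private final AtomicInteger inFlight = new AtomicInteger();

    BackpressureGate(int maxInFlight) { this.maxInFlight = maxInFlight; }

    /** Returns true if the payload should be sent now; false = drop it. */
    boolean trySend() {
        while (true) {
            int n = inFlight.get();
            if (n >= maxInFlight) return false;   // backlog: drop this reading
            if (inFlight.compareAndSet(n, n + 1)) return true;
        }
    }

    /** Call when a transfer update reports the payload is done. */
    void onTransferDone() {
        inFlight.decrementAndGet();
    }
}
```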

Glad you got it working! I was about to suggest ditching the payload queue and just overwriting old data, but STREAM’s definitely the better approach. Just watch out for your skip() logic with partial reads - sometimes the InputStream won’t have all 28 bytes of a reading ready. I’ve run into that exact issue doing sensor work before.
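For reference, a read loop like this handles those partial reads (plain java.io; readFully is just an illustrative helper name):

```java
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

class StreamUtil {
    /**
     * Reads exactly len bytes into dst, looping because InputStream.read()
     * may return fewer bytes than requested - which happens routinely on
     * network-backed streams like a STREAM payload's InputStream.
     */
    static void readFully(InputStream in, byte[] dst, int len) throws IOException {
        int off = 0;
        while (off < len) {
            int n = in.read(dst, off, len - off);
            if (n < 0) throw new EOFException("stream ended after " + off + " bytes");
            off += n;
        }
    }
}
```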