Real-time data exchange between OMNeT++ simulation and Python ML model

I’m working with an OMNeT++ network simulator (version 4.6) that uses the INET framework. My goal is to establish real-time communication between the running simulation and a machine learning model implemented in Python.

The simulator needs to continuously send network metrics like signal quality measurements and node position data to the Python application. The ML model processes this information to make optimization decisions and sends control commands back to the simulator.

What’s the best approach to enable this bidirectional communication while both applications are running? I need the data flow to happen during simulation execution, not as separate batch processes.

For OMNeT++ 4.6 with INET, go with UDP sockets over TCP. TCP's reliability mechanisms add delays that will interfere with your real-time requirements. I built something similar: a custom module inheriting from cSimpleModule, with the socket communication in separate threads. Make sure you use non-blocking socket operations, or your simulation will freeze waiting for Python responses. On the Python side, I used asyncio to handle concurrent data processing and socket communication. Protobuf beats JSON for serialization: it's faster and handles binary data without the headaches. Don't forget a heartbeat mechanism so you can detect when either side goes down.
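Here's a minimal sketch of that pattern, using non-blocking polling driven from the module's event loop rather than a separate thread (the threaded variant just moves the recv loop elsewhere). The module name MlBridge, the 127.0.0.1:9999 endpoint, the 100 ms interval, and the JSON placeholder payload are all illustrative assumptions, and a matching simple-module declaration in a .ned file is assumed too:

```cpp
// Hypothetical OMNeT++ 4.x bridge module; a matching "simple MlBridge"
// .ned declaration is assumed. Endpoint and interval are placeholders.
#include <omnetpp.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstring>

class MlBridge : public cSimpleModule
{
  private:
    int sockFd;
    sockaddr_in pyAddr;
    cMessage *pollTimer;

  protected:
    virtual void initialize();
    virtual void handleMessage(cMessage *msg);
    virtual void finish();
};

Define_Module(MlBridge);

void MlBridge::initialize()
{
    // Non-blocking UDP socket so a slow Python side never stalls the event loop
    sockFd = socket(AF_INET, SOCK_DGRAM, 0);
    fcntl(sockFd, F_SETFL, fcntl(sockFd, F_GETFL) | O_NONBLOCK);

    memset(&pyAddr, 0, sizeof(pyAddr));
    pyAddr.sin_family = AF_INET;
    pyAddr.sin_port = htons(9999);                  // assumed Python listener port
    inet_pton(AF_INET, "127.0.0.1", &pyAddr.sin_addr);

    pollTimer = new cMessage("poll");
    scheduleAt(simTime() + 0.1, pollTimer);         // exchange data every 100 ms of simtime
}

void MlBridge::handleMessage(cMessage *msg)
{
    if (msg == pollTimer) {
        // Push current metrics out (placeholder JSON; Protobuf would go here instead)
        const char *metrics = "{\"nodeId\":3,\"snr\":21.4,\"x\":12.5,\"y\":40.2}";
        sendto(sockFd, metrics, strlen(metrics), 0,
               (sockaddr *)&pyAddr, sizeof(pyAddr));

        // Drain any queued control commands; recv() returns -1 immediately if empty
        char buf[1024];
        ssize_t n;
        while ((n = recv(sockFd, buf, sizeof(buf) - 1, 0)) > 0) {
            buf[n] = '\0';
            EV << "ML command received: " << buf << "\n";  // apply to the network here
        }
        scheduleAt(simTime() + 0.1, pollTimer);
    }
}

void MlBridge::finish()
{
    cancelAndDelete(pollTimer);
    close(sockFd);
}
```

Driving the exchange from a scheduled self-message keeps the I/O inside the simulation's event loop. If Python hasn't answered yet, the recv() loop simply falls through and the simulation continues instead of blocking.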

I used TCP sockets too! It's pretty reliable for sending data in real time. Just set up a socket in OMNeT++ and let Python handle the incoming data on a port. JSON makes it super easy to move the data back and forth. Cheers!
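One caveat if you take this route: TCP is a byte stream, so you need explicit message framing, and newline-delimited JSON is the simplest option. A rough standalone sketch of the simulator-side client; the host, port 5555, and payload are assumptions, not anything from the answer above:

```cpp
// Standalone sketch of the TCP + JSON idea; host, port, and payload are assumptions.
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>
#include <cstring>
#include <cstdio>
#include <string>

int main()
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(5555);                    // assumed Python server port
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    if (connect(fd, (sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        return 1;
    }

    // Newline-delimited JSON gives you message boundaries on a stream socket
    std::string metrics = "{\"nodeId\":3,\"snr\":21.4}\n";
    send(fd, metrics.c_str(), metrics.size(), 0);

    // Blocking read of the reply; inside a running simulation you'd make this non-blocking
    char buf[1024];
    ssize_t n = recv(fd, buf, sizeof(buf) - 1, 0);
    if (n > 0) {
        buf[n] = '\0';
        printf("ML command: %s", buf);
    }
    close(fd);
    return 0;
}
```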

Named pipes worked great for my similar setup - perfect for low-latency communication between OMNeT++ and external apps. Create a pair of FIFOs on Unix, one per direction, and have each process write to one and read from the other. Way less overhead than network sockets since it's all local. You'll need custom OMNeT++ modules that dump simulation data to the pipe at set intervals or when specific events happen. Python can watch its pipe continuously for incoming data and send responses back through the other one. Watch out for synchronization though: don't let your simulation hang waiting for Python if the ML processing takes too long.
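A rough sketch of the two-FIFO idea from the simulator side, with a non-blocking read so the simulation can't hang on a slow ML step. The paths /tmp/sim_to_ml and /tmp/ml_to_sim and the payload are made-up placeholders:

```cpp
// FIFO sketch from the simulator side; pipe paths and payload are placeholders.
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstring>
#include <cstdio>

int main()
{
    // One pipe per direction; the Python process opens the same two paths
    mkfifo("/tmp/sim_to_ml", 0666);
    mkfifo("/tmp/ml_to_sim", 0666);

    // open(O_WRONLY) blocks until Python attaches a reader, so start Python first;
    // O_NONBLOCK on the read side keeps the simulation from hanging on a slow model
    int outFd = open("/tmp/sim_to_ml", O_WRONLY);
    int inFd  = open("/tmp/ml_to_sim", O_RDONLY | O_NONBLOCK);

    const char *metrics = "{\"nodeId\":3,\"x\":12.5,\"y\":40.2}\n";
    write(outFd, metrics, strlen(metrics));

    // Non-blocking read: returns -1 (EAGAIN) or 0 if Python hasn't written anything yet
    char buf[512];
    ssize_t n = read(inFd, buf, sizeof(buf) - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("ML command: %s", buf);
    }

    close(outFd);
    close(inFd);
    return 0;
}
```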