Real-time data exchange between OMNeT++ simulation and Python-based machine learning model

I’m working with an OMNeT++ simulator (version 4.6) that uses the INET framework. I need to establish a connection between my running simulation and a Python script that contains a machine learning algorithm.

Basically, my simulation collects network metrics like signal-to-noise ratios and mobile device positions. This information needs to be passed to my Python ML model while the simulation is still running. The Python script processes this data to train the model and then sends back control commands to optimize network performance.

What’s the best approach to set up this bidirectional communication? I need both programs to exchange data continuously during execution without stopping either process.

I had the same issue when connecting OMNeT++ to external analytics tools. Named pipes worked great for me - they give you reliable two-way communication without network protocol overhead. Just create a named pipe in your simulation with standard C++ file operations, then have your Python script read/write to the same pipe using os.open(). Both processes can run independently while staying synced. Watch out for blocking operations though - I’d set timeouts on both ends so you don’t get deadlocks when one process is waiting for data that never comes.
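In case it helps, here's a minimal sketch of the Python end of that setup. It assumes the simulation created two FIFOs, one per direction (a single FIFO is awkward for two-way traffic); the paths and timeout value are illustrative, not from your setup:

```python
import os
import select

SIM_TO_PY = "/tmp/sim_to_py"   # simulation writes metrics here (assumed path)
PY_TO_SIM = "/tmp/py_to_sim"   # Python writes commands here (assumed path)
TIMEOUT_S = 5.0                # avoid deadlock when the peer stalls

def read_metrics_with_timeout(path=SIM_TO_PY, timeout=TIMEOUT_S):
    """Read one chunk of metrics from the FIFO, or None on timeout."""
    # O_NONBLOCK so open() doesn't hang waiting for a writer to appear
    fd = os.open(path, os.O_RDONLY | os.O_NONBLOCK)
    try:
        # select() gives us the timeout the answer recommends
        ready, _, _ = select.select([fd], [], [], timeout)
        if not ready:
            return None            # peer sent nothing in time - no deadlock
        return os.read(fd, 4096)   # raw bytes; parse per your own format
    finally:
        os.close(fd)
```

The write direction mirrors this with `os.O_WRONLY | os.O_NONBLOCK` on the second FIFO; note that opening a FIFO write-only with no reader attached raises `ENXIO`, so the simulation side should open its read end first.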

Totally! Sockets are the way to go - I've done this before with OMNeT++ and Python too. Make your Python script a TCP server, and on the OMNeT++ side use cSocketRTScheduler (from the bundled sockets sample) for the data exchange. Just run the socket I/O in a separate thread to keep the simulation smooth.
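To make that concrete, here's a minimal sketch of the Python-side TCP server. The port number and the newline-delimited JSON framing are assumptions - match them to whatever your OMNeT++ module actually sends:

```python
import json
import socket
import threading

HOST, PORT = "127.0.0.1", 9999   # illustrative values, not from the post

def handle_sim(conn: socket.socket) -> None:
    """Read newline-delimited JSON metrics, reply with a command per line."""
    with conn, conn.makefile("r") as reader:
        for line in reader:
            metrics = json.loads(line)
            # ... feed `metrics` to the ML model here ...
            command = {"tx_power": 0.8}      # placeholder decision
            conn.sendall((json.dumps(command) + "\n").encode())

def serve() -> None:
    with socket.create_server((HOST, PORT)) as srv:
        while True:
            conn, _ = srv.accept()
            # one thread per simulation connection keeps accept() responsive
            threading.Thread(target=handle_sim, args=(conn,),
                             daemon=True).start()
```

Line-delimited JSON is a pragmatic framing choice here: TCP is a byte stream, so without some delimiter or length prefix the two sides can't tell where one metrics record ends and the next begins.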

Shared memory works great for this setup. I used it when OMNeT++ was feeding data to multiple Python processes running different ML tasks.

Performance beats sockets or pipes since you’re not copying data around. OMNeT++ writes to a memory segment, Python reads it directly with mmap or multiprocessing.shared_memory.

For bidirectional communication, I made two shared memory blocks - one for sim data going to Python, another for commands coming back. Threw in some semaphores so both processes know when fresh data’s ready.

Trickiest part is the memory layout since C++ and Python both need to read the same data structure. I went with a simple header containing data size and timestamp, then the actual metrics in fixed format.
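For reference, roughly how that header-plus-payload layout looks from the Python side. The field choices and the segment name are illustrative only; the C++ side would pack the identical byte layout (same endianness, no compiler padding):

```python
import struct
import time
from multiprocessing import shared_memory

# header: uint32 payload size + double timestamp; payload: fixed metrics.
# "<" forces little-endian with no padding, so C++ and Python agree.
HEADER = struct.Struct("<Id")
METRICS = struct.Struct("<dd")       # e.g. SNR (dB) and node position (m)

def write_metrics(shm, snr, position):
    """Pack the metrics, then publish size + timestamp in the header."""
    payload = METRICS.pack(snr, position)
    HEADER.pack_into(shm.buf, 0, len(payload), time.time())
    shm.buf[HEADER.size:HEADER.size + len(payload)] = payload

def read_metrics(shm):
    """Unpack the header, then the fixed-format metrics after it."""
    size, ts = HEADER.unpack_from(shm.buf, 0)
    snr, position = METRICS.unpack_from(shm.buf, HEADER.size)
    return ts, snr, position

# segment name is an assumption; the C++ side opens the same name
shm = shared_memory.SharedMemory(create=True, name="sim_metrics",
                                 size=HEADER.size + METRICS.size)
```

The timestamp doubles as a cheap freshness check: the reader can skip a frame whose timestamp hasn't changed since the last read, on top of whatever semaphore signalling you use.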

Watch out though - if your ML model’s slow, buffer the simulation data so OMNeT++ doesn’t sit there waiting for Python to catch up.
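One simple way to do that buffering on the Python side is a bounded queue between the thread draining the IPC channel and the ML loop. The drop-oldest policy here is just one choice - block instead if every sample matters:

```python
import queue

buf = queue.Queue(maxsize=1000)   # bounded so memory use stays flat

def enqueue_sample(sample):
    """Called by the reader thread; drops the oldest sample when full."""
    try:
        buf.put_nowait(sample)
    except queue.Full:
        buf.get_nowait()          # discard the stalest entry
        buf.put_nowait(sample)

def next_sample(timeout=1.0):
    """Called by the ML training loop at its own pace."""
    try:
        return buf.get(timeout=timeout)
    except queue.Empty:
        return None               # no fresh data yet
```

With this in place OMNeT++ only ever waits on a quick queue insert, never on a full training step.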

ZeroMQ message queues work perfectly for this. I built a custom OMNeT++ module that handles messaging in the background - it doesn't block simulation events at all. ZeroMQ manages connections automatically and recovers when processes crash.

You've got different patterns to choose from, too. I started with request-reply for basic commands, then moved to pub-sub when my ML model needed to push updates to multiple sim nodes.

Setup's easy - just link the C++ library (libzmq) in OMNeT++ and pip install pyzmq on the Python side. Since it's async, your simulation timing stays spot-on even when ML processing runs long. Way cleaner than wrestling with raw sockets or shared memory corruption.
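For anyone wanting a starting point, here's a rough sketch of that request-reply pattern on the Python side. The endpoint and JSON message format are assumptions; the OMNeT++ module would connect a REQ socket to the same endpoint through libzmq:

```python
import json
import zmq

ENDPOINT = "tcp://127.0.0.1:5555"    # illustrative endpoint

def serve_forever():
    """REP socket: strictly receive one request, then send one reply."""
    ctx = zmq.Context.instance()
    sock = ctx.socket(zmq.REP)
    sock.bind(ENDPOINT)
    while True:
        metrics = json.loads(sock.recv())   # blocking recv from the sim
        # ... update the ML model with `metrics` here ...
        command = {"handover": False}       # placeholder decision
        sock.send_string(json.dumps(command))
```

Moving to pub-sub later is mostly a socket-type swap (`zmq.PUB`/`zmq.SUB` plus a topic filter on the subscriber); the serialization code stays the same.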