Runtime data exchange between OMNeT++ simulation and Python-based machine learning model

I’m working with an OMNeT++ simulation (version 4.6) that uses the INET framework. My project requires real-time communication between the simulator and a Python ML model while both are running.

The simulator needs to continuously send network data to the Python agent, including signal-to-noise ratio measurements and node positions. The Python model processes this information and trains itself dynamically.

After processing, the Python agent must send control commands back to OMNeT++ to optimize network performance. Both applications need to exchange this data seamlessly during execution.

What’s the best approach to establish this bidirectional communication between these two separate processes? I need a solution that works efficiently during runtime without interrupting either the simulation or the ML training process.

Named pipes worked perfectly for a similar project of mine last year, and they're far simpler than sockets or message queues when both processes run locally on the same machine. Set up one FIFO per direction: one for OMNeT++ to Python, another for Python back to OMNeT++. Your C++ simulation writes serialized data straight to the pipe, and Python reads it using standard file operations. No networking libraries required.

Pipes also buffer for you, up to the OS pipe buffer (typically 64 KB on Linux), which is huge. If your ML model gets busy training, simulation data queues up without blocking OMNeT++ until that buffer fills. Same thing the other way around.

I used JSON for everything since both sides handle it easily. For SNR measurements and positions, JSON keeps things readable and makes debugging way easier. Performance was solid even with high-frequency data exchange.

One gotcha: handle pipe breaks gracefully. If either process crashes, the other needs to detect it (EOF or SIGPIPE) and reconnect or fail cleanly. Otherwise you'll get weird hanging that's a pain to debug.
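To make the FIFO idea concrete, here's a minimal Python sketch of the agent side: newline-delimited JSON over a named pipe. The field names (`node`, `snr_db`, `pos`) are made up for illustration, and a thread stands in for the OMNeT++ writer so the example is self-contained.

```python
import json
import os
import tempfile
import threading

def write_measurements(path, records):
    """Stand-in for the OMNeT++ side: one JSON object per line."""
    with open(path, "w") as fifo:  # blocks until a reader opens the FIFO
        for rec in records:
            fifo.write(json.dumps(rec) + "\n")

def read_measurements(path):
    """Python agent side: read newline-delimited JSON until the writer closes."""
    results = []
    with open(path) as fifo:  # EOF (writer closed) ends the loop cleanly
        for line in fifo:
            results.append(json.loads(line))
    return results

# Demo: in a real setup the two ends live in separate processes.
fifo_path = os.path.join(tempfile.mkdtemp(), "omnet_to_py")
os.mkfifo(fifo_path)

samples = [{"node": 3, "snr_db": 17.2, "pos": [120.0, 45.5]}]
writer = threading.Thread(target=write_measurements, args=(fifo_path, samples))
writer.start()
received = read_measurements(fifo_path)
writer.join()
```

Reading until EOF is also how you detect the "pipe break" case mentioned above: if OMNeT++ dies, the read loop ends and you can decide whether to reopen the FIFO or shut down.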

I’ve done this exact thing - connecting OMNeT++ simulations to ML pipelines in real time. ZeroMQ was a game changer for us.

Skip the heavy message brokers. ZeroMQ handles IPC without the bloat. Set up publisher-subscriber: OMNeT++ publishes network data, Python subscribes. Create another socket pair for the return path.

My setup:

Write a simple C++ wrapper in OMNeT++ that uses zmq to publish data. Python just needs pyzmq to receive and send commands back.

Crucial part - make everything non-blocking. Use ZMQ_DONTWAIT (the newer name for ZMQ_NOBLOCK) so neither process hangs waiting. Your simulation timing won't get messed up.
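Here's a minimal pyzmq sketch of that non-blocking publisher/subscriber pattern. The `inproc://` endpoint, the message fields, the short sleep, and the 500 ms poll timeout are demo conveniences, not from the original setup; a real deployment would use `tcp://` or `ipc://` endpoints with the C++ side publishing via libzmq.

```python
import time
import zmq  # pip install pyzmq

ctx = zmq.Context.instance()

# OMNeT++ side would PUB; the Python agent SUBs.
pub = ctx.socket(zmq.PUB)
pub.bind("inproc://omnet-data")

sub = ctx.socket(zmq.SUB)
sub.connect("inproc://omnet-data")
sub.setsockopt_string(zmq.SUBSCRIBE, "")  # subscribe to everything

time.sleep(0.1)  # slow-joiner guard: let the subscription propagate

pub.send_json({"node": 7, "snr_db": 21.4})

# Non-blocking receive: poll with a timeout instead of hanging, so the
# ML loop keeps running between simulation updates.
msg = None
poller = zmq.Poller()
poller.register(sub, zmq.POLLIN)
if poller.poll(timeout=500):  # milliseconds
    msg = sub.recv_json(flags=zmq.NOBLOCK)

pub.close(0)
sub.close(0)
ctx.term()
```

The poller-with-timeout approach is usually nicer than a raw `NOBLOCK` recv in a tight loop, since it yields the CPU while waiting.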

Batch your small updates. Don’t fire off every single measurement instantly. We collected them for ~100ms then sent as one batch. Much better performance.
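The batching logic is simple enough to sketch in a few lines. This is a generic accumulator, not the original code; the `send` callable would wrap whatever transport you pick (ZeroMQ, a pipe, a socket), and the 100 ms interval matches the figure mentioned above.

```python
import json
import time

class MeasurementBatcher:
    """Collects per-event measurements and flushes them as one JSON
    message roughly every `interval` seconds (~100 ms worked well above)."""

    def __init__(self, send, interval=0.1):
        self.send = send          # callable that ships one serialized batch
        self.interval = interval
        self.buffer = []
        self.last_flush = time.monotonic()

    def add(self, measurement):
        self.buffer.append(measurement)
        if time.monotonic() - self.last_flush >= self.interval:
            self.flush()

    def flush(self):
        if self.buffer:
            self.send(json.dumps(self.buffer))
            self.buffer = []
        self.last_flush = time.monotonic()

# Demo with a stand-in transport that just records the batches.
sent = []
batcher = MeasurementBatcher(sent.append, interval=0.05)
for i in range(3):
    batcher.add({"node": i, "snr_db": 15.0 + i})
batcher.flush()  # force out whatever is left at shutdown
```

Calling `flush()` explicitly at simulation end matters; otherwise the tail of the run sits in the buffer forever.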

Watch out for one thing - your Python model needs to handle irregular message timing. The simulation will sometimes dump data in bursts depending on what’s happening in the scenario.

Happy to drop some code snippets if you want. This scaled great even with our complex network sims.

TCP sockets are probably easier than ZeroMQ here. I built something similar with INET using basic socket communication: Python runs a server, OMNeT++ connects as a client. It works well with no extra dependencies. Just run the network I/O in separate threads so your simulation doesn't freeze up.
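A minimal sketch of the Python-server side of that arrangement, again with newline-delimited JSON and illustrative field names. The client half below stands in for the OMNeT++ module, which would do the equivalent with BSD sockets in C++; the accept loop runs in its own thread, as suggested.

```python
import json
import socket
import threading

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

received = []

def serve():
    # Agent side: accept the simulation's connection and read
    # newline-delimited JSON in a background thread.
    conn, _ = srv.accept()
    with conn, conn.makefile("r") as stream:
        for line in stream:
            received.append(json.loads(line))

server_thread = threading.Thread(target=serve)
server_thread.start()

# Stand-in for the OMNeT++ client module.
client = socket.create_connection(("127.0.0.1", port))
client.sendall((json.dumps({"node": 1, "snr_db": 19.8}) + "\n").encode())
client.close()

server_thread.join()
srv.close()
```

The newline framing matters with raw TCP: unlike ZeroMQ, plain sockets give you a byte stream, so you need some delimiter or length prefix to recover message boundaries.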

This topic was automatically closed 4 days after the last reply. New replies are no longer allowed.