I’m working on a network simulation project using OMNeT++ version 4.6 with the INET framework. My challenge is establishing real-time communication between the simulation and a Python-based machine learning algorithm.
The simulation generates network performance metrics like signal-to-noise ratio values for different connections and position coordinates of moving devices. This information needs to be passed to a Python script that contains a learning model.
The Python component processes this data to train its model dynamically and then produces optimization commands to maintain optimal signal quality across the network. These commands must be sent back to the OMNeT++ simulation while it’s running.
What approaches can I use to enable this bidirectional data exchange between the two running processes? I need a solution that works during simulation execution without interrupting the flow.
I did something similar two years ago and went with ZeroMQ message queues - worked perfectly. The big win is that ZeroMQ can run fully asynchronously, so your simulation never gets stuck waiting for Python responses. My setup had OMNeT++ pushing metrics data over a PUSH/PULL socket pair (plain REQ-REP also works, but it blocks until each reply arrives, which defeats the async benefit). Python grabbed these messages, ran them through the ML model, then sent optimization commands back on a different channel. Since everything was decoupled, the simulation kept running smoothly even when ML processing times varied. Setup’s pretty simple - link libzmq (or the header-only cppzmq bindings) into your OMNeT++ modules and use pyzmq for Python. Way more reliable than raw TCP connections since ZMQ handles reconnects and message queuing automatically. You can easily add more Python workers later if ML processing becomes a bottleneck. Just watch out for message serialization overhead, though with network metrics and position coordinates it wasn’t noticeable in my case.
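A minimal sketch of the Python side under that pattern - the endpoint names, JSON fields, and the fixed `tx_power` command are all illustrative, and `inproc://` is used only so the sketch can demo itself in one process (a real setup would bind `tcp://` endpoints that the OMNeT++ module connects to):

```python
# Minimal Python side: PULL metrics in, PUSH commands out (pyzmq).
import json
import zmq

def handle_metrics(pull_sock, push_sock):
    """One round trip: read a metrics message, answer with a command (ML step stubbed out)."""
    metrics = json.loads(pull_sock.recv())
    command = {"node": metrics["node"], "tx_power": 1.0}  # stand-in for the model's output
    push_sock.send_string(json.dumps(command))

ctx = zmq.Context()

# A real setup binds these on tcp:// endpoints; inproc:// keeps the demo single-process.
metrics_in = ctx.socket(zmq.PULL)
metrics_in.bind("inproc://metrics")
cmd_out = ctx.socket(zmq.PUSH)
cmd_out.bind("inproc://commands")

sim = ctx.socket(zmq.PUSH)        # stands in for the OMNeT++ side
sim.connect("inproc://metrics")
sim_cmds = ctx.socket(zmq.PULL)
sim_cmds.connect("inproc://commands")

sim.send_string(json.dumps({"node": 3, "snr_db": 12.5, "pos": [10.0, 4.2]}))
handle_metrics(metrics_in, cmd_out)
print(json.loads(sim_cmds.recv()))  # the command the simulation would receive
```

Because the metrics and command channels are separate sockets, neither side ever blocks waiting for the other, which is the decoupling the answer describes.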
I went with Redis streams as a message broker. Works great because Redis handles queuing and persistence out of the box - if your Python ML component crashes, you won’t lose simulation data. OMNeT++ pushes metrics to Redis streams via the C++ client, Python pulls them with redis-py. Redis handles backpressure really well too. When ML processing gets slow, messages just pile up without messing with simulation timing. I set up separate streams for metrics and command responses to keep things clean. JSON serialization overhead was pretty minimal for what you get. Only issue I hit was memory usage with big position datasets, but capping the stream length with MAXLEN trimming fixed that. Performance stayed solid even with heavy ML models since Redis runs independently and buffers everything smoothly.
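A sketch of how the data could be shaped for this - stream names and field names are assumptions, not a fixed protocol. Redis stream fields are flat strings, so nested data like coordinates needs its own serialization step; the commented part shows roughly what the consumer loop looks like against a live server:

```python
import json

METRICS_STREAM = "sim:metrics"     # stream names are illustrative
COMMANDS_STREAM = "sim:commands"

def encode_metrics(node, snr_db, pos):
    """Build the flat string field map that XADD expects."""
    return {"node": str(node), "snr_db": str(snr_db), "pos": json.dumps(pos)}

def decode_metrics(fields):
    """Reverse of encode_metrics, for the Python consumer side."""
    return {"node": int(fields["node"]),
            "snr_db": float(fields["snr_db"]),
            "pos": json.loads(fields["pos"])}

# Against a live server (assumed at localhost:6379) the consumer loop would be roughly:
# import redis
# r = redis.Redis(decode_responses=True)
# r.xadd(METRICS_STREAM, encode_metrics(3, 12.5, [10.0, 4.2]), maxlen=10_000)
# for _stream, entries in r.xread({METRICS_STREAM: "$"}, block=0):
#     for _entry_id, fields in entries:
#         m = decode_metrics(fields)            # run the ML model on m here
#         r.xadd(COMMANDS_STREAM, {"node": fields["node"], "tx_power": "1.0"})
```

The `maxlen` argument to `xadd` is what keeps memory bounded when position datasets get large.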
shared memory’s probably overkill here. I’d go with unix domain sockets - they’re much faster than TCP but still easy to set up. just create a socket in your omnet++ module, have python connect to it, and you’re good to go. no need for external stuff like zmq or redis. i’ve used this for real-time data feeds and it handles bidirectional communication without blocking the simulation thread.
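a rough sketch of what that looks like from python - the socket path and newline-delimited JSON framing are made up for the example, and the thread here just stands in for the separately running processes (the omnet++ module would do the same connect/write/read from C++):

```python
import json
import os
import socket
import tempfile
import threading

SOCK_PATH = os.path.join(tempfile.gettempdir(), "omnet_ml.sock")  # made-up path

def serve_one(listener):
    """accept one connection, answer each newline-delimited JSON metrics line."""
    conn, _ = listener.accept()
    rfile, wfile = conn.makefile("rb"), conn.makefile("wb")
    for line in rfile:
        metrics = json.loads(line)
        command = {"node": metrics["node"], "tx_power": 1.0}  # ML step stubbed out
        wfile.write((json.dumps(command) + "\n").encode())
        wfile.flush()

if os.path.exists(SOCK_PATH):
    os.unlink(SOCK_PATH)
listener = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
listener.bind(SOCK_PATH)
listener.listen(1)
threading.Thread(target=serve_one, args=(listener,), daemon=True).start()

# this part is what the omnet++ side does: connect, write a line, read a line
client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
client.connect(SOCK_PATH)
client_r = client.makefile("rb")
client.sendall((json.dumps({"node": 5, "snr_db": 9.1}) + "\n").encode())
print(json.loads(client_r.readline()))
```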
Memory-mapped files with simple locking work best for this.
Set up a shared memory region both processes can access. OMNeT++ writes metrics to specific offsets, Python reads from those spots and writes optimization commands to different offsets.
I use basic flags - OMNeT++ sets a flag when new data’s ready. Python checks the flag, processes data, writes results, and sets its own completion flag. Make sure the flag writes are atomic (or protected by a lock/memory barrier) so the reader never sees a half-written record. No network overhead or serialization delays.
For signal-to-noise ratios and position coordinates, define a simple struct layout in shared memory. Fixed-size arrays work great for this data.
Keep your data structures simple and don’t use dynamic allocation in the shared region. I reserve separate blocks - one for metrics, one for position data, one for command responses.
This beats sockets for consistent timing since there’s no network stack. Just handle cases where Python’s slower than your simulation - add basic buffering so OMNeT++ doesn’t get stuck waiting.
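A sketch of the Python side of that layout - the file path, offsets, and struct format are assumptions that must mirror whatever struct the OMNeT++ module defines, the demo plays both roles in one process, and (as noted above) real cross-process use needs atomic flag writes:

```python
import mmap
import os
import struct
import tempfile

PATH = os.path.join(tempfile.gettempdir(), "omnet_shm.bin")  # illustrative backing file
SIZE = 128
METRICS_FMT = "<iddd"   # node id, snr_db, x, y -- must match the C++ struct exactly
CMD_FMT = "<id"         # node id, tx power
M_FLAG, M_DATA = 0, 1   # metrics block: flag byte, then packed struct
C_FLAG, C_DATA = 64, 65 # command block at a separate offset

def open_region():
    """Create and map the fixed-size shared region."""
    with open(PATH, "wb") as f:
        f.write(b"\x00" * SIZE)
    f = open(PATH, "r+b")
    return f, mmap.mmap(f.fileno(), SIZE)

def poll_and_respond(mm):
    """Python side: if OMNeT++ raised the metrics flag, consume it and write a command."""
    if mm[M_FLAG] == 0:
        return None
    node, snr_db, x, y = struct.unpack_from(METRICS_FMT, mm, M_DATA)
    mm[M_FLAG] = 0                                    # consume the metrics record
    struct.pack_into(CMD_FMT, mm, C_DATA, node, 1.0)  # stand-in for the model's output
    mm[C_FLAG] = 1                                    # signal OMNeT++ a command is ready
    return node, snr_db, (x, y)

f, mm = open_region()
# pretend to be the OMNeT++ writer for the demo:
struct.pack_into(METRICS_FMT, mm, M_DATA, 3, 12.5, 10.0, 4.2)
mm[M_FLAG] = 1
print(poll_and_respond(mm))  # → (3, 12.5, (10.0, 4.2))
```

A file-backed mmap is the portable version of this; on Linux the same code works over `/dev/shm` or POSIX shared memory for a true in-RAM region.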
I’ve hit this exact problem with production systems. Here are three solid approaches:
Named Pipes (FIFOs) - Use a pair of named pipes, one per direction, since a FIFO only carries data one way. OMNeT++ dumps metrics into one, Python reads them, runs your ML model, then writes commands back through the other. Fast and no external dependencies.
TCP Sockets - Python runs a simple TCP server, OMNeT++ connects as client and sends JSON data. Python responds with optimization commands on the same connection. I use this all the time for real-time exchanges.
Shared Memory with Semaphores - Best for high-frequency data. Both processes hit the same memory region with proper sync. Cuts out serialization overhead completely.
For network metrics and position data, go with TCP sockets. They’re reliable, easy to debug, and JSON handles your mixed data types without hassle. Just use a simple protocol where each message has a type field - metrics data or optimization commands.
Keep the protocol simple and don’t block operations in your OMNeT++ simulation thread.
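Putting the last two points together, a minimal sketch of the Python TCP server side - the port handling, field names, and `type` values are illustrative, and the client calls at the bottom (here in Python for the demo) are exactly what the OMNeT++ module would do from C++:

```python
import json
import socket
import threading

HOST, PORT = "127.0.0.1", 0   # port 0 = let the OS pick; a real setup would fix a port

def handle(conn):
    """Answer each newline-delimited JSON message according to its "type" field."""
    rfile, wfile = conn.makefile("rb"), conn.makefile("wb")
    for line in rfile:
        msg = json.loads(line)
        if msg["type"] == "metrics":
            reply = {"type": "command", "node": msg["node"], "tx_power": 1.0}  # ML stubbed
        else:
            reply = {"type": "error", "detail": "unknown message type"}
        wfile.write((json.dumps(reply) + "\n").encode())
        wfile.flush()

server = socket.create_server((HOST, PORT))
port = server.getsockname()[1]
threading.Thread(target=lambda: handle(server.accept()[0]), daemon=True).start()

# the OMNeT++ client side: connect once, then write a line / read a line per exchange
client = socket.create_connection((HOST, port))
client_r = client.makefile("rb")
client.sendall((json.dumps({"type": "metrics", "node": 4, "snr_db": 7.3}) + "\n").encode())
print(json.loads(client_r.readline()))
```

Newline-delimited JSON keeps the framing trivial on the C++ side too - read until `\n`, parse, act on the `type` field.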