I’m working on a project where I need to establish real-time communication between my OMNeT++ simulator (version 4.6) and a Python ML algorithm. My setup uses the INET framework for network simulation features.
The workflow I’m trying to achieve is:
1. My OMNeT++ simulation calculates network metrics, such as signal-to-noise ratio (SNR) averages, and tracks mobile device positions.
2. This information needs to be passed to a Python-based learning algorithm that processes the data for online training.
3. The Python script then computes optimization commands to maximize SNR performance.
4. These control signals must be fed back into the running OMNeT++ simulation.
What’s the best way to establish this bidirectional communication between the two separate processes while both are running? I need the data exchange to happen continuously during simulation runtime rather than as a batch process.
I hit this same problem last year and solved it with Redis message queues. Way cleaner than pipes or direct sockets. Your OMNeT++ sim pushes SNR and position data onto Redis queues, and Python subscribes for real-time processing. The big win is decoupling: if your ML algorithm crashes, OMNeT++ keeps running, and Redis handles the buffering automatically.

For feedback, I used a separate queue where Python publishes optimization commands and OMNeT++ polls for updates. Since you're doing online training, Redis persistence also helps with restarts. The game-changer was adding a heartbeat mechanism: both processes send periodic status messages so you know when something breaks.

Performance was solid for my network topology sims - it handled hundreds of updates per second without trouble. Just make sure your Redis instance has enough memory for your data throughput.
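A minimal sketch of the Python consumer under this scheme, assuming the redis-py client; the key names and run_training_step are illustrative placeholders, not from the original setup:

```python
import json
import redis

def run_training_step(sample):
    # Placeholder for your online-learning update; returns a command dict.
    return {"tx_power_dbm": 20.0}

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

while True:
    # Block until the simulation LPUSHes a new metrics sample.
    _, raw = r.brpop("metrics")            # e.g. {"snr": 17.2, "pos": [x, y]}
    sample = json.loads(raw)

    command = run_training_step(sample)

    # Publish the optimization command on a second queue; the OMNeT++
    # side polls this key between simulation events.
    r.lpush("commands", json.dumps(command))
```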
I’d go with protobuf serialization. I used it for an OMNeT++ and Python integration, and it was way faster than JSON for network data. Protocol Buffers handle the binary format efficiently, and both languages support them well. You define your message schema once (SNR values, device positions, control commands), then generate code for both sides. Paired with UDP sockets for low latency, this setup beat Redis for real-time ML feedback loops.
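A minimal sketch of the sending side, assuming a schema along these lines compiled with protoc --python_out into a metrics_pb2 module (the message and field names are illustrative):

```python
# Assumed schema, compiled with: protoc --python_out=. metrics.proto
#
#   syntax = "proto3";
#   message MetricsUpdate {
#     double snr   = 1;
#     double pos_x = 2;
#     double pos_y = 3;
#   }
import socket

import metrics_pb2  # generated module; name follows the assumed .proto above

msg = metrics_pb2.MetricsUpdate(snr=17.2, pos_x=120.5, pos_y=43.0)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(msg.SerializeToString(), ("127.0.0.1", 9999))

# On the receiving side, parse the datagram back into a message:
#   update = metrics_pb2.MetricsUpdate()
#   update.ParseFromString(datagram)
```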
I had a similar issue when connecting OMNeT++ with external algorithms. Socket-based communication worked best for me: set up a TCP socket server in Python and have OMNeT++ connect as a client to send data periodically.

The tricky part is timing synchronization. OMNeT++ uses simulation time while Python runs in real time, so you have to manage when the data exchanges happen. I created a custom OMNeT++ module that handles the socket communication and buffers control commands until the right simulation events occur.

For SNR optimization, try a simple request-response setup: have OMNeT++ send the current metrics to Python at certain checkpoints, wait for the optimization response, then continue with the updated parameters. This avoids the timing issues that would otherwise skew your results.

Watch out for Python processing time, though - if your ML algorithm takes too long, you'll introduce unrealistic delays into the network simulation. I used threading on the Python side to handle this better.
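A minimal sketch of the Python server for that request-response loop, assuming newline-delimited JSON framing (the framing choice and optimize() are illustrative, not from the original setup):

```python
import json
import socketserver

def optimize(metrics):
    # Placeholder for the ML step; returns an optimization command.
    return {"tx_power_dbm": 20.0}

class OptimizerHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # One OMNeT++ checkpoint per line: read metrics, reply with a command.
        for line in self.rfile:
            metrics = json.loads(line)
            reply = json.dumps(optimize(metrics)) + "\n"
            self.wfile.write(reply.encode())

# Threading server so a slow ML step on one connection can't block others.
with socketserver.ThreadingTCPServer(("127.0.0.1", 5555), OptimizerHandler) as srv:
    srv.serve_forever()
```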
For real-time bidirectional communication between OMNeT++ and Python, I’d go with named pipes or shared memory depending on your performance needs.
I’ve dealt with similar integration stuff before. Named pipes work well for most cases - they’re simple to implement on both sides. Create a FIFO pipe where OMNeT++ writes network metrics and Python reads them, then use a second pipe for feedback.
Here’s what worked for me (a Python-side sketch follows the list):

1. Set up two named pipes, one for each direction.
2. In your OMNeT++ simulation, add write operations where you calculate the SNR and position data.
3. Run your Python script in a separate process, continuously reading from pipe 1 and writing control commands to pipe 2.
4. Have OMNeT++ read the control signals and apply them to your simulation parameters.
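A minimal sketch of step 3 on the Python side, assuming one JSON record per line; the pipe paths and run_training_step are illustrative placeholders:

```python
import json
import os

def run_training_step(sample):
    # Placeholder for your online-learning update.
    return {"tx_power_dbm": 20.0}

# Create the FIFOs once if they don't exist yet.
for path in ("/tmp/omnet_to_py", "/tmp/py_to_omnet"):
    if not os.path.exists(path):
        os.mkfifo(path)

# open() on a FIFO blocks until the peer opens the other end, so both
# processes must open the pipes in a compatible order to avoid deadlock.
with open("/tmp/omnet_to_py", "r") as metrics_pipe, \
     open("/tmp/py_to_omnet", "w") as control_pipe:
    for line in metrics_pipe:                 # one JSON record per line
        command = run_training_step(json.loads(line))
        control_pipe.write(json.dumps(command) + "\n")
        control_pipe.flush()                  # deliver the command immediately
```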
If you need higher throughput, shared memory with semaphores performs better but it’s more complex. For network simulation data exchange, named pipes are usually fast enough.
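If you do try the shared-memory route, here is a rough sketch of the Python side under some assumptions of mine: the OMNeT++ module maps the same file, writes a fixed little-endian layout, and bumps a sequence counter after each sample. A plain counter poll stands in for a real semaphore here, and the file path and layout are illustrative:

```python
import mmap
import struct
import time

LAYOUT = "<Qddd"                  # uint64 seq counter, snr, pos_x, pos_y
SIZE = struct.calcsize(LAYOUT)

def process(snr, x, y):
    # Placeholder for the ML hook.
    print(snr, x, y)

with open("/dev/shm/omnet_metrics", "r+b") as f:
    shm = mmap.mmap(f.fileno(), SIZE)
    last_seq = 0
    while True:
        seq, snr, x, y = struct.unpack_from(LAYOUT, shm, 0)
        if seq != last_seq:       # the simulator bumped the counter
            last_seq = seq
            process(snr, x, y)
        time.sleep(0.001)         # crude polling; a real semaphore avoids it
```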
Another option is ZeroMQ sockets - they handle messaging cleanly and work across different processes. The pub-sub pattern fits your use case well.
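A pyzmq sketch of that pattern; the endpoints and run_training_step are placeholders, and the OMNeT++ side is assumed to bind a PUB socket for metrics and a PULL socket for commands:

```python
import json
import zmq

def run_training_step(sample):
    # Placeholder for the online-learning update.
    return {"tx_power_dbm": 20.0}

ctx = zmq.Context()

sub = ctx.socket(zmq.SUB)                  # metrics from the simulation
sub.connect("tcp://127.0.0.1:5556")        # OMNeT++ side binds PUB here
sub.setsockopt_string(zmq.SUBSCRIBE, "")   # no topic filtering

push = ctx.socket(zmq.PUSH)                # feedback channel for commands
push.connect("tcp://127.0.0.1:5557")

while True:
    sample = json.loads(sub.recv_string())
    push.send_string(json.dumps(run_training_step(sample)))
```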
Handle blocking operations carefully so your simulation timing stays consistent. You might want non-blocking reads/writes or separate threads for the communication layer.
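On the Python side, one simple pattern is to park the blocking read in a background thread and poll a queue from the main loop (the path and names are illustrative):

```python
import queue
import threading

commands = queue.Queue()

def reader(path="/tmp/py_to_omnet"):
    # The blocking read lives in its own thread, so the main loop never stalls.
    with open(path, "r") as pipe:
        for line in pipe:
            commands.put(line)

threading.Thread(target=reader, daemon=True).start()

# In the main loop / event handler, poll without blocking:
try:
    cmd = commands.get_nowait()   # raises queue.Empty if nothing arrived
except queue.Empty:
    cmd = None                    # no new control signal this iteration
```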