Real-time data exchange between OMNeT++ simulation and Python ML model

I’m working on a network simulation project using OMNeT++ version 4.6 with the INET framework. My setup involves a machine learning model built in Python that needs to interact with the simulation while it’s running.

Here’s what I need to accomplish:

The simulation collects network metrics like signal quality measurements and device positions. This information needs to be passed to my Python ML algorithm for processing. The ML model then calculates optimization parameters to improve network performance and sends these results back to the OMNeT++ simulation.

I’m looking for suggestions on how to establish this bidirectional communication between the two applications during execution. What would be the best approach to handle this real-time data transfer? Are there specific libraries or methods that work well for connecting OMNeT++ simulations with external Python processes?

ZeroMQ’s perfect for this. I’ve used it in production connecting simulations to ML pipelines - handles async messaging like a champ.

Use REQ-REP if you want OMNeT++ to send metrics and block until the optimization parameters come back. PUSH-PULL works better if you don’t want the two processes’ timing coupled. The Python bindings (pyzmq) are rock solid, so there’s no need to deal with raw socket programming.

Create the ZMQ sockets in your OMNeT++ module’s initialize() method (and close them in finish()), then send messages from your metric-collection handlers. Python runs a loop that receives data, feeds it to your ML model, and sends the results back. ZMQ handles connection management and message queuing automatically.
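A minimal sketch of the Python side of that REQ-REP loop, assuming pyzmq. The endpoint name, the JSON message layout, and the dummy tx_power calculation are all illustrative assumptions; the demo uses an inproc endpoint and a thread standing in for the simulation so it runs self-contained, whereas the real OMNeT++ module would connect a REQ socket over tcp:// from initialize().

```python
import json
import threading
import zmq

context = zmq.Context()

# ML-side REP socket. In a real setup this would bind "tcp://*:5555" and the
# OMNeT++ module would connect to it; inproc keeps the sketch self-contained.
rep = context.socket(zmq.REP)
rep.bind("inproc://metrics")

def ml_loop(sock, n_requests):
    for _ in range(n_requests):
        metrics = json.loads(sock.recv())
        # Stand-in for the real ML model: derive a dummy tx power parameter.
        params = {"tx_power": min(20.0, metrics["rssi"] + 100.0)}
        sock.send_string(json.dumps(params))

server = threading.Thread(target=ml_loop, args=(rep, 1))
server.start()

# Stand-in for the OMNeT++ side: send one metrics sample, wait for the reply.
req = context.socket(zmq.REQ)
req.connect("inproc://metrics")
req.send_string(json.dumps({"rssi": -72.5, "pos": [10.0, 4.2]}))
reply = json.loads(req.recv())
server.join()
```

The REQ socket enforces strict send/receive alternation, which matches the "send metrics, wait for parameters" flow; switch to PUSH-PULL if the simulation shouldn’t block on the model.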

Watch out for simulation pause/resume though - learned this the hard way. ZMQ keeps buffering messages while your sim is stopped, so you’ll get stale data on restart; drain the queue on resume, or use the ZMQ_CONFLATE socket option so only the newest message is kept.


Redis could work well here - it’s a solid middleman both apps can access. OMNeT++ pushes metrics as JSON, Python reads them and writes results back. It won’t be the fastest option, but it handles crashes well and you get persistence without extra work.
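A sketch of what the JSON payloads could look like. The key naming scheme and field layout are assumptions for illustration; the actual Redis calls are shown only as comments because they need a running server (redis-py on the Python side, a C client like hiredis on the OMNeT++ side).

```python
import json

def encode_metrics(node_id, rssi, position):
    """Serialize one metrics sample into a Redis (key, value) pair."""
    return f"metrics:{node_id}", json.dumps({"rssi": rssi, "pos": position})

def decode_params(raw):
    """Parse the optimization parameters the ML side wrote back."""
    return json.loads(raw)

# With a server running, the exchange would look like:
#   r = redis.Redis()
#   key, val = encode_metrics(3, -72.5, [10.0, 4.2])
#   r.set(key, val)                             # simulation pushes metrics
#   params = decode_params(r.get("params:3"))   # simulation reads results

key, value = encode_metrics(3, -72.5, [10.0, 4.2])
roundtrip = json.loads(value)
```

Keeping metrics and parameters under separate key prefixes lets either side poll (or use Redis pub/sub) without stepping on the other’s data.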

I did something similar! Plain sockets work, but make sure you keep your timing straight - I ran into trouble because OMNeT++’s simulation clock runs independently of wall-clock time. Running the socket I/O in a separate thread helps, for sure.

I’d try shared memory for this - it’s great when you’re moving data back and forth frequently. I used POSIX shared memory segments with semaphores to sync between my OMNeT++ sim and Python ML pipeline. Way faster than other IPC methods, especially for big datasets like position matrices or signal measurements.

You’ll use shm_open and mmap on the C++ side in your simulation modules, then Python can hit the same memory regions with its mmap module. The tricky bit is making sure processes don’t step on each other when reading/writing, but once you nail the synchronization, data transfer is basically instant.
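A small sketch of the Python end, using multiprocessing.shared_memory (which is built on shm_open on POSIX, so a C++ module can open the same segment by name). The record layout - a sequence counter plus x, y, and RSSI doubles - is an illustrative assumption; the demo does both the write and the read in one process just to show the packing.

```python
import struct
from multiprocessing import shared_memory

# One fixed-layout record: int64 sequence counter, then x, y, rssi doubles.
LAYOUT = "<qddd"

# In practice you'd pass a fixed name (e.g. name="omnet_metrics") matching the
# C++ shm_open() call; it's omitted here so the demo can't collide.
shm = shared_memory.SharedMemory(create=True, size=struct.calcsize(LAYOUT))
try:
    # Writer side (would be the OMNeT++ module): publish one sample.
    struct.pack_into(LAYOUT, shm.buf, 0, 1, 10.0, 4.2, -72.5)

    # Reader side (the ML process): unpack the same layout. A semaphore, as
    # described above, is what prevents torn reads in the real two-process setup.
    seq, x, y, rssi = struct.unpack_from(LAYOUT, shm.buf, 0)
finally:
    shm.close()
    shm.unlink()
```

The sequence counter gives the reader a cheap way to detect a fresh sample without a full handshake on every write.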

Named pipes work great for this kind of setup. Create a FIFO pipe on the filesystem - OMNeT++ writes simulation data to one pipe and reads optimization parameters from another. Way simpler than sockets and no network overhead.
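A sketch of the two-FIFO setup from the Python side (POSIX only, since it uses os.mkfifo). The paths and the JSON payload are illustrative; a thread stands in for the OMNeT++ writer so the example runs self-contained - the real simulation would just fopen/fprintf the same path from C++.

```python
import os
import tempfile
import threading

# Two FIFOs: one per direction, as described above.
d = tempfile.mkdtemp()
metrics_path = os.path.join(d, "metrics.fifo")
params_path = os.path.join(d, "params.fifo")
os.mkfifo(metrics_path)  # OMNeT++ writes metrics here
os.mkfifo(params_path)   # Python writes optimization parameters back here

def fake_simulation():
    # Stand-in for the OMNeT++ side so the sketch is self-contained.
    with open(metrics_path, "w") as f:
        f.write('{"rssi": -72.5}\n')

writer = threading.Thread(target=fake_simulation)
writer.start()
with open(metrics_path) as f:  # open() blocks until the writer opens its end
    line = f.readline()
writer.join()

os.unlink(metrics_path)
os.unlink(params_path)
os.rmdir(d)
```

Note that opening a FIFO blocks until the other side opens it too, so bring both processes up before the first exchange (or open in non-blocking mode and retry).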

I extended my simulation modules with custom message handlers that trigger pipe writes when collecting metrics. On the Python side, I run a separate thread to monitor the pipe and feed data through the ML model. Just make sure you buffer your data properly since pipe operations can block. Also use a simple message format with delimiters to handle partial reads correctly.
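The delimiter-and-buffering point above can be sketched as a small reassembler: newline-delimited JSON framing, with a buffer that only yields complete messages no matter how the pipe chops up the reads. The message shape is an illustrative assumption.

```python
import json

def frame(msg):
    """Encode one message as newline-delimited JSON for the pipe."""
    return (json.dumps(msg) + "\n").encode()

class Reassembler:
    """Buffer raw pipe chunks and yield only complete messages, so the
    partial reads that pipes can return never produce truncated JSON."""
    def __init__(self):
        self._buf = b""

    def feed(self, chunk):
        self._buf += chunk
        while b"\n" in self._buf:
            line, self._buf = self._buf.split(b"\n", 1)
            yield json.loads(line)

# A message arriving split across two reads still parses exactly once.
data = frame({"rssi": -72.5})
r = Reassembler()
msgs = list(r.feed(data[:5])) + list(r.feed(data[5:]))
```

The same reassembler works unchanged if you later swap the FIFO for a socket, since both deliver byte streams with no message boundaries.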