Real-time data exchange between OMNeT++ simulator and Python ML model

I’m working on a network simulation project where I need to couple two programs that run concurrently. My main simulator is built with OMNeT++ 4.6 and uses the INET framework for its networking features.

The challenge is that I need my simulator to talk to a machine learning model written in Python while both are running. Here’s what needs to happen:

  1. The OMNeT++ simulator collects network performance data like signal quality measurements and device positions.
  2. This information gets sent to the Python ML model in real time (a rough sketch of the messages I have in mind is shown after this list).
  3. The Python model processes this data and learns from it continuously.
  4. The model then sends back control commands to optimize network performance.
  5. The simulator receives these commands and applies them immediately.
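
For concreteness, here is a rough sketch (in Python) of the kind of messages I imagine flowing in each direction; the field names, units, and values are placeholders rather than a fixed schema:

```python
# Rough sketch of the two message types; field names and units are
# placeholders, not a finalized schema.
import json

# Telemetry sent from the OMNeT++ side each reporting interval.
telemetry = {
    "sim_time": 12.5,           # simulation time in seconds
    "node_id": 3,               # host/module identifier
    "position": [120.0, 45.0],  # x, y from the mobility model
    "sinr_db": 17.2,            # example signal-quality metric
}

# Control command returned by the Python model.
command = {
    "node_id": 3,
    "tx_power_dbm": 20.0,       # example tunable parameter
}

# Each message would travel as a single JSON string.
print(json.dumps(telemetry))
print(json.dumps(command))
```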

I’m looking for the best way to set up this two-way communication between the OMNeT++ process and the Python process. Both need to exchange data back and forth during runtime without stopping either program. What’s the most reliable approach to implement this kind of inter-process communication?

Hey, have you considered ZeroMQ? It’s lightweight and has solid bindings for both C++ and Python, and in a similar project it kept latency low for me. Just make sure you use asynchronous (non-blocking) calls so neither process stalls while waiting for the other.
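
For reference, here’s roughly what the Python/ML side could look like with pyzmq. It’s just a sketch: the `tcp://*:5555` endpoint, the JSON fields, and `compute_command()` are assumptions for illustration, on the premise that the OMNeT++ module connects a REQ socket and sends one JSON telemetry message per request.

```python
# Sketch of the Python/ML side with pyzmq (pip install pyzmq). Assumes the
# OMNeT++ module connects a REQ socket to tcp://localhost:5555 and sends one
# JSON telemetry message per request; endpoint and field names are made up.
import json
import zmq

context = zmq.Context()
socket = context.socket(zmq.REP)
socket.bind("tcp://*:5555")

poller = zmq.Poller()
poller.register(socket, zmq.POLLIN)

def compute_command(telemetry):
    # Placeholder for the real model update/inference step.
    return {"node_id": telemetry.get("node_id"), "tx_power_dbm": 20.0}

while True:
    # Poll with a short timeout so the loop never blocks indefinitely;
    # the model can keep training between incoming requests.
    events = dict(poller.poll(timeout=10))  # milliseconds
    if socket in events:
        telemetry = json.loads(socket.recv_string())
        command = compute_command(telemetry)
        socket.send_string(json.dumps(command))
    # ...do a background training step here...
```

On the C++ side you’d do the mirror image with libzmq/cppzmq inside your OMNeT++ module (e.g. send from `handleMessage()` and apply the reply when it arrives). REQ/REP keeps the handshake simple; if you need the two sides fully decoupled, DEALER/ROUTER or PUB/SUB plus a separate command channel would avoid the strict request-reply lockstep.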