Runtime integration between OMNeT++ simulation and Python ML model

I’m working on a network simulation project using OMNeT++ version 4.6 with the INET framework. My setup involves a machine learning model developed in Python that needs to interact with the simulation while it’s running.

The workflow is as follows: the OMNeT++ simulation collects network metrics, including signal quality measurements and mobile device positions. These need to be passed to my Python ML model for processing. The Python side then computes optimization parameters to improve network performance and sends them back to the simulation.

What I need help with is establishing this bidirectional communication between the running OMNeT++ process and the Python application. Both need to exchange data continuously during simulation execution. Has anyone implemented something similar, or does anyone know what approach works best for this kind of inter-process communication?

Had the same problem two years back with a cognitive radio simulation that needed real-time ML feedback. I tried a bunch of approaches, and named pipes worked best: noticeably lower latency than the socket-based setups I tried, which mattered for keeping simulation timing accurate.

Set up two named pipes, one for OMNeT++ to send data to Python and another for Python to send optimization parameters back. The OMNeT++ side writes serialized packets containing your metrics; the Python side reads from its input pipe and writes results to the output pipe. Make sure you get the serialization right, though: binary records are far more efficient than text when you're dealing with a continuous stream.

The tricky part is timing synchronization between the simulation and the ML processing. I added a simple acknowledgment mechanism so Python finishes its calculation before the simulation advances to the next step. That keeps the simulation from getting ahead of the ML optimization.
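To make the pipe idea concrete, here's a minimal sketch of what the Python side could look like, assuming two pre-created FIFOs and one fixed-size binary record per measurement. The pipe paths, the record layout, and the `optimize()` placeholder are illustrative assumptions on my part, not something from your setup; your OMNeT++ module would pack the matching struct on the C++ side.

```python
import os
import struct

# Illustrative pipe paths; pick whatever matches your OMNeT++ module (POSIX only).
SIM_TO_PY = "/tmp/omnet_to_py"   # OMNeT++ writes metrics here
PY_TO_SIM = "/tmp/py_to_omnet"   # Python writes optimization parameters here

# Assumed metric record: node id (uint32), x, y position, SINR (3 doubles).
METRICS_FMT = "<Iddd"
METRICS_SIZE = struct.calcsize(METRICS_FMT)
# Assumed result record: node id (uint32), recommended tx power (double).
RESULT_FMT = "<Id"

def read_exact(f, n):
    """Read exactly n bytes; pipes can deliver short reads."""
    buf = b""
    while len(buf) < n:
        chunk = f.read(n - len(buf))
        if not chunk:
            raise EOFError("simulation closed the pipe")
        buf += chunk
    return buf

def optimize(node_id, x, y, sinr):
    """Placeholder for the real ML model: a toy power-control rule."""
    return 20.0 if sinr < 10.0 else 10.0

def main():
    # Create the FIFOs if the simulation hasn't done it already.
    for path in (SIM_TO_PY, PY_TO_SIM):
        if not os.path.exists(path):
            os.mkfifo(path)

    # Each open() on a FIFO blocks until the other end is opened,
    # so keep the open order consistent with the OMNeT++ side.
    with open(SIM_TO_PY, "rb") as metrics_in, open(PY_TO_SIM, "wb") as params_out:
        while True:
            try:
                raw = read_exact(metrics_in, METRICS_SIZE)
            except EOFError:
                break  # simulation finished and closed its end
            node_id, x, y, sinr = struct.unpack(METRICS_FMT, raw)
            tx_power = optimize(node_id, x, y, sinr)
            # Writing the result doubles as the acknowledgment: the OMNeT++
            # module blocks on reading this record before advancing.
            params_out.write(struct.pack(RESULT_FMT, node_id, tx_power))
            params_out.flush()

if __name__ == "__main__":
    main()
```

If the OMNeT++ module does a blocking read on its own input pipe right after writing each metric record, the result record itself acts as the acknowledgment, so the simulation naturally waits for the ML step before moving on.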