How to capture layer outputs in a reinforcement learning framework

I’m working on tracking neural network layer outputs in a deep reinforcement learning setup. Getting the weights and biases was pretty easy, but I’m stuck on capturing the intermediate activations.

I managed to set up monitoring for the network parameters without issues. But when I try to add summary operations for the activations, I keep running into shape problems. Here’s the error I’m getting:

InvalidArgumentError: Shape [-1,84,84,4] has negative dimensions
[[Node: dqn/input_layer = Placeholder[dtype=DT_FLOAT, shape=[?,84,84,4], _device="/job:localhost/replica:0/task:0/gpu:0"]()]]

The issue seems to happen when I try to create summaries for the layer activations. The input placeholder is defined with shape [None, 84, 84, 4] so it can handle variable batch sizes, but something goes wrong with how the tensor shape gets resolved.
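
To make that concrete, here is a minimal sketch of the kind of setup I mean (TF 1.x style; the layer names and sizes are illustrative, not my exact code):

```python
import tensorflow as tf

# Input placeholder with a variable batch dimension (the shape from the error).
input_layer = tf.placeholder(tf.float32, shape=[None, 84, 84, 4], name="input_layer")

# A couple of conv layers, roughly the usual DQN-style architecture.
conv1 = tf.layers.conv2d(input_layer, filters=32, kernel_size=8, strides=4,
                         activation=tf.nn.relu, name="conv1")
conv2 = tf.layers.conv2d(conv1, filters=64, kernel_size=4, strides=2,
                         activation=tf.nn.relu, name="conv2")

# Summaries for the weights and biases work fine:
for var in tf.trainable_variables():
    tf.summary.histogram(var.name.replace(":", "_"), var)

# Adding activation summaries like this is where the error starts appearing:
tf.summary.histogram("conv1_activations", conv1)
tf.summary.histogram("conv2_activations", conv2)

merged = tf.summary.merge_all()
```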

Has anyone dealt with similar activation recording issues in RL frameworks? I’m not sure if this is a placeholder definition problem or something with how I’m setting up the summary operations.

I had the exact same issue with my DQN setup last year. That negative-dimension error shows up when TensorFlow can't resolve the placeholder's shape, usually because the batch dimension hasn't been filled in with real data yet. In my case the placeholder definition wasn't the problem; I was wiring the activation tensors into the summary operations incorrectly, creating the summaries before the placeholders ever received actual data during a session run.

What fixed it: I delayed summary creation until after the first forward pass, wrapping my summary ops in a conditional that only ran once the network had processed at least one batch. That let TensorFlow work with concrete tensor shapes instead of the abstract placeholder dimensions.

Also check whether you're passing the input placeholder directly to the summary operation instead of the actual activation output. When you create activation summaries, make sure you're grabbing the layer's output tensor, not the input placeholder.
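
Something along these lines is roughly what I ended up with (a minimal TF 1.x sketch, not my actual code; the layer names, the /tmp/dqn_summaries log directory, and the random batches standing in for replay samples are all made up for illustration):

```python
import numpy as np
import tensorflow as tf

# Placeholder and one conv layer, mirroring the setup in the question.
input_layer = tf.placeholder(tf.float32, shape=[None, 84, 84, 4], name="input_layer")
conv1 = tf.layers.conv2d(input_layer, filters=32, kernel_size=8, strides=4,
                         activation=tf.nn.relu, name="conv1")

writer = tf.summary.FileWriter("/tmp/dqn_summaries")
merged = None  # summary ops get created lazily, after the first forward pass

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(1000):
        batch = np.random.rand(32, 84, 84, 4).astype(np.float32)  # stand-in for replay samples

        if merged is None:
            # First batch: just run the forward pass, then create the summaries.
            # The histogram is attached to the layer's OUTPUT tensor (conv1),
            # not to the input placeholder.
            sess.run(conv1, feed_dict={input_layer: batch})
            tf.summary.histogram("conv1_activations", conv1)
            merged = tf.summary.merge_all()
        else:
            # The summary op depends on the placeholder, so it has to be run
            # with a feed_dict; running it without one reproduces the
            # "negative dimensions" error.
            summary = sess.run(merged, feed_dict={input_layer: batch})
            writer.add_summary(summary, step)

writer.close()
```

The two things that matter here are that the histogram is attached to conv1 (the layer output) rather than input_layer, and that the merged summary op is only ever run with a feed_dict that fills in the batch dimension.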
