Hello, your work looks really impressive, but I have some questions about the code:

1. In the paper, you mention that density maps are generated. From my understanding, the output in the test.py file should be a density map, but its size does not match the input's, so I am not sure whether this output is the density map referred to in the text.
2. Can your code identify the locations of people?
Thanks for your interest. In counting tasks, the output density map is typically smaller than the original image for two main reasons. First, the backbone is based on the VGG network, which inherently downsamples features by a factor of 16. Second, keeping the density map at the original image size would greatly increase the computational cost of the loss (a 16× downsampled map has 256× fewer pixels). As a result, the model generates a smaller density map. For the visualizations in the paper, I use cv2.resize to scale the model's output density map up to the original image size.
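One practical caveat when upsampling for visualization: plain image resizing does not preserve the map's sum, so the crowd count should always be read from the raw model output, not the resized map. Below is a minimal numpy-only sketch of count-preserving nearest-neighbor upsampling (the author uses cv2.resize; the function name and the 16× factor here are illustrative assumptions):

```python
import numpy as np

def upsample_density(dmap, factor=16):
    """Nearest-neighbor upsampling of a density map for visualization.

    Each cell is repeated factor×factor times, then values are divided
    by factor**2 so the total (the predicted count) stays the same.
    Note: cv2.resize with interpolation does NOT preserve the sum.
    """
    up = np.kron(dmap, np.ones((factor, factor)))  # repeat each cell
    return up / (factor * factor)                  # preserve the sum

# e.g. a 384x512 input image downsampled by 16 gives a 24x32 map
dmap = np.random.rand(24, 32).astype(np.float32)
vis = upsample_density(dmap)
assert vis.shape == (384, 512)
assert np.isclose(vis.sum(), dmap.sum())
```

The takeaway: use the resized map only for display; compute the count as `dmap.sum()` on the original model output.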
Regarding your second question, our model currently does not produce localization output directly. If needed, you could apply a non-maximum suppression (NMS) step to the output density map to extract localization points. However, I haven't tried this approach, and the results may be unstable.
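For anyone who wants to try the NMS idea, a simple version is to keep pixels that exceed a threshold and are strict local maxima in their 3×3 neighborhood. This is a rough sketch, not the authors' method; the threshold value is an assumption you would need to tune per dataset:

```python
import numpy as np

def extract_peaks(dmap, threshold=0.1):
    """3x3 non-maximum suppression on a density map.

    Returns (row, col) positions that exceed `threshold` and are
    strictly greater than all 8 neighbours. Coordinates are in the
    density-map grid; multiply by the downsampling factor (e.g. 16)
    to map them back to image coordinates.
    """
    h, w = dmap.shape
    padded = np.pad(dmap, 1, mode="constant", constant_values=-np.inf)
    # Stack the 8 neighbour shifts and take their pixel-wise maximum.
    neighbours = np.stack([
        padded[di:di + h, dj:dj + w]
        for di in range(3) for dj in range(3)
        if not (di == 1 and dj == 1)
    ])
    is_peak = (dmap > threshold) & (dmap > neighbours.max(axis=0))
    ys, xs = np.nonzero(is_peak)
    return list(zip(ys.tolist(), xs.tolist()))

dmap = np.zeros((8, 8), dtype=np.float32)
dmap[2, 3] = 1.0
dmap[5, 6] = 0.8
print(extract_peaks(dmap))  # [(2, 3), (5, 6)]
```

As the reply notes, results from this kind of post-processing can be unstable, since the density map is trained for counting rather than localization; nearby heads may merge into a single peak.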