
Huge memory leak... #3

Open
TheJoshGriffith opened this issue Jun 30, 2016 · 4 comments

@TheJoshGriffith

Running this on a Pi 2 in high-resolution mode, I'm leaking 1 MB/second. With manual garbage collection (gc.collect()) every loop, I managed to reduce that to about 10 MB/minute. I'm still cautious about using it, though.

Do the developers have any idea why this might be the case? It does appear to be server.py that is leaking, but to the best of my knowledge no objects are being left uncleaned. It could potentially be a driver issue, but then I don't understand why the Python process itself would be showing the leak.

@TheJoshGriffith
Author

As it turns out, the memory leak stops when there are no web clients connected, so it's definitely something related to the socket. The memory usage does not, however, reset.

@patrickfuller
Owner

I haven't run camp in a while, but I didn't see this behavior when I did. I had it running for ~3 months on a Pi 1 with constant logins, so I'm guessing this gc issue is related to the dependencies.

It's a pretty simple library, so we should be able to isolate the problem without much work. The only thing that would produce that sort of leak rate is the camera reader, so it looks like that's not getting collected before the next loop. Are you using a USB camera or a Pi camera?

The web client behavior makes sense. Looking at the code here, I wrote it so the camera only reads when a user is logged in. As an aside, the way I wrote it would have terrible performance for simultaneous user access (although that's an easy fix).

@TheJoshGriffith
Author

TheJoshGriffith commented Jun 30, 2016

Having only ever used Python for automation testing, and having last used it with sockets about five years ago, I'm a little lost, but I'll try my best to work through this...

Oddly enough, I have somewhat stopped the behaviour by putting this into the code:

import gc
import objgraph  # third-party: pip install objgraph
gc.collect()
objgraph.show_most_common_types()

This forces GC and dumps a count of each type of object actively held in memory. At worst, it has reduced the memory leak from 1 MB/second to 10 MB/minute (as described above). With no clients, naturally, there is no leak. Strangely, when I removed the show_most_common_types() call, the leak increased dramatically (with gc.collect() still in place). I really don't understand that behaviour; it suggests that the call itself is forcing some further memory cleanup... My brain really couldn't take that hard of a hit on a Thursday afternoon.
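For the next round of debugging, I'll probably switch to objgraph.show_growth(), which only prints the types whose counts have increased since the previous call, so the leaking type should stand out more clearly. Just a sketch (the wrapper function is my own, not anything from camp):

import gc
import objgraph  # third-party: pip install objgraph

def dump_leak_suspects():
    # Force a collection first so only genuinely reachable objects are counted,
    # then print the object types whose counts grew since the last call.
    gc.collect()
    objgraph.show_growth(limit=10)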

I'm using the Pi camera, and I could partly expect there to be a leak in the driver code, but I'm not sure whether that would show up as a leak in the Python process or whether it should actually leak in the kernel itself... I'm not sure how the two integrate, and I haven't even begun to consider looking at the driver code (and won't unless I'm advised that it could be the cause).

I'd be very interested to know about reducing the performance hit... I assume the server currently captures and sends one frame per client, instead of capturing a frame once and sending it to all clients? I'm also not sure how I'd handle the case where no sockets are open; presumably the ideal approach is to not capture any images at all, but then there's a threading risk of not having a captured frame ready to send when a client connects (or am I overthinking this?).
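To make sure I'm describing the same thing, here's a rough sketch of the capture-once, send-to-all pattern I have in mind; the clients set, the send() method, and the get_frame() helper are hypothetical stand-ins, not camp's actual code:

import threading
import time

clients = set()                  # handlers/sockets currently connected
clients_lock = threading.Lock()

def broadcast_loop(get_frame, fps=10):
    while True:
        with clients_lock:
            connected = list(clients)
        if connected:                 # leave the camera idle when nobody is watching
            frame = get_frame()       # one capture per tick...
            for client in connected:
                client.send(frame)    # ...fanned out to every connected client
        time.sleep(1.0 / fps)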

Anyway, I'll drop some more debug instrumentation on it shortly and see if I can find more info. Currently, with the two calls listed above, the memory leak is very slow, and the object count for every type in the dump is either static or decreasing each time I look at it.

@patrickfuller
Owner

> I'm using the Pi camera, and I could partly expect there to be a leak in the driver code, but I'm not sure whether that would show up as a leak in the Python process or whether it should actually leak in the kernel itself... I'm not sure how the two integrate, and I haven't even begun to consider looking at the driver code (and won't unless I'm advised that it could be the cause).

I used a USB camera for my long-running test, and my Pi camera tests were short-lived. This makes me think that the way I handled the picamera module is the issue.

From the Pi Camera docs, it looks like there are better ways to read from the camera. The first thing I'd test is to replace this with:

import picamera  # sio: the in-memory byte stream already used by the surrounding code

with picamera.PiCamera() as camera:
    camera.capture(sio, "jpeg", use_video_port=True)

This will likely kill the frame rate, but if picamera is the culprit, it should eliminate the memory leak. It may also be worth trying without use_video_port (from here - I likely added it for speed). If the memory leak is gone but the frame rate is slow, I'd then look at capture_sequence (ref).
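If it does come to that, capture_continuous (a close cousin of capture_sequence) is probably the easier call to drop into a streaming loop. An untested sketch that reuses a single in-memory buffer per frame; the hand-off to the web handler is left as a placeholder:

import io
import picamera

with picamera.PiCamera() as camera:
    stream = io.BytesIO()
    for _ in camera.capture_continuous(stream, "jpeg", use_video_port=True):
        frame = stream.getvalue()    # JPEG bytes for the current frame
        # ... hand `frame` to the web handler here ...
        stream.seek(0)               # rewind and clear the buffer before the next capture
        stream.truncate()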

Unfortunately, I no longer have a Pi camera to replicate your issue. Let me know how things turn out, and I'll try to help as much as I can.
