While downloading a very large file and watching a YouTube video, CPU usage on my Broflake widget (free peer) was much higher than I'd expect.
Cumulative pprof output shows that we're spending a lot of time locking:
```
Showing top 20 nodes out of 138
      flat  flat%   sum%        cum   cum%
     0.01s 0.024% 0.024%     15.67s 37.91%  runtime.systemstack
     0.02s 0.048% 0.073%     15.52s 37.55%  runtime.wakep
         0     0% 0.073%     15.50s 37.50%  runtime.startm
    15.48s 37.45% 37.53%     15.48s 37.45%  runtime.pthread_cond_signal
         0     0% 37.53%     15.48s 37.45%  runtime.semawakeup
         0     0% 37.53%     15.46s 37.41%  runtime.notewakeup
     0.01s 0.024% 37.55%     14.65s 35.45%  runtime.goready.func1
         0     0% 37.55%     14.64s 35.42%  runtime.ready
         0     0% 37.55%      6.14s 14.86%  runtime.mcall
         0     0% 37.55%      6.08s 14.71%  runtime.schedule
         0     0% 37.55%      5.82s 14.08%  runtime.park_m
     0.01s 0.024% 37.58%      5.60s 13.55%  runtime.findRunnable
         0     0% 37.58%      5.37s 12.99%  runtime._System
         0     0% 37.58%      4.78s 11.57%  github.com/pion/sctp.(*Association).readLoop
         0     0% 37.58%      4.76s 11.52%  github.com/pion/sctp.(*Association).handleInbound
         0     0% 37.58%      4.74s 11.47%  github.com/pion/sctp.(*Association).handleChunk
         0     0% 37.58%      4.74s 11.47%  github.com/pion/sctp.(*Association).handleSack
     0.02s 0.048% 37.62%      4.72s 11.42%  github.com/pion/sctp.(*Association).processSelectiveAck
         0     0% 37.62%      4.68s 11.32%  github.com/pion/sctp.(*payloadQueue).pop
     0.01s 0.024% 37.65%      4.68s 11.32%  github.com/pion/sctp.(*payloadQueue).updateSortedKeys
```
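For anyone trying to reproduce this: the widget doesn't necessarily expose a profiling endpoint out of the box, so here's a minimal sketch of wiring one up with `net/http/pprof` (the address and port are arbitrary choices, not something the widget currently uses):

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* handlers on the default mux
)

func main() {
	// Serve the profiling endpoints on localhost only; 6060 is just a conventional port.
	go func() {
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()

	// ... the widget's normal work would run here ...
	select {}
}
```

With something like that in place, the listing above is what `go tool pprof http://localhost:6060/debug/pprof/profile?seconds=30` prints for `top20 -cum` (cumulative sort); dropping `-cum` gives the flat view shown next.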
This is also confirmed by non-cumulative pprof:
```
Showing top 20 nodes out of 138
      flat  flat%   sum%        cum   cum%
    15.48s 37.45% 37.45%     15.48s 37.45%  runtime.pthread_cond_signal
     3.06s  7.40% 44.86%      3.06s  7.40%  runtime.pthread_cond_wait
```
Possibly this is due to lots of channel sends and receives.
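The shape that would explain this profile is one wakeup per packet: every send on a channel whose receiver is parked goes through `runtime.wakep` and ends up in `pthread_cond_signal`. A hypothetical sketch of the mitigation, batching items per wakeup behind a buffered channel (illustrative names only, not the widget's actual data path):

```go
package main

// consumeBatched drains up to max items from in before handing them off, so a
// burst of sends wakes the consumer roughly once instead of once per item.
func consumeBatched(in <-chan []byte, max int, handle func([][]byte)) {
	for first := range in {
		batch := [][]byte{first}
	drain:
		for len(batch) < max {
			select {
			case pkt, ok := <-in:
				if !ok {
					break drain
				}
				batch = append(batch, pkt)
			default:
				break drain
			}
		}
		handle(batch)
	}
}

func main() {
	// A buffered channel means senders rarely have to wake a parked receiver.
	in := make(chan []byte, 1024)
	done := make(chan struct{})

	go func() {
		consumeBatched(in, 64, func(batch [][]byte) {
			_ = batch // process the whole batch under a single wakeup
		})
		close(done)
	}()

	for i := 0; i < 10000; i++ {
		in <- []byte{byte(i)}
	}
	close(in)
	<-done
}
```

Whether the hot channels are in our code or inside pion/sctp's inbound path (`readLoop` → `handleChunk` shows up above) still needs to be confirmed.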
Apparently, it's getting OOM-killed too:
```
May 9 15:05:38 broflake-uncensored-peer-1 kernel: [508700.096948] sshd invoked oom-killer: gfp_mask=0x140cca(GFP_HIGHUSER_MOVABLE|__GFP_COMP), order=0, oom_score_adj=0
May 9 15:05:38 broflake-uncensored-peer-1 kernel: [508700.097079]  oom_kill_process.cold+0xb/0x10
May 9 15:05:38 broflake-uncensored-peer-1 kernel: [508700.097103]  __alloc_pages_may_oom+0x112/0x1e0
May 9 15:05:38 broflake-uncensored-peer-1 kernel: [508700.097347] [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
May 9 15:05:38 broflake-uncensored-peer-1 kernel: [508700.097449] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=ssh.service,mems_allowed=0,global_oom,task_memcg=/user.slice/user-0.slice/session-152.scope,task=widget,pid=69952,uid=0
May 9 15:05:38 broflake-uncensored-peer-1 kernel: [508700.097479] Out of memory: Killed process 69952 (widget) total-vm:1880484044kB, anon-rss:282084kB, file-rss:0kB, shmem-rss:0kB, UID:0 pgtables:920kB oom_score_adj:0
```
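Note the huge total-vm (~1.8 TB) against ~275 MB anon-rss, which points at goroutine/mapping growth as much as live heap. Periodic heap and goroutine snapshots would narrow it down; a sketch of dumping them on a timer so they can be diffed with `go tool pprof -base` (file paths and interval are arbitrary):

```go
package main

import (
	"fmt"
	"os"
	"runtime"
	"runtime/pprof"
	"time"
)

// dumpProfiles writes heap and goroutine profiles every interval so successive
// snapshots can be compared to see what is growing.
func dumpProfiles(interval time.Duration) {
	for i := 0; ; i++ {
		time.Sleep(interval)

		if f, err := os.Create(fmt.Sprintf("/tmp/widget-heap-%d.pb.gz", i)); err == nil {
			runtime.GC() // get up-to-date heap statistics before the snapshot
			pprof.WriteHeapProfile(f)
			f.Close()
		}

		if g, err := os.Create(fmt.Sprintf("/tmp/widget-goroutine-%d.pb.gz", i)); err == nil {
			pprof.Lookup("goroutine").WriteTo(g, 0)
			g.Close()
		}
	}
}

func main() {
	go dumpProfiles(5 * time.Minute)
	select {} // stand-in for the widget's real work
}
```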