fix(userspace): solving batch of recent regressions #1524
Changes from all commits: fcd0c83, 286ecbd, ef7fa54
```diff
@@ -88,7 +88,7 @@ bool sinsp_container_manager::remove_inactive_containers()
 	});

 	auto containers = m_containers.lock();
-	if (m_inspector->m_sinsp_stats_v2)
+	if (m_inspector != nullptr && m_inspector->m_sinsp_stats_v2)
 	{
 		m_inspector->m_sinsp_stats_v2->m_n_missing_container_images = 0;
 		// Will include pod sanboxes, but that's ok
```

Review comments on this hunk:

- In the line `if(m_inspector->m_lastevent_ts > m_last_flush_time_ns + m_inspector->m_inactive_container_scan_time_ns)`, not sure if we want to add an extra check also above, or remove these checks like before (see the sketch after this thread).
- it is also used at line …
- We need to check for …
- Thank you @jasondellaluce, yes, in that original PR we had discussions about checking …
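To make the first comment concrete, here is a minimal sketch of one of the two options it mentions: extending the same nullptr guard to the flush-interval check, which also dereferences `m_inspector`. The surrounding function body is assumed from the hunk above, not copied from upstream:

```cpp
// Hedged sketch, not the upstream implementation: guard the
// flush-interval check with the same nullptr test used in the diff.
bool sinsp_container_manager::remove_inactive_containers()
{
	if(m_inspector != nullptr &&
	   m_inspector->m_lastevent_ts >
	       m_last_flush_time_ns + m_inspector->m_inactive_container_scan_time_ns)
	{
		// ... flush logic shown in the diff above ...
	}
	return true;
}
```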
```diff
@@ -97,7 +97,7 @@ bool sinsp_container_manager::remove_inactive_containers()
 	for(auto it = containers->begin(); it != containers->end();)
 	{
 		sinsp_container_info::ptr_t container = it->second;
-		if (m_inspector->m_sinsp_stats_v2)
+		if (m_inspector != nullptr && m_inspector->m_sinsp_stats_v2)
 		{
 			auto container_info = container.get();
 			if (!container_info || (container_info && !container_info->m_is_pod_sandbox && container_info->m_image.empty()))
```
A separate review comment, on the scap thread-info lookup:

In the past, returning SCAP_SUCCESS was fair enough, since the allocation of the resulting thread info happened below and the caller could still check for a NULL out thread info. Now we can't rely on the NULL result, since the caller is responsible for allocating it. As a consequence, for invalid thread IDs we always returned bogus scap thread infos. So IMO this is easily fixable by returning SCAP_FAILURE there, since these are actual failure scenarios.