
DAOS-16809 vos: container based stable epoch #15605

Merged: 3 commits merged into master on Jan 17, 2025

Conversation

@Nasf-Fan (Contributor) commented Dec 12, 2024

To calculate the container-based local stable epoch efficiently, we would like to maintain a list of active DTX entries sorted by epoch. But considering the overhead involved, it is not practical to maintain a strictly sorted list for all active DTX entries. For a DTX whose leader resides on the current target, the epoch is already generated in order on the current engine, so the main difficulty is the DTX entries whose leaders are on remote targets.

On the other hand, the local stable epoch is mainly used to generate the global stable epoch, which drives incremental reintegration. In fact, incremental reintegration does not need a very accurate global stable epoch: it is harmless (non-fatal) if the calculated stable epoch is a bit smaller than the real value. An error of a few seconds in the stable epoch is negligible compared with the cost of rebuilding the whole target from scratch. So for a DTX entry whose leader is on a remote target, we keep it in a list that is only approximately ordered by epoch instead of strictly sorted, and we introduce an O(1) algorithm to calculate the local stable epoch from such an unsorted DTX entry list.
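
A minimal sketch of that O(1) calculation, assuming the names quoted later in this review (vc_dtx_unsorted_list, dae_order_link) plus an assumed DAE_EPOCH() accessor and d_hlc_get() HLC reader; the real implementation also covers the sorted and re-index lists:

/* Sketch only: derive a safe lower bound from the head of the unsorted list. */
daos_epoch_t
local_stable_epoch_sketch(struct vos_container *cont)
{
	daos_epoch_t		 gap = d_sec2hlc(vos_get_agg_gap());
	struct vos_dtx_act_ent	*head;

	if (d_list_empty(&cont->vc_dtx_unsorted_list))
		return d_hlc_get() - gap; /* no pending remote-led DTX */

	/* Entries behind the head may be older than the head, but by no
	 * more than 'gap' (anything older is rejected and restarted), so
	 * 'head epoch - gap' is a safe lower bound, found in O(1). */
	head = d_list_entry(cont->vc_dtx_unsorted_list.next,
			    struct vos_dtx_act_ent, dae_order_link);
	return DAE_EPOCH(head) - gap;
}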

Main VOS APIs for the stable epoch:

/* Calculate current locally known stable epoch for the given container. */
daos_epoch_t vos_cont_get_local_stable_epoch(daos_handle_t coh);

/* Get global stable epoch for the given container. */
daos_epoch_t vos_cont_get_global_stable_epoch(daos_handle_t coh);

/* Set global stable epoch for the given container. */
int vos_cont_set_global_stable_epoch(daos_handle_t coh, daos_epoch_t epoch);
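
A hedged usage sketch (not from the patch): each engine reports its local stable epoch, a service reduces them to a global minimum, and the result is persisted per container. Here coh is an open container handle and min_across_engines() is a hypothetical placeholder for that reduction.

/* Sketch: derive and persist the global stable epoch for one container. */
daos_epoch_t local_eph;
daos_epoch_t global_eph;
int          rc;

local_eph  = vos_cont_get_local_stable_epoch(coh);
global_eph = min_across_engines(local_eph); /* hypothetical reduction */
rc = vos_cont_set_global_stable_epoch(coh, global_eph);
if (rc == 0)
	D_ASSERT(vos_cont_get_global_stable_epoch(coh) == global_eph);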

Another important enhancement in the patch is the handling of a potential conflict between EC/VOS aggregation and delayed modifications with very old epochs.

For a standalone transaction started on the DTX leader, the epoch is generated by the leader and the modification RPC is then forwarded to the related non-leader(s). If the forwarded RPC is delayed for some reason, such as network congestion or a busy non-leader, the epoch of the transaction may become very old (exceeding the related threshold), and VOS aggregation may already have aggregated the related epoch range. In that case, the non-leader rejects the modification to avoid data loss/corruption.
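
For illustration, a sketch of that rejection on the non-leader, assuming the vc_mod_epoch_bound field quoted later in this review (the exact check in the patch may differ):

/* Sketch: reject a forwarded modification that aggregation may have passed. */
static int
vos_mod_epoch_check_sketch(struct vos_container *cont, daos_epoch_t epoch)
{
	/* vc_mod_epoch_bound tracks the highest epoch that EC/VOS
	 * aggregation may already have processed on this container. */
	if (epoch < cont->vc_mod_epoch_bound)
		return -DER_TX_RESTART; /* too old: restart with newer epoch */

	return 0;
}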

For a distributed transaction, if there is no read (fetch, query, enumerate, and so on) before the client's commit_tx, the related DTX leader generates the epoch for the transaction after the client's commit_tx. Epoch handling is then the same as for the standalone transaction above.

If the distributed transaction involves a read before the client's commit_tx, its epoch is generated by the first engine accessed for the read. If the transaction then takes too long, its epoch may be very old by the time the client calls commit_tx, so the related DTX leader has to reject the transaction to avoid the conflict mentioned above. Even if the DTX leader does not reject it, some non-leader may still reject it because of the very old epoch. Under this framework, the lifetime of a distributed transaction therefore cannot be too long. The limit can be adjusted via the server-side environment variable DAOS_VOS_AGG_GAP; the default value is 60 seconds.
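
For example, the gap could be raised in the server yaml via the engine env_vars list (the value 120 below is only an illustration):

engines:
  - env_vars:
      - DAOS_VOS_AGG_GAP=120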

NOTE: EC/VOS aggregation should avoid aggregating in the epoch range where
lots of data records are pending to commit, so the aggregation epoch
upper bound is 'current HLC - vos_agg_gap'.
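
In code form, the NOTE's bound is simply the following (d_sec2hlc() and vos_get_agg_gap() appear elsewhere in this patch; d_hlc_get() as the engine's HLC reader is an assumption here):

/* Aggregation must not go above 'current HLC - vos_agg_gap'. */
daos_epoch_t agg_upper_bound = d_hlc_get() - d_sec2hlc(vos_get_agg_gap());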

Before requesting gatekeeper:

  • Two review approvals and any prior change requests have been resolved.
  • Testing is complete and all tests passed, or there is a reason documented in the PR why it should be force-landed and the forced-landing tag is set.
  • Features: (or Test-tag*) commit pragma was used or there is a reason documented that there are no appropriate tags for this PR.
  • Commit messages follow the guidelines outlined here.
  • Any tests skipped by the ticket being addressed have been run and passed in the PR.

Gatekeeper:

  • You are the appropriate gatekeeper to be landing the patch.
  • The PR has 2 reviews by people familiar with the code, including appropriate owners.
  • Githooks were used. If not, request that the user install them and check copyright dates.
  • Checkpatch issues are resolved. Pay particular attention to ones that will show up on future PRs.
  • All builds have passed. Check non-required builds for any new compiler warnings.
  • Sufficient testing is done. Check feature pragmas and test tags and that tests skipped for the ticket are run and now pass with the changes.
  • If applicable, the PR has addressed any potential version compatibility issues.
  • Check the target branch. If it is the master branch, should the PR go to a feature branch? If it is a release branch, does it have merge approval in the JIRA ticket?
  • Extra checks if forced landing is requested
    • Review comments are sufficiently resolved, particularly by prior reviewers that requested changes.
    • No new NLT or valgrind warnings. Check the classic view.
    • Quick-build or Quick-functional is not used.
  • Fix the commit message upon landing. Check the standard here. Edit it to create a single commit. If necessary, ask submitter for a new summary.


github-actions bot commented Dec 12, 2024

Ticket title is 'DAOS local stable epoch'
Status is 'In Review'
Labels: 'Rebuild'
https://daosio.atlassian.net/browse/DAOS-16809

@daosbuild1

Test stage Build on Leap 15.5 with Intel-C and TARGET_PREFIX completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15605/1/execution/node/319/log

@daosbuild1

Test stage Build RPM on EL 9 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15605/1/execution/node/350/log

@daosbuild1

Test stage Build RPM on EL 8 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15605/1/execution/node/301/log

@daosbuild1

Test stage Build RPM on Leap 15.5 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15605/1/execution/node/398/log

@daosbuild1

Test stage Build DEB on Ubuntu 20.04 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15605/1/execution/node/345/log

@Nasf-Fan force-pushed the Nasf-Fan/DAOS-16809_1 branch from 42a21a8 to 96716b9 on December 12, 2024 16:20
@daosbuild1

Test stage Build on Leap 15.5 with Intel-C and TARGET_PREFIX completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15605/2/execution/node/383/log

@daosbuild1

Test stage Build RPM on EL 9 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15605/2/execution/node/348/log

@daosbuild1

Test stage Build RPM on EL 8 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15605/2/execution/node/356/log

@daosbuild1

Test stage Build RPM on Leap 15.5 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15605/2/execution/node/374/log

@daosbuild1

Test stage Build DEB on Ubuntu 20.04 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15605/2/execution/node/349/log

@Nasf-Fan force-pushed the Nasf-Fan/DAOS-16809_1 branch 2 times, most recently from 1bdab81 to 128319b on December 12, 2024 16:48
@daosbuild1

Test stage Unit Test bdev on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15605/4/testReport/

@daosbuild1

Test stage Unit Test on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15605/4/testReport/

@daosbuild1

Test stage Unit Test bdev with memcheck on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15605/4/testReport/

@daosbuild1

Test stage Unit Test with memcheck on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15605/4/testReport/

@Nasf-Fan force-pushed the Nasf-Fan/DAOS-16809_1 branch 2 times, most recently from 1db207f to eead8cc on December 13, 2024 07:42
@daosbuild1

Test stage Unit Test bdev on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15605/6/testReport/

@daosbuild1

Test stage Unit Test on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15605/6/testReport/

@daosbuild1

Test stage Unit Test bdev with memcheck on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15605/6/testReport/

@daosbuild1

Test stage Unit Test with memcheck on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15605/6/testReport/

@Nasf-Fan force-pushed the Nasf-Fan/DAOS-16809_1 branch from eead8cc to f3368e7 on December 13, 2024 10:34
@daosbuild1

Test stage Unit Test bdev on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15605/7/testReport/

@daosbuild1

Test stage Unit Test on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15605/7/testReport/

@daosbuild1

Test stage Unit Test bdev with memcheck on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15605/7/testReport/

@daosbuild1

Test stage Build RPM on EL 8 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15605/18/execution/node/319/log

@daosbuild1

Test stage Build RPM on EL 9 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15605/18/execution/node/359/log

@Nasf-Fan force-pushed the Nasf-Fan/DAOS-16809_1 branch from cd6f8c0 to 7aa0703 on January 15, 2025 02:51
liuxuezhao previously approved these changes Jan 15, 2025
jolivier23 previously approved these changes Jan 15, 2025
@@ -2241,6 +2240,8 @@ sched_run(ABT_sched sched)
return;
}

dx->dx_sched_info.si_agg_gap = (vos_get_agg_gap() + 10) * 1000; /* msecs */
Contributor:

Could you move this to sched_info_init()? Or remove the 'si_agg_gap'? I don't see why we need to duplicate this constant value in sched_info.

Author (@Nasf-Fan):

I can move it into sched_info_init(), but that makes little difference. I am not sure what "duplicate" you mean. As we discussed before, making agg_gap a global variable would cause compile-dependency trouble.

* The gap between the max allowed aggregation epoch and current HLC. The modification
* with older epoch out of range may cause conflict with aggregation as to be rejected.
*/
uint64_t sc_agg_eph_gap;
Contributor:

Isn't it a global value? I don't see why it's per-container.

Author (@Nasf-Fan):

It is put in ds_cont_child just to make it easy to use during aggregation. I will consider a dss variable to replace it.

@@ -425,6 +429,9 @@ cont_child_aggregate(struct ds_cont_child *cont, cont_aggregate_cb_t agg_cb,
DP_CONT(cont->sc_pool->spc_uuid, cont->sc_uuid),
tgt_id, epoch_range.epr_lo, epoch_range.epr_hi);

if (!param->ap_vos_agg)
vos_cont_set_mod_bound(cont->sc_hdl, epoch_range.epr_hi);
Contributor:

Why do we need to set this value here and there? Isn't it a global constant value 'cur_time - gap'? I don't see why it's related to aggregation range.

Author (@Nasf-Fan):

If aggregation runs slowly, far behind the current HLC, it is unnecessary to restart a DTX whose epoch is outside 'cur_time - gap' but newer than the current aggregation boundary. We only need to guarantee that a DTX's epoch is not smaller than the real current aggregation boundary.

Contributor:

I see, thanks. but I don't think this optimization is necessary though. :)

@@ -1853,6 +1855,8 @@ dtx_cont_register(struct ds_cont_child *cont)
D_GOTO(out, rc = -DER_NOMEM);
}

cont->sc_agg_eph_gap = d_sec2hlc(vos_get_agg_gap());
Contributor:

I think it's not necessary to duplicate a constant value for each container.

Author (@Nasf-Fan):

I will consider a dss variable to replace that.

@@ -436,6 +436,7 @@ class EngineYamlParameters(YamlParameters):
"D_LOG_FILE_APPEND_PID=1",
"DAOS_POOL_RF=4",
"CRT_EVENT_DELAY=1",
"DAOS_VOS_AGG_GAP=25",
Contributor:

Shouldn't we use this value (used by CI) as the default and tune the value on larger systems when necessary?

* pressure, the EC/VOS aggregation up boundary may be higher than vc_local_stable_epoch,
* then it will cause vc_mod_epoch_bound > vc_local_stable_epoch.
*/
daos_epoch_t vc_mod_epoch_bound;
Contributor:

I don't quite follow all these variables; it looks to me like there are only two things:

  1. constant gap between current HLC and EC aggregation upper bound, any write with lower epoch (lower than aggregation upper bound) will be rejected. (write with "epoch < current HLC - gap" will be rejected)
  2. per-container persistent stable epoch, which is min(uncommitted dtx epoch, agg upper bound epoch).

Did I miss anything?

Author (@Nasf-Fan):

  1. constant gap between current HLC and EC aggregation upper bound, any write with lower epoch (lower than aggregation upper bound) will be rejected. (write with "epoch < current HLC - gap" will be rejected)

That approach can work, but it may cause unnecessarily many transaction restarts. In fact, we only need to guarantee that no modification is older than the aggregation boundary.

  2. per-container persistent stable epoch, which is min(uncommitted dtx epoch, agg upper bound epoch).

That holds only in theory; it is not easy to maintain DTX epoch order efficiently, so we do not know which one is the "min".

Contributor:

I see why this variable was introduced now, though I don't think this optimization is necessary. :)

@Nasf-Fan (Author):

Shouldn't we use this value (used by CI) as default and tune the value on larger system when necessary?

I do not think so; simplifying user configuration is more important than CI testing.

@Nasf-Fan dismissed stale reviews from jolivier23 and liuxuezhao via 1245805 on January 15, 2025 10:20
@Nasf-Fan (Author):

I refreshed the patch to address your concern about "agg_gap". @NiuYawei

@Nasf-Fan requested a review from NiuYawei on January 15, 2025 10:21

* It is not easy to know which DTX is the oldest one in the unsorted list.
* The one after the header in the list maybe older than the header. But the
* epoch difference will NOT exceed 'vos_agg_gap' since any DTX with older
* epoch will be rejected (and restart with newer epoch).
Contributor:

I don't quite follow this.

Author (@Nasf-Fan):

See the logic in vos_dtx_alloc(): if a newly added DTX's epoch is older than the tail one's, vc_mod_epoch_bound is refreshed to guarantee the GAP restriction.
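
For illustration, a rough sketch of that rule in vos_dtx_alloc() (names follow the hunks quoted in this review; DAE_EPOCH() and the exact refresh formula are assumptions, with a conservative stand-in used here):

/* Sketch: a new non-leader DTX arriving out of order lifts the bound. */
if (!dth->dth_epoch_owner && !d_list_empty(&cont->vc_dtx_unsorted_list)) {
	dae = d_list_entry(cont->vc_dtx_unsorted_list.prev,
			   struct vos_dtx_act_ent, dae_order_link);
	/* Older than the current tail: refresh vc_mod_epoch_bound so no
	 * later modification can fall more than vos_agg_gap behind. */
	if (dth->dth_epoch < DAE_EPOCH(dae) &&
	    cont->vc_mod_epoch_bound < dth->dth_epoch)
		cont->vc_mod_epoch_bound = dth->dth_epoch;
}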

Contributor:

I see it now, thanks for the explanation.

* may be over-estimated. Usually, the count of re-indexed DTX entries is quite
* limited, and will be purged soon after the container opened (via DTX resync).
* So it will not much affect the local stable epoch calculation.
*/
Contributor:

Could the uncommitted DTX entries be linked into the reindex list only? I'm not sure why the 'stable epoch' is related to 'reindex'.

Author (@Nasf-Fan):

The originally unsorted ones are linked into the unsorted list. The DTX entries to be re-indexed are uncommitted, so their epochs affect the local stable epoch, which cannot exceed the lowest epoch among them.

Contributor:

I see, thanks.

* is older, then reuse the former one.
*/
if (unlikely(epoch < cont->vc_local_stable_epoch))
epoch = cont->vc_local_stable_epoch;
Contributor:

How can this happen? A bug? If so, I think we'd trigger an assert (or at least print an error message).

Author (@Nasf-Fan):

Not a bug; consider the following case: two DTX entries are in the unsorted list, where DTX1's epoch is 100 and DTX2's epoch is 90. The first call to get_local_stable_epoch calculates 100 - GAP. Then DTX1 is committed, and when get_local_stable_epoch is called again, the newly calculated stable epoch is 90 - GAP, which is older than the former one.

Contributor:

I see.

* acceptable after reporting the new local stable epoch. The semantics maybe so
* strict as to a lot of DTX restart.
*/
if (cont->vc_mod_epoch_bound < epoch) {
Contributor:

Why this "vc_mod_epoch_bound" is related to "stable epoch"?

Author (@Nasf-Fan):

We use vc_mod_epoch_bound to control whether to accept a new modification. If a modification older than the stable epoch were accepted, the stable epoch would not be "stable", which would break the incremental reintegration semantics.

}

if (unlikely(cont_ext->ced_global_stable_epoch > epoch)) {
D_WARN("Do not allow to rollback global stable epoch from "
Contributor:

Ditto


if (!dth->dth_epoch_owner && !d_list_empty(&cont->vc_dtx_unsorted_list)) {
dae = d_list_entry(cont->vc_dtx_unsorted_list.prev, struct vos_dtx_act_ent,
dae_order_link);
Contributor:

I don't quite follow why rejecting an old epoch needs to check the active DTX in the unsorted list.

Author (@Nasf-Fan):

To guarantee the rules for calculating the local stable epoch.

*
* This is an O(N) algorithm. N is the count of DTX entries to be
* re-indexed. Please reference vos_cont_get_local_stable_epoch().
*/
Contributor:

I suppose reindex just loads DTX entries from pmem and re-creates the index in DRAM? So shouldn't it just reuse the same mechanism/code used for normal active DTX tracking? I don't see why extra processing is required for reindex.

Author (@Nasf-Fan):

Because the user/admin may configure a different "GAP" when restarting the engines, the DTX entries may have been generated under different "GAP" values.
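
A sketch of the O(N) re-index pass mentioned in this thread (vc_dtx_reindex_list and DAE_EPOCH() are assumed names): because the GAP in effect when an old entry was created is unknown, the minimum epoch is found by scanning rather than by trusting list order.

/* Sketch: O(N) scan over re-indexed (uncommitted) DTX entries. */
daos_epoch_t		 min_eph = DAOS_EPOCH_MAX;
struct vos_dtx_act_ent	*dae;

d_list_for_each_entry(dae, &cont->vc_dtx_reindex_list, dae_order_link) {
	if (DAE_EPOCH(dae) < min_eph)
		min_eph = DAE_EPOCH(dae);
}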


@@ -436,6 +436,7 @@ class EngineYamlParameters(YamlParameters):
"D_LOG_FILE_APPEND_PID=1",
"DAOS_POOL_RF=4",
"CRT_EVENT_DELAY=1",
"DAOS_VOS_AGG_GAP=25",
Contributor:

It looks not quite right to me if we don't test the default configuration in CI hardware tests. If the default value doesn't work well on an extremely large system like Aurora, I think it's acceptable to ask the user to configure a larger value (via the server yaml).

@Nasf-Fan (Author):

I see, thanks. but I don't think this optimization is necessary though. :)

The basic policy is to try to avoid restarting DTXs as much as possible.

It looks to me not quite right if we don't test default configuration in CI hardware tests. If the default value won't work well on extreme large system like Aurora, I think it's acceptable to ask user to configure a larger value (via server yaml).

The current setting (25 seconds) in the patch tries to make the CI tests behave similarly to how they behaved without the patch. If we used the default configuration (60 seconds), we might miss a lot of race windows with aggregation, which could hide some bugs.

NiuYawei previously approved these changes Jan 16, 2025



liuxuezhao previously approved these changes Jan 16, 2025
Resolve some merge conflict for copyright.

Skip-test: true
Skip-unit-tests: true
Skip-nlt: true
Skip-func-test: true

Signed-off-by: Fan Yong <[email protected]>
@Nasf-Fan (Author):

Resolve some merge conflict for copyright.

@Nasf-Fan (Author):

@mchaarawi, any suggestions for the patch? Thanks!

@liuxuezhao (Contributor) left a comment:

Can it be landed without waiting for another round of testing?

@gnailzenh (Contributor):

Sure, since only the copyright is changed, it's OK to skip the retest.

@gnailzenh merged commit 4e0f123 into master Jan 17, 2025
45 of 46 checks passed
@gnailzenh deleted the Nasf-Fan/DAOS-16809_1 branch January 17, 2025 02:23