DAOS-16809 vos: container based stable epoch #15605
Conversation
Ticket title is 'DAOS local stable epoch'
Test stage Build on Leap 15.5 with Intel-C and TARGET_PREFIX completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15605/1/execution/node/319/log
Test stage Build RPM on EL 9 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15605/1/execution/node/350/log
Test stage Build RPM on EL 8 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15605/1/execution/node/301/log
Test stage Build RPM on Leap 15.5 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15605/1/execution/node/398/log
Test stage Build DEB on Ubuntu 20.04 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15605/1/execution/node/345/log
Test stage Build on EL 8 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15605/1/execution/node/521/log
42a21a8 to 96716b9
Test stage Build on Leap 15.5 with Intel-C and TARGET_PREFIX completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15605/2/execution/node/383/log
Test stage Build on EL 8 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15605/2/execution/node/387/log
Test stage Build RPM on EL 9 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15605/2/execution/node/348/log
Test stage Build RPM on EL 8 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15605/2/execution/node/356/log
Test stage Build RPM on Leap 15.5 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15605/2/execution/node/374/log
Test stage Build DEB on Ubuntu 20.04 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15605/2/execution/node/349/log
1bdab81 to 128319b
Test stage Unit Test bdev on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15605/4/testReport/
Test stage Unit Test on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15605/4/testReport/
Test stage Unit Test bdev with memcheck on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15605/4/testReport/
Test stage Unit Test with memcheck on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15605/4/testReport/
1db207f to eead8cc
Test stage Unit Test bdev on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15605/6/testReport/
Test stage Unit Test on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15605/6/testReport/
Test stage Unit Test bdev with memcheck on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15605/6/testReport/
Test stage Unit Test with memcheck on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15605/6/testReport/
eead8cc to f3368e7
Test stage Unit Test bdev on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15605/7/testReport/
Test stage Unit Test on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15605/7/testReport/
Test stage Unit Test bdev with memcheck on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15605/7/testReport/
Test stage Build RPM on EL 8 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15605/18/execution/node/319/log
Test stage Build RPM on EL 9 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15605/18/execution/node/359/log
cd6f8c0 to 7aa0703
src/engine/sched.c
@@ -2241,6 +2240,8 @@ sched_run(ABT_sched sched)
		return;
	}

	dx->dx_sched_info.si_agg_gap = (vos_get_agg_gap() + 10) * 1000; /* msecs */
Could you move this to sched_info_init()? Or remove the 'si_agg_gap'? I don't see why we need to duplicate this constant value in sched_info.
I can move it into sched_info_init(), but that seems to make little difference. I am not sure what "duplicate" you mean; as we discussed before, making such an agg_gap a global variable would cause compile-dependency trouble.
src/include/daos_srv/container.h
 * The gap between the max allowed aggregation epoch and the current HLC. A modification
 * with an older epoch out of this range may conflict with aggregation and be rejected.
 */
uint64_t sc_agg_eph_gap;
Isn't it a global value? I don't see why it's per-container.
It is put in ds_cont_child just so it is easy to use during aggregation. I will consider a dss variable to replace it.
@@ -425,6 +429,9 @@ cont_child_aggregate(struct ds_cont_child *cont, cont_aggregate_cb_t agg_cb,
		DP_CONT(cont->sc_pool->spc_uuid, cont->sc_uuid),
		tgt_id, epoch_range.epr_lo, epoch_range.epr_hi);

	if (!param->ap_vos_agg)
		vos_cont_set_mod_bound(cont->sc_hdl, epoch_range.epr_hi);
Why do we need to set this value here and there? Isn't it a global constant value 'cur_time - gap'? I don't see why it's related to aggregation range.
If aggregation runs slowly, far behind the current HLC, it is unnecessary to restart a DTX whose epoch is outside 'cur_time - gap' but newer than the current aggregation boundary. We only need to guarantee that the DTX's epoch is not smaller than the real aggregation boundary.
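To make that concrete, here is a minimal, self-contained C sketch of the idea (stand-in types and names, not the patch's actual code): the modification bound only advances to the range aggregation has actually claimed, so a delayed DTX that is older than 'cur_time - gap' but still newer than the real boundary is accepted without a restart.

#include <stdint.h>
#include <stdbool.h>

typedef uint64_t daos_epoch_t;

/* Illustrative stand-in for the per-container state. */
struct cont_sketch {
	daos_epoch_t mod_epoch_bound; /* no modification accepted below this */
};

/* Advance the bound only when aggregation actually claims up to 'epr_hi'. */
static void
set_mod_bound_sketch(struct cont_sketch *c, daos_epoch_t epr_hi)
{
	if (c->mod_epoch_bound < epr_hi)
		c->mod_epoch_bound = epr_hi;
}

/* Reject (forcing a DTX restart) only below the really-aggregated range. */
static bool
accept_mod_sketch(struct cont_sketch *c, daos_epoch_t epoch)
{
	return epoch >= c->mod_epoch_bound;
}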
I see, thanks, but I don't think this optimization is necessary. :)
src/dtx/dtx_common.c
@@ -1853,6 +1855,8 @@ dtx_cont_register(struct ds_cont_child *cont)
		D_GOTO(out, rc = -DER_NOMEM);
	}

	cont->sc_agg_eph_gap = d_sec2hlc(vos_get_agg_gap());
I think it's not necessary to duplicate a constant value for each container.
I will consider a dss variable to replace that.
@@ -436,6 +436,7 @@ class EngineYamlParameters(YamlParameters):
            "D_LOG_FILE_APPEND_PID=1",
            "DAOS_POOL_RF=4",
            "CRT_EVENT_DELAY=1",
            "DAOS_VOS_AGG_GAP=25",
Shouldn't we use this value (used by CI) as the default and tune it on larger systems when necessary?
 * pressure, the EC/VOS aggregation upper boundary may be higher than vc_local_stable_epoch,
 * which will cause vc_mod_epoch_bound > vc_local_stable_epoch.
 */
daos_epoch_t vc_mod_epoch_bound;
I don't quite follow why there are so many variables; it looks to me like there are only two things:
- constant gap between current HLC and EC aggregation upper bound, any write with lower epoch (lower than aggregation upper bound) will be rejected. (write with "epoch < current HLC - gap" will be rejected)
- per-container persistent stable epoch, which is min(uncommitted dtx epoch, agg upper bound epoch).
Did I miss anything?
- constant gap between current HLC and EC aggregation upper bound, any write with lower epoch (lower than aggregation upper bound) will be rejected. (write with "epoch < current HLC - gap" will be rejected)
That approach can work, but it may cause many unnecessary transaction restarts. In fact, we only need to guarantee that no modification is older than the aggregation boundary.
- per-container persistent stable epoch, which is min(uncommitted dtx epoch, agg upper bound epoch).
That only holds in theory; it is not easy to maintain DTX epoch order efficiently, so we do not know which one is the "min".
I see why this variable was introduced now, though I don't think this optimization is necessary. :)
I do not think so; simplifying user configuration is more important than CI testing.
Signed-off-by: Fan Yong <[email protected]>
I refreshed the patch to address your concern about "agg_gap". @NiuYawei
 * It is not easy to know which DTX is the oldest one in the unsorted list.
 * The one after the header in the list may be older than the header. But the
 * epoch difference will NOT exceed 'vos_agg_gap' since any DTX with an older
 * epoch will be rejected (and restarted with a newer epoch).
I don't quite follow this.
See the logic in vos_dtx_alloc(): if the newly added DTX's epoch is older than the tail one's, vc_mod_epoch_bound will be refreshed to guarantee the GAP restriction.
I see it now, thanks for the explanation.
 * may be over-estimated. Usually, the count of re-indexed DTX entries is quite
 * limited, and they will be purged soon after the container is opened (via DTX
 * resync). So they will not much affect the local stable epoch calculation.
 */
Could the uncommitted DTX entries be linked in the reindex list only? I'm not sure why the 'stable epoch' is related to 'reindex'.
The originally unsorted ones will be linked into the unsorted list. The DTX entries to be re-indexed are uncommitted; their epochs affect the local stable epoch, which cannot exceed the lowest epoch among them.
I see, thanks.
 * is older, then reuse the former one.
 */
if (unlikely(epoch < cont->vc_local_stable_epoch))
	epoch = cont->vc_local_stable_epoch;
How can this happen? Is it a bug? If so, I think we'd trigger an assert (or at least print an error message).
Not a bug; consider the following case:
Two DTX entries are in the unsorted list; assume DTX1's epoch is 100 and DTX2's epoch is 90. The first call to get_local_stable_epoch calculates 100 - GAP. Then DTX1 is committed, and when get_local_stable_epoch is called again, the newly calculated stable epoch is 90 - GAP, which is older than the former one.
I see.
 * acceptable after reporting the new local stable epoch. The semantics may be
 * so strict as to cause a lot of DTX restarts.
 */
if (cont->vc_mod_epoch_bound < epoch) {
Why is "vc_mod_epoch_bound" related to the "stable epoch"?
We use vc_mod_epoch_bound to control whether to accept a new modification. If a modification were older than the stable epoch, then the stable epoch would not really be "stable", which would break the incremental reintegration semantics.
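A hedged sketch of that relationship, reusing struct cont_sketch from the earlier sketch (again illustrative, not the patch's code): reporting a stable epoch raises the modification bound, so nothing can later be written below the reported value.

/* Raise the modification bound when a new local stable epoch is reported,
 * so that no later modification can land below what was declared "stable". */
static void
report_stable_epoch_sketch(struct cont_sketch *c, daos_epoch_t stable)
{
	if (c->mod_epoch_bound < stable)
		c->mod_epoch_bound = stable;
}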
}

if (unlikely(cont_ext->ced_global_stable_epoch > epoch)) {
	D_WARN("Do not allow to rollback global stable epoch from "
Ditto
if (!dth->dth_epoch_owner && !d_list_empty(&cont->vc_dtx_unsorted_list)) {
	dae = d_list_entry(cont->vc_dtx_unsorted_list.prev, struct vos_dtx_act_ent,
			   dae_order_link);
I don't quite follow why rejecting an old epoch needs to check the active DTX in the unsorted list?
To guarantee the rules for calculating the local stable epoch.
 *
 * This is an O(N) algorithm. N is the count of DTX entries to be
 * re-indexed. Please reference vos_cont_get_local_stable_epoch().
 */
I suppose reindex just loads DTX entries from pmem and re-creates the index in DRAM? So shouldn't it just reuse the same mechanism/code used for normal active DTX tracking? I don't see why extra processing is required for reindex.
Because the user/admin may configure a different "GAP" when restarting the engines, the DTX entries may have been generated under different "GAP" values.
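A self-contained sketch of the O(N) pass this implies (illustrative only; a plain array stands in for the re-index list, and daos_epoch_t is the stand-in typedef from the earlier sketch): because re-indexed entries may have been created under different GAP settings, the safe value comes from the minimum epoch actually seen rather than from any single head entry.

/* Scan the re-indexed (uncommitted) DTX epochs once and track the oldest;
 * the local stable epoch must not exceed this minimum. */
static daos_epoch_t
reindex_min_epoch_sketch(const daos_epoch_t *epochs, int nr, daos_epoch_t cur)
{
	daos_epoch_t min = cur;
	int          i;

	for (i = 0; i < nr; i++)
		if (epochs[i] < min)
			min = epochs[i];

	return min; /* O(N) in the count of re-indexed entries */
}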
@@ -436,6 +436,7 @@ class EngineYamlParameters(YamlParameters):
            "D_LOG_FILE_APPEND_PID=1",
            "DAOS_POOL_RF=4",
            "CRT_EVENT_DELAY=1",
            "DAOS_VOS_AGG_GAP=25",
It doesn't look quite right to me if we don't test the default configuration in CI hardware tests. If the default value won't work well on extremely large systems like Aurora, I think it's acceptable to ask users to configure a larger value (via the server yaml).
The basic policy is to avoid restarting DTXs as much as possible.
The current setting (25 seconds) in the patch tries to make the CI tests behave similarly to how they did without the patch. If we used the default configuration (60 seconds), we might miss a lot of race windows with aggregation, which could leave some bugs hidden.
Resolve some merge conflict for copyright.

Skip-test: true
Skip-unit-tests: true
Skip-nlt: true
Skip-func-test: true

Signed-off-by: Fan Yong <[email protected]>
@mchaarawi, any suggestions for the patch? Thanks!
Can it be landed without waiting for another round of testing?
Sure, since only the copyright changed, it's OK to skip the retest.
To calculate the container-based local stable epoch efficiently, we maintain a roughly epoch-ordered list of active DTX entries. Considering the related overhead, it is not practical to maintain a strictly sorted list for all active DTX entries. For a DTX whose leader resides on the current target, its epoch is already in sorted order when generated on the current engine. So the main difficulty is the DTX entries whose leaders are on remote targets.
On the other hand, the local stable epoch is mainly used to generate the global stable epoch, which serves incremental reintegration. In fact, we do not need a very accurate global stable epoch for incremental reintegration: it is harmless (non-fatal) if the calculated stable epoch is a bit smaller than the real one. For example, an error of a few seconds in the stable epoch is negligible compared with the cost of rebuilding the whole target from scratch. So for a DTX entry whose leader is on a remote target, we keep it in a list that trends upward by epoch instead of strictly sorting by epoch, and we introduce an O(1) algorithm to calculate the local stable epoch over such an unsorted DTX entry list.
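To make the O(1) claim concrete, here is a hedged, self-contained C sketch under the assumptions stated above (the field names loosely mirror identifiers visible in the review threads below, such as vc_dtx_unsorted_list, but the types here are stand-ins, not the patch's code): only the heads of the two lists are examined, and the unsorted head is discounted by the gap because no member may be more than vos_agg_gap older than it.

#include <stdint.h>

typedef uint64_t daos_epoch_t;

/* Stand-in for the per-container DTX state described above. */
struct dtx_lists_sketch {
	daos_epoch_t sorted_head;        /* oldest leader-local DTX, 0 if none */
	daos_epoch_t unsorted_head;      /* head of remote-leader list, 0 if none */
	daos_epoch_t local_stable_epoch; /* last reported value */
	daos_epoch_t agg_gap;            /* vos_agg_gap, in epoch (HLC) units */
};

static daos_epoch_t
local_stable_epoch_sketch(struct dtx_lists_sketch *c, daos_epoch_t hlc_now)
{
	/* With no active DTX, anything below 'now - gap' is locally stable. */
	daos_epoch_t epoch = hlc_now - c->agg_gap;

	/* Leader-local entries arrive in epoch order: an O(1) head check. */
	if (c->sorted_head != 0 && c->sorted_head - 1 < epoch)
		epoch = c->sorted_head - 1;

	/* Remote-leader entries are only roughly ordered, but no member is
	 * more than 'agg_gap' older than the head (anything older was
	 * rejected), so 'head - gap' is a safe O(1) lower bound. */
	if (c->unsorted_head != 0 && c->unsorted_head - c->agg_gap < epoch)
		epoch = c->unsorted_head - c->agg_gap;

	/* Never report a smaller value than a previous call did (see the
	 * review discussion below). */
	if (epoch < c->local_stable_epoch)
		epoch = c->local_stable_epoch;
	else
		c->local_stable_epoch = epoch;

	return epoch;
}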
Main VOS APIs for the stable epoch:
/* Calculate current locally known stable epoch for the given container. */
daos_epoch_t vos_cont_get_local_stable_epoch(daos_handle_t coh);
/* Get global stable epoch for the given container. */
daos_epoch_t vos_cont_get_global_stable_epoch(daos_handle_t coh);
/* Set global stable epoch for the given container. */
int vos_cont_set_global_stable_epoch(daos_handle_t coh, daos_epoch_t epoch);
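The three prototypes above come from the patch; the caller below is purely hypothetical, illustrating how a reintegration path might combine them (the min()-across-targets agreement itself would happen elsewhere).

/* Hypothetical caller: read the local view, and persist a globally agreed
 * value once the leader has combined all targets' local reports. */
static int
refresh_stable_epochs(daos_handle_t coh, daos_epoch_t global_agreed)
{
	daos_epoch_t local  = vos_cont_get_local_stable_epoch(coh);
	daos_epoch_t global = vos_cont_get_global_stable_epoch(coh);
	int          rc     = 0;

	/* Only move the persisted global stable epoch forward. */
	if (global_agreed > global)
		rc = vos_cont_set_global_stable_epoch(coh, global_agreed);

	(void)local; /* would be reported to the leader for aggregation */
	return rc;
}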
Another important enhancement in the patch is handling the potential conflict between EC/VOS aggregation and delayed modifications with very old epochs.
For a standalone transaction started on the DTX leader, the epoch is generated by the leader, and the modification RPC is then forwarded to the related non-leader(s). If the forwarded RPC is delayed for some reason, such as network congestion or a busy non-leader, the epoch of the transaction can become very old (exceeding the related threshold), and VOS aggregation may have already aggregated the related epoch range. In that case, the non-leader will reject the modification to avoid data loss/corruption.
For a distributed transaction, if there is no read (fetch, query, enumerate, and so on) before the client's commit_tx, the related DTX leader generates the epoch for the transaction after the client's commit_tx; epoch handling is then the same as for the standalone transaction above.
If the distributed transaction involves a read before the client's commit_tx, its epoch is generated by the first engine accessed for the read. If the transaction then takes too long, its epoch may already be very old by the time the client calls commit_tx, and the related DTX leader will have to reject the transaction to avoid the conflict described above. Even if the DTX leader did not reject it, some non-leader might still reject it because of the very old epoch. So under this framework, the lifetime of a distributed transaction cannot be too long. The window can be adjusted via the server-side environment variable DAOS_VOS_AGG_GAP; the default value is 60 seconds.
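A hedged sketch of the staleness check implied here (not the patch's exact code; d_sec2hlc() and vos_get_agg_gap() appear in the diffs discussed above, while d_hlc_get() for the current HLC is an assumption): a transaction whose epoch has fallen behind the aggregation window is asked to restart.

/* If the transaction epoch is older than 'current HLC - vos_agg_gap', ask
 * the client to restart the transaction with a fresh epoch. */
static int
tx_epoch_check_sketch(daos_epoch_t tx_epoch)
{
	if (tx_epoch < d_hlc_get() - d_sec2hlc(vos_get_agg_gap()))
		return -DER_TX_RESTART;

	return 0;
}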
NOTE: EC/VOS aggregation should avoid aggregating in the epoch range where
lots of data records are pending to commit, so the aggregation epoch
upper bound is 'current HLC - vos_agg_gap'.