With the LazyInit PR in admix, we are now prepared to test a processing scenario similar to SR2 reprocessing, in terms of DB connections per job. From a stress test on NiLab we will be able to project how many connections SR2 reprocessing will open, and decide whether it is OK to reserve the NiLab mirror exclusively for the reprocessing pipeline and nothing else.
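For reference, a minimal sketch of the lazy-initialization idea (illustrative only; the actual admix implementation may differ). The point is that the MongoClient is created on first use rather than at import time, so a job that never queries RunDB opens no connection at all. `MONGO_URI`, the helper name, and the DB/collection names are placeholders.

```python
import os

from pymongo import MongoClient

_client = None  # module-level cache, filled on first access


def get_runs_collection():
    """Create the RunDB client only when a job actually needs it."""
    global _client
    if _client is None:
        # MONGO_URI and the DB/collection names are placeholders for this sketch
        _client = MongoClient(os.environ["MONGO_URI"])
    return _client["xenonnt"]["runs"]
```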
Here is a draft of the test proposal; it should be recorded here as a project once we figure out the exact run list.
1. Find a list of SR1 calibration runs whose peaks are not available (to be coordinated with @dantonmartin; it should be a subset of the runs whose raw_records have not been removed from disk because of missing intermediate data). See the query sketch after this list.
2. Merge the LazyInit PR in admix and make sure outsource is using it, either by overriding the admix version or through a new environment with the updated admix.
3. Make sure in the xenon config that the only mirror involved is NiLab.
4. Process v15 for the decided run list. Hopefully we get around 400 runs, which is a typical batch size we submit per day for SR1.
5. Closely monitor the RunDB load and compare the connection-count trend with the running-job count from condor_q; understand the numbers phenomenologically. This is ideally handled by @minzhong98 or other people who have an admin login to the NiLab mirror. See the monitoring sketch after this list.
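For step 1, a hedged sketch of the run-selection query. It assumes the common RunDB layout in which each run document carries a `data` list of `{"type": ..., "host": ...}` entries; `MONGO_URI`, the DB/collection names, and the `number` field are placeholders to be checked against the real mirror, and the SR1-calibration selection still has to be added.

```python
import os

from pymongo import MongoClient

client = MongoClient(os.environ["MONGO_URI"])  # placeholder env var
runs = client["xenonnt"]["runs"]               # placeholder DB/collection names

query = {
    "data.type": "raw_records",                           # raw_records still registered
    "data": {"$not": {"$elemMatch": {"type": "peaks"}}},  # but no peaks entry
    # ...plus whatever tag/mode selection marks SR1 calibration runs
}
run_numbers = sorted(doc["number"] for doc in runs.find(query, {"number": 1}))
print(f"{len(run_numbers)} candidate runs")
```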
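For step 5, a rough monitoring sketch. Assumptions: `MONGO_URI` points to the NiLab mirror with an account allowed to run `serverStatus`, and `condor_q` is available on the host where this runs. It samples the mirror's open-connection count alongside the number of running jobs so the two trends can be compared.

```python
import os
import subprocess
import time

from pymongo import MongoClient

client = MongoClient(os.environ["MONGO_URI"])  # placeholder URI for the mirror


def mirror_connections() -> int:
    """Open connections as reported by the mirror itself."""
    return client.admin.command("serverStatus")["connections"]["current"]


def running_jobs() -> int:
    """Count running condor jobs (JobStatus == 2 means Running)."""
    out = subprocess.run(
        ["condor_q", "-constraint", "JobStatus == 2", "-af", "ClusterId"],
        capture_output=True, text=True, check=True,
    ).stdout
    return len(out.splitlines())


while True:
    stamp = time.strftime("%Y-%m-%d %H:%M:%S")
    print(f"{stamp}  jobs={running_jobs()}  conns={mirror_connections()}")
    time.sleep(60)  # one sample per minute is enough to see the trend
```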
This is not really an outsource "issue", just a to-do test: we hope to use that mirror exclusively for reprocessing or MC.