Transfer Requests for Run2016 data #666
On Tue, Jul 26, 2016 at 12:02 PM, mdunser <[email protected]> wrote:
We should discuss in a CMG group meeting - I believe the group should be Giovanni
|
Hi Marc, can you transfer /SingleMuon/Run2016G-PromptReco-v1/MINIAOD? Thanks, |
Voila: https://cmsweb.cern.ch/phedex/prod/Request::View?request=765363 |
Hi Marc, could you please transfer /MET/Run2016F-PromptReco-v1/MINIAOD? Many thanks, |
Sorry, I meant: |
Hi Cristina, RunG I had requested some time ago, so that one should be there. -m |
Just for the record, here's the request: https://cmsweb.cern.ch/phedex/prod/Request::View?request=802579 |
Hi Marc, I need to look into these data somewhat urgently:
/SingleMuon/Run2016G-PromptReco-v1/MINIAOD
Given your initial message, would a transfer request still make sense, or would I be better off using crab / AAA? Thanks, |
Hi Riccardo, the first dataset (v1) is already at CERN, so you should be able to access it. The other two don't exist. Best, |
Sorry Marc, I corrected the first message; I meant 2016H v1 to v3. |
Hi, I made the request for all RunH (v1, v2, v3) samples just now: https://cmsweb.cern.ch/phedex/prod/Request::View?request=817882 It will probably be approved tomorrow, and only then will the transfer start. So, depending on your interpretation of "urgently", you might want to consider the crab / AAA route instead. Best, |
Hello Marc, could you please add to Many thanks |
Hi, this was already part of this request: Or am I missing something? -m |
Ciao Marc, it would be great if you could add /SinglePhoton/Run2016B-23Sep2016-v3/MINIAOD; I do not see it in https://cmsweb.cern.ch/phedex/prod/Request::View?request=820680. Thanks, Maria |
Hi Maria, all, here's the list of the datasets in the local space: The request which has the SinglePhotons in it: https://cmsweb.cern.ch/phedex/prod/Request::View?request=820681 As far as I can tell right now, all these datasets are here. -m |
Hi all, new year, new cleaning-up campaign. Hooray!

The situation is that the extended local space, which is now 300 TB, is overfull. We are a few TB above our quota, so there is no way around cleaning up a certain amount of the data we have subscribed to the local space.

So, unless there are some solid reasons for keeping any of the following datasets in the local space, I will request their deletion. This will free up roughly 50 TB, which should buy us some time before we (I) have to think about something new/smarter. These are all PromptReco 2016 data PDs. Before you request keeping them, please have a good look and some solid reasoning. Keeping datasets just for funsies won't cut it anymore, unfortunately. Also: datasets can clearly be retrieved at a later point if that were necessary.

Best,
/BTagCSV/Run2016*-PromptReco-v*/MINIAOD |
Hi Marc, can you transfer the following datasets to CERN:
/Tau/Run2016B-03Feb2017_ver2-v2/MINIAOD
/Tau/Run2016E-03Feb2017-v1/MINIAOD
/Tau/Run2016G-03Feb2017-v1/MINIAOD
Thanks! Jan |
Hi Jan,
request is made: https://cmsweb.cern.ch/phedex/prod/Request::View?request=967091
Best,
-m
|
@mdunser Can you please transfer this dataset: /DoubleMuon/Run2016E-03Feb2017-v1/MINIAOD Maria |
Sorry for being a bit late on this; the request is here: https://cmsweb.cern.ch/phedex/prod/Request::View?request=1113675
-m
|
@mdunser can you please transfer, with high priority:
/DoubleEG/Run2016D-03Feb2017-v1/MINIAOD
/DoubleMuon/Run2016H-03Feb2017_ver3-v1/MINIAOD
/SinglePhoton/Run2016H-03Feb2017_ver3-v1/MINIAOD
Thanks |
Here's the request:
https://cmsweb.cern.ch/phedex/prod/Request::View?request=1132385
-m
|
Hi all,
I'm starting a new issue for data transfers of Run2016. Many of the datasets, despite originally being subscribed to CERN, are being deleted fairly rapidly.
After some interaction with CompOps it has become clear that there is no longer any guarantee that any dataset of any run-period in 2016 will remain at the CERN T2. Computing resources are hitting their absolute limit pretty much everywhere, so whatever workflow used to work in 2015 is no longer guaranteed to work in 2016.
For users of heppy and heppy_batch, that means they should have a look into using one of two options (sketches of both follow this list):
useAAA, which copies the root files locally to /tmp via xrd and runs on them from there. In my experience this works well, though it sometimes takes many resubmissions for a job to finally run through. This requires setting the environment variable X509_USER_PROXY to an empty file that exists.
crab3, with some adaptations to the run-config and the crab/ directory. In my experience this works well. If you plan on running loads of data now or in the future, it might be worth checking out.
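As a rough illustration of the first option, here is a minimal sketch of the AAA environment setup described above. The variable name X509_USER_PROXY comes from the text; the file path is a hypothetical placeholder, not something heppy mandates.

```python
# Minimal sketch of the AAA setup described above: the text says
# X509_USER_PROXY must point to an empty file that exists.
# The path below is a hypothetical example.
import os
import pathlib

proxy_file = pathlib.Path("/tmp/empty_proxy_placeholder")  # hypothetical path
proxy_file.touch()                       # ensure the (empty) file exists
os.environ["X509_USER_PROXY"] = str(proxy_file)
```

For the second option, a sketch of a plain CRAB3 configuration for one of the datasets requested in this thread. The parameter names follow the standard CRAB3 config schema, but all values are placeholders, and a heppy job would need the adaptations mentioned above (e.g. a script-based job type) rather than this vanilla Analysis setup.

```python
# Hypothetical minimal CRAB3 config; all values are placeholders.
from CRABClient.UserUtilities import config

config = config()
config.General.requestName = 'Tau_Run2016B_example'  # placeholder name
config.JobType.pluginName = 'Analysis'               # heppy needs adaptations
config.JobType.psetName = 'run_cfg.py'               # your run-config
config.Data.inputDataset = '/Tau/Run2016B-03Feb2017_ver2-v2/MINIAOD'
config.Data.splitting = 'FileBased'
config.Data.unitsPerJob = 10                         # files per job
config.Site.storageSite = 'T2_CH_CERN'               # placeholder output site
```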
It is, of course, still possible to transfer datasets to our local buffer, but be advised that this may take a week or more from your request on this thread to the dataset finally being present at CERN. In addition, the buffer has a limited size, so there is no way we can store all the data in it just for the sake of it.
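As several exchanges above show, a dataset may already be at CERN, in which case no transfer is needed. Here is a minimal sketch of how one might check this with the standard dasgoclient query tool, assuming it is available in your environment (e.g. from a CMSSW setup); the dataset name is one from this thread.

```python
# Sketch: ask DAS which sites host a dataset and look for the CERN T2.
# Assumes dasgoclient is on the PATH (e.g. after sourcing a CMSSW setup).
import subprocess

dataset = "/SingleMuon/Run2016G-PromptReco-v1/MINIAOD"
out = subprocess.run(
    ["dasgoclient", "-query", "site dataset=%s" % dataset],
    capture_output=True, text=True, check=True,
).stdout
sites = out.split()
print("hosted at T2_CH_CERN" if any(s.startswith("T2_CH_CERN") for s in sites)
      else "not at T2_CH_CERN; a transfer request or AAA would be needed")
```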
Best,
-m