COMMAND SEQUENCE TO EXECUTE THE ANALYSIS FLOW (skeleton):
======================================================
ALPHA:
--------------
0. Submit the ntuple production jobs to the LSF batch system:
   python batch/submitLSFjobs.py -c python/HHAnalyzer.py -l HH -o /lustre/cmswork/hh/alpha_ntuples/v1_2017...
0b. or run the analyzer interactively:
   cmsRun python/HHanalyzer.py
ALP_ANALYSIS:
--------------
1. Baseline Selection -> produces the default tree and the hemisphere tree:
python scripts/BaselineSelector.py -s data_moriond -o def_cmva -i v2_20170222
python scripts/BaselineSelector.py -s signals -t -o def_cmva -i v2_20170606
python scripts/BaselineSelector.py -s signals -t -o def_cmva -i v2_20170606 --jetCorr 0
python scripts/BaselineSelector.py -s signals -t -o def_cmva -i v2_20170606 --jetCorr 1
python scripts/BaselineSelector.py -s signals -t -o def_cmva -i v2_20170606 --jetCorr 2
python scripts/BaselineSelector.py -s signals -t -o def_cmva -i v2_20170606 --jetCorr 3
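Equivalently, the four --jetCorr variations above can be run in a single shell loop (same commands, just wrapped):
   for jc in 0 1 2 3; do
     python scripts/BaselineSelector.py -s signals -t -o def_cmva -i v2_20170606 --jetCorr $jc
   done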
2. Merge data and signal files:
cd /lustre/cmswork/hh/alp_moriond_base/def_cmva/
rm BTagCSVRun2016.root
hadd BTagCSVRun2016.root BTagCSVRun2016*-v*.root
rm HHTo4B_pangea.root
hadd HHTo4B_pangea.root HHTo4B_*.root
** repeat the same merging in the jetCorr folders (a loop sketch is given below).
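A minimal merging sketch for the jetCorr folders, assuming they sit next to def_cmva and match the pattern def_cmva_jetCorr* (adjust the glob to the actual folder names); only the signal files are merged here, since in step 1 --jetCorr was run on signals only:
   for d in /lustre/cmswork/hh/alp_moriond_base/def_cmva_jetCorr*/; do   # hypothetical folder pattern
     ( cd "$d" && rm -f HHTo4B_pangea.root && hadd HHTo4B_pangea.root HHTo4B_*.root )
   done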
3. Create the mixed data from the merged data sample:
python scripts/MixingSelector.py -s Data -i def_cmva --comb appl
4. Baseline Selection on mixed data:
python scripts/BaselineSelector.py -s Data -o def_cmva_mixed -i def_cmva/mixed_ntuples -m
(detailed instructions for all the scripts are in the readme:
https://github.com/cms-hh-pd/alp_analysis)
hh2bbbb_limit:
--------------
5. Create symbolic links in a working folder (set the folder names inside the script first):
./scripts/prepare_area.sh
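The script sets up the links in the working area; a hand-made equivalent of the same kind of linking (the paths below are placeholders, use the folder names defined in the script):
   ln -s /lustre/cmswork/hh/alp_moriond_base/def_cmva        workdir/def_cmva          # placeholder paths
   ln -s /lustre/cmswork/hh/alp_moriond_base/def_cmva_mixed  workdir/def_cmva_mixed    # placeholder paths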
6. Create the h5 file with the needed samples and variables:
python scripts/create_h5_chunked.py -i h5foldername/ -s sample(wildcard) (-j --ew)
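To quickly check which datasets ended up in a produced h5 file (the file name below is only an example, and this assumes h5py is available in the environment):
   python -c "import h5py, sys; print(list(h5py.File(sys.argv[1], 'r').keys()))" h5foldername/HHTo4B_pangea.h5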
7. Train the MVA:
python scripts/train_classifier.py -i h5foldername -o foldername
8. Get classifier report:
python scripts/classifier_report.py -j jsonpath (-a additionalSample --ext extraPlots)
8b. Create the datacard and histograms for combine:
python scripts/create_card.py -j jsonpath -o foldername --oname cardname
combine:
--------------
9. Execute combine from CMSSW_7_4_7/src:
combine -d datacardpath -M Asymptotic --run blind --rMax 10000000 --minRelVa..
combine -M MaxLikelihood....
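To run the blind Asymptotic limit over several datacards in one go and collect the median expected limits from the logs, a minimal loop sketch (the cards/ folder is a placeholder, the truncated options above are omitted, and the "Expected 50.0%" line is the one printed by the Asymptotic method):
   for card in cards/*.txt; do
     combine -d "$card" -M Asymptotic --run blind --rMax 10000000 -n _$(basename "$card" .txt) | tee "${card%.txt}.log"
   done
   grep "Expected 50.0%" cards/*.log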
see also:
- https://twiki.cern.ch/twiki/bin/viewauth/CMS/HiggsWG/HiggsPAGPreapprovalChecks
- https://twiki.cern.ch/twiki/bin/viewauth/CMS/HiggsWG/SWGuideNonStandardCombineUses#Nuisance_parameter_impacts
Additional tools for the limits:
- to scan the BDT cut:
./scripts/run_cards.sh
(from combine area) ../../CMSSW_8_0_25/src/Analysis/hh2bbbb_limit/scripts/run_limits.sh
./scripts/read_log.sh
## TO GET PLOTS
alp_plots:
--------------
0. To get plots before the BDT from the baseline selection output:
python scripts/drawcomp_preBDT.py -o outfolder -w 0
1. To get plots after the BDT from the classifier_report output:
python scripts/drawcomp_afterBDT.py -n -o outfolder -w 0 -b bdtname
detailed instructions for the plots are in the readme:
https://github.com/cms-hh-pd/alp_plots
======================================================