
Rivet quick-start guide

Make a directory for the analysis and cd into it

rivet-mkanalysis ALICE_2018_I1672806

Then run

rivet-buildplugin RivetALICE_2018_I1672806.so ALICE_2018_I1672806.cc 

Run the analysis on a Pythia (.hepmc) data file, for example over 100 events

rivet --ignore-beams --pwd -a ALICE_2018_I1672806 ../TEST-pythia/pythia8.pbpb2760.hepmc -o rivet.yoda -n 100

Make Rivet plots

rivet-mkhtml --pwd --errs rivet.yoda

Open Rivet plots in the browser

open rivet-plots/index.html

Generate Pythia Events:

Make a new directory to keep the Pythia data

mkdir TEST-pythia

Get Pythia running options

run-pythia --help

Run for 1000 events with an output file name (.hepmc)

time run-pythia -e 7000 -o pythia8.pbpb2760.hepmc -c "SoftQCD:all on" -n 1000

Pythia tune definitions with Sacrifice (1e6 events through a fifo)

mkfifo test.fifo

Monash

run-pythia -n 1000000 -o test.fifo -e 13000 -c "SoftQCD:all on" >Pythiapp1e6.log &

Mode0

run-pythia -n 1000000 -o test.fifo -e 13000 -c "SoftQCD:all on" -i mode0.par >Pythiapp1e6mode0.log &

Mode2

run-pythia -n 1000000 -o test.fifo -e 13000 -c "SoftQCD:all on" -i mode2.par >Pythiapp1e6mode2.log &

Then, to run Rivet on the ongoing generation:

rivet --ignore-beams --pwd -a ALICE_2017_I1511870 test.fifo -o pythia8pp1e6new.yoda -n 1000000 >RivetPythia8pp1e6new.log &

(mode0.par and mode2.par are in the main folder of the repo.)
You can also set a seed for the Sacrifice application by adding the -r parameter.
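
For example, a run with a fixed seed might look like this (the seed value and the log file name are just illustrations):

run-pythia -n 1000000 -o test.fifo -e 13000 -c "SoftQCD:all on" -r 12345 >Pythiapp1e6seed.log &   # seed 12345 chosen arbitrarily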

Trouble running Pythia?

  • cd into Sacrifice folder
  • cd src
  • emacs PythiaMain.cxx (comment out the default options for the beam and collision energy), then give these parameters in the .param file when running Pythia (a minimal sketch is given after this list)
  • cd ..
  • make install
  • then run pythia again to generate event data
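
As a rough illustration, a minimal .param file supplying the beam setup could look like this (assumed values using standard Pythia 8 settings; adapt them to your run):

Beams:idA = 2212      ! beam A species (assumed: proton)
Beams:idB = 2212      ! beam B species (assumed: proton)
Beams:eCM = 13000.    ! centre-of-mass energy in GeV (assumed)
SoftQCD:all = on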

Run HERWIG on the server (OBSOLETE)

The installation on the server was not straightforward and is a little "crooked", so in order to run HERWIG using fifos and Rivet there are precise steps to follow:

  1. Type "scldev", which enables the correct versions of the g++ compiler and python
  2. Type "activate", which enables the Herwig interface
  3. If the .in file has not been read yet, type "Herwig read <filename.in>", which generates a .run file (a worked example with placeholder filenames follows this list)
  4. Run the Herwig file in the background using "Herwig run <filename.run> -N <number_of_events_to_generate>" &
  5. Run rivet on the fifo generated by Herwig. This will return an error, since the rivet environment has not been defined yet [THIS STEP IS MANDATORY IN ORDER TO RUN THE SIMULATION, OTHERWISE HERWIG WILL RETURN AN INTERPRETER ERROR]
  6. Type "rivetenv", which enables the rivet environment
  7. Run rivet again
  8. Wait for the events to be generated
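
Put together, the sequence could look like this (all filenames are placeholders; the fifo name is whatever your .in file defines as HepMC output):

scldev
activate
Herwig read myconfig.in
Herwig run myconfig.run -N 100000 &
rivet --ignore-beams --pwd -a ALICE_2017_I1511870 herwig.fifo -o herwig.yoda -n 100000 &   # fails the first time (step 5)
rivetenv
rivet --ignore-beams --pwd -a ALICE_2017_I1511870 herwig.fifo -o herwig.yoda -n 100000 &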

Merging more analyses

Before merging, RIVET_ANALYSIS_PATH and RIVET_DATA_PATH must be set so that they contain the path of the analysis you're working on.
For the merge itself, use the rivet-merge tool (available from Rivet 2.7.2). It allowed me to merge multiple yoda files, including weights, simply with "rivet-merge file1.yoda file2.yoda file3.yoda -o filemerged.yoda", without the -e option, which complicates things related to weights (-e was instead used to merge multiple jobs of the same analysis).
The normal rivet-merge is useful, for example, to merge different simulations such as pp and pPb.
When observables like centrality and multiplicity are involved, rivet-merge might require additional parameters. For example, if ALICE_2015_PBPBCentrality is used to produce a calibration file that is then fed into a rivetisation involving centrality (hence adding :cent=GEN during the Rivet run), the merge command should contain the flags -p calibration.yoda -a ALICE_20XX_IXXXXXXX:cent=GEN, otherwise a calibration file error will appear when attempting the merge.
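
For instance, merging two centrality-dependent outputs could look like this (the analysis name and the yoda file names are placeholders):

rivet-merge -p calibration.yoda -a ALICE_20XX_IXXXXXXX:cent=GEN job1.yoda job2.yoda -o merged.yoda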

Use dummy histos in Rivet

It is possible to create dummy histograms that are linked to (i.e. reference the data of) experimental histograms. This is done with:

Histo1DPtr _h_0000, _h_0001; // for a histogram
_h_0000 = bookHisto1D("d01-x01-y01");
_h_0001 = bookHisto1D("TMP/test", refData(1,1,1));

This sets up the histo _h_0001 with the same binning (etc.) as the experimental histogram d01-x01-y01.

Edit the legend of Rivet plots

To do so, you can edit the titles like this:

rivet-mkhtml --pwd --errs rivet.yoda:'Title=This is my custom Title' rivet2.yoda:'Title=This is my other custom title'

Run HERWIG with AliGenerators

Unfortunately the installation of AliGenerators doesn't work that well, so to make it work you should copy your configuration .in files into the folder $HERWIG_ROOT/share/Herwig/, cd into that folder and then use Herwig read as usual. For reasons I don't understand, the file SoftTune.in was removed from the snippets folder, so you can create it beforehand by copying the one in the main folder of this repository.

Update 22-08-2022:

HERWIG can be run from anywhere using the --repo option; after entering the AliGenerators environment, do:

Herwig --repo=${HERWIG_ROOT}/share/Herwig/HerwigDefaults.rpo read filename.in  
Herwig --repo=${HERWIG_ROOT}/share/Herwig/HerwigDefaults.rpo run filename.run -N Nevents --seed <RandomNumber> &    

This way the correct repository is loaded, with the options declared in the input files, generating Nevents events with the RandomNumber seed.

Run HERWIG on the GRID

The rivet-Herwig7.jdl file must be uploaded to alien; the procedure is then similar to the Pythia one (next section). A specific HERWIG.in file with baryonic reconnection ON will be used for the HERWIG simulation. Since HERWIG changed a lot of things between versions, it is important to also provide the files SoftTune.in and BaryonicReconnection.in for the simulation to work properly.

05/12/2022 Update: both baryonic and plain reconnection are now included properly in the scripts and have been tested to work. To run the generation, execute the command submit rivet-Herwig7.jdl Mode numEvSingle Jobs CalibrationOption (an example is given after the list below). With Herwig, Mode can be either baryreco or plainreco, for baryonic or plain reconnection respectively; any other value makes HERWIG run with the standard LHC-MB configuration.
CalibrationOption, as for PYTHIA, must be one of the following values:

  • 0: no calibration
  • 1: Calibration with ALICE_2015_PBPBCentrality launched (the file must be provided in the inputs)
  • 2: preload a calibration file named calibration.yoda (to be run after calibration = 1, or if you already have a calibration file)
  • Any other value: the simulation won't continue running (exit)
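
For example, a submission with baryonic reconnection, a preloaded calibration file, 20000 events per job and 250 jobs (numbers chosen arbitrarily as an illustration) would be:

submit rivet-Herwig7.jdl baryreco 20000 250 2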

Running on the GRID

Running many events on a server is extremely time consuming, but doing it on the GRID can reduce that time drastically. Let's start from the beginning. You need a certificate provided by the CERN certification authority, https://ca.cern.ch/ca/. When you have it, import it into your browser; you're going to need it later. Next, the downloaded certificate has to be processed to extract the certificate and private key you will use to access the GRID. From a terminal:

  1. Extract the certificate:
openssl pkcs12 -in myCert.p12 -clcerts -nokeys -out $HOME/.globus/usercert.pem
  2. Extract the private key:
openssl pkcs12 -in myCert.p12 -nocerts -out $HOME/.globus/userkey.pem
  3. Set the permissions on both so that only the owner can read/write them (a quick check is sketched after this list):
chmod 600 $HOME/.globus/userkey.pem
chmod 600 $HOME/.globus/usercert.pem
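
As an optional sanity check (standard openssl usage, not part of the original procedure), you can inspect the extracted certificate:

openssl x509 -in $HOME/.globus/usercert.pem -noout -subject -dates   # prints the certificate owner and validity dates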

Done. Now you should be able to access AliEn, the interface to the GRID system in ALICE. To do so, load the AliPhysics environment:

alienv enter AliPhysics

and try alien.py. If things are set up correctly, you will be asked for your password and will be able to enter the ALICE systems.
Uploading files to the GRID is unfortunately painful, so try to keep them to a bare minimum.
To run Rivet on the GRID you need all the ALICE_YEAR_I<InspireID>.* files, the .jdl and .sh files found in the main folder of the repo, and the parameter files mode0 and mode2. To put them on the GRID, first enter with alien.py, cd into the folder you want to use and copy the path. Then copy your files into alien from your PC with:

alien_cp file:filenametocopy *PATH_IN_ALIEN*/filenametocopy  

for each file.
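
If you have several files to upload, a small bash loop saves some typing (replace *PATH_IN_ALIEN* with the alien path you copied; the file list is only an example, adjust it to your analysis):

for f in ALICE_2018_I1672806.* mode0.par mode2.par *.jdl *.sh; do   # example file list
    alien_cp file:"$f" *PATH_IN_ALIEN*/"$f"
done
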
After you've done this you can finally submit your rivetization on the GRID, but remember to edit the JDL file by inserting your analysis name and the files you need to upload (generally all the ALICE*.* and mode*.par files). You might also need to edit the Bash_RunRivet_commonPar.sh file if you want to run an analysis with calibration (the default is without).
Finally, run the analysis by submitting the JDL inside the GRID, following the Monash example in the jdl file. You can check the status of your jobs on the alimonitor.cern.ch interface (MonALISA).
When all the jobs are completed, copy and then merge (in this specific order) the results onto your PC using the macros in the GRIDMerge folder, adapting both of them to your analysis.
Copy the GRIDMerge folder inside your ALICE_YEAR_I folder and cd into it. To execute the scripts you will most likely need to make them executable, so type:

chmod u+x filename.sh  

for both scripts in that folder. Inside the copy script, edit the analysis name, the MC generator used and the path containing the subjob outputs accordingly. Then (WHILE IN ALIPHYSICS) execute the script simply with:

./CopyFilesRivet.sh  

which copies all the Rivet .yoda files produced by the analysis into a locally stored Yodas folder.
Finally, move to the AliGenerators environment and perform the merge using the MergeFilesRivet.sh script, remembering to change the name of the merged output yoda inside it according to the generator you used.
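
Putting it together, the last step might look like this (assuming the AliGenerators environment is entered the same way as AliPhysics above):

alienv enter AliGenerators   # enter the generators environment, as done for AliPhysics earlier
./MergeFilesRivet.sh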

Update 22-08-2022

The calibration option has been introduced into the running script via a parameter. The common submission command for the JDL is:

submit rivet-JDL.jdl ParameterFile.par Nevents Njobs CalibrationOption  

where ParameterFile.par is for now only relevant for Pythia, and three options are available:

  • Monash (default)
  • Mode0
  • Mode2.

CalibrationOption has three options as well:

  • 0 -> no Calibration
  • 1 -> Calibration
  • 2 -> Rivet simulation using a preloaded calibration file.

This was tested using the ALICE_2015_PBPBCentrality plugin in Pb-Pb collisions (which is loaded by default when CalibrationOption = 1), so different options may be required for your analyses.
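
As an example, a submission of 20000 events split over 250 jobs with the Mode2 tune and calibration enabled (numbers chosen arbitrarily; use the name of the JDL file you actually uploaded) would be:

submit rivet-JDL.jdl mode2.par 20000 250 1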

Update 15-12-2022: EPOS4 Integration

From the AliGenerators version of 14 December 2022 (AliGenerators::v20221214-1), it is again possible to use the framework on the GRID.
The issue was related to loading the Rivet package (due to a commit pushed on 23/09/2020), but it is now possible to load Rivet::v3.1.6-alice1-15, contained in the latest AliGenerators package.
To run the EPOS4 generator on the GRID, you need to upload 5 files to your working folder:

  1. Bash_RunRivet_commonPar.sh
  2. Bash_RivetValidation.sh
  3. epos
  4. epos4.optns
  5. rivet-EPOS4.jdl

Then replace the folders inside the JDL file with your own paths, indicating the positions of your Rivet macros and of the files you just uploaded.
To submit the jobs use the syntax:

submit rivet-EPOS4.jdl *random_text* *number_of_events* *number_of_jobs*

where random_text can be anything, EPOS4 for example; it is a string used to load parameter files for other generators, but it is not used with EPOS4.
The main configuration for the generator is provided via the epos4.optns file (the name is hardcoded in the script). If you want to understand what the different parameters mean and how to translate your configuration into the language of the generator, you can visit the tutorial website made for EPOS4. I made it as a hobby for a mini-workshop, so if something is wrong or not fully working, I apologise in advance.
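
A concrete submission, with the numbers chosen purely as an example, could be:

submit rivet-EPOS4.jdl EPOS4 20000 250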

Yoda to ROOT conversion

The conversion with the yoda python script yoda2root (as of 09/08/2022) doesn't work properly (missing yoda.root module), so as a workaround the Go scripts can be used. In particular, a series of executables were compiled for different operating systems and can be found inside the yoda2root folder. Before running them, your computer needs to have Go installed; for this, follow the tutorial at https://go.dev/doc/install .
After you've completed that procedure, make the program for your specific OS executable:

chmod u+x filename.exe

and then convert your yoda file to ROOT with:

./filename.exe yodafile.yoda rootfile.root

Known bug

When editing a script on Windows, stray carriage-return characters can sometimes be inserted that make it fail on Linux.
Before adding a script to the GRID, try to run it on your local machine with a Linux system; if the error is similar to "/bin/bash^M: bad interpreter: No such file or directory", run the command:

sed -i -e 's/\r$//' script.sh  

This removes the stray characters. Try running it again; if it works, upload it to the GRID.

Custom Pythia version run without AliGenerators

Due to the time needed to upgrade AliGenerators, it was necessary to provide the user with a way to run custom-made or upgraded versions of Pythia as soon as they are released. The idea behind the implementation is to compile Pythia on the fly on the GRID, which is indeed achievable and fast.
The Bash_runRivet_CommonPar bash script does just that, provided that a .tgz archive containing a folder named "pythia*" is uploaded by the JDL script (rivet-newP8.jdl). By default the run starts with Monash, but a custom parameter file can be provided to change any option of the generation.
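
As a rough sketch, preparing and uploading such an archive could look like this (the folder and archive names are assumptions, and *PATH_IN_ALIEN* must be replaced with your alien path):

tar -czf pythia8310.tgz pythia8310/   # archive and folder names are illustrative
alien_cp file:pythia8310.tgz *PATH_IN_ALIEN*/pythia8310.tgz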
A simple submission would look like this:

submit rivet-newP8.jdl custom 20000 250 2

where custom is the name of the provided .par file (which is skipped in case it doesn't exist), 20000 is the number of events, 250 is the number of split jobs submitted and 2 is the calibration option.
The centre-of-mass energy can be changed inside the .jdl files, right after the -s flag in the arguments.
N.B. the simulation uses the common main42 file located inside the examples folder of the Pythia installation directory. If something changes in that file in future upgrades, the simulation might not work; reach out to me in that case (the last release tested is Pythia 8.3.10).
