The framework is written in C++, is interfaced with ROOT, and is available under this Github link. After cloning or downloading the repository, the only prerequisites are the ROOT framework (see here for a quick start on ROOT) and a gcc compiler. The current version of the framework was compiled using gcc v6.20 and ROOT v6.10.04.
The 13 TeV ATLAS Open Data are hosted on the CERN Open Data portal and on the ATLAS Open Data portal, as described in this documentation. The framework can access the samples in two ways:
The framework consists of two main parts:
The analysis code is located in the Analysis folder, with 12 sub-folders corresponding to the 12 examples of physics analysis documented in Physics analysis examples. The naming of the sub-folders follows a simple rule: “NNAnalysis”, where NN can be WBoson, ZBoson, TTbar, SingleTop, WZDiBoson, ZZDiBoson, HZZ, HWW, Hyy, ZPrimeBoosted, ZTauTau and SUSY.
Each analysis sub-folder contains the following files:
As an example, in the case of the HWWAnalysis, the sub-folder looks like this (before Output_HWWAnalysis has been created):
In the main directory, do the first setup of the code by typing in the terminal:
./welcome.sh
or, in case your shell requires sourcing the script:
source welcome.sh
This will ask you whether you want to create all the output directories in the 12 analysis sub-folders automatically, or to erase their contents if needed.
After this, change into any of the analysis sub-folders and open the main control code (main_NNAnalysis.C) in your preferred text editor. It controls the location of the input samples; find the line:
// path to your local directory *or* URL, please change the default one!
TString path = "";
and adapt it properly to your specific case.
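For instance, the path can point either to a local copy of the samples or to their web location. Both values below are hypothetical placeholders, not the actual sample locations; substitute your own:

```cpp
// Hypothetical examples -- replace with your actual sample location:
// a local directory containing the downloaded samples
TString path = "/home/user/atlas-open-data/samples/";
// or a remote URL serving the same samples
// TString path = "https://example.org/atlas-open-data/samples/";
```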
After that, execute the code via the command line using:
./run.sh
or
source run.sh
The script will interactively ask you for two options; type your choice (0, 1,..) directly in the terminal and hit “ENTER”:
After you choose the options, the code will compile and create the needed ROOT shared libraries, and the analysis selection will begin: it will run over each input sample defined in main_NNAnalysis.C.
If everything was successful, the code will create in the output directory (Output_NNAnalysis) a new file with the name of the corresponding sample (data, ttbar,…).
To clean all shared and linked libraries after running, you can use a script called clean.sh located in the main directory.
The plotting code is located in the Plotting folder and contains the following files:
In the main Plotting directory, execute in the terminal:
./plotme.sh
or, in case your shell requires sourcing the script:
source plotme.sh
The script will interactively ask you for two options; type your choice (0, 1,..) directly in the terminal and hit “ENTER”:
After you choose the options, the code will compile and create the needed ROOT shared libraries, and the plotting will begin. If everything was successful, the code will create in the output directory (histograms) the corresponding plots defined in HistoList_ANALYSISNAME.txt.
To clean all shared and linked libraries after running, you can use a script called clean.sh located in the main directory.
Additional information about the plotting code:
In case you want to see the data and MC event yields: change “#define YIELDS 0” to “#define YIELDS 1” in Plotting.cxx and remake the plots;
In case you want to add the normalised signal to the plots: change “#define NORMSIG 0” to “#define NORMSIG 1” in Plotting.cxx and remake the plots;
In case something is not working: changing “#define DEBUG 0” to “#define DEBUG 1” in Plotting.cxx makes a lot of debug information appear, which can help you trace the origin of the problem. Typical causes are: the histograms directory does not exist, a wrong path for the location of the input files is given, a wrong or non-existent histogram name is requested, or one or several input files from the analysis are missing or failed;
In case you want to compile the code yourself instead of using the plotme script, type “make clean; make” and then run the code with ./plot [NNAnalysis] [location of Output_NNAnalysis]
To add a new variable called new_variable (which, as an example, will hold the value of a branch called something), save it as a new histogram called h_new, and make a plot of it, follow the instructions below:
(1) Add the new histogram in the header (NNAnalysis.h), in the public section of the TSelector class where the other histograms are defined:
TH1F *h_new = 0;
(2) Add in the histogram header (NNAnalysisHistograms.h) four new lines:
inside the function define_histograms() add:
h_new = new TH1F("h_new", "Description of the new variable; X axis name; Y axis name", number_of_bins, minimum_bin_value, maximum_bin_value);
inside the function FillOutputList() add:
GetOutputList()->Add(h_new);
inside the function WriteHistograms() add:
h_new->Write();
inside the function FillHistogramsGlobal() add:
if (s.Contains("h_new")) h_new->Fill(m,w);
(3) Finally, inside the main analysis code NNAnalysis.C, define a new variable (in this case an integer called new_variable), connect it to the value of the branch something that exists in the input samples (these are listed in the analysis header NNAnalysis.h after “Declaration of leaf types”), and fill the newly created histogram inside the function Process, after the line that reads the content of the TTree (fChain->GetTree()->GetEntries()>0):
int new_variable = something;
FillHistogramsGlobal( new_variable, weight, "h_new");
where weight is the product of the scale factors and the Monte Carlo weight.
(4) Now run the analysis code as usual again over all the samples and check that the new histogram h_new appears in the produced output files.
(5) The analysis part is done; now go to the Plotting part and, in the list_histos directory, add one new line to the HistoList_ANALYSISNAME.txt file:
h_new
(with no empty lines before or after it!).
(6) Execute the plotting code as usual (no need to change the code itself at all), and you will find the new histogram in histograms/h_new.png!
Go to the next framework or jump back to the summary page for the analysis frameworks or the general summary page.