This repository contains the code used to run the following analyses:
- VBS WH
- VBS VVH (all-hadronic)
It is structured as follows:
- abcdnet: Code to train ABCDNet (VBS VVH analysis)
- analysis: RAPIDO analysis code
- combine: Code for running Higgs Combine limits
- notebooks: Collection of Jupyter notebooks
- skimmer: Deprecated nanoAOD-tools skimming code
First, decide on one UAF to use out of uaf-2, uaf-3, and uaf-4.
Each machine has 7 TB of NVMe storage mounted at /data, which we will use heavily. These disks are separate from each other, so we must select one machine (preferably the one that is least heavily utilized).
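To judge which UAF is least utilized, you can check the free space on each machine's /data mount before committing to one. Below is a minimal sketch using only the Python standard library; the `free_tb` helper is ours, not part of the repository:

```python
import shutil

def free_tb(path):
    """Free space at `path` in terabytes (1 TB = 1e12 bytes)."""
    return shutil.disk_usage(path).free / 1e12

# Run this on each UAF (uaf-2, uaf-3, uaf-4) and pick the machine
# whose /data mount has the most free space, e.g.:
#   print(f"/data free: {free_tb('/data'):.2f} TB")
```

A plain `df -h /data` on each machine gives the same information.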
mkdir -p /data/userdata/${USER}/nanoaod/
ln -s /data/userdata/phchang/nanoaod/VBSVVHSkim /data/userdata/${USER}/nanoaod/VBSVVHSkim
git clone [email protected]:jkguiang/vbs.git
cd vbs/analysis
source setup.sh
This directory contains scale factors, cross sections, Python scripts containing lists of Project Metis DBSSample objects, etc.
scp -r uaf-2:/data/userdata/jguiang/vbs_data data # if you are on uaf-2 already, just cp -R
git clone [email protected]:jkguiang/rapido.git
cd rapido
make -j
cd -
Eventually, we will create a branch of NanoTools that contains these changes. The changes below primarily add the branches needed to read the custom ttH lepton ID MVA branch that we add to NanoAOD v9.
cd NanoTools/NanoCORE
cp /home/users/jguiang/projects/NanoTools/NanoCORE/Nano.h .
cp /home/users/jguiang/projects/NanoTools/NanoCORE/Nano.cc .
cp /home/users/jguiang/projects/NanoTools/NanoCORE/ElectronSelections.h .
cp /home/users/jguiang/projects/NanoTools/NanoCORE/ElectronSelections.cc .
cp /home/users/jguiang/projects/NanoTools/NanoCORE/MuonSelections.h .
cp /home/users/jguiang/projects/NanoTools/NanoCORE/MuonSelections.cc .
cp /home/users/jguiang/projects/NanoTools/NanoCORE/Tools/jetcorr/JetResolutionUncertainty.h Tools/jetcorr/JetResolutionUncertainty.h
rm -rf Tools/jetcorr/data/; cp -r /home/users/jguiang/projects/NanoTools/NanoCORE/Tools/jetcorr/data/ Tools/jetcorr/data/
cp /home/users/jguiang/projects/NanoTools/NanoCORE/Makefile . # comment out the RHEL 7 line and uncomment the RHEL 8 line
cd -
make study=vbsvvhjets
The command below will write babies (ROOT files with TTrees) to /data/userdata/$USER/vbs_studies/vbsvvhjets/output_TestRun using 64 parallel threads.
./bin/run vbsvvhjets --n_workers=64 --basedir=/data/userdata/$USER/vbs_studies --skimdir=/data/userdata/$USER/nanoaod/VBSVVHSkim --skimtag=0lep_2ak4_2ak8_ttH --data --tag=TestRun
./bin/merge_vbsvvhjets vbsvvhjets --basedir=/data/userdata/$USER/vbs_studies --tag=TestRun
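The output location follows directly from the `--basedir` and `--tag` arguments, as stated above: babies land in `<basedir>/vbsvvhjets/output_<tag>`. A small sketch of that convention (the `baby_dir` helper is ours, purely illustrative):

```python
import os

def baby_dir(basedir, analysis, tag):
    """Directory where ./bin/run writes babies, following the
    convention described above: <basedir>/<analysis>/output_<tag>."""
    return os.path.join(basedir, analysis, f"output_{tag}")

# baby_dir("/data/userdata/alice/vbs_studies", "vbsvvhjets", "TestRun")
# -> "/data/userdata/alice/vbs_studies/vbsvvhjets/output_TestRun"
```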
Assuming you have Miniconda installed:
conda create -n vbs
conda activate vbs
conda install -c conda-forge numpy pandas matplotlib tqdm mplhep uproot scikit-learn
pip install yahist
The command below will create plots in $HOME/public_html/vbsvvhjets_plots/TestRun/. You can use the --help flag to see all available options.
export PYTHONPATH=${PYTHONPATH}:$PWD
python scripts/make_plots_vbsvvh.py TestRun --allmerged # use --semimerged for semi-merged channel
The command below will write babies (ROOT files with TTrees) to /data/userdata/$USER/vbs_studies/vbsvvhjets/output_v1 using 64 parallel threads.
A different output directory is populated for each JEC and JER variation.
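Since each JEC and JER variation populates its own output directory, you can enumerate everything a given tag produced after the fact. A sketch assuming the variation directories share the `output_<tag>` prefix (the exact suffixes are defined by the run script, not shown here; the `variation_dirs` helper is ours):

```python
import glob
import os

def variation_dirs(basedir, analysis, tag):
    """List all output directories for a given tag, including any
    JEC/JER variation directories. Assumption: variations append a
    suffix to the nominal output_<tag> directory name."""
    pattern = os.path.join(basedir, analysis, f"output_{tag}*")
    return sorted(glob.glob(pattern))
```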
sh scripts/runall_vbsvvhjets.sh v1 # v1 can be any string for tag
The following command will create datacards in ../combine/vbsvvh/datacards/VBSVVH_allmerged_v1:
python scripts/make_datacards_vbsvvh.py v1 --allmerged # use --semimerged for semi-merged channel
Alternatively, the private samples (C2W, C2Z scan) can be used, and datacards will instead be written to ../combine/vbsvvh/datacards/Private_C2W_C2Z_allmerged_v1:
python scripts/make_datacards_vbsvvh.py v1 --private --allmerged # use --semimerged for semi-merged channel
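The datacard directory names above follow a simple pattern. The sketch below just encodes that naming convention for reference (the `datacard_dir` helper is ours, not part of the repository):

```python
def datacard_dir(tag, channel="allmerged", private=False):
    """Datacard output directory, per the patterns stated above:
    VBSVVH_<channel>_<tag> for the central samples, or
    Private_C2W_C2Z_<channel>_<tag> for the private C2W/C2Z scan."""
    prefix = "Private_C2W_C2Z" if private else "VBSVVH"
    return f"../combine/vbsvvh/datacards/{prefix}_{channel}_{tag}"

# datacard_dir("v1") -> "../combine/vbsvvh/datacards/VBSVVH_allmerged_v1"
```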
cd ../combine # i.e. cd to the vbs/combine directory!
# Now set up combine:
source /cvmfs/cms.cern.ch/cmsset_default.sh
cmssw-el7 # launch an SLC7 Singularity container
cmsrel CMSSW_11_3_4
cd CMSSW_11_3_4/src
cmsenv
git clone https://github.com/cms-analysis/HiggsAnalysis-CombinedLimit.git HiggsAnalysis/CombinedLimit
cd HiggsAnalysis/CombinedLimit
scramv1 b clean
scramv1 b -j 4
cd ../../../../
bash <(curl -s https://raw.githubusercontent.com/cms-analysis/CombineHarvester/main/CombineTools/scripts/sparse-checkout-https.sh)
cd CMSSW_11_3_4/src/
scramv1 b -j 4
cd ../../
cd vbsvvh
sh runLimits.sh datacards/VBSVVH_allmerged_v1 # results written to results/VBSVVH_allmerged_v1
sh plotLimits.sh results/VBSVVH_allmerged_v1