The Object Condensation loss, developed by Jan Kieseler, is now used by several groups in high energy physics for both track reconstruction and shower reconstruction in calorimeters.
Several implementations of this idea already exist, but they are often maintained by only a few people. This repository aims to provide an easy-to-use implementation for both the TensorFlow and PyTorch backends.
Existing Implementations:
- cms-pepr [TensorFlow]
- gnn-tracking [PyTorch]
python3 -m pip install 'object_condensation[pytorch]'
# or
python3 -m pip install 'object_condensation[tensorflow]'
For the development setup, clone this repository and also add the `dev` and `testing` extras, e.g.,
python3 -m pip install -e '.[pytorch,dev,testing]'
Please also install pre-commit:
python3 -m pip install pre-commit
pre-commit install # in top-level directory of repository
Note: For a comparison of the performance of the different implementations, see the docs.
`condensation_loss` is a straightforward implementation that is easy to read and verify. It is used as the baseline.
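For orientation, the sketch below shows roughly what such an implementation computes, following the attractive and repulsive potentials of the original Object Condensation paper. The function name, signature, and the omission of noise handling and the beta term are simplifications for illustration only and do not reflect this package's API.

```python
import torch

def oc_potentials_sketch(beta, x, object_id, q_min=0.1):
    """Toy sketch of the Object Condensation potential term (no noise
    handling, no beta term; not the signature used by this package)."""
    # charge per node; clamp beta to keep arctanh finite
    q = torch.arctanh(beta.clamp(max=1 - 1e-4)) ** 2 + q_min
    v = torch.zeros_like(beta)
    for k in object_id.unique():
        in_obj = object_id == k
        alpha = torch.argmax(beta * in_obj)  # condensation point: highest beta in object k
        d = torch.norm(x - x[alpha], dim=1)  # distance of every node to this CP
        # attractive (quadratic) potential for members, repulsive hinge for all others
        v = v + q[alpha] * (d**2 * in_obj + torch.relu(1 - d) * ~in_obj)
    return (q * v).mean()
```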
`condensation_loss_tiger` saves memory by "masking instead of multiplying".
Consider the repulsive loss: it is an aggregation of potentials between condensation points (CPs) and individual nodes. If these potentials are taken to be hinge losses `relu(1 - dist)`, then they vanish for most CP-node pairs (assuming a sufficiently well-trained model). Now compare the following two implementation strategies (where `dist` is the CP-node distance matrix):
# Simplified by assuming that all CP-node pairs enter the repulsive potential
# Strategy 1
v_rep = torch.relu(1 - dist).sum()
# Strategy 2 (tiger)
rep_mask = dist < 1
v_rep = (1 - dist[rep_mask]).sum()
In strategy 1, PyTorch keeps all elements of `dist` in memory for backpropagation (even though most of the `relu` differentials will be 0). In strategy 2, because the mask is detached from the computational graph, the number of elements that take part in backpropagation is greatly reduced.
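As a quick, self-contained check (illustrative only, not part of the package), both strategies produce the same gradient, while strategy 2 only backpropagates through the masked entries:

```python
import torch

dist = (10 * torch.rand(1000, 1000)).requires_grad_()  # toy CP-node distance matrix

# Strategy 1: the relu keeps the full matrix alive for the backward pass
v1 = torch.relu(1 - dist).sum()

# Strategy 2 (tiger): the boolean mask carries no gradient, so only the
# selected entries participate in backpropagation
rep_mask = dist < 1
v2 = (1 - dist[rep_mask]).sum()

(g1,) = torch.autograd.grad(v1, dist)
(g2,) = torch.autograd.grad(v2, dist)
assert torch.allclose(g1, g2)
```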
However, there is still one problem: what if the latent space collapses at some point during training (or at its very beginning)? This would result in batches with greatly increased memory consumption, possibly crashing the run. To counter this, an additional parameter `max_n_rep` is introduced. If the number of repulsive pairs (`rep_mask.sum()` in the example above) exceeds `max_n_rep`, then `max_n_rep` elements are sampled from `rep_mask` and upweighted by `n_rep/max_n_rep`. To check whether this approximation is being used, `condensation_loss_tiger` returns `n_rep` in addition to the losses.
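A rough sketch of that sampling logic (the helper below is hypothetical and only illustrates the idea; `rep_mask`, `n_rep`, and `max_n_rep` are the quantities described above):

```python
import torch

def subsample_rep_mask(rep_mask: torch.Tensor, max_n_rep: int):
    """If there are more than max_n_rep repulsive pairs, keep a random
    subset and return a weight that compensates for the subsampling."""
    n_rep = int(rep_mask.sum())
    if n_rep <= max_n_rep:
        return rep_mask, 1.0, n_rep
    idx = rep_mask.nonzero()                       # indices of all repulsive pairs
    keep = idx[torch.randperm(n_rep)[:max_n_rep]]  # random subset of them
    sampled = torch.zeros_like(rep_mask)
    sampled[keep[:, 0], keep[:, 1]] = True
    return sampled, n_rep / max_n_rep, n_rep       # upweight by n_rep / max_n_rep
```

The repulsive term is then computed on the sampled mask and multiplied by the returned weight, e.g. `v_rep = weight * (1 - dist[sampled]).sum()`.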