Method

Here the considerations taken in combining the datasets and defining the statistical test are outlined.

Toolchain

Contur Toolchain

The software toolchain is that used in [152]. Other generators and interfaces may also be substituted for, e.g., FeynRules or Herwig 7.

Contur exploits three important developments to survey existing measurements and set limits on new physics.

  1. SM predictions for differential and exclusive, or semi-exclusive, final states are made using sophisticated calculational software, often embedded in Monte Carlo generators capable of simulating full, realistic final states [143]. These generators now incorporate matrix-elements for higher-order processes matched to logarithmic parton showers, and successful models of soft physics such as hadronisation and the underlying event.

  2. As the search for many of the favoured BSM scenarios has been unsuccessful, there has been a diversification of models for new physics, including simplified models [100, 112], complementing potentially ultraviolet-complete theories such as Supersymmetry, and lower-energy effective field theories (EFTs). All these approaches are readily imported into the event generators mentioned above, thus allowing the rapid prediction of their impact on a wide variety of final states simultaneously. In this paper we make extensive use of these capabilities within Herwig 7 [120, 125].

  3. The precision measurements from the LHC have mostly been made in a manner which minimises their model-dependence. That is, they are defined in terms of final-state signatures in fiducial regions well-matched to the acceptance of the detector. Many such measurements are readily available for analysis and comparison in the Rivet library [128, 141].

These three developments together make it possible to efficiently bring the power of a very wide range of data to bear on the search for new physics. While such a generic approach is unlikely to compete in terms of sensitivity with a search optimised for a specific theory, the breadth of potential signatures and models which can be covered makes it a powerful complementary approach. 1

1

Limits from existing searches can sometimes be applied to new models, for example by accessing archived versions of the original analysis code and detector simulation via the RECAST [174] project, or by independent implementations of experimental searches, see, for example, Refs. [123, 171, 182, 208, 217].

Strategy

The current approach considers BSM models in the light of existing measurements which have already been shown to agree with SM expectations. Thus this is inherently an exercise in limit-setting rather than discovery. The assumption is that a generic, measurement-based approach such as this will not be competitive in terms of sensitivity, or speed of discovery, with a dedicated search for a specific BSM final-state signature. However, it will have the advantage of breadth of coverage, and will make a valuable contribution to physics at the energy frontier whether or not new signatures are discovered at the LHC.

In the case of a new discovery, many models will be put forward to explain the data (as was seen for example [184] after the 750 GeV diphoton anomaly reported by ATLAS and CMS at the end of 2015 and start of 2016 [169, 170]). Checking these models for consistency with existing measurements will be vital for unravelling whatever the data might be telling us. Models designed to explain one signature may have somewhat unexpected consequences in different final states, some of which have already been precisely measured.

If it should turn out that no BSM signatures are in the end confirmed at the LHC, Contur offers potentially the broadest and most generic constraint on new physics, and motivates precise, model-independent measurements over a wide range of final states, giving the best chance of an indirect pointer to the eventual scale of new physics.

Dynamical data selection

We define a procedure to combine exclusion limits from different measured distributions. The data used for comparison come in the form of histograms (or 2D scatter plots), some of which carry information about the correlations between systematic uncertainties.

There are also overlaps between the event samples used in many different measurements, which lead to non-trivial correlations in the statistical uncertainties. To avoid spuriously high exclusion rates from counting what might be the same exclusion multiple times against several datasets, we take the following approach:

  1. Divide the measurements into groups that have no overlap in the event samples used, and hence no statistical correlation between them. Crudely, the measurements are grouped by final state, experiment, and beam energy (referred to as pools; see the data listing).

  2. Scan within each group for the most significant deviation between BSM+SM and SM. This is done distribution-by-distribution and bin-by-bin within distributions. Only the measurement with the most significant deviation is used, and the rest are disregarded. Although selecting the most significant deviation sounds intuitively suspect, it is in this case a conservative approach: since we are setting limits, discarding the less-significant deviations simply reduces sensitivity. If correlations are not used (or are unavailable), the single most discrepant bin from the most discrepant measurement is used, removing the dominant effect of highly correlated systematic uncertainties within a single measurement. Where a number of statistically-independent measurements exist within a pool, their likelihoods may be combined to give a single likelihood ratio for the group (see the sketch following this list).

  3. Combine the likelihood ratios of the different groups to give a single exclusion limit.
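
As an illustration of steps 1 and 2, the following Python sketch selects one representative result per pool, assuming a per-bin exclusion measure has already been computed for each bin. It is illustrative only; the class and field names are hypothetical and not those of the Contur code.

    from collections import defaultdict
    from dataclasses import dataclass

    @dataclass
    class Bin:
        pool: str          # statistically independent group: final state, experiment, beam energy
        measurement: str   # the measurement (distribution) this bin belongs to
        exclusion: float   # per-bin exclusion measure (larger = more discrepant, BSM+SM vs SM)

    def select_per_pool(bins):
        """Step 2: keep only the single most significant bin in each pool, discard the rest."""
        pools = defaultdict(list)
        for b in bins:                      # step 1: group bins by pool
            pools[b.pool].append(b)
        return {name: max(members, key=lambda x: x.exclusion)
                for name, members in pools.items()}

The representative results retained for each pool are then what enter the combination of step 3.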

Statistical Method

The question we wish to ask of any given BSM proposal is ‘at what significance do existing measurements, which agree with the SM, already exclude this?’. For all the measurements considered, comparisons to SM calculations have shown consistency between the predictions and the data. Thus, as a starting point, we take the data as our “null signal”, and we superimpose onto them the contribution from the BSM scenario under consideration. The uncertainties on the data will define the allowed space for these extra BSM contributions.

Since unfolded measurements generally have reasonably high statistics, a simple \(\chi^2\) method is appropriate and is used for most of these results, for speed and simplicity. However, this has been validated against the more sophisticated likelihood method described below.
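
As a minimal sketch of this default \(\chi^2\) approach (assuming uncorrelated bin uncertainties; this is not the full Contur implementation), note that because the data are taken as the null signal, the residual in each bin is just the injected BSM contribution:

    import numpy as np

    def chi2_bsm(bsm, data_unc):
        """Chi-squared of (data + BSM) against the data taken as the null signal.

        bsm      : per-bin BSM contributions, in the same units as the measurement
        data_unc : per-bin total data uncertainties (statistical and systematic in quadrature)
        Assumes uncorrelated uncertainties; with a covariance matrix C this would
        generalise to bsm @ np.linalg.inv(C) @ bsm.
        """
        bsm = np.asarray(bsm)
        data_unc = np.asarray(data_unc)
        return float(np.sum((bsm / data_unc) ** 2))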

Taking each bin of each distribution considered as a separate statistic to be tested, a likelihood function for each bin can be constructed as follows,

(1)\[\begin{aligned} L(\mu, {b}, {\sigma}_{b}, {s}) = { \frac{(\mu s + b)^{n}}{n!} \exp\big(-(\mu s + b)\big) \times \frac{1}{\sqrt{2 \pi} \sigma_{b}} \exp\left(-\frac{(m - b)^{2}}{2 \sigma_{b}^{2}}\right)} \times \frac{(\tau s)^{k}}{k!}\exp\big(-\tau s\big)\,,\end{aligned}\]

where the three factors are as follows (a numerical sketch of this likelihood is given after the list):

  • A Poisson event count. The measurements considered are differential cross-section measurements, so the measured values are multiplied by the integrated luminosity, taken from the experimental paper behind each analysis, to convert them to an event count in each bin (and, correspondingly, the additional events that the new physics would have added to the measurement). This statistic in each tested bin is then composed of:

    • \(s\), the parameter defining the BSM signal event count.

    • \(b\), the parameter defining the background event count.

    • \(n\), the observed event count.

    • \(\mu\), the signal strength parameter modulating the strength of the signal hypothesis tested, thus \(\mu=0\) corresponds to the background-only hypothesis and \(\mu=1\) the full signal strength hypothesis;

  • A convolution with a Gaussian defining the distribution of the background count, where the following additional components are identified:

    • \(m\), the background count. The expectation value of this count, which is used to construct the test, is taken as the central value of the measured data point.

    • \(\sigma_{b}\), the uncertainty in the background event count, taken from the data as the 1\(\sigma\) width of a Gaussian. The uncertainty is the combination of the statistical and systematic uncertainties in quadrature; typically the systematic uncertainty dominates.

  • An additional Poisson term describing the Monte Carlo error on the simulated BSM signal count with \(k\) being the actual number of generated BSM events. The expectation value of \(k\) is related to \(s\) by a factor \(\tau\), which is the ratio of the generated MC luminosity to the experimental luminosity.
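
For concreteness, Eq. (1) can be evaluated directly. The sketch below (with illustrative parameter names, and using \(\ln\Gamma\) so that non-integer scaled counts are handled) returns the log-likelihood for a single bin:

    import numpy as np
    from scipy.special import gammaln

    def log_likelihood(mu, b, s, n, m, sigma_b, k, tau):
        """Logarithm of Eq. (1) for one bin.

        mu      : signal strength parameter
        s, b    : signal and background event-count parameters
        n       : observed event count (measured value x integrated luminosity)
        m       : measured background count (central value of the data point)
        sigma_b : data uncertainty (stat and syst in quadrature), as an event count
        k       : number of generated BSM MC events contributing to the bin
        tau     : ratio of generated MC luminosity to experimental luminosity
        """
        lam = mu * s + b
        log_poisson_data = n * np.log(lam) - lam - gammaln(n + 1)        # (mu s + b)^n e^{-(mu s + b)} / n!
        log_gauss_bkg = -0.5 * ((m - b) / sigma_b) ** 2 - np.log(np.sqrt(2.0 * np.pi) * sigma_b)
        log_poisson_mc = k * np.log(tau * s) - tau * s - gammaln(k + 1)  # (tau s)^k e^{-tau s} / k!
        return log_poisson_data + log_gauss_bkg + log_poisson_mc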

This likelihood is then used to construct a test statistic based on the profile likelihood ratio, following the arguments laid out in Ref. [173]. In particular, the \(\tilde{q}_{\mu}\) test statistic is constructed. This enables the setting of a one-sided upper limit on the confidence in the strength parameter hypothesis, \(\mu\), which is desirable since, in the situation that the observed strength parameter exceeds the tested hypothesis, agreement with the hypothesis should not diminish. In addition, this construction places a lower limit on the strength parameter, where any observed fluctuations below the background-only hypothesis are said to agree with the background-only hypothesis 3. The required information is then the sampling distribution of this test statistic. This can be evaluated either using the so-called Asimov data set to build an approximate distribution of the considered test statistic, or explicitly using multiple Monte Carlo ‘toy model’ tests 4.
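
For reference, the test statistic of Ref. [173] is built from the profile likelihood ratio \(\tilde{\lambda}(\mu)\), in which the maximum-likelihood estimator \(\hat{\mu}\) is replaced by zero if it is negative:

\[\begin{split}\tilde{q}_{\mu} = \begin{cases} -2 \ln \tilde{\lambda}(\mu) & \hat{\mu} \leq \mu\,, \\ 0 & \hat{\mu} > \mu\,, \end{cases}\end{split}\]

so that upward fluctuations above the tested signal strength do not count as evidence against it.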

The information needed to build the approximate sampling distributions is contained in the covariance matrix, constructed from the second derivatives of the logarithm of the likelihood given in equation (1) with respect to the parameters (\(\mu\), \(b\) and \(s\)). These derivatives are as follows:

(2)\[\begin{split}\begin{aligned} \mu \mu :& &\frac{\partial^2 \ln L}{\partial \mu^2} = & \frac{-n s^2}{(\mu s + b)^2} \\ b b :& &\frac{\partial^2 \ln L}{\partial b^2} = & \frac{-n}{(\mu s + b)^2} - \frac{1}{\sigma_b^2} \\ s s :& &\frac{\partial^2 \ln L}{\partial s^2} = & \frac{-n\mu^2}{(\mu s + b)^2} - \frac{k}{s^2} \\ \mu s = s \mu :& &\frac{\partial^2 \ln L}{\partial \mu \, \partial s} = & \frac{n b}{(\mu s + b)^2} - 1 \\ \mu b = b \mu :& &\frac{\partial^2 \ln L}{\partial \mu \, \partial b} = & \frac{-n s}{(\mu s + b)^2} \\ b s = s b :& &\frac{\partial^2 \ln L}{\partial s \, \partial b} = & \frac{-n\mu}{(\mu s + b)^2}\,.\end{aligned}\end{split}\]

These are arranged in the inverse covariance matrix as follows:

(3)\[\begin{split}\begin{aligned} V^{-1} = - E \begin{bmatrix} \mu\mu & \mu s & \mu b \\ s \mu & s s & s b \\ b \mu & b s & b b \end{bmatrix} \end{aligned}\end{split}\]

The variance of \(\mu\) is extracted from the inverse of the matrix given in (3) as:

\[\sigma_\mu^{2} = V_{\mu,\mu}\]

In order to evaluate this, the counting parameters (\(n\), \(m\) and \(k\)) are evaluated at their Asimov values, following the arguments detailed in Ref. [173]. These are taken as follows (see the numerical sketch after this list):

  • \(n_{A} = E[n] = \mu' s + b\). The total count under the assumed signal strength, \(\mu'\), which for the purposes of this argument is equal to 1.

  • \(m_{A}=E[m] = b\). The background count is defined as following a Gaussian distribution with a mean of \(b\).

  • \(k_{A} = E[k] = \tau s\). The generated signal count is defined as following a Poisson distribution with a mean of \(\tau s\).
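
Putting Eqs. (2) and (3) together with these Asimov values, a minimal numerical sketch (illustrative names; not the Contur implementation) of the extraction of \(\sigma_{\mu}^{2}\) is:

    import numpy as np

    def var_mu_asimov(mu, mu_prime, s, b, sigma_b, tau):
        """Variance of the strength parameter from the Asimov data set.

        Builds the inverse covariance matrix of Eq. (3) from the second
        derivatives of Eq. (2), evaluated at the Asimov counts
        n = mu' s + b, m = b, k = tau s, and inverts it.
        Parameter ordering is (mu, s, b), as in Eq. (3).
        """
        n = mu_prime * s + b          # n_A
        k = tau * s                   # k_A
        denom = (mu * s + b) ** 2

        d_mumu = -n * s**2 / denom
        d_bb   = -n / denom - 1.0 / sigma_b**2
        d_ss   = -n * mu**2 / denom - k / s**2
        d_mus  =  n * b / denom - 1.0
        d_mub  = -n * s / denom
        d_sb   = -n * mu / denom

        inv_V = -np.array([[d_mumu, d_mus, d_mub],
                           [d_mus,  d_ss,  d_sb ],
                           [d_mub,  d_sb,  d_bb ]])
        return np.linalg.inv(inv_V)[0, 0]   # sigma_mu^2 = V_{mu,mu}

In the asymptotic limit, this \(\sigma_{\mu}\) fixes the distribution of \(\tilde{q}_{\mu}\) and hence the \(p\)-values entering CL\(_{s}\).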

Using this data set, the variance of the strength parameter, \(\mu\), under the assumption of a hypothesised value, \(\mu'\), can be found. This is then taken to define the distribution of the \(\tilde{q}_{\mu}\) statistic, and consequently the size of the test corresponding to the observed value of the count. The size of the test can be quoted as a \(p\)-value, or equivalently as a confidence level, the complement of the size of the test. As is conventional in the particle physics community, the final measure of statistical agreement is presented in terms of what is known as the CL\(_{s}\) method [192, 218]. Then, for a given distribution, CL\(_{s}\) can be evaluated separately for each bin, where the bin with the strongest CL\(_{s}\) exclusion (and correspondingly smallest \(p_{s+b}\) value) is taken to represent the sensitivity measure used to evaluate each distribution, following the dynamical data selection outlined above.

Armed then with a list of selected sensitive distributions with minimal correlations, a total combined CL\(_{s}\) across all considered channels can be constructed from the product of the likelihoods. This leaves the core of the methodology presented here unchanged; the effect is simply to extend the covariance matrix. The overall result gives a probability, for each tested parameter set, that the observed counts \(n_{i}\), across all the measurement bins considered, are compatible with the full signal strength hypothesis.
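
In the Gaussian (asymptotic) limit, extending the covariance matrix across statistically independent bins that share a single signal strength is equivalent to summing the Fisher information on \(\mu\) from each bin. A minimal sketch of this combination (illustrative only, reusing var_mu_asimov from the sketch above) is:

    import numpy as np

    def combined_var_mu(per_bin_var_mu):
        """Combine statistically independent bins sharing one signal strength mu.

        per_bin_var_mu : variances of mu from each selected bin (one per pool),
                         e.g. as returned by var_mu_asimov above.
        With separate nuisance parameters (s, b) per bin and a common mu, the
        profiled Fisher information on mu adds, so the inverse variances sum.
        """
        v = np.asarray(per_bin_var_mu, dtype=float)
        return 1.0 / np.sum(1.0 / v)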

Finally, it is noted that this methodology has been designed simply to profile BSM contributions against the data taken. It can be extended to incorporate a separate background simulation, or to include correlations between bins where available.

3

This is not unexpected; the construction up to this point has been designed to look at smoothly falling, well-measured processes at energies that the LHC is designed to probe. This is, however, a result that should be monitored when considering different models.

4

For the cases considered here the results were found to be equivalent, implying that the tested parameter-space values fall into the asymptotic, or large-sample, limit, and so the Asimov approach is used.

Limitations

Most of the limitations come from the fact that (in its default mode) Contur assumes the data are identically equal to the SM. This is an assumption that is reasonable for distributions where the uncertainties on the SM prediction are not larger than the uncertainties on the data. It is also the assumption made in the control regions of many searches, where the background evaluation is “data driven”.

Because of this, Contur as currently implemented is best adapted to identifying kinematic features (mass peaks, kinematic edges) and may be less reliable for smooth deviations in normalisation. In particular, since we currently take the data to be identically equal to the SM expectation, we will be insensitive to a signal which might in principle arise as the cumulative effect of a number of statistically insignificant deviations across a range of experimental measurements. To do this properly requires an extensive evaluation of the theoretical uncertainties on the SM predictions for each channel. These predictions and uncertainties are gradually being added to Contur and can be tried out using a command-line option (see [135] for a first demonstration of this).

Additionally, in low-statistics regions, outlying events in the tails of the data will not lead to a weakening of the limit, as would be the case in a search. However, measurements unfolded to particle level are typically performed in bins with a requirement of a minimum number of events in any given bin, reducing the impact of this effect (and also weakening the exclusion limits). Although some searches are available in Rivet and can be used by Contur (since [135]) by selecting the appropriate command-line option, our limits generally focus on the impact of high-precision measurements on the BSM model, in which systematic uncertainties typically dominate.

For these reasons, the limits derived are best described as expected limits, seen as delineating regions where the measurements are sensitive and deviations are disfavoured. In regions where the confidence level is high, they do represent a real exclusion.