5.1. General parameters

The following parameters describe general run information. See Input structure for how to use them in a configuration file or on the command line.

5.1.1. EVENTS

This parameter specifies the number of events to be generated.

It can alternatively be set on the command line through option -e, see Command Line Options.

5.1.2. EVENT_TYPE

This parameter specifies the kind of events to be generated. It can alternatively be set on the command line through option -t, see Command Line Options.

  • The default event type is StandardPerturbative, which generates a hard event through exact matrix elements matched and/or merged with the parton shower, optionally including hadronization, hadron decays, etc.

Alternatively there are two more specialised modes, namely:

  • MinimumBias, which generates minimum bias events through the SHRIMPS model implemented in Sherpa, see Minimum bias events.

  • HadronDecay, which allows one to simulate the decays of a specific hadron.
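
As an illustration, a minimal configuration fragment combining the two parameters above might read (the event number is arbitrary):

EVENTS: 10000
EVENT_TYPE: StandardPerturbative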

5.1.3. SHERPA_VERSION

This parameter ties a config file to a specific Sherpa version, e.g. SHERPA_VERSION: 2.2.0. If two values are given, they are interpreted as a range of Sherpa versions: SHERPA_VERSION: [2.2.0, 2.2.5] specifies that this config file can be used with any Sherpa version between (and including) 2.2.0 and 2.2.5.

5.1.4. TUNE

Warning

This parameter is currently not supported.

5.1.5. OUTPUT

This parameter specifies the screen output level (verbosity) of the program. If you are looking for event file output options please refer to section Event output formats.

It can alternatively be set on the command line through option -O, see Command Line Options. A different output level can be specified for the event generation step through EVT_OUTPUT, or via the command line option -o, see Command Line Options.

The value can be any sum of the following:

  • 0: Error messages (-> always displayed).

  • 1: Event display.

  • 2: Informational messages during the run.

  • 4: Tracking messages (lots of output).

  • 8: Debugging messages (even more output).

E.g. OUTPUT: 3 would display informational messages, events and errors. Use OUTPUT_PRECISION to set the default output precision (default 6). Note that this may be overridden in the output of specific functions.
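
For example, a fragment setting all three verbosity-related parameters might look like this (the values are illustrative):

OUTPUT: 3             # errors (0) + events (1) + informational messages (2)
EVT_OUTPUT: 2         # reduced verbosity during event generation
OUTPUT_PRECISION: 12  # print numbers with 12 digits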

For expert users: The output level can be overridden for individual functions, e.g. like this:

FUNCTION_OUTPUT:
  "void SHERPA::Matrix_Element_Handler::BuildProcesses()": 8
  ...

where the function signature is given by the value of __PRETTY_FUNCTION__ in that function. Another expert parameter is EVT_OUTPUT_START, which specifies the first event affected by EVT_OUTPUT. This can be useful to generate debugging output only for events affected by a specific issue.
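
For instance, to switch on full debugging output only from a certain event onwards, one might use (the event number is hypothetical):

EVT_OUTPUT: 8
EVT_OUTPUT_START: 4242  # first event for which EVT_OUTPUT takes effect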

5.1.6. LOG_FILE

This parameter specifies the log file. If set, the standard output from Sherpa is written to the specified file, but output from child processes is not redirected. This option is particularly useful to produce clean log files when running the code in MPI mode, see MPI parallelization. A file name can alternatively be specified on the command line through option -l, see Command Line Options.
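
For instance (the file name is illustrative):

LOG_FILE: sherpa.log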

5.1.7. RANDOM_SEED

Sherpa uses different random-number generators. The default is the Ran3 generator described in [PTVF07]. Alternatively, a combination of George Marsaglia's KISS and SWB generators [MZ91] can be employed, see this website. The integer-valued seeds of the generators are specified by RANDOM_SEED: [A, .., D]. They can also be set individually using RANDOM_SEED1: A through RANDOM_SEED4: D. The Ran3 generator takes only one argument (in this case, you can simply use RANDOM_SEED: A). This value can also be set using the command line option -R, see Command Line Options.
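
A sketch of both variants (the seed values are arbitrary):

RANDOM_SEED: 12345                           # single seed, e.g. for the default Ran3 generator
# RANDOM_SEED: [12345, 23456, 34567, 45678]  # four seeds for the KISS/SWB combination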

5.1.8. EVENT_SEED_MODE

The tag EVENT_SEED_MODE can be used to enforce the same seeds in different runs of the generator. When set to 1, existing random seed files are read and the seed is set to the next available value in the file before each event. When set to 2, seed files are written to disk; these files are gzip-compressed if Sherpa was compiled with the option --enable-gzip. When set to 3, Sherpa uses an internal bookkeeping mechanism to advance to the next predefined seed; no seed files are written out or read in.
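
For example, a sketch of writing seed files in one run and replaying them in another:

EVENT_SEED_MODE: 2    # first run: write seed files to disk
# EVENT_SEED_MODE: 1  # replay run: read the seeds back before each event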

5.1.9. ANALYSIS

Analysis routines can be switched on or off using the ANALYSIS parameter. The default is no analysis. This parameter can also be specified on the command line using option -a, see Command Line Options.

The following analysis handlers are currently available:

Internal
Sherpa’s internal analysis handler.
To use this option, the package must be configured with option --enable-analysis.
An output directory can be specified using ANALYSIS_OUTPUT.
Rivet
The Rivet package, see Rivet Website.
To enable it, Rivet and HepMC have to be installed and Sherpa must be configured
as described in Rivet analyses.
HZTool
The HZTool package, see HZTool Website.
To enable it, HZTool and CERNLIB have to be installed and Sherpa must be configured
as described in HZTool analyses.

Multiple options can also be specified, e.g. ANALYSIS: [Internal, Rivet].

5.1.10. ANALYSIS_OUTPUT

Name of the directory for histogram files when using the internal analysis, or name of the Yoda file when using Rivet, see ANALYSIS. The directory/file is created relative to the working directory. The default value is Analysis/. This parameter can also be specified on the command line using option -A, see Command Line Options.
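
A typical combination of the two parameters might read (the name is illustrative):

ANALYSIS: Rivet
ANALYSIS_OUTPUT: MyAnalysis  # Yoda file (Rivet) or directory (Internal)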

5.1.11. TIMEOUT

A run-time limit in user CPU seconds can be set through TIMEOUT. This option is mainly relevant when running Sherpa on a batch system: since jobs are in many cases simply terminated, TIMEOUT allows a run to be interrupted in time, all relevant information to be stored, and the run to be restarted without any loss. This is particularly useful when carrying out long integrations. The default setting of -1 means that no run-time limit is imposed.
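
For example, to interrupt the run gracefully after 24 hours of user CPU time (24 * 3600 = 86400 seconds):

TIMEOUT: 86400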

5.1.12. RLIMIT_AS

A memory limit can be set to prevent Sherpa from crashing the system it is running on as it continues to build up matrix elements and loads additional libraries at run time. By default, the maximum RAM of the system is determined and used as the memory limit. This can be changed by setting RLIMIT_AS: <size>, where the size is given as e.g. 500 MB, 4 GB, or 10 %. When running with MPI parallelization it might be necessary to divide the total maximum by the number of cores; this can be done by setting RLIMIT_BY_CPU: true.
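
A sketch of the corresponding settings (the limit is illustrative):

RLIMIT_AS: 4 GB
RLIMIT_BY_CPU: true  # divide the total limit by the number of cores in MPI runs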

Sherpa checks for memory leaks during integration and event generation. If the memory allocated after the start of integration or event generation exceeds the parameter MEMLEAK_WARNING_THRESHOLD, a warning is printed. Like RLIMIT_AS, MEMLEAK_WARNING_THRESHOLD can be set using units. The warning threshold defaults to 16 MB.

5.1.13. BATCH_MODE

Whether or not to run Sherpa in batch mode. The default is 1, meaning Sherpa does not attempt to save runtime information when catching a signal or an exception. If option 0 is used instead, Sherpa will store potential integration information and analysis results whenever the run is terminated abnormally. All possible settings are:

0

Sherpa attempts to write out integration and analysis results when catching an exception.

1

Sherpa does not attempt to write out integration and analysis results when catching an exception.

2

Sherpa outputs the event counter continuously instead of overwriting the previous value (this is the default when using LOG_FILE).

4

Sherpa increases the on-screen event counter in constant steps of 100 instead of steps relative to the current event number. The interval length can be adjusted with EVENT_DISPLAY_INTERVAL.

The settings are additive such that multiple settings can be employed at the same time.
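
For example, settings 1 and 2 can be combined as follows:

BATCH_MODE: 3  # = 1 + 2: no write-out on abnormal termination, continuous event counter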

Note

When running the code on a cluster or in a grid environment, BATCH_MODE should always contain setting 1 (i.e. BATCH_MODE=[1|3|5|7]).

The command line option -b should therefore not be used in this case, see Command Line Options.

5.1.14. NUM_ACCURACY

The targeted numerical accuracy can be specified through NUM_ACCURACY, e.g. for comparing two numbers. This might have to be reduced if gauge tests fail for numerical reasons. The default is 1E-10.
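
For example, to loosen the tolerance (the value is illustrative):

NUM_ACCURACY: 1E-8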

5.1.15. SHERPA_CPP_PATH

The path in which Sherpa stores dynamically created C++ source code, if any. If not specified otherwise, this sets SHERPA_LIB_PATH to $SHERPA_CPP_PATH/Process/lib. This value can also be set using the command line option -L, see Command Line Options. Both settings can also be set using environment variables.
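
For instance (the path is illustrative):

SHERPA_CPP_PATH: ./MyProcesses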

5.1.16. SHERPA_LIB_PATH

The path in which Sherpa looks for dynamically linked libraries from previously created C++ source code, cf. SHERPA_CPP_PATH.

5.1.17. Event output formats

Sherpa can write out events in various formats, e.g. the HepEVT common block structure or the HepMC format. The authors of Sherpa assume that the user is sufficiently acquainted with these formats when selecting them.

If the events are to be written to file, the parameter EVENT_OUTPUT must be specified together with a file name. An example would be EVENT_OUTPUT: HepMC_GenEvent[MyFile], where MyFile stands for the desired file base name. More than one output can also be specified:

EVENT_OUTPUT:
  - HepMC_GenEvent[MyFile]
  - Root[MyFile]

The following formats are currently available:

HepMC_GenEvent

Generates output in HepMC::IO_GenEvent format. The HepMC::GenEvent::m_weights weight vector stores the following items:

  • [0]: event weight.

  • [1]: combined matrix element and PDF weight (missing only the phase space weight information, thus directly suitable for evaluating the matrix element value of the given configuration).

  • [2]: event weight normalisation (in case of unweighted events, event weights of ~ +/-1 can be obtained by (event weight)/(event weight normalisation)).

  • [3]: number of trials.

The total cross section of the simulated event sample can be computed as the sum of event weights divided by the sum of the numbers of trials. This value must agree with the total cross section quoted by Sherpa at the end of the event generation run, and it can serve as a cross-check on the consistency of the HepMC event file. Note that Sherpa conforms to the Les Houches 2013 suggestion (http://phystev.in2p3.fr/wiki/2013:groups:tools:hepmc) of indicating interaction types through the GenVertex type flag. Multiple event weights can also be enabled with HepMC versions >=2.06, cf. Scale and PDF variations. The following additional customisations can be used:

HEPMC_USE_NAMED_WEIGHTS: <false|true> Enable filling weights with an associated name. The nominal event weight has the key Weight. MEWeight, WeightNormalisation and NTrials provide additional information for each event as described above. Needs HepMC version >=2.06.

HEPMC_EXTENDED_WEIGHTS: <false|true> Write additional event weight information needed for a posteriori reweighting into the WeightContainer, cf. A posteriori scale and PDF variations using the HepMC GenEvent Output. Necessitates the use of HEPMC_USE_NAMED_WEIGHTS.

HEPMC_TREE_LIKE: <false|true> Force the event record to be strictly tree-like. Please note that this removes some information about the matrix-element/parton-shower interplay which would otherwise be stored.
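
A sketch combining these settings in a configuration file (the file name is illustrative):

EVENT_OUTPUT: HepMC_GenEvent[MyFile]
HEPMC_USE_NAMED_WEIGHTS: true
HEPMC_EXTENDED_WEIGHTS: false
HEPMC_TREE_LIKE: false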

HepMC_Short

Generates output in HepMC::IO_GenEvent format; however, only incoming beams and outgoing particles are stored. Intermediate and decayed particles are not listed. The event weights are stored in the same way as above, and HEPMC_USE_NAMED_WEIGHTS and HEPMC_EXTENDED_WEIGHTS can be used for customisation.

HepMC3_GenEvent

Generates output using the HepMC3 library. The output format is set with the HEPMC3_IO_TYPE: <0|1|2|3|4> tag. The default value is 0 and corresponds to ASCII GenEvent. Other available options are 1: HepEvt, 2: a ROOT file with every event written as an object of class GenEvent, and 3: a ROOT file with GenEvent objects written into a TTree. Otherwise similar to HepMC_GenEvent.

Delphes_GenEvent

Generates output in Root format, which can be passed to Delphes for analyses. Input events are taken from the HepMC interface. Storage space can be reduced by up to 50% compared to gzip compressed HepMC. This output format is available only if Sherpa was configured and installed with options --enable-root and --enable-delphes=/path/to/delphes.

Delphes_Short

Generates output in Root format, which can be passed to Delphes for analyses. Only incoming beams and outgoing particles are stored.

PGS

Generates output in StdHEP format, which can be passed to PGS for analyses. This output format is available only if Sherpa was configured and installed with options --enable-hepevtsize=4000 and --enable-pgs=/path/to/pgs. Please refer to the PGS documentation for how to pass StdHEP event files on to PGS. If you are using the LHC olympics executable, you may run ./olympics --stdhep events.lhe <other options>.

PGS_Weighted

Generates output in StdHEP format, which can be passed to PGS for analyses. Event weights in the HEPEV4 common block are stored in the event file.

HEPEVT

Generates output in HepEvt format.

LHEF

Generates output in the Les Houches Event File format. This output format is intended for matrix element configurations only. Since the format requires PDF information to be written out in the outdated PDFLIB/LHAGLUE enumeration format, this is only done automatically if LHAPDF is used; otherwise the identification numbers have to be given explicitly via LHEF_PDF_NUMBER (LHEF_PDF_NUMBER_1 and LHEF_PDF_NUMBER_2 if the two beams carry different structure functions). This format currently outputs matrix element information only; no information about the large-Nc colour flow is given, as the LHEF output format is not suited to communicate enough information for meaningful parton showering on top of multi-parton final states.
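
A sketch of an LHEF setup without LHAPDF (the file name and the PDF identification number are purely illustrative; the latter must be replaced by the actual PDFLIB/LHAGLUE id of the PDF in use):

EVENT_OUTPUT: LHEF[MyEvents]
LHEF_PDF_NUMBER: 10042  # illustrative id only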

Root

Generates output in ROOT ntuple format for NLO event generation only. For details on the ntuple format, see A posteriori scale and PDF variations using the ROOT NTuple Output. This output option is available only if Sherpa was linked to ROOT during installation by using the configure option --enable-root=/path/to/root. ROOT ntuples can be read back into Sherpa and analyzed using the option EVENT_INPUT. This feature is described in Production of NTuples.

The output can be further customized using the following options:

FILE_SIZE

Number of events per file (default: unlimited).

EVENT_FILE_PATH

Directory where the files will be stored.

EVENT_OUTPUT_PRECISION

Steers the precision of all numbers written to file (default: 12).
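
Put together, a customised file output setup might read (all values are illustrative):

EVENT_OUTPUT: HepMC_GenEvent[MyFile]
FILE_SIZE: 100000            # events per file
EVENT_FILE_PATH: ./events    # target directory
EVENT_OUTPUT_PRECISION: 15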

For all output formats except ROOT and Delphes, events can be written directly to gzipped files instead of plain text. The option --enable-gzip must be given during installation to enable this feature.

5.1.18. Scale and PDF variations

Sherpa can compute alternative event weights for different scale, PDF and AlphaS(MZ) choices on-the-fly during event generation. This is invoked with the following syntax:

VARIATIONS:
- ScaleFactors:
    MuR2: <muR2-fac-1>
    MuF2: <muF2-fac-1>
    QCUT: <qcut-fac-1>
  PDF: <PDF-1>
- ScaleFactors:
    MuR2: <muR2-fac-2>
    MuF2: <muF2-fac-2>
    QCUT: <qcut-fac-2>
  PDF: <PDF-2>
...

The keyword VARIATIONS takes a list of variations. Each variation is specified by a set of scale factors and a PDF choice (or an AlphaS(MZ) choice, see below).

Scale factors can be given for the renormalisation, factorisation and for the merging scale. The corresponding keys are MuR2, MuF2 and QCUT, respectively. The factors for the renormalisation and factorisation scales must be given in their quadratic form, i.e. MuR2: 4.0 applies a scale factor of 2.0 to the renormalisation scale. All scale factors can be omitted (they default to 1.0). Instead of MuR2 and MuF2, one can also use the keyword Mu2. In this case, the given factor is applied to both the renormalisation and the factorisation scale.

For the PDF specification, any set present in any of the PDF library interfaces loaded through PDF_LIBRARY can be used. If no PDF set is given, the nominal one is used. Specific PDF members can be selected by appending /<member-id> to the PDF set name.

Instead of using PDF: <PDF> (which consistently also varies the strong coupling if the PDF provides a different specification of it!), one can also specify a pure AlphaS variation by giving its value at the Z mass scale: AlphaS(MZ): <alphas(mz)-value>. This can be useful e.g. for leptonic production processes.
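
For instance, a pure strong-coupling variation could be specified as follows (the values are illustrative):

VARIATIONS:
  - AlphaS(MZ): 0.117
  - AlphaS(MZ): 0.119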

It can be tedious to write every variation explicitly, e.g. for 7-point scale-factor variations or if one wants variations for all members of a PDF set. Therefore an asterisk can be appended to some values, which triggers an expansion. For PDF sets, this means that the variation is repeated for each member of that set. For scale factors, the factor is expanded to its inverse, unity and itself, i.e. 4.0* expands to 1.0/4.0, 1.0 and 4.0. A special meaning is reserved for Mu2: 4.0*, which expands to a 7-point scale variation:

VARIATIONS:
  - ScaleFactors:
      Mu2: 4.0*

is therefore equivalent to

VARIATIONS:
  - ScaleFactors:
      MuF2: 0.25
      MuR2: 0.25
  - ScaleFactors:
      MuF2: 1.0
      MuR2: 0.25
  - ScaleFactors:
      MuF2: 0.25
      MuR2: 1.0
  - ScaleFactors:
      MuF2: 1.0
      MuR2: 1.0
  - ScaleFactors:
      MuF2: 4.0
      MuR2: 1.0
  - ScaleFactors:
      MuF2: 1.0
      MuR2: 4.0
  - ScaleFactors:
      MuF2: 4.0
      MuR2: 4.0

As another example, a complete variation using the PDF4LHC convention would read

VARIATIONS:
  - ScaleFactors:
      Mu2: 4.0*
  - PDF: CT10nlo*
  - PDF: MMHT2014nlo68cl*
  - PDF: NNPDF30_nlo_as_0118*

Please note that this syntax will create \(7+53+51+101=212\) additional weights for each event. Even though reweighting is used to reduce the amount of additional calculation as far as possible, this can still require a considerable amount of additional CPU hours, in particular when parton-shower reweighting is enabled (see below).

Note that asterisk expansions include trivial scale variations and the central PDF set. Depending on the other specifications in a variation, this can result in a completely trivial variation. By default, such variations are omitted during the calculation, since the nominal calculation is included in the Sherpa output anyway. Trivial variations can be explicitly allowed using VARIATIONS_INCLUDE_CV: false.

The additional event weights can then be written into the event output. However, this is currently only supported for HepMC_GenEvent and HepMC_Short with HepMC versions >=2.06 and HEPMC_USE_NAMED_WEIGHTS: true. The alternative event weights follow the Les Houches naming convention for such variations, i.e. they are named MUR<fac>_MUF<fac>_PDF<id>. When using Sherpa's interface to Rivet, see Rivet analyses, separate instances of Rivet are instantiated, one for each alternative event weight in addition to the nominal one, each producing its own set of histograms. They are again named using the MUR<fac>_MUF<fac>_PDF<id> convention. Extending this convention, for pure strong-coupling variations an additional tag ASMZ<val> is appended. Another set of tags, PSMUR<fac>_PSMUF<fac>, is appended if shower scale variations are enabled.

The user must also be aware that the cross section of the event sample of course changes when using an alternative event weight as compared to the nominal one. Any histogramming therefore has to account for this and recompute the total cross section as the sum of weights divided by the number of trials, cf. Cross section determination.
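
In formula form, and following the prescription above, the total cross section for a variation \(v\) is recomputed as \(\sigma^{(v)}_{\mathrm{tot}} = \sum_i w^{(v)}_i / \sum_i n_{\mathrm{trials},i}\), where \(w^{(v)}_i\) is the alternative weight of event \(i\) and \(n_{\mathrm{trials},i}\) its number of trials.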

The on-the-fly reweighting works for all event generation modes (weighted or (partially) unweighted) and all calculation types (LO, LOPS, NLO, NLOPS, MEPS@LO, MEPS@NLO and MENLOPS). However, the reweighting of parton shower emissions has to be enabled explicitly using CSS_REWEIGHT: true. This should work out of the box for all types of variations. However, parton-shower reweighting, even though formally exact, tends to be numerically less stable than the reweighting of the hard process. If numerical issues are encountered, one can try to increase CSS_REWEIGHT_SCALE_CUTOFF (default: 5, measured in GeV); this disables shower variations for emissions at scales below that value. An additional safeguard against rare, spuriously large shower variation weights is implemented as CSS_MAX_REWEIGHT_FACTOR (default: 1e3): any variation weights accumulated during an event that are larger than this factor will be ignored and reset to 1.

To include the ME-only variations along with the full variations in the HepMC/Rivet output, you can use HEPMC_INCLUDE_ME_ONLY_VARIATIONS: true and RIVET: { INCLUDE_HEPMC_ME_ONLY_VARIATIONS: true }, respectively.

5.1.19. Associated contributions variations

Similar to Scale and PDF variations, Sherpa can also compute alternative event weights for different combinations of associated EW contributions. This is invoked with the following syntax:

ASSOCIATED_CONTRIBUTIONS_VARIATIONS:
- [EW]
- [EW, LO1]
- [EW, LO1, LO2]
- [EW, LO1, LO2, LO3]

Each entry of ASSOCIATED_CONTRIBUTIONS_VARIATIONS defines a variation and the different associated contributions that should be taken into account for the corresponding alternative weight.

The additional event weights can then be written into the event output. However, this is currently only supported for HepMC_GenEvent and HepMC_Short with HepMC versions >=2.06 and HEPMC_USE_NAMED_WEIGHTS: true. The alternative event weight names are ASS<contrib> and MULTIASS<contrib> for additive and multiplicative combinations, respectively.

5.1.20. MPI parallelization

MPI parallelization in Sherpa can be enabled using the configuration option --enable-mpi. Sherpa supports OpenMPI and MPICH2. For detailed instructions on how to run a parallel program, please refer to the documentation of your local cluster resources or one of the many excellent introductions on the internet. MPI parallelization is mainly intended to speed up the integration process, as event generation can be parallelized trivially by starting multiple instances of Sherpa with different random seeds, cf. RANDOM_SEED. However, both the internal analysis module and the Root NTuple writeout can be used with MPI; note that these require substantial data transfer.

Please note that the process information contained in the Process directory for both Amegic and Comix needs to be generated without MPI parallelization. To this end, first run

$ Sherpa INIT_ONLY=1 <Sherpa.yaml>

and, if using Amegic, compile the libraries. Then start your parallelized integration, e.g.

$ mpirun -n <n> Sherpa -e 0 <Sherpa.yaml>

After the integration has finished, you can submit individual jobs to generate event samples (with a different random seed for each job). Upon completion, the results can be merged.