nlpackage

Description

nlpackage is a collection of programs for the DNL data processing, analysis and simulation described in thesis DESY-THESIS-2014-006 (B. Vormwald). The programs depend partially on the BOOST library and are built with make. They are organised in two main categories, DATA and MC (the MISC and old categories are outdated).

The package can be checked out by the command:

git clone /afs/desy.de/group/flc/pool/polarimeter/git/nlpackage nlpackage

Monte-Carlo Programs (MC)

generateRandomNL

This program generates n random, 3rd order polynomial transfer functions T(x) which feature a specific nonlinearity nl within a given tolerance (NL=[nl-tolerance;nl]) and a given number of extrema (0<=extrema<=2) in the NL. Additionally, sign={-1,0,+1} can be specified in order to force the extremum to be negative or positive; 0 is the default setting and allows both possibilities. The program is run like

generateRandomNL n extrema nl tolerance sign
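
For example, a call with illustrative values (these numbers are only an assumption for demonstration: 100 functions, at most one extremum, nl of about 2% with a tolerance of 0.001, no forced sign) could look like:

generateRandomNL 100 1 0.02 0.001 0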

It outputs an overlay of the n generated functions.

simevents

Generates a Monte-Carlo dataset of a typical DNL measurement. The pedestal as well as the detector response to a base light pulse and to a base+differential light pulse are simulated. The simulation parameters are handed over via a config file; the path to this config file needs to be specified as a command line argument:

simevents parameters.in

A typical example of a parameter file looks like

NL {
 type      -3
 nl        0.02     # non-linearity
 tolerance 0.001    # if random generates nl -> nl = [nl-tolerance; nl]
 maxima    0        # maximum number of extrema in nl function - possible values: 0,1,2
 inlsign  -1        # if inlsign != 0 -> force maximum nl to have given sign
 driftfrom 1        # relative drift of differential pulse: from
 driftto   1        # relative drift of differential pulse: to
}

EVENTS {
 events    1e6      # events per point
 steps     5        # different points
 forceapprox true   # forces gaussian approximation
 approx    true     # allow gaussian approximation for high photon multiplicities instead of binomial
}

QDC {
 lowres       25        # lowRes [fC]
 highres      200       # highRes [fC]
 pedestal     225       # pedestal [QDC counts]
 pedestale    1.33209   # pedestal error [QDC counts]
}

PMT {
 gain         2.7e5     # gain
 gaine        0.01      # relative gain error
 transmission 0.0025775 # transmission (filter+quantum efficiency)
}

LED {
 ledbstart 12000    # mean(photons) of base pulse at the first scan point
 ledbstop  400000   # mean(photons) of base pulse at the last scan point
 ledd      2200     # mean(photons) of piggypick pulse
 sigma     27       # sigma * sqrt(number of photons)
}

The NL section describes the properties of the transfer function which is used for the simulation. NL.type defines the type of the used transfer function. The following types are implemented:

NL.type=-3 generates a new random transfer function and writes the used transfer function to the file randomnl.root. If this file already exists, the stored transfer function is used instead; thus, it is possible to regenerate data with exactly the same transfer function. It is also possible to include a drift of the differential pulse within the dynamic range (note for the future: this setting should maybe better go to the LED section?).
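
To check which transfer function is currently cached (or to force a new one, simply delete the file), the content of randomnl.root can be listed in a short ROOT macro. This is only a minimal sketch; the names of the stored objects are not documented here, so just the generic listing is shown:

// listRandomNL.C -- minimal sketch: list whatever is stored in randomnl.root
{
   TFile f("randomnl.root");
   f.ls();   // shows the stored transfer function object(s)
}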

The EVENTS section defines the simulation steering parameters: EVENTS.events defines the number of simulated light flashes per simulation point. EVENTS.steps defines the number of simulation points within the dynamic range. EVENTS.forceapprox forces the simulation to always use a Gaussian approximation for the binomial parts of the simulation (filter, quantum efficiency). In contrast, EVENTS.approx allows the simulation to use the Gaussian approximation only when the photon multiplicity is high enough for it to be valid.
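
As an illustration of the approximation (a minimal sketch, not the package code), the binomial photon-selection step can be replaced by a Gaussian draw with the same mean and variance once the photon multiplicity is large:

// approx.C -- minimal sketch: binomial draw vs. its Gaussian approximation
#include "TRandom3.h"
#include <cmath>

double transmitted(int nPhotons, double p, bool gaussApprox)
{
   static TRandom3 rng(0);
   if (gaussApprox)   // valid when nPhotons*p*(1-p) is large
      return rng.Gaus(nPhotons * p, std::sqrt(nPhotons * p * (1. - p)));
   return rng.Binomial(nPhotons, p);   // exact binomial draw
}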

The sections QDC and PMT define simulation parameters for the QDC and the photomultiplier. Section LED contains parameters for the base and differential LED light pulse, like number of photons. It also defines the covered simulation range.

A detailed description of the simulation as well as a schematic picture can be found in DESY-THESIS-2014-006, Figure 9.2.

For each scan point, the program outputs four ROOT files XXXXX_p.root, XXXXX.root, XXXXX_dp.root, XXXXX_d.root corresponding to the simulation of the pedestal, of the base pulse only, of a second pedestal, and of the base+differential pulse. All files contain a histogram for the simulated QDC channel. The program also generates a parameters.root file which contains the meta data of the run. This data structure is identical to the structure of real measurement data and, thus, the same analysis and correction tools can be applied. Additionally, a log file summarising the actually used simulation parameters is written out.
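
A quick look at one of the output files is possible in an interactive ROOT session. Since the histogram names are not documented here, the following minimal sketch simply draws the first object stored in the file (the run number in the file name is a placeholder):

// peek.C -- minimal sketch: draw the QDC histogram from one output file
{
   TFile f("00001.root");                            // placeholder run number
   TKey *key = (TKey*) f.GetListOfKeys()->First();   // first stored object
   TH1  *h   = (TH1*)  key->ReadObj();
   h->Draw();
}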

simpol

This program simulates the effect of a random nonlinear transfer function on the measurement precision of the polarisation. For each detector channel, the detector response is simulated based on the given detector parameters and nonlinearities. In the next step, the overall measurable polarisation is calculated as the weighted average of the results determined in the individual detector channels. Deviations from the input polarisation allow an estimate of the systematic uncertainties originating from the detector nonlinearity. In order to reduce the impact of one specifically chosen transfer function, the simulation is repeated runs times with independent random transfer functions.
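
Conceptually, the combination step is a weighted average over the channels. A minimal sketch (assuming the per-channel statistical weights, e.g. the statW column of the LCPolMC input shown below, are used as weights) looks like:

// weightedPol.C -- minimal sketch of the per-channel combination
#include <vector>
#include <cstddef>

double weightedPol(const std::vector<double> &pol, const std::vector<double> &weight)
{
   double num = 0., den = 0.;
   for (std::size_t i = 0; i < pol.size(); ++i) {
      num += weight[i] * pol[i];   // polarisation determined in channel i
      den += weight[i];
   }
   return num / den;               // overall measurable polarisation
}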

The program is executed in the following way:

simpol parameters.in lcpolmc.in

An example file for lcpolmc.in (originating from an LCPolMC run assuming a detector with 18 channels) looks like:

channel/I:leftCE/F:rightCE/F:anPow/F:statW/F
1    101.0248      37.81618   -0.5699387   0.4100827
2    76.71595      27.43357   -0.5916136   0.451042
3    60.48241      23.37977   -0.5530365   0.3802883
4    48.99113      21.88092   -0.4778973   0.2674831
5    40.33161      21.4803    -0.381573    0.1605593
6    33.66876      21.56528   -0.2736309   0.07864234
7    28.32936      21.8883    -0.1600739   0.02605086
8    23.99582      22.35505   -0.0446779   0.001998668
9    20.37068      22.80301   0.07015965   0.004937932
10    17.30594     23.29281   0.182936     0.03419803
11    14.72479     23.73289   0.2927682    0.09068801
12    12.44931     24.13367   0.3991975    0.1774575
13    10.47851     24.55062   0.5018709    0.3002794
14    8.750646     24.92956   0.6006435    0.4690808
15    7.198029     25.2552    0.6955569    0.7007844
16    5.816128     25.59539   0.7866539    1.024624
17    4.592312     25.89423   0.8740689    1.494977
18    2.734539     19.95969   0.9483872    2.11952
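
The header line follows ROOT's branch-descriptor syntax, so the table can be read back directly with TTree::ReadFile. A minimal sketch (assuming the file is named lcpolmc.in in the current directory):

// readLcpolmc.C -- minimal sketch: read the channel table into a TTree
{
   TTree t("lcpolmc", "LCPolMC channel table");
   t.ReadFile("lcpolmc.in");        // first line of the file defines the branches
   t.Scan("channel:anPow:statW");   // print a few columns as a cross-check
}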

A typical simulation steering file (parameters.in) looks like this:

RUN {
 runs             200      # number of identical runs with different random NL
}

NL {
 nl               0.008    # non-linearity
 tolerancePercent 1        # if random generates nl -> nl = [nl-nl*tolerancePercent; nl]
 maxima           0        # maximum number of extrema in nl function - possible values: 0,1,2
}

EVENTS {
 events    1e5      # events per point
}

QDC {
 lowres       25        # lowRes [fC]
 highres      200       # highRes [fC]
 QDCbins      4096      # number of QDC bins (12bit=4096)
}

PMT {
 gain         3e5     # gain
 gaine        0.1     # relative gain error
}

DETECTOR {
  PEperCE     6.5       # number of photo electrons per Compton electrons
}

BEAM {
  lumimultiplicator     2.0       # multiplicator for Compton electrons in channel
}

The sections are to a large extent similar to those of the config file of simevents. Additionally, the field DETECTOR.PEperCE defines the number of simulated photoelectrons per Compton electron in a channel, and BEAM.lumimultiplicator allows the assumed luminosity in the LCPolMC input file to be modified (i.e. it multiplies the number of Compton electrons per channel).

The result of each individual run is stored as a ROOT file and a PDF file in a uniquely named subfolder. Additionally, a summary .txt file for all runs is written out, containing the relevant simulation parameters per run and the numerical simulation results. This summary file can easily be read in as a TTree, from which the mean and/or RMS can be calculated quickly.
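
A minimal sketch of reading the summary back; the file name and the branch descriptor below are hypothetical and have to be replaced by the actual columns of the summary file:

// readSummary.C -- minimal sketch: read the run summary and look at the spread
{
   TTree t("summary", "simpol run summary");
   // if the summary file does not start with a branch-descriptor line,
   // an explicit descriptor matching its columns has to be given here
   t.ReadFile("summary.txt", "run/I:nl/F:polMeasured/F");
   t.Draw("polMeasured");   // the resulting histogram gives mean and RMS
}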

applycorrectionMC

This program applies the correction function obtained from linearizeFit, linearizeSpline, or linearizeFitUpDown. It is executed in the following way:

applycorrectionMC path_to_data

The program needs to find the following files in the location path_to_data/:

The program outputs the nonlinearity before and after the correction with respect to the Monte-Carlo truth information (number of simulated photons), which is available for simulation data. Thus, a comparison of both graphs allows to check how well the correction algorithm worked. In scans of the simulation input parameters (drift, NL, ...), it can therefore be estimated under which conditions the correction algorithm breaks down.

Data Processing Programs (DATA)

extractMean

This program loops over all individual measurement files which are registered in the TTree in path_to_data/parameters.root. For each run number, it tries to find the four ROOT files XXXXX_p.root, XXXXX.root, XXXXX_dp.root, XXXXX_d.root and extracts the mean and RMS of each histogram in the files. Additionally, the pedestal corrected mean value is calculated. If XXXXX_dp.root is missing, the corresponding value from the file XXXXX_p.root is used instead.
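
A minimal sketch of the pedestal correction as described above; the exact error treatment in extractMean is not documented here, so independent errors added in quadrature are assumed:

// pedCorr.C -- minimal sketch: pedestal-corrected mean with a simple error estimate
#include <cmath>

void pedCorr(double meanSignal, double eSignal, double meanPed, double ePed,
             double &mean, double &eMean)
{
   mean  = meanSignal - meanPed;                        // pedestal-corrected mean
   eMean = std::sqrt(eSignal * eSignal + ePed * ePed);  // assumed: independent errors
}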

The program is run like:

extractMean path_to_data rmsmethod

rmsmethod specifies how the RMS value is calculated. If no value is given, the ordinary RMS value is calculated. If 0<rmsmethod<1 is specified, only the fraction rmsmethod around the histogram mean is used for the RMS value calculation.

It outputs up to six TTrees: meanRaw, meanDNLRaw, meanPed, meanDNLPed, mean, and meanDNL, which all contain the following values:

ch%i_lo_mean:ch%i_lo_emean:ch%i_lo_RMS:ch%i_lo_eRMS:ch%i_hi_mean:ch%i_hi_emean:ch%i_hi_RMS:ch%i_hi_eRMS

where %i runs over all available channels and hi/lo distinguish the simultaneously obtained data from the two ranges of the dual range QDC (high range = 200 fC/bin; low range = 25 fC/bin). mean and meanDNL contain the pedestal corrected values, which are usually used for further data processing.

fitevents

This program fits Gaussian functions to the histograms in the data files in order to obtain the mean and RMS.

Outdated. Not used in thesis.

mergeDataArithmetic

This program averages over a given number of runs n using the arithmetic mean. It is executed by the following command:

mergeDataArithmetic path_to_input_data path_to_output_data n

The program looks for out.root in the given input folder path_to_input_data and creates a new file out.root in the folder path_to_output_data. Every n entries of the old TTree result in one entry in the new TTree. If the target folder already exists, the program asks whether this folder should be overwritten. In the averaging process, only the pedestal corrected TTrees and the pure pedestal TTrees are processed. The meta measurement data (LED voltages, temperature monitoring, etc.) is also averaged over n runs, resulting in a new parameter file parameters.root.
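
A minimal sketch of the averaging itself (not the package code): every n consecutive entries are combined into one entry by the arithmetic mean.

// blockAverage.C -- minimal sketch: arithmetic mean over blocks of n entries
#include <vector>
#include <cstddef>

std::vector<double> blockAverage(const std::vector<double> &in, std::size_t n)
{
   std::vector<double> out;
   for (std::size_t i = 0; i + n <= in.size(); i += n) {
      double sum = 0.;
      for (std::size_t j = 0; j < n; ++j) sum += in[i + j];
      out.push_back(sum / n);   // one new entry per n old entries
   }
   return out;
}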

makeControlPlots

This program creates many standard graphs and stores them as TGraphs in graphs.root for further processing and plotting. A collection of the most important graphs is written out as overview*.pdf. The program is run like

makeControlPlots path_to_data channel comment

The program needs to find the files out.root and parameters.root in the specified folder path_to_data. channel indicates for which channel the graphs are to be produced. comment is a text which is printed on every graph; if the path to the config file used for setting up the measurement/simulation is given here instead of a plain comment, the comment from this config file is used.

linearizeFit

This program calculates the DNL from the pedestal corrected TTrees in out.root, fits the data points, and calculates from this fit the NL correction function. It is run like:

linearizeFit channelnr skipstartpoint skipendpoints fixedorder

where skipstartpoint and skipendpoints specify how many data points at the beginning/end of the graph should be skipped in the fitting procedure. In the normal fitting procedure, the graph is fitted with an nth order polynomial, where n runs from 20 down to 0. The fit with the best fit probability is used for the further processing. If fixedorder is specified, the given order is used for the processing.
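
A minimal sketch of this order-selection loop in ROOT (not the package code; the input file name is only a placeholder for wherever the DNL graph points come from):

// bestOrderFit.C -- minimal sketch: pick the polynomial order with the best fit probability
{
   TGraphErrors g("dnl_points.txt");                     // placeholder input graph
   int    bestOrder = -1;
   double bestProb  = -1.;
   for (int n = 20; n >= 0; --n) {
      TFitResultPtr r = g.Fit(Form("pol%d", n), "SQ0");  // quiet fit, keep the result
      if (r->Prob() > bestProb) { bestProb = r->Prob(); bestOrder = n; }
   }
   printf("best order: %d (fit probability %.3f)\n", bestOrder, bestProb);
}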

The program calculates the integral of the fit, the relative nonlinearity, and the correction function and stores all functions persistently as ROOT functions in the file calibrationparameters.root such that they can be used for further processing. For all functions, the errors are also calculated based on the covariance matrix of the fit, and these are stored as well. The program also writes out a few plots of graphs and functions directly as PNG for quick checks.

linearizeFitUpDown

Same procedure as in linearizeFit, but uses two datasets which are averaged before the DNL is calculated. This is useful in order to eliminate a bias in the DNL measurement caused by the scan direction of the base pulse (see DESY-THESIS-2014-006, B. Vormwald). The program is run like

linearizeFitUpDown input_folder_upscan input_folder_downscan channelnr skipstartpoint skipendpoints fixedorder

The program output is similar to that of linearizeFit.

linearizeSpline

Same procedure as in linearizeFit, but uses splines instead of a polynomial fit.

Outdated. Not used in thesis.

applycorrectionData

This program applies the correction function present in the file calibrationparameters.root to the QDC values found in out.root. In order to use a calibration function obtained from a different dataset, the corresponding file has to be copied to the folder of the data which is to be corrected. The program is run like:

applycorrectionData channelnr skipstartpoint skipendpoints

As output, the program creates a DNL TGraph before and after the correction and saves it in the file graphs.root. Ideally, after the correction, the DNL data points should lie on a constant line.

average

Old program to do the data averaging.

Outdated. Not used in thesis.

averageWaveforms

Helper program which reads in waveform files from the oscilloscope and averages the values. The waveforms can be accessed via a web browser (IP address of the scope) and saved as comma-separated values, which need to be converted to whitespace-separated values before ROOT can handle the files. The program is run like:

averageWaveforms title input1 input2 input3 input4 ... inputN

where inputX is an arbitrary number of input waveform files and title specifies the title of the generated, averaged TGraph. The program adds the averaged TGraph to the ROOT file graphs.root.
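
The comma-to-whitespace conversion mentioned above can be done with any standard text tool; one possible one-liner (the file names are placeholders) is:

sed 's/,/ /g' waveform.csv > waveform.txt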

compare

This program compares different mean extraction methods.

Outdated. Not used in thesis.

extractGateDelay

This program analyses gate/delay scan data (created with the measurement program findSignalInGate) and needs the files parameters.root and out.root. It produces a set of 1D and 2D graphs visualising the signal integration as well as the derivative of the integrated signal, which shows the signal pulse shape. The program needs to be executed in the folder where the files parameters.root and out.root are located and is run without additional command line arguments. The channel number which is analysed is hard-coded (this should probably be changed in the future!).

makeLedPlot

This program creates a TGraph for an LED scan, stores the object to the ROOT file graphs.root, and outputs a PDF file. The program is run like

makeLedPlot path_to_data channelnr comment


CategoryPolarimetry
