Posts Tagged Monte Carlo

Recent Postings from Monte Carlo

A local factorization of the fermion determinant in lattice QCD

We introduce a factorization of the fermion determinant in lattice QCD with Wilson-type fermions that leads to a bosonic action local in the block fields. The interaction among gauge fields on distant blocks is mediated by multiboson fields located on the boundaries of the blocks. The resulting multiboson domain-decomposed hybrid Monte Carlo algorithm passes extensive numerical tests carried out by measuring standard gluonic observables. The combination of this determinant factorization with the factorization of the propagator that we recently put forward paves the way for multilevel Monte Carlo integration in the presence of fermions. We test this possibility by computing the disconnected correlator of two flavor-diagonal pseudoscalar densities, and we observe a significant increase of the signal-to-noise ratio from the two-level integration.
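
The signal-to-noise gain from two-level integration can be seen in a toy model: when an observable factorizes into contributions from two independent domains, averaging each factor over its own sub-ensemble before taking the product suppresses the noise on the product by the number of sub-measurements. A minimal sketch, assuming a toy Gaussian model rather than the paper's lattice setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: a "disconnected correlator" C = <a><b>, where a and b are
# noisy measurements localized in two independent domains and the signal
# is much smaller than the noise.
signal, noise = 0.1, 1.0
n_cfg, n_sub = 100, 50        # outer configurations, inner sub-measurements

def one_level():
    # one measurement of the product a*b per configuration
    a = signal + noise * rng.standard_normal(n_cfg)
    b = signal + noise * rng.standard_normal(n_cfg)
    return a * b

def two_level():
    # average a and b independently over n_sub sub-measurements first:
    # the noise on the product drops by ~1/n_sub, resolving a signal
    # that the one-level estimate leaves buried in noise
    a = signal + noise * rng.standard_normal((n_cfg, n_sub))
    b = signal + noise * rng.standard_normal((n_cfg, n_sub))
    return a.mean(axis=1) * b.mean(axis=1)

for name, sample in (("one-level", one_level()), ("two-level", two_level())):
    err = sample.std(ddof=1) / np.sqrt(n_cfg)
    print(f"{name}: C = {sample.mean():+.5f} +/- {err:.5f}")
```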

Testing the imprint of non-standard cosmologies on void profiles using Monte Carlo random walks

Using Monte Carlo random walks of a log-normal distribution, we show how to qualitatively study void properties in non-standard cosmologies. We apply this method to an f(R) modified gravity model and recover the N-body simulation results of Achitouv et al. (2016) for the void profiles and their deviation from GR. The method can potentially be extended to study other properties of large-scale structure, such as the abundance of voids or overdense environments. We also introduce a new way to identify voids in the cosmic web using only a few measurements of the density fluctuations around random positions. This algorithm allows us to select voids with specific profiles and radii. As a consequence, we can target classes of voids with larger differences between f(R) and standard gravity void profiles. Finally, we apply our void criteria to galaxy mock catalogues and discuss how the flexibility of our void finder can be used to reduce systematic errors when probing the growth rate in the galaxy-void correlation function.
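
The random-position sampling idea lends itself to a compact illustration. A minimal sketch, assuming independent log-normal draws per smoothing scale in place of the paper's correlated multi-scale random walks, with invented radii, sigma(R) and void threshold:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical smoothing radii and sigma(R); a real application would take
# these from the assumed power spectrum of the cosmology under study.
radii = np.array([5.0, 10.0, 20.0, 40.0])        # Mpc/h, made up
sigma_R = 1.0 / np.sqrt(radii)                   # toy sigma(R), decreasing

def sample_delta(n):
    """Log-normal density contrast with <delta> = 0 at each radius."""
    s = sigma_R[None, :]
    return np.exp(s * rng.standard_normal((n, radii.size)) - 0.5 * s**2) - 1.0

delta = sample_delta(100_000)        # fluctuations around random positions
# toy void criterion: underdense on every probed scale, emptiest at centre
is_void = (delta < 0.0).all(axis=1) & (delta[:, 0] < -0.3)
print(f"fraction of random positions flagged as voids: {is_void.mean():.3f}")
```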

Reweighting with Boosted Decision Trees [Cross-Listing]

Machine learning tools are commonly used in modern high energy physics (HEP) experiments. Different models, such as boosted decision trees (BDT) and artificial neural networks (ANN), are widely used in analyses and even in software triggers. In most cases these are classification models used to select the "signal" events from data. Monte Carlo simulated events are typically used to train these models. While the results of the simulation are expected to be close to real data, in practice there is notable disagreement between simulated and observed data. In order to use the available simulation for training, corrections must be applied to the generated data. One common approach is reweighting: assigning weights to the simulated events. We present a novel method of event reweighting based on boosted decision trees. The problem of checking the quality of the reweighting step in analyses is also discussed.
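
The standard trick behind classifier-based reweighting fits in a few lines: train a classifier to separate simulation from data, then weight each simulated event by the estimated density ratio. A sketch with scikit-learn on toy one-dimensional samples; the paper's method uses a BDT built specifically for reweighting, so the classifier-ratio variant below only illustrates the underlying idea:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(2)

# toy stand-ins: a mismodelled feature in simulation vs observed data
sim = rng.normal(0.0, 1.0, size=(20000, 1))
data = rng.normal(0.3, 1.2, size=(20000, 1))

# train a BDT to separate simulation (label 0) from data (label 1)
X = np.vstack([sim, data])
y = np.concatenate([np.zeros(len(sim)), np.ones(len(data))])
clf = GradientBoostingClassifier(n_estimators=100, max_depth=3).fit(X, y)

# density-ratio trick: w(x) = p(data|x) / p(sim|x); with these weights the
# simulated events mimic the data distribution
p = clf.predict_proba(sim)[:, 1]
w = p / (1.0 - p)

print(f"sim mean: {sim.mean():+.3f}, reweighted: "
      f"{np.average(sim[:, 0], weights=w):+.3f}, data mean: {data.mean():+.3f}")
```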

FastDIRC: a fast Monte Carlo and reconstruction algorithm for DIRC detectors [Cross-Listing]

FastDIRC is a novel fast Monte Carlo and reconstruction algorithm for DIRC detectors. A DIRC employs rectangular fused-silica bars both as Cherenkov radiators and as light guides. Cherenkov-photon imaging and time-of-propagation information are utilized by a DIRC to identify charged particles. GEANT-based DIRC Monte Carlo simulations are extremely CPU intensive. The FastDIRC algorithm permits fully simulating a DIRC detector more than 10000 times faster than using GEANT. This facilitates designing a DIRC-reconstruction algorithm that improves the Cherenkov-angle resolution of a DIRC detector by about 30% compared to existing algorithms. FastDIRC also greatly reduces the time required to study competing DIRC-detector designs.

Exploring the connection between stellar halo profiles and accretion histories in $L_*$ galaxies

I use a library of controlled minor-merger N-body simulations, a particle tagging technique and Monte Carlo generated $\Lambda$CDM accretion histories to study the highly stochastic process of stellar deposition onto the accreted stellar halos (ASHs) of $L_*$ galaxies. I explore the main physical mechanisms that drive the connection between the accretion history and the density profile of the ASH. I find that: i) through dynamical friction, more massive satellites are more effective at delivering their stars deeper into the host; ii) as a consequence, ASHs feature a negative gradient between radius and the local mass-weighted virial satellite-to-host mass ratio; iii) in $L_*$ galaxies, most ASHs feature a density profile that steepens towards sharper logarithmic slopes at increasing radii, though with significant halo-to-halo scatter; iv) the ASHs with the largest total ex-situ mass owe it to the chance accretion of a small number of massive satellites (rather than to a large number of low-mass ones).

Constraining the Movement of the Spiral Features and the Locations of Planetary Bodies within the AB Aur System

We present a new analysis of multi-epoch, H-band, scattered light images of the AB Aur system. We used a Monte Carlo radiative transfer code to simultaneously model the system's SED and H-band polarized intensity imagery. We find that a disk-dominated model, as opposed to one that is envelope dominated, can plausibly reproduce AB Aur's SED and near-IR imagery. This is consistent with previous modeling attempts presented in the literature and supports the idea that at least a subset of AB Aur's spirals originate within the disk. In light of this, we also analyzed the movement of spiral structures in multi-epoch H-band total light and polarized intensity imagery of the disk. We detect no significant rotation or change in spatial location of the spiral structures in these data, which span a 5.8-year baseline. If such structures are caused by disk-planet interactions, the lack of observed rotation constrains the orbits of planetary perturbers to lie at >47 AU.

Uncertainties in the production of $p$ nuclei in massive stars obtained from Monte Carlo variations

Nuclear uncertainties in the production of $p$ nuclei in massive stars have been quantified in a Monte Carlo procedure. Bespoke temperature-dependent uncertainties were assigned to different types of reactions involving nuclei from Fe to Bi. Their simultaneous impact was studied in postprocessing explosive trajectories for three different stellar models. It was found that the grid of mass zones in the model of a 25 $M_\odot$ star, which is widely used for investigations of $p$ nucleosynthesis, is too crude to properly resolve the detailed temperature changes required for describing the production of $p$ nuclei. Using models with finer grids for 15 $M_\odot$ and 25 $M_\odot$ stars with initial solar metallicity, it was found that most of the production uncertainties introduced by nuclear reaction uncertainties are smaller than a factor of two. Since a large number of rates were varied at the same time in the Monte Carlo procedure, possible cancellation effects of several uncertainties could be taken into account. Key rates were identified for each $p$ nucleus, which provide the dominant contribution to the production uncertainty. These key rates were found by examining correlations between rate variations and resulting abundance changes. This method is superior to studying flow patterns, especially when the flows are complex, and to individual, sequential variation of a few rates.
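
The correlation-based key-rate search can be illustrated compactly: vary all rates simultaneously by log-normal factors, evaluate the abundance for each sample, and rank rates by the Spearman correlation between rate factor and abundance. A sketch with an invented two-rate toy standing in for the post-processed trajectories:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(3)

n_runs, n_rates = 1000, 50
# log-normal variation factors, varied simultaneously for all rates
# (width corresponding to roughly a factor-of-two uncertainty)
factors = np.exp(rng.standard_normal((n_runs, n_rates)) * 0.5 * np.log(2.0))

def toy_abundance(f):
    # invented stand-in for a p-nucleus abundance from one trajectory:
    # driven by a production rate (7) and a destruction rate (23)
    return f[7] / f[23] * np.exp(0.05 * rng.standard_normal())

y = np.array([toy_abundance(f) for f in factors])

# rank-correlate each rate's factor with the final abundance; the largest
# |rho| identify the key rates even though all rates varied at once
rho = np.array([spearmanr(factors[:, i], y)[0] for i in range(n_rates)])
for i in np.argsort(-np.abs(rho))[:3]:
    print(f"rate {i:2d}: Spearman rho = {rho[i]:+.2f}")
```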

Parton distribution functions in Monte Carlo factorisation scheme

A next step in the development of the KrkNLO method of including complete NLO QCD corrections to hard processes in a LO parton-shower Monte Carlo (PSMC) is presented. It consists of a generalisation of the method, previously used for the Drell-Yan process, to Higgs-boson production. This extension is accompanied by a complete description of parton distribution functions (PDFs) in a dedicated Monte Carlo (MC) factorisation scheme, applicable to any process of production of one or more colour-neutral particles in hadron-hadron collisions.

Track reconstruction through the application of the Legendre Transform on ellipses [Cross-Listing]

We propose a pattern recognition method that identifies the common tangent lines of a set of ellipses. The detection of the tangent lines is attained by applying the Legendre transform to a given set of ellipses. As context, we consider a hypothetical detector made of layers of chambers, each of which returns an ellipse as an output signal. The common tangent of these ellipses represents the trajectory of a charged particle crossing the detector. The proposed method is evaluated using ellipses constructed from Monte Carlo generated tracks.
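
For an axis-aligned ellipse centred at (x0, y0) with semi-axes (a, b), a line x cos t + y sin t = r is tangent exactly when r = x0 cos t + y0 sin t ± sqrt(a^2 cos^2 t + b^2 sin^2 t), so filling a (t, r) accumulator with both branches for every ellipse makes common tangents stand out as cells crossed by all of them. A minimal sketch with made-up ellipses (axis-aligned for brevity; the general case adds an orientation angle):

```python
import numpy as np

# toy "chamber signals": three equal ellipses with centres on a line,
# so two common external tangents exist
ellipses = [(0.0, 0.0, 1.0, 0.5), (3.0, 1.0, 1.0, 0.5), (6.0, 2.0, 1.0, 0.5)]

t = np.linspace(0.0, np.pi, 720)             # normal-angle grid
r_edges = np.linspace(-10.0, 10.0, 201)      # signed-distance grid
acc = np.zeros((len(t), len(r_edges) - 1), dtype=int)

for x0, y0, a, b in ellipses:
    mid = x0 * np.cos(t) + y0 * np.sin(t)
    half = np.sqrt((a * np.cos(t))**2 + (b * np.sin(t))**2)
    for r in (mid + half, mid - half):       # the two tangent branches
        idx = np.digitize(r, r_edges) - 1
        ok = (idx >= 0) & (idx < acc.shape[1])
        acc[np.arange(len(t))[ok], idx[ok]] += 1

# cells reached by every ellipse correspond to candidate track lines
i, j = np.unravel_index(np.argmax(acc), acc.shape)
print(f"best line: angle = {np.degrees(t[i]):.1f} deg, r ~ {r_edges[j]:.2f}, "
      f"hit by {acc[i, j]} of {len(ellipses)} ellipses")
```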

Performance of the ALICE secondary vertex b-tagging algorithm [Cross-Listing]

The identification of jets originating from beauty quarks in heavy-ion collisions is important for studying the properties of the hot and dense matter produced in such collisions. A variety of algorithms for b-jet tagging has been developed at the LHC experiments. They rely on the properties of B hadrons, i.e. their long lifetime, large mass and large multiplicity of decay products. In this work, a b-tagging algorithm based on displaced secondary-vertex topologies is described. We present Monte Carlo based performance studies of the algorithm for charged jets reconstructed with the ALICE tracking system in p-Pb collisions at $\sqrt{s_\text{NN}}$ = 5.02 TeV. The tagging efficiency, the rejection rate and the correction for the smearing effects of the non-ideal detector response are presented.

Fe K$\alpha$ Profiles from Simulations of Accreting Black Holes

We present first results from a new technique for the prediction of Fe K$\alpha$ profiles directly from general relativistic magnetohydrodynamic (GRMHD) simulations. Data from a GRMHD simulation are processed by a Monte Carlo global radiation transport code, which determines the X-ray flux irradiating the disk surface and the coronal electron temperature self-consistently. With that irradiating flux and the disk's density structure drawn from the simulation, we determine the reprocessed Fe K$\alpha$ emission from photoionization equilibrium and solution of the radiation transfer equation. We produce maps of the surface brightness of Fe K$\alpha$ emission over the disk surface, which---for our example of a $10 M_\odot$, Schwarzschild black hole accreting at $1\%$ the Eddington value---rises steeply one gravitational radius outside the radius of the innermost stable circular orbit and then falls $\propto r^{-2}$ at larger radii. We explain these features of the Fe K$\alpha$ radial surface brightness profile as consequences of the disk's ionization structure and an extended coronal geometry, respectively. We also present the corresponding Fe K$\alpha$ line profiles as would be seen by distant observers at several inclinations. Both the shapes of the line profiles and the equivalent widths of our predicted K$\alpha$ lines are qualitatively similar to those typically observed from accreting black holes. Most importantly, this work represents a direct link between theory and observation: in a fully self-consistent way, we produce observable results---iron fluorescence line profiles---from the theory of black hole accretion with almost no phenomenological assumptions.

Production of tau lepton pairs with high pT jets at the LHC and the TauSpinner reweighting algorithm

The TauSpinner algorithm allows one to modify the physics of Monte Carlo generated samples under changed assumptions of event production dynamics, without re-generating events. It attributes weights to each event, so that the spin effects of tau-lepton production or decay, or the production mechanism, are modified; there is no need to repeat the detector response simulation. We document the extension to 2 to 4 processes, in which the matrix elements for the parton-parton scattering amplitudes into a tau-lepton pair and two outgoing partons are used. Tree-level matrix elements for the Standard Model processes, including Higgs boson production, are used; codes automatically generated by MadGraph5 have been adapted. Tests of the matrix elements, the reweighting algorithm and numerical results are presented. For the averaged tau-lepton polarisation, we compare the 2 to 2 and 2 to 4 matrix elements used to calculate the spin weight in pp to tau tau j j events. We show that for events with the tau-lepton pair close to the Z-boson peak, the tau-lepton polarisation calculated using 2 to 4 matrix elements is very close to the one calculated using the 2 to 2 Born process only. For m_(tautau) masses above the Z-boson peak, the effect of including 2 to 4 matrix elements is also marginal; however, when restricting to the subprocesses qq, q bar q to tau tau j j only, it can lead to a 10% difference in the predicted tau-lepton polarisation. The choice of electroweak scheme can have a significant impact. The modification of the electroweak or strong interaction can be performed with the re-weighting technique. TauSpinner v.2.0.0 allows one to introduce non-standard couplings for the Higgs boson and study their effects in vector-boson fusion. The discussion is relegated to forthcoming publications.
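
The per-event weight at the heart of such reweighting is a ratio of squared matrix elements evaluated on the generated kinematics. A minimal sketch with invented one-dimensional stand-ins for the amplitudes; the real code uses the 2 to 2 / 2 to 4 matrix elements and the tau spin density matrix:

```python
import numpy as np

rng = np.random.default_rng(9)

# per-event reweighting: an event generated under one model receives the
# weight w = |M_new|^2 / |M_old|^2 on its kinematics, so the sample can be
# reused under modified dynamics without regeneration or re-simulation
def me2_old(c):                 # hypothetical production density ~ 1 + c^2
    return 1.0 + c**2

def me2_new(c):                 # hypothetical modified dynamics
    return 1.0 + 0.5 * c + c**2

# generate cos(theta) of events according to me2_old by accept-reject
proposal = rng.uniform(-1.0, 1.0, 100_000)
keep = rng.uniform(0.0, 2.0, proposal.size) < me2_old(proposal)
cos_t = proposal[keep]

w = me2_new(cos_t) / me2_old(cos_t)      # per-event weights
print(f"<cos theta>: generated {cos_t.mean():+.4f}, reweighted "
      f"{np.average(cos_t, weights=w):+.4f} (analytic target 0.125)")
```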

A proposed method for measurement of cosmic-ray chemical composition based on geomagnetic spectroscopy [Replacement]

The effect of the geomagnetic Lorentz force on the muon component of extensive air showers (EAS) has been studied with a Monte Carlo simulated data sample. The geomagnetic field affects the paths of muons in an EAS, causing a local contrast, or polar asymmetry, in the abundance of positive and negative muons about the shower axis. The asymmetry can be approximately expressed as a function of the transverse separation between the barycentric positions of the positive and negative muons in opposite quadrants across the shower core in the shower front plane. In the present study, it is found that the transverse muon barycenter separation, and its maximum value obtained from the polar variation of the parameter, are higher for iron primaries than for protons in highly inclined showers. Hence, in principle, these parameters can be exploited for the measurement of the primary cosmic-ray chemical composition. The possibility of practical realization of the proposed method in a real experiment is briefly discussed.

Properties of Carbon-Oxygen White Dwarfs From Monte Carlo Stellar Models

We investigate properties of carbon-oxygen white dwarfs with respect to the composite uncertainties in the reaction rates using the stellar evolution toolkit Modules for Experiments in Stellar Astrophysics (MESA), and the probability density functions in the reaction rate library STARLIB. These are the first Monte Carlo stellar evolution studies that use complete stellar models. Focusing on 3 M$_{\odot}$ models evolved from the pre-main sequence to the first thermal pulse, we survey the remnant core mass, composition, and structure properties as a function of 26 STARLIB reaction rates covering hydrogen and helium burning, using a Principal Component Analysis and Spearman Rank-Order Correlation. Relative to the arithmetic mean value, we find the width of the 95\% confidence interval to be $\Delta M_{{\rm 1TP}}$ $\approx$ 0.019 M$_{\odot}$ for the core mass at the first thermal pulse, $\Delta$$t_{\rm{1TP}}$ $\approx$ 12.50 Myr for the age, $\Delta \log(T_{{\rm c}}/{\rm K}) \approx$ 0.013 for the central temperature, $\Delta \log(\rho_{{\rm c}}/{\rm g \ cm}^{-3}) \approx$ 0.060 for the central density, $\Delta Y_{\rm{e,c}} \approx$ 2.6$\times$10$^{-5}$ for the central electron fraction, $\Delta X_{\rm c}(^{22}\rm{Ne}) \approx$ 5.8$\times$10$^{-4}$, $\Delta X_{\rm c}(^{12}\rm{C}) \approx$ 0.392, and $\Delta X_{\rm c}(^{16}\rm{O}) \approx$ 0.392. Uncertainties in the experimental $^{12}$C($\alpha,\gamma)^{16}\rm{O}$, triple-$\alpha$, and $^{14}$N($p,\gamma)^{15}\rm{O}$ reaction rates dominate these variations. We also consider a grid of 1 to 6 M$_{\odot}$ models evolved from the pre-main sequence to the final white dwarf to probe the sensitivity of the initial-final mass relation to experimental uncertainties in the hydrogen and helium reaction rates.

Improved Algorithms and Coupled Neutron-Photon Transport for Auto-Importance Sampling Method

The Auto-Importance Sampling (AIS) method is a Monte Carlo variance reduction technique proposed by Tsinghua University for deep penetration problems, which can significantly improve computational efficiency without pre-calculations for the importance distribution. However, the AIS method has so far been validated only on several simple deep penetration problems of basic geometry, and cannot be used for coupled neutron-photon transport. This paper first presents the latest algorithm improvements for the AIS method, including particle transport, fictitious particles creation and adjustment, fictitious surface geometry, random number allocation and calculation of the estimated relative error, which make the AIS method applicable to complicated deep penetration problems with complex geometry and multiple materials. A coupled Neutron-Photon Auto-Importance Sampling (NP-AIS) method is then proposed to apply the improved algorithms to coupled neutron-photon Monte Carlo transport. Finally, the NUREG/CR-6115 PWR benchmark model was calculated with NP-AIS, with geometry splitting with Russian roulette, and with analog Monte Carlo, respectively. The results of NP-AIS were in good agreement with those of geometry splitting with Russian roulette and with the benchmark solutions. The computational efficiency of NP-AIS for both neutrons and photons was much better than that of geometry splitting with Russian roulette in most cases, and was several orders of magnitude higher than that of analog Monte Carlo.
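
The baseline that NP-AIS is compared against, geometry splitting with Russian roulette, can be demonstrated on a toy 1-D deep-penetration problem. A sketch with invented parameters, applying the splitting at collision sites rather than at surfaces for brevity:

```python
import numpy as np

rng = np.random.default_rng(4)

slab, p_absorb = 10.0, 0.5        # slab thickness in mean free paths

def importance(x):
    # importance doubles with every mean free path of depth
    return 2.0 ** int(x)

def transmission(n_source):
    score, stack = 0.0, [(0.0, 1.0)] * n_source     # (position, weight)
    while stack:
        x, w = stack.pop()
        mu = rng.uniform(-1.0, 1.0)                 # direction cosine
        x_new = x + mu * rng.exponential(1.0)       # flight to next collision
        if x_new >= slab:                           # transmitted: tally weight
            score += w
            continue
        if x_new < 0.0 or rng.random() < p_absorb:  # leaked back or absorbed
            continue
        ratio = importance(x_new) / importance(x)
        if ratio >= 1.0:                            # split when going deeper
            n = int(round(ratio))
            stack.extend([(x_new, w / n)] * n)
        elif rng.random() < ratio:                  # roulette going backward
            stack.append((x_new, w / ratio))
    return score / n_source

print(f"estimated transmission: {transmission(20000):.2e}")
```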

SPAMCART: a code for smoothed particle Monte Carlo radiative transfer

We present a code for generating synthetic SEDs and intensity maps from Smoothed Particle Hydrodynamics simulation snapshots. The code is based on the Lucy (1999) Monte Carlo Radiative Transfer method, i.e. it follows discrete luminosity packets, emitted from external and/or embedded sources, as they propagate through a density field, and then uses their trajectories to compute the radiative equilibrium temperature of the ambient dust. The density is not mapped onto a grid, and therefore the calculation is performed at exactly the same resolution as the hydrodynamics. We present two example calculations using this method. First, we demonstrate that the code strictly adheres to Kirchhoff's law of radiation. Second, we present synthetic intensity maps and spectra of an embedded protostellar multiple system. The algorithm uses data structures that are already constructed for other purposes in modern particle codes. It is therefore relatively simple to implement.
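
The Lucy path-length estimator at the core of the method can be illustrated in a gray, spherically symmetric toy: packets from a central source deposit path length in shells, and radiative equilibrium converts the summed path lengths into a dust temperature. A much-simplified sketch, with one iteration, no scattering or re-emission, gridded shells instead of the code's gridless particle field, and illustrative numbers:

```python
import numpy as np

SIGMA_SB = 5.670374419e-8          # Stefan-Boltzmann constant [W m^-2 K^-4]

# For a central point source every packet flies radially, so the Monte
# Carlo path-length sum per shell reduces to n_packets * (shell width);
# randomness re-enters once scattering and re-emission are added.
L_star, n_packets = 3.8e26, 100_000            # source luminosity [W]
r_edges = np.linspace(1e13, 1e15, 41)          # shell boundaries [m]

eps = L_star / n_packets                       # energy rate per packet [W]
sum_l = n_packets * np.diff(r_edges)           # total path length per shell
vol = 4.0 / 3.0 * np.pi * np.diff(r_edges**3)  # shell volumes

# radiative equilibrium for gray dust: 4 * sigma * T^4 = eps * sum_l / V
# (the opacity cancels), reproducing the optically thin T ~ r^(-1/2) law
T = (eps * sum_l / (4.0 * SIGMA_SB * vol)) ** 0.25
print(f"T(inner shell) = {T[0]:.0f} K, T(outer shell) = {T[-1]:.0f} K")
```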

Million-Body Star Cluster Simulations: Comparisons between Monte Carlo and Direct $N$-body

We present the first detailed comparison between million-body globular cluster simulations computed with a H\'enon-type Monte Carlo code, CMC, and a direct $N$-body code, NBODY6++GPU. Both simulations start from an identical cluster model with $10^6$ particles, and include all of the relevant physics needed to treat the system in a highly realistic way. With the two codes "frozen" (no fine-tuning of any free parameters or internal algorithms of the codes) we find excellent agreement in the overall evolution of the two models. Furthermore, we find that in both models, large numbers of stellar-mass black holes (> 1000) are retained for 12 Gyr. Thus, the very accurate direct $N$-body approach confirms recent predictions that black holes can be retained in present-day, old globular clusters. We find only minor disagreements between the two models and attribute these to the small-$N$ dynamics driving the evolution of the cluster core for which the Monte Carlo assumptions are less ideal. Based on the overwhelming general agreement between the two models computed using these vastly different techniques, we conclude that our Monte Carlo approach, which is more approximate, but dramatically faster compared to the direct $N$-body, is capable of producing a very accurate description of the long-term evolution of massive globular clusters even when the clusters contain large populations of stellar-mass black holes.

On Potassium and Other Abundance Anomalies of Red Giants in NGC 2419

Globular clusters are of paramount importance for testing theories of stellar evolution and early galaxy formation. Strong evidence for multiple populations of stars in globular clusters derives from observed abundance anomalies. A puzzling example is the recently detected Mg-K anticorrelation in NGC 2419. We perform Monte Carlo nuclear reaction network calculations to constrain the temperature-density conditions that gave rise to the elemental abundances observed in this elusive cluster. We find a correlation between stellar temperature and density values that provide a satisfactory match between simulated and observed abundances in NGC 2419 for all relevant elements (Mg, Si, K, Ca, Sc, Ti, and V). Except at the highest densities ($\rho \gtrsim 10^8$~g/cm$^3$), the acceptable conditions range from $\approx$ $100$~MK at $\approx$ $10^8$~g/cm$^3$ to $\approx$ $200$~MK at $\approx$ $10^{-4}$~g/cm$^3$. This result accounts for uncertainties in nuclear reaction rates and variations in the assumed initial composition. We review hydrogen burning sites and find that low-mass stars, AGB stars, massive stars, or supermassive stars cannot account for the observed abundance anomalies in NGC 2419. Super-AGB stars could be viable candidates for the polluter stars if stellar model parameters can be fine-tuned to produce higher temperatures. Novae, either involving CO or ONe white dwarfs, could be interesting polluter candidates, but a current lack of low-metallicity nova models precludes firmer conclusions. We also discuss whether additional constraints for the first-generation polluters can be obtained by future measurements of oxygen, or by evolving models of second-generation low-mass stars with a non-canonical initial composition.

Microscopic nuclear level densities by the shell model Monte Carlo method

The configuration-interaction shell model approach provides an attractive framework for the calculation of nuclear level densities in the presence of correlations, but the large dimensionality of the model space has hindered its application in mid-mass and heavy nuclei. The shell model Monte Carlo (SMMC) method permits calculations in model spaces that are many orders of magnitude larger than spaces that can be treated by conventional diagonalization methods. We discuss recent progress in the SMMC approach to level densities, and in particular the calculation of level densities in heavy nuclei. We calculate the distribution of the axial quadrupole operator in the laboratory frame at finite temperature and demonstrate that it is a model-independent signature of deformation in the rotational invariant framework of the shell model. We propose a method to use these distributions for calculating level densities as a function of intrinsic deformation.

The shell model Monte Carlo approach to level densities: recent developments and perspectives

We review recent advances in the shell model Monte Carlo approach for the microscopic calculation of statistical and collective properties of nuclei. We discuss applications to the calculation of (i) level densities in nickel isotopes, implementing a recent method to circumvent the odd-particle sign problem; (ii) state densities in heavy nuclei; (iii) spin distributions of nuclear levels; and (iv) finite-temperature quadrupole distributions.

Preweighting method in Monte-Carlo sampling with complex action --- Strong-Coupling Lattice QCD with $1/g^2$ corrections, as an example ---

We investigate the QCD phase diagram in strong-coupling lattice QCD with fluctuation and $1/g^2$ effects by using auxiliary field Monte-Carlo simulations. The complex phase of the fermion determinant at finite chemical potential is found to be suppressed by introducing a complex shift of the integration path for one of the auxiliary fields, which corresponds to introducing a repulsive vector mean field for quarks. The phase diagram obtained in the chiral limit shows a suppressed $T_c$ in the second-order phase transition region compared with the strong-coupling limit results. We also argue that the statistical weight cancellation from the complex phase can be approximately guessed in advance when the complex phase distribution is Gaussian. We demonstrate that correct expectation values are obtained by using this guess in the importance sampling (preweighting).
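
The reweighting logic, and the Gaussian guess that preweighting exploits, can be shown on a one-variable toy with a complex weight; this illustrates the idea only, not the paper's lattice setup:

```python
import numpy as np

rng = np.random.default_rng(6)

# toy complex weight w(x) = exp(-x^2/2) * exp(i*mu*x): sample x from the
# phase-quenched |w| and carry the residual phase as a reweighting factor
mu, n = 1.5, 200_000
x = rng.standard_normal(n)                 # importance sampling of |w|
phase = np.exp(1j * mu * x)

# the phase theta = mu*x is Gaussian here, so its average cancellation can
# be guessed analytically in advance, <e^(i*theta)> = exp(-mu^2 <x^2> / 2);
# this is the kind of estimate the preweighting step exploits
avg_phase = np.mean(phase)                 # measures sign-problem severity
guess = np.exp(-0.5 * mu**2)               # Gaussian guess with <x^2> = 1

obs = (np.mean(x**2 * phase) / avg_phase).real
print(f"<e^(i*theta)>: measured {avg_phase.real:.3f}, guessed {guess:.3f}")
print(f"reweighted <x^2>: {obs:.3f}  (exact value: {1 - mu**2:.3f})")
```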

Constrained-Path Quantum Monte-Carlo Approach for Non-Yrast States Within the Shell Model

We present an extension of the constrained-path quantum Monte-Carlo approach that reconstructs non-yrast states, in order to reach the complete spectroscopy of nuclei within the interacting shell model. As in the yrast case studied in a previous work, the formalism involves a variational symmetry-restored wave function playing two central roles. First, it guides the underlying Brownian motion to improve the efficiency of the sampling. Second, it constrains the stochastic paths according to the phaseless approximation to control the sign or phase problems that usually plague fermionic QMC simulations. Proof-of-principle results in the $sd$ valence space are reported. They demonstrate the ability of the scheme to provide remarkably accurate binding energies for both even- and odd-mass nuclei, irrespective of the interaction considered.

Focusing effect of bent GaAs crystals for gamma-ray Laue lenses: Monte Carlo and experimental results [Cross-Listing]

We report on the observation of the focusing effect from the (220) planes of Gallium Arsenide (GaAs) crystals. We have compared the experimental results with simulations of the focusing capability of GaAs tiles through a dedicated Monte Carlo code. The GaAs tiles were bent using a lapping process developed at CNR/IMEM - Parma (Italy) in the framework of the LAUE project, funded by ASI, dedicated to building a broad-band Laue lens prototype for astrophysical applications in the hard X-/soft gamma-ray energy range (80-600 keV). We present and discuss the results obtained from their characterization, mainly in terms of focusing capability. Bent crystals will significantly increase the signal-to-noise ratio of a telescope based on a Laue lens, consequently leading to an unprecedented enhancement of sensitivity with respect to present non-focusing instrumentation.

Measures of galaxy dust and gas mass with Herschel photometry and prospects for ALMA

(Abridged) Combining the deepest Herschel extragalactic surveys (PEP, GOODS-H, HerMES), and Monte Carlo mock catalogs, we explore the robustness of dust mass estimates based on modeling of broad band spectral energy distributions (SEDs) with two popular approaches: Draine & Li (2007, DL07) and a modified black body (MBB). As long as the observed SED extends to at least 160-200 micron in the rest frame, M(dust) can be recovered with a >3 sigma significance and without the occurrence of systematics. An average offset of a factor ~1.5 exists between DL07- and MBB-based dust masses, based on consistent dust properties. At the depth of the deepest Herschel surveys (in the GOODS-S field) it is possible to retrieve dust masses with a S/N>=3 for galaxies on the main sequence of star formation (MS) down to M(stars)~1e10 [M(sun)] up to z~1. At higher redshift (z<=2) the same result is achieved only for objects at the tip of the MS or lying above it. Molecular gas masses, obtained converting M(dust) through the metallicity-dependent gas-to-dust ratio delta(GDR), are consistent with those based on the scaling of depletion time, and on CO spectroscopy. Focusing on CO-detected galaxies at z>1, the delta(GDR) dependence on metallicity is consistent with the local relation. We combine far-IR Herschel data and sub-mm ALMA expected fluxes to study the advantages of a full SED coverage.
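
A single-temperature MBB fit of the kind compared against DL07 takes only a few lines. The sketch below fits mock PACS/SPIRE photometry; the opacity normalisation kappa0, beta, the distance and the mock fluxes are illustrative assumptions, not the paper's values:

```python
import numpy as np
from scipy.optimize import curve_fit

h, c, k_B = 6.626e-34, 2.998e8, 1.381e-23          # SI constants
kappa0, nu0, beta = 0.192, 2.998e8 / 350e-6, 2.0   # m^2/kg at 350 um (assumed)
d, M_sun = 100 * 3.086e22, 1.989e30                # 100 Mpc in m; kg

def mbb_jy(lam_um, log_md, T):
    """S_nu in Jy for dust mass 10**log_md [Msun] at temperature T [K]."""
    nu = c / (lam_um * 1e-6)
    B = 2 * h * nu**3 / c**2 / np.expm1(h * nu / (k_B * T))
    return 10**log_md * M_sun * kappa0 * (nu / nu0)**beta * B / d**2 / 1e-26

lam = np.array([100.0, 160.0, 250.0, 350.0, 500.0])  # PACS+SPIRE bands, um
rng = np.random.default_rng(7)
flux = mbb_jy(lam, 8.0, 25.0) * rng.normal(1.0, 0.1, lam.size)  # mock SED
err = 0.1 * flux

p, cov = curve_fit(mbb_jy, lam, flux, p0=[7.5, 20.0], sigma=err)
print(f"log10 M_dust = {p[0]:.2f} +/- {np.sqrt(cov[0, 0]):.2f}, "
      f"T = {p[1]:.1f} K")
```

Consistent with the paper's finding, the fit is only well constrained when the photometry samples the SED peak, i.e. when the rest-frame coverage extends to at least 160-200 micron.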

First Numerical Implementation of the Loop-Tree Duality Method

The Loop-Tree Duality (LTD) is a novel perturbative method in QFT that establishes a relation between loop-level and tree-level amplitudes, which gives rise to the idea of treating them simultaneously in a common Monte Carlo. Initially introduced for one-loop scalar integrals, the applicability of the LTD has been expanded to higher order loops and Feynman graphs beyond simple poles. For the first time, a numerical implementation relying on the LTD was realized in the form of a computer program that calculates one-loop scattering amplitudes. We present details on the employed contour deformation as well as results for scalar and tensor integrals.

Particle Acceleration in Two Converging Shocks

Observations show that there is a proton spectral "break" with E$_{break}$ at 1-10 MeV in some large CME-driven shocks. Theoretical models usually attribute this phenomenon to diffusive shock acceleration, but the underlying physics of the shock acceleration still remains uncertain. Previous numerical models can hardly predict this "break", due either to high computational expense or to shortcomings of the current models; the present paper therefore focuses on simulating this energy spectrum in two converging shocks with a Monte Carlo numerical method. Considering the interaction of the Dec 13 2006 CME-driven shock with the Earth's bow shock, we examine whether the energy spectral "break" could occur in an interaction between two shocks. As a result, we indeed obtain maximum proton energies of up to 10 MeV, which is the prerequisite for investigating the existence of the energy spectral "break". Unexpectedly, we further find that a proton spectral "break" appears distinctly at an energy of $\sim$5 MeV.
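
The diffusive acceleration underlying such models produces a power law for a textbook reason: a fixed fractional energy gain per shock-crossing cycle competes with a fixed escape probability per cycle. A minimal single-shock sketch with made-up gain and escape parameters, not the paper's converging-shock geometry:

```python
import numpy as np

rng = np.random.default_rng(8)

gain, p_esc, n = 0.1, 0.1, 200_000
E = np.full(n, 1.0)                     # injection energy, arbitrary units
alive = np.ones(n, dtype=bool)

# each cycle: cross the shock and back (energy gain), then possibly be
# swept away downstream (escape); escaped particles keep their energy
while alive.any():
    E[alive] *= 1.0 + gain
    alive &= rng.random(n) > p_esc

# counts per logarithmic bin scale as E^q with
# q = ln(1 - p_esc) / ln(1 + gain), about -1.1 here
bins = np.geomspace(E.min(), E.max(), 40)
hist, _ = np.histogram(E, bins=bins)
good = hist > 10
slope = np.polyfit(np.log(bins[:-1][good]), np.log(hist[good]), 1)[0]
print(f"fitted slope: {slope:.2f}, "
      f"expected: {np.log(1 - p_esc) / np.log(1 + gain):.2f}")
```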

A Monte Carlo template-based analysis for very high definition imaging atmospheric Cherenkov telescopes as applied to the VERITAS telescope array

We present a sophisticated likelihood reconstruction algorithm for shower-image analysis with imaging Cherenkov telescopes. The reconstruction is based on comparing the camera pixel amplitudes with the predictions of a Monte Carlo based model; the shower parameters are determined by maximising a likelihood function with a numerical non-linear optimisation technique. A related reconstruction technique has already been developed by the CAT and H.E.S.S. experiments, and provides a more precise direction and energy reconstruction of the photon-induced shower than analyses based on the second moments of the camera image. Examples are shown of the performance of the analysis on simulated gamma-ray data from the VERITAS array.
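
The structure of such a template-likelihood fit can be sketched on a toy one-dimensional "camera", with a Gaussian standing in for the Monte Carlo template and Gaussian pixel noise turning -2 ln L into a chi-square:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(10)

# the real analysis compares 2-D pixel images against Monte Carlo
# templates parameterised by shower energy, impact distance and direction
pix = np.linspace(-2.0, 2.0, 50)                 # pixel positions, degrees

def template(amp, x0, width):                    # stand-in for the MC model
    return amp * np.exp(-0.5 * ((pix - x0) / width) ** 2)

truth, sigma = (120.0, 0.3, 0.4), 5.0            # true parameters, pixel noise
image = template(*truth) + sigma * rng.standard_normal(pix.size)

def neg2logl(p):
    # Gaussian pixel noise: -2 ln L reduces to a chi-square
    return np.sum((image - template(*p)) ** 2) / sigma ** 2

fit = minimize(neg2logl, x0=[100.0, 0.0, 0.5], method="Nelder-Mead")
print("fitted (amplitude, position, width):", np.round(fit.x, 3))
```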

Second large-scale Monte Carlo study for the Cherenkov Telescope Array

The Cherenkov Telescope Array (CTA) represents the next generation of ground-based instruments for Very High Energy gamma-ray astronomy. It is expected to improve on the sensitivity of current instruments by an order of magnitude and to provide energy coverage from 20 GeV to more than 200 TeV. In order to achieve these ambitious goals, Monte Carlo (MC) simulations play a crucial role, guiding the design of CTA. Here, results of the second large-scale MC production are reported, providing a realistic estimate of the performance of feasible array candidates for both Northern and Southern Hemisphere sites, placing CTA capabilities into the context of the current generation of High Energy $\gamma$-ray detectors.

Self-consistent modelling of line-driven hot-star winds with Monte Carlo radiation hydrodynamics

Radiative pressure exerted by line interactions is a prominent driver of outflows in astrophysical systems, being at work in the outflows emerging from hot stars or from the accretion discs of cataclysmic variables, massive young stars and active galactic nuclei. In this work, a new radiation hydrodynamical approach to model line-driven hot-star winds is presented. By coupling a Monte Carlo radiative transfer scheme with a finite-volume fluid dynamical method, line-driven mass outflows may be modelled self-consistently, benefiting from the advantages of Monte Carlo techniques in treating multi-line effects, such as multiple scatterings, and in dealing with arbitrary multidimensional configurations. In this work, we introduce our approach in detail by highlighting the key numerical techniques and verifying their operation in a number of simplified applications, specifically in a series of self-consistent, one-dimensional, Sobolev-type, hot-star wind calculations. The utility and accuracy of our approach is demonstrated by comparing the obtained results with the predictions of various formulations of the so-called CAK theory and by confronting the calculations with modern sophisticated techniques of predicting the wind structure. Using these calculations, we also point out some useful diagnostic capabilities our approach provides. Finally we discuss some of the current limitations of our method, some possible extensions and potential future applications.

Phenomenology of Large Extra Dimensions Models at Hadrons Colliders using Monte Carlo Techniques (Spin-2 Graviton)

Large Extra Dimensions models have been proposed to remove the hierarchy problem and to explain why gravity is so much weaker than the other three forces. In this work we present an analysis of Monte Carlo event samples for new physics signatures of a spin-2 graviton in the context of the ADD model with total dimension $D=4+\delta$, $\delta = 1,2,3,4,5,6$, where $\delta$ is the number of extra spatial dimensions. This model involves missing momentum $P_{T}^{miss}$ in association with a jet in the final state via the process $pp(\bar{p}) \rightarrow G+jet$. We also present an analysis in the context of the five-dimensional RS model via the process $pp(\bar{p}) \rightarrow G+jet$, $G \rightarrow e^{+}e^{-}$, with final state $e^{+}e^{-}+jet$. We use the Monte Carlo event generator Pythia8 to produce efficient signal selection rules at the Large Hadron Collider with $\sqrt{s}$ = 14 TeV and at the Tevatron with $\sqrt{s}$ = 1.96 TeV.

Analytic Boosted Boson Discrimination

Observables which discriminate boosted topologies from massive QCD jets are of great importance for the success of the jet substructure program at the Large Hadron Collider. Such observables, while both widely and successfully used, have been studied almost exclusively with Monte Carlo simulations. In this paper we present the first all-orders factorization theorem for a two-prong discriminant based on a jet shape variable, $D_2$, valid for both signal and background jets. Our factorization theorem simultaneously describes the production of both collinear and soft subjets, and we introduce a novel zero-bin procedure to correctly describe the transition region between these limits. By proving an all orders factorization theorem, we enable a systematically improvable description, and allow for precision comparisons between data, Monte Carlo, and first principles QCD calculations for jet substructure observables. Using our factorization theorem, we present numerical results for the discrimination of a boosted $Z$ boson from massive QCD background jets. We compare our results with Monte Carlo predictions which allows for a detailed understanding of the extent to which these generators accurately describe the formation of two-prong QCD jets, and informs their usage in substructure analyses. Our calculation also provides considerable insight into the discrimination power and calculability of jet substructure observables in general.
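
The observable itself is compact to state: with momentum fractions z_i and pairwise angular separations R_ij between jet constituents, e2 = sum over pairs of z_i z_j R_ij^beta, e3 = sum over triplets of z_i z_j z_k (R_ij R_ik R_jk)^beta, and D2 = e3 / e2^3. A minimal sketch on made-up jet constituents:

```python
import numpy as np
from itertools import combinations

def delta_r(a, b):
    """Angular separation in (rapidity, phi) between two constituents."""
    dy = a[1] - b[1]
    dphi = np.mod(a[2] - b[2] + np.pi, 2 * np.pi) - np.pi
    return np.hypot(dy, dphi)

def d2(constituents, beta=2.0):
    """D2 = e3 / e2^3 from constituents given as (pt, rapidity, phi)."""
    pts = np.array([c[0] for c in constituents])
    z = pts / pts.sum()                          # momentum fractions
    e2 = sum(z[i] * z[j] * delta_r(constituents[i], constituents[j])**beta
             for i, j in combinations(range(len(z)), 2))
    e3 = sum(z[i] * z[j] * z[k]
             * (delta_r(constituents[i], constituents[j])
                * delta_r(constituents[i], constituents[k])
                * delta_r(constituents[j], constituents[k]))**beta
             for i, j, k in combinations(range(len(z)), 3))
    return e3 / e2**3

# two hard prongs (low D2, boosted-boson-like) vs one hard prong plus
# soft radiation (high D2, QCD-like); the constituents are invented
two_prong = [(100, 0.00, 0.0), (80, 0.25, 0.3), (5, 0.10, 0.1)]
one_prong = [(180, 0.00, 0.0), (3, 0.30, 0.2), (2, -0.2, -0.3)]
print(f"D2 two-prong: {d2(two_prong):.3f}, one-prong: {d2(one_prong):.3f}")
```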
