# Posts Tagged Monte Carlo

## Recent Postings from Monte Carlo

### A User-Friendly Dark Energy Model Generator

We provide software with a graphical user interface to calculate the phenomenology of a wide class of dark energy models featuring multiple scalar fields. The user chooses a subclass of models and either specific initial conditions or a range of initial parameters for Monte Carlo sampling. The code calculates the energy density of the components in the universe, the equation of state of dark energy, and the linear growth of density perturbations, all as functions of redshift and scale factor. The output also includes an approximate conversion into the average equation of state, as well as the common $(w_0, w_a)$ parametrization. The code is available here: http://github.com/kahinton/Dark-Energy-UI-and-MC
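As a rough illustration of the $(w_0, w_a)$ conversion step (a sketch, not the repository's actual implementation), a tabulated equation of state $w(a)$ can be projected onto the CPL form $w(a) = w_0 + w_a (1 - a)$ by least squares:

```python
import numpy as np

def fit_w0_wa(a, w):
    """Least-squares projection of a tabulated equation of state w(a)
    onto the CPL form w(a) = w0 + wa * (1 - a)."""
    design = np.column_stack([np.ones_like(a), 1.0 - a])
    (w0, wa), *_ = np.linalg.lstsq(design, w, rcond=None)
    return float(w0), float(wa)

# Toy w(a) that is exactly of CPL form, to check the recovery
a = np.linspace(0.1, 1.0, 50)
w = -0.95 + 0.3 * (1.0 - a)
w0, wa = fit_w0_wa(a, w)
```

For a general multi-field model, $w(a)$ would come from the solved field equations rather than this synthetic table; the fit then yields the best CPL approximation over the chosen range of scale factors.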

### SKIRT: the design of a suite of input models for Monte Carlo radiative transfer simulations

The Monte Carlo method is the most popular technique for performing radiative transfer simulations in a general 3D geometry. The algorithms behind, and acceleration techniques for, Monte Carlo radiative transfer are discussed extensively in the literature, and many different Monte Carlo codes are publicly available. By contrast, the design of a suite of components that can be used for the distribution of sources and sinks in radiative transfer codes has received very little attention. The availability of such models, with different degrees of complexity, has many benefits. For example, they can serve as toy models to test new physical ingredients, or as parameterised models for inverse radiative transfer fitting. For 3D Monte Carlo codes, this requires algorithms to efficiently generate random positions from 3D density distributions. We describe the design of a flexible suite of components for the Monte Carlo radiative transfer code SKIRT. The design is based on a combination of basic building blocks (which can be either analytical toy models or numerical models defined on grids or a set of particles) and the extensive use of decorators that combine and alter these building blocks into more complex structures. For a number of decorators, e.g. those that add spiral structure or clumpiness, we provide a detailed description of the algorithms that can be used to generate random positions. Advantages of this decorator-based design include code transparency, the avoidance of code duplication, and an increase in code maintainability. Moreover, since decorators can be chained without problems, very complex models can easily be constructed out of simple building blocks. Finally, based on a number of test simulations, we demonstrate that our design using customised random position generators is superior to a simpler design based on a generic black-box random position generator.
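The decorator-based design can be sketched as follows (a simplified illustration in Python, not SKIRT's implementation; the class names are invented): a decorator wraps a building block's random-position generator, alters the sampled positions, and can itself be wrapped again.

```python
import random

random.seed(2)

class UniformSphere:
    """Basic building block: uniform density inside a sphere of radius R."""
    def __init__(self, R):
        self.R = R
    def random_position(self):
        while True:                      # rejection sampling in the sphere
            x, y, z = (random.uniform(-self.R, self.R) for _ in range(3))
            if x * x + y * y + z * z <= self.R * self.R:
                return (x, y, z)

class ClumpyDecorator:
    """Decorator: relocates a fraction f of the mass into small Gaussian
    clumps whose centres are drawn from the wrapped component."""
    def __init__(self, base, f=0.3, n_clumps=10, sigma=0.05):
        self.base, self.f, self.sigma = base, f, sigma
        self.centres = [base.random_position() for _ in range(n_clumps)]
    def random_position(self):
        if random.random() < self.f:
            centre = random.choice(self.centres)
            return tuple(c + random.gauss(0.0, self.sigma) for c in centre)
        return self.base.random_position()

# Decorators chain freely: a clumpy version of an already clumpy sphere.
geometry = ClumpyDecorator(ClumpyDecorator(UniformSphere(1.0)))
pos = geometry.random_position()
```

Because every component exposes the same `random_position` interface, arbitrarily deep chains of decorators remain drop-in replacements for the simple building blocks.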

### An Unbiased Hessian Representation for Monte Carlo PDFs

We develop a methodology for the construction of a Hessian representation of Monte Carlo sets of parton distributions, based on the use of a subset of the Monte Carlo PDF replicas as an unbiased linear basis, and of a genetic algorithm for the determination of the optimal basis. We validate the methodology by first showing that it faithfully reproduces a native Monte Carlo PDF set (NNPDF3.0), and then that, if applied to a Hessian PDF set (MMHT14) transformed into a Monte Carlo set, it gives back the starting PDFs with minimal information loss. We then show that, when applied to a large Monte Carlo PDF set obtained as a combination of several underlying sets, the methodology leads to a Hessian representation in terms of a rather small number of parameters (CMC-H PDFs), thereby providing an alternative implementation of the recently suggested Meta-PDF idea and a Hessian version of the recently suggested PDF compression algorithm (CMC-PDFs). The mc2hessian conversion code is made publicly available, together with (through LHAPDF6) a Hessian representation of the NNPDF3.0 set and the CMC-H PDF set.
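The core idea, representing each replica as a linear combination of a subset of replicas used as an unbiased basis, can be illustrated on synthetic data (a simplified sketch: the basis here is a fixed subset rather than one selected by the paper's genetic algorithm, and the "PDFs" are random vectors):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "Monte Carlo set": 100 replicas sampled at 20 x-points, built from
# 5 underlying directions, so a handful of replicas spans the fluctuations.
central = rng.normal(size=20)
directions = rng.normal(size=(5, 20))
replicas = central + rng.normal(size=(100, 5)) @ directions

def basis_reconstruction_error(replicas, basis_idx):
    """Fit every replica as the mean plus a linear combination of the
    chosen basis replicas; return the RMS reconstruction error."""
    mean = replicas.mean(axis=0)
    basis = replicas[basis_idx] - mean                  # (n_basis, n_x)
    coeffs, *_ = np.linalg.lstsq(basis.T, (replicas - mean).T, rcond=None)
    residual = (replicas - mean) - coeffs.T @ basis
    return float(np.sqrt((residual ** 2).mean()))

err = basis_reconstruction_error(replicas, list(range(6)))
```

In the real methodology the figure of merit compares PDF uncertainties rather than raw residuals, and the genetic algorithm searches over which replicas to include in the basis; this sketch only shows why a small, well-chosen subset can suffice.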

### Monte Carlo Method for Calculating Oxygen Abundances and Their Uncertainties from Strong-Line Flux Measurements

We present the open-source Python code pyMCZ that determines oxygen abundance and its distribution from strong emission lines in the standard metallicity scales, based on the original IDL code of Kewley & Dopita (2002) with updates from Kewley & Ellison (2008), and expanded to include more recently developed scales. The standard strong-line diagnostics have been used to estimate the oxygen abundance in the interstellar medium through various emission line ratios in many areas of astrophysics, including galaxy evolution and supernova host galaxy studies. We introduce a Python implementation of these methods that, through Monte Carlo (MC) sampling, better characterizes the statistical reddening-corrected oxygen abundance confidence region. Given line flux measurements and their uncertainties, our code produces synthetic distributions for the oxygen abundance in up to 13 metallicity scales simultaneously, as well as for E(B-V), and estimates their median values and their 66% confidence regions. In addition, we provide the option of outputting the full MC distributions, and their kernel density estimates. We test our code on emission line measurements from a sample of nearby supernova host galaxies ($z<0.15$) and compare our metallicity results with those from previous methods. We show that our metallicity estimates are consistent with previous methods but yield smaller uncertainties. We also offer visualization tools to assess the spread of the oxygen abundance in the different scales, as well as the shape of the estimated oxygen abundance distribution in each scale, and develop robust metrics for determining the appropriate MC sample size. The code is open access and open source and can be found at https://github.com/nyusngroup/pyMCZ
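The MC sampling step can be illustrated with a single strong-line diagnostic (a sketch in the spirit of pyMCZ, not its actual code, using the N2 calibration of Pettini & Pagel 2004 with made-up fluxes):

```python
import numpy as np

rng = np.random.default_rng(42)

def mc_oxygen_abundance(f_nii, df_nii, f_ha, df_ha, n=10000):
    """Monte Carlo error propagation for the N2 strong-line diagnostic,
    12 + log(O/H) = 8.90 + 0.57 * log10([NII]/Halpha)
    (Pettini & Pagel 2004): perturb the fluxes by their Gaussian errors
    and build the synthetic abundance distribution."""
    nii = rng.normal(f_nii, df_nii, n)
    ha = rng.normal(f_ha, df_ha, n)
    good = (nii > 0) & (ha > 0)                    # drop unphysical draws
    z = 8.90 + 0.57 * np.log10(nii[good] / ha[good])
    lo, med, hi = np.percentile(z, [17, 50, 83])   # median and 66% region
    return float(lo), float(med), float(hi)

lo, med, hi = mc_oxygen_abundance(1.0, 0.05, 3.0, 0.1)
```

pyMCZ runs the same kind of sampling through up to 13 calibrations simultaneously and also propagates the reddening correction via E(B-V); this sketch shows only the error-propagation principle for one scale.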

### Monte-Carlo approach to particle-field interactions and the kinetics of the chiral phase transition

The kinetics of the chiral phase transition is studied within a linear quark-meson-$\sigma$ model, using a Monte-Carlo approach to semiclassical particle-field dynamics. The meson fields are described on the mean-field level, and quarks and antiquarks as ensembles of test particles. Collisions between quarks and antiquarks, as well as the $q\overline{q}$ annihilation to $\sigma$ mesons and the decay of $\sigma$ mesons, are treated using the corresponding transition-matrix elements from the underlying quantum field theory, strictly obeying detailed balance and energy-momentum conservation. The approach allows one to study fluctuations without making ad hoc assumptions about the statistical nature of the random process, as is necessary in Langevin-Fokker-Planck frameworks.

### Polarisation spectral synthesis for Type Ia supernova explosion models

We present a Monte Carlo radiative transfer technique for calculating synthetic spectropolarimetry for multi-dimensional supernova explosion models. The approach utilises "virtual-packets" that are generated during the propagation of the Monte Carlo quanta and used to compute synthetic observables for specific observer orientations. Compared to extracting synthetic observables by direct binning of emergent Monte Carlo quanta, this virtual-packet approach leads to a substantial reduction in the Monte Carlo noise. This is vital for calculating synthetic spectropolarimetry (since the degree of polarisation is typically very small) but also useful for calculations of light curves and spectra. We first validate our approach via application of an idealised test code to simple geometries. We then describe its implementation in the Monte Carlo radiative transfer code ARTIS and present test calculations for simple models for Type Ia supernovae. Specifically, we use the well-known one-dimensional W7 model to verify that our scheme can accurately recover zero polarisation from a spherical model, and to demonstrate the reduction in Monte Carlo noise compared to a simple packet-binning approach. To investigate the impact of aspherical ejecta on the polarisation spectra, we then use ARTIS to calculate synthetic observables for prolate and oblate ellipsoidal models with Type Ia supernova compositions.
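The noise advantage of virtual packets can be illustrated in a deliberately trivial setting (a toy sketch, not ARTIS code): for emission from the centre of a purely absorbing sphere, a direct binned estimate of the escape fraction carries Monte Carlo noise, while the virtual-packet (peel-off style) contribution of each launched quantum toward the observer is deterministic.

```python
import math, random

random.seed(1)

def escape_fraction(tau, n=5000):
    """Escape fraction from the centre of a purely absorbing sphere of
    radial optical depth tau, estimated two ways."""
    # (a) direct binning: count packets whose sampled interaction depth
    #     exceeds tau, i.e. packets that leave the sphere
    direct = sum(random.expovariate(1.0) > tau for _ in range(n)) / n
    # (b) virtual packets: every launched packet contributes a
    #     deterministic weight exp(-tau) toward the observer
    virtual = sum(math.exp(-tau) for _ in range(n)) / n
    return direct, virtual

direct, virtual = escape_fraction(2.0)
```

In a real simulation with scattering and line interactions the virtual-packet weights are no longer identical, so some noise remains, but the variance reduction relative to binning the few emergent quanta per viewing angle is what makes percent-level polarisation signals accessible.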

### Transition redshift in $f(T)$ cosmology and observational constraints [Replacement]

We extract constraints on the transition redshift $z_{tr}$, which marks the onset of cosmic acceleration, as predicted by an effective cosmographic construction in the framework of $f(T)$ gravity. In particular, employing cosmography we obtain bounds on the viable $f(T)$ forms and their derivatives. Since this procedure is model independent, as long as the scalar curvature is fixed, we are able to determine intervals for $z_{tr}$. In this way we guarantee that the Solar-System constraints are preserved, and moreover we extract bounds on the transition time and the free parameters of the scenario. We find that the transition redshifts predicted by $f(T)$ cosmology, although compatible with the standard $\Lambda$CDM predictions, are slightly smaller. Finally, in order to obtain observational constraints on $f(T)$ cosmology, we perform a Monte Carlo fit to supernova data, using the most recent Union 2.1 data set.
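A minimal sketch of a Monte Carlo (Metropolis) fit of this kind, with mock data standing in for the Union 2.1 distance moduli and a single free parameter (all numbers illustrative):

```python
import math, random

random.seed(0)

# Mock data: noisy measurements of a single parameter, standing in for
# the (much richer) Union 2.1 distance-modulus sample.
truth, sigma = 0.7, 0.1
data = [truth + random.gauss(0.0, sigma) for _ in range(50)]

def log_like(theta):
    return -0.5 * sum((d - theta) ** 2 for d in data) / sigma ** 2

def metropolis(n_steps=20000, step=0.05):
    """Minimal Metropolis Monte Carlo chain for the toy likelihood."""
    theta, ll = 0.0, log_like(0.0)
    chain = []
    for _ in range(n_steps):
        prop = theta + random.gauss(0.0, step)
        ll_prop = log_like(prop)
        if math.log(random.random()) < ll_prop - ll:   # accept/reject
            theta, ll = prop, ll_prop
        chain.append(theta)
    return chain

chain = metropolis()
post_mean = sum(chain[5000:]) / len(chain[5000:])      # discard burn-in
```

In the actual analysis the likelihood compares predicted and observed distance moduli over the $f(T)$ model parameters, but the sampling machinery is of this form.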

### Binary population synthesis for the core-degenerate scenario of type Ia supernova progenitors [Replacement]

The core-degenerate (CD) scenario has been suggested as a possible progenitor model of type Ia supernovae (SNe Ia), in which a carbon-oxygen white dwarf (CO WD) merges with the hot CO core of a massive asymptotic giant branch (AGB) star during their common-envelope phase. However, the SN Ia birthrates for this scenario are still uncertain. We conducted a detailed investigation of the CD scenario and computed its birthrates using a detailed Monte Carlo binary population synthesis approach. We found that the delay times of SNe Ia from this scenario are ~70-1400 Myr, which means that the CD scenario contributes to young SN Ia populations. The Galactic SN Ia birthrates for this scenario are in the range of ~7.4x10^{-5}-3.7x10^{-4} yr^{-1}, which roughly accounts for ~2-10% of all SNe Ia. This indicates that, under the assumptions made here, the CD scenario contributes only a small portion of all SNe Ia, which is not consistent with the results of Ilkov & Soker (2013).
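The flavour of a Monte Carlo population-synthesis birthrate estimate can be sketched as follows (a toy model with invented numbers, not the authors' code: primaries drawn from a Salpeter IMF, a hypothetical progenitor mass window, and an assumed binary formation rate):

```python
import random

random.seed(3)

def sample_imf(alpha=2.35, m_lo=0.8, m_hi=8.0):
    """Draw a primary mass [Msun] from a power-law IMF dN/dM ~ M^-alpha
    by inverse-transform sampling."""
    a = 1.0 - alpha
    u = random.random()
    return (m_lo ** a + u * (m_hi ** a - m_lo ** a)) ** (1.0 / a)

def birthrate(n=100000, mass_window=(3.0, 8.0), binaries_per_yr=0.05):
    """Toy birthrate: fraction of primaries falling in a hypothetical
    progenitor mass window times an assumed binary formation rate."""
    hits = sum(mass_window[0] <= sample_imf() <= mass_window[1]
               for _ in range(n))
    return hits / n * binaries_per_yr

rate = birthrate()   # [yr^-1] in this toy model
```

A real population-synthesis calculation evolves each sampled binary through mass transfer and common-envelope phases and records which systems end as CD mergers, together with their delay times; only the sampling skeleton is shown here.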

### Radiation-hydrodynamical simulations of massive star formation using Monte Carlo radiative transfer: I. Algorithms and numerical methods

We present a set of new numerical methods for calculating radiation pressure terms in hydrodynamics calculations, with a particular focus on massive star formation. The radiation force is determined from a Monte Carlo estimator and enables a complete treatment of the detailed microphysics, including polychromatic radiation and anisotropic scattering, in both the free-streaming and optically-thick limits. Since this approach is computationally demanding, we have developed two acceleration techniques. The first is a photon packet splitting algorithm that enables efficient treatment of the Monte Carlo process in very optically thick regions. The second is a parallelisation method that distributes the Monte Carlo workload over many instances of the hydrodynamic domain, resulting in excellent scaling of the radiation step. We also describe the implementation of a sink particle method that enables us to follow the accretion onto, and the growth of, the protostars. We detail the results of extensive testing and benchmarking of the new algorithms.
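The packet-splitting idea can be illustrated on slab transmission (a toy sketch, not the paper's algorithm): splitting surviving packets into lighter copies keeps the deep, optically thick layers populated with contributions without biasing the estimate.

```python
import math, random

random.seed(7)

def transmit_split(tau, n_layers=5, n_split=3, n_packets=400):
    """Estimate slab transmission exp(-tau) with geometric packet
    splitting: the slab is cut into layers, and a packet that survives a
    layer is split into n_split copies carrying 1/n_split of its weight,
    so the deep, rarely reached layers still receive many light packets."""
    dtau = tau / n_layers
    total = 0.0
    for _ in range(n_packets):
        stack = [(0, 1.0)]                    # (current layer, weight)
        while stack:
            layer, w = stack.pop()
            if random.random() > math.exp(-dtau):
                continue                      # absorbed in this layer
            if layer + 1 == n_layers:
                total += w                    # survived the whole slab
            else:
                stack.extend([(layer + 1, w / n_split)] * n_split)
    return total / n_packets

estimate = transmit_split(6.0)
```

Because every split conserves expected weight, the estimator remains unbiased while the relative variance at depth drops compared with tracking unsplit packets, most of which would never reach the far side.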

### Ab initio lattice results for Fermi polarons in two and three dimensions [Replacement]

We investigate the attractive Fermi polaron problem in two and three dimensions using non-perturbative Monte Carlo simulations. We introduce a new Monte Carlo algorithm called the impurity lattice Monte Carlo method. This algorithm samples the path integral in a computationally efficient manner and has only small sign oscillations for systems with a single impurity. To benchmark the method, we calculate the universal polaron energy in three dimensions in the scale-invariant unitarity limit and find agreement with published results. We then present the first fully non-perturbative calculations of the polaron energy in two dimensions in the limit of zero range interactions. We find evidence for a smooth crossover transition from a fermionic quasiparticle to a molecular state as a function of interaction strength with significant mixing between the two descriptions in the crossover region.

### The tau leptons theory and experimental data: Monte Carlo, fits, software and systematic errors

The status of the tau lepton decay Monte Carlo generator TAUOLA is reviewed. Recent efforts on the development of new hadronic currents are presented. A multitude of new channels for anomalous tau decay modes, and a parametrization based on the defaults used by the BaBar collaboration, are introduced. A parametrization based on theoretical considerations is presented as an alternative. Lessons from comparisons and fits to the BaBar and Belle data are recalled. It was found that, as in the past (in particular at the time of comparisons with CLEO and ALEPH data), proper fitting to as detailed a representation of the experimental data as possible is essential for the appropriate development of models of tau decays. In the later part of the presentation, the use of the TAUOLA program for the phenomenology of W, Z, and H decays at the LHC is addressed. Some new results relevant for QED bremsstrahlung in such decays are presented as well.

### Monte Carlo error analyses of Spearman's rank test

Spearman's rank correlation test is commonly used in astronomy to discern whether two variables are correlated. Unlike most other quantities quoted in the astronomical literature, the Spearman's rank correlation coefficient is generally quoted with no attempt to estimate the errors on its value. This is a practice that would not be accepted for those other quantities, as an estimate of a quantity without an estimate of its associated uncertainties is often regarded as meaningless. This manuscript describes a number of easily implemented, Monte Carlo based methods to estimate the uncertainty on the Spearman's rank correlation coefficient, or, more precisely, to estimate its probability distribution.
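One such easily implemented method is the bootstrap: resample the data pairs with replacement and recompute the coefficient each time (a sketch with synthetic data; a dedicated statistics library could replace the hand-rolled rank correlation):

```python
import random

random.seed(5)

def spearman_rho(x, y):
    """Spearman rank correlation (assuming no ties in the raw data):
    the Pearson correlation of the ranks."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    m = (len(x) - 1) / 2.0                      # mean of ranks 0..n-1
    cov = sum((a - m) * (b - m) for a, b in zip(rx, ry))
    var = sum((a - m) ** 2 for a in rx)
    return cov / var

def bootstrap_rho_interval(x, y, n_boot=2000, level=0.68):
    """Monte Carlo (bootstrap) distribution of the Spearman coefficient:
    resample the (x, y) pairs with replacement and recompute rho."""
    n = len(x)
    rhos = sorted(
        spearman_rho([x[i] for i in idx], [y[i] for i in idx])
        for idx in ([random.randrange(n) for _ in range(n)]
                    for _ in range(n_boot)))
    tail = (1.0 - level) / 2.0
    return rhos[int(tail * n_boot)], rhos[int((1.0 - tail) * n_boot)]

x = [random.gauss(0.0, 1.0) for _ in range(40)]
y = [xi + random.gauss(0.0, 0.5) for xi in x]
lo, hi = bootstrap_rho_interval(x, y)
```

The sorted resampled coefficients approximate the probability distribution of the statistic; quoting its central interval alongside the measured coefficient is precisely the practice the manuscript advocates.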

### Cosmographic transition redshift in $f(R)$ gravity [Cross-Listing]

We propose a strategy to infer the transition redshift $z_{da}$, which characterizes the passage of the universe from the decelerated to the accelerated phase, in the framework of $f(R)$ gravity. To this end, we numerically reconstruct $f(z)$, i.e. the corresponding $f(R)$ function re-expressed in terms of the redshift $z$, and we show how to match $f(z)$ with cosmography. In particular, we relate $f(z)$ and its derivatives to the cosmographic coefficients, i.e. $H_0$, $q_0$ and $j_0$, and demonstrate that the corresponding evolution may be framed by means of an effective logarithmic dark energy term $\Omega_X$, slightly departing from the case of a pure cosmological constant. Afterwards, we show that our model predicts viable transition redshift constraints, which agree with $\Lambda$CDM. To do so, we compute the corresponding $z_{da}$ in terms of cosmographic outcomes and find that $z_{da}\leq1$. Finally, we reproduce an effective $f(z)$ and show that this class of models is fairly well compatible with present-time data. To do so, we obtain numerical constraints employing Monte Carlo fits with the Union 2.1 supernova survey and with the Hubble measurement data set.
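To first order in the standard cosmographic expansion, $q(z) = q_0 + (j_0 - q_0 - 2q_0^2)\,z + O(z^2)$, so setting $q(z_{da}) = 0$ gives a closed-form estimate of the transition redshift (a sketch of the kind of relation used, with illustrative $\Lambda$CDM-like values):

```python
def transition_redshift(q0, j0):
    """First-order cosmographic estimate: with
    q(z) = q0 + (j0 - q0 - 2*q0**2) * z + O(z^2),
    the deceleration parameter changes sign at
    z_da = -q0 / (j0 - q0 - 2*q0**2)."""
    return -q0 / (j0 - q0 - 2.0 * q0 ** 2)

# Illustrative LambdaCDM-like cosmographic values: q0 ~ -0.55, j0 = 1
z_da = transition_redshift(-0.55, 1.0)
```

The truncated expansion underestimates the exact $\Lambda$CDM value somewhat, which is why analyses of this kind carry the expansion to higher cosmographic orders before fitting; it does, however, already respect the $z_{da} \leq 1$ bound quoted above.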

### Power Counting to Better Jet Observables

Optimized jet substructure observables for identifying boosted topologies will play an essential role in maximizing the physics reach of the Large Hadron Collider. Ideally, the design of discriminating variables would be informed by analytic calculations in perturbative QCD. Unfortunately, explicit calculations are often not feasible due to the complexity of the observables used for discrimination, and so many validation studies rely heavily, and solely, on Monte Carlo. In this paper we show how methods based on the parametric power counting of the dynamics of QCD, familiar from effective theory analyses, can be used to design, understand, and make robust predictions for the behavior of jet substructure variables. As a concrete example, we apply power counting for discriminating boosted Z bosons from massive QCD jets using observables formed from the n-point energy correlation functions. We show that power counting alone gives a definite prediction for the observable that optimally separates the background-rich from the signal-rich regions of phase space. Power counting can also be used to understand effects of phase space cuts and the effect of contamination from pile-up, which we discuss. As these arguments rely only on the parametric scaling of QCD, the predictions from power counting must be reproduced by any Monte Carlo, which we verify using Pythia8 and Herwig++. We also use the example of quark versus gluon discrimination to demonstrate the limits of the power counting technique.
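The two-point energy correlation function used in such studies can be computed directly from jet constituents (a minimal sketch; the constituents are given as hypothetical $(p_T, y, \phi)$ triples):

```python
import math

def ecf2(constituents, beta=1.0):
    """Two-point energy correlation function of a jet,
    e2 = sum_{i<j} pT_i pT_j R_ij^beta / (sum_i pT_i)^2,
    with R_ij the rapidity-azimuth separation of constituents (pT, y, phi)."""
    pt_sum = sum(pt for pt, _, _ in constituents)
    e2 = 0.0
    for i in range(len(constituents)):
        for j in range(i + 1, len(constituents)):
            pti, yi, phii = constituents[i]
            ptj, yj, phij = constituents[j]
            dphi = abs(phii - phij)
            dphi = min(dphi, 2.0 * math.pi - dphi)
            e2 += pti * ptj * math.hypot(yi - yj, dphi) ** beta
    return e2 / pt_sum ** 2

# Two-prong toy jet: two equal-pT cores separated by R = 0.4
two_prong = ecf2([(100.0, 0.0, 0.0), (100.0, 0.4, 0.0)])
```

Power counting operates on exactly such observables: it predicts how $e_2$ (and ratios built from the higher-point functions) scale for soft and collinear emissions, which fixes the signal-rich and background-rich regions of phase space without a full calculation.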

### Conformal field theories at non-zero temperature: operator product expansions, Monte Carlo, and holography [Replacement]

We compute the non-zero temperature conductivity of conserved flavor currents in conformal field theories (CFTs) in 2+1 spacetime dimensions. At frequencies much greater than the temperature, $\hbar\omega \gg k_B T$, the $\omega$ dependence can be computed from the operator product expansion (OPE) between the currents and operators which acquire a non-zero expectation value at $T > 0$. Such results are found to be in excellent agreement with quantum Monte Carlo studies of the O(2) Wilson-Fisher CFT. Results for the conductivity and other observables are also obtained in vector 1/N expansions. We match these large $\omega$ results to the corresponding correlators of holographic representations of the CFT: the holographic approach then allows us to extrapolate to small $\hbar \omega/(k_B T)$. Other holographic studies implicitly only used the OPE between the currents and the energy-momentum tensor, and this yields the correct leading large $\omega$ behavior for a large class of CFTs. However, for the Wilson-Fisher CFT a relevant "thermal" operator must also be considered, and then consistency with the Monte Carlo results is obtained without a previously needed ad hoc rescaling of the $T$ value. We also establish sum rules obeyed by the conductivity of a wide class of CFTs.

### Signal Attenuation Curve for Different Surface Detector Arrays

Modern cosmic ray experiments consisting of large arrays of particle detectors measure the signals of the electromagnetic or muon components, or their combination. A correction for the amount of atmosphere traversed is applied to the surface detector signal before its conversion to the shower energy. This correction can be obtained either from a Monte Carlo based approach assuming a certain composition of primaries, or from an indirect estimation using real data and assuming isotropy of arrival directions. Toy surface arrays with different sensitivities to the electromagnetic and muon components are assumed in MC simulations to study the effects imposed on attenuation curves by a varying composition or a possible high energy anisotropy. The possible sensitivity of the attenuation curve to the mass composition is also tested for different array types, focusing on a future apparatus that can separate the muon and electromagnetic component signals.

### Heavy Ions Collision evolution modeling with ECHO-QGP

We present a numerical code modeling the evolution of the medium formed in relativistic heavy ion collisions, ECHO-QGP. The code solves relativistic hydrodynamics in $(3+1)-$D, with dissipative terms included within the framework of Israel-Stewart theory; it can work both in Minkowskian and in Bjorken coordinates. Initial conditions are provided through an implementation of the Glauber model (both Optical and Monte Carlo), while freezeout and particle generation are based on the Cooper-Frye prescription. The code is validated against several test problems and shows remarkable stability and accuracy with the combination of a conservative (shock-capturing) approach and the high-order methods employed. In particular it beautifully agrees with the semi-analytic solution known as Gubser flow, both in the ideal and in the viscous Israel-Stewart case, up to very large times and without any ad hoc tuning of the algorithm.
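The optical and Monte Carlo Glauber variants differ in that the latter samples discrete nucleon positions event by event. A minimal, self-contained sketch of a Monte Carlo Glauber event (not the ECHO-QGP implementation; the Pb Woods-Saxon parameters and NN cross section below are illustrative values):

```python
import math
import random

def sample_woods_saxon(R=6.62, a=0.546, r_max=15.0):
    """Sample a radius (fm) from a Woods-Saxon density by rejection sampling."""
    while True:
        r = r_max * random.random()
        # p(r) ~ r^2 / (1 + exp((r - R)/a)); r_max^2 bounds the envelope
        if random.random() * r_max ** 2 < r ** 2 / (1.0 + math.exp((r - R) / a)):
            return r

def sample_nucleus(A=208):
    """Transverse (x, y) positions of A nucleons drawn isotropically."""
    positions = []
    for _ in range(A):
        r = sample_woods_saxon()
        cos_t = 2.0 * random.random() - 1.0
        phi = 2.0 * math.pi * random.random()
        sin_t = math.sqrt(1.0 - cos_t ** 2)
        positions.append((r * sin_t * math.cos(phi), r * sin_t * math.sin(phi)))
    return positions

def glauber_event(b=7.0, sigma_nn=6.4):
    """One Pb+Pb Glauber event at impact parameter b (fm); sigma_nn in fm^2.
    Two nucleons collide if their transverse distance^2 < sigma_nn / pi."""
    d2 = sigma_nn / math.pi
    A = [(x - b / 2.0, y) for x, y in sample_nucleus()]
    B = [(x + b / 2.0, y) for x, y in sample_nucleus()]
    n_coll, wounded_a, wounded_b = 0, set(), set()
    for i, (xa, ya) in enumerate(A):
        for j, (xb, yb) in enumerate(B):
            if (xa - xb) ** 2 + (ya - yb) ** 2 < d2:
                n_coll += 1
                wounded_a.add(i)
                wounded_b.add(j)
    return n_coll, len(wounded_a) + len(wounded_b)
```

Realistic generators additionally enforce a minimum inter-nucleon separation and convert the participant and binary-collision counts into an entropy or energy deposit for the hydrodynamic initial state.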

### Properties of Long Gamma Ray Burst Progenitors in Cosmological Simulations

We study the nature of long gamma ray burst (LGRB) progenitors using cosmological simulations of structure formation and galactic evolution. LGRBs are potentially excellent tracers of stellar evolution in the early universe. We developed a Monte Carlo numerical code which generates LGRBs coupled to cosmological simulations. The simulations allow us to follow the formation of galaxies self-consistently. We model the detectability of LGRBs and their host galaxies in order to compare results with observational data obtained by high-energy satellites. Our code also includes stochastic effects in the observed rate of LGRBs.

### The density of states approach to dense quantum systems [Cross-Listing]

We develop a first-principles generalised density of states method for the numerical study of quantum field theories with a complex action. As a proof of concept, we show that our approach can solve numerically the strong sign problem of the $Z_3$ spin model at finite density. Our results are confirmed by standard simulations of the theory dual to the considered model, which is free from a sign problem. Our method opens new perspectives on ab initio simulations of cold dense quantum systems, and in particular of Yang-Mills theories with matter at finite density, for which Monte Carlo importance sampling is unable to produce sufficiently accurate results.
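The paper's generalised density of states method is its own construction, but the underlying idea of sampling the density of states directly, rather than a Boltzmann weight, can be illustrated with the classic Wang-Landau algorithm on a toy spin system (a sketch under that substitution, not the authors' algorithm):

```python
import math
import random

def wang_landau(N=8, logf_final=1e-4, flatness=0.8):
    """Wang-Landau estimate of the density of states g(E) for a toy system:
    E = number of 'up' spins among N independent spins, so g(E) = C(N, E)."""
    logg = [0.0] * (N + 1)       # running estimate of ln g(E)
    hist = [0] * (N + 1)         # visit histogram for the flatness check
    spins = [random.randint(0, 1) for _ in range(N)]
    E = sum(spins)
    logf = 1.0                   # modification factor, halved when hist is flat
    while logf > logf_final:
        for _ in range(1000 * N):
            i = random.randrange(N)
            E_new = E + (1 - 2 * spins[i])     # a flip changes E by +/-1
            # accept with min(1, g(E)/g(E_new)) to flatten the E histogram
            if random.random() < math.exp(min(0.0, logg[E] - logg[E_new])):
                spins[i] ^= 1
                E = E_new
            logg[E] += logf
            hist[E] += 1
        if min(hist) > flatness * sum(hist) / len(hist):
            hist = [0] * (N + 1)
            logf *= 0.5
    # normalise so that sum_E g(E) = 2^N, the total number of states
    shift = max(logg)
    Z = sum(math.exp(lg - shift) for lg in logg)
    return [math.exp(lg - shift) * 2.0 ** N / Z for lg in logg]
```

Once g(E) is known, any thermal expectation value follows by direct summation over E, with no further sampling.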

### Small-$x$ dynamics in forward-central dijet decorrelations at the LHC [Replacement]

We provide a description, within the High Energy Factorization formalism, of central-forward dijet decorrelation data measured by the CMS experiment and the predictions for nuclear modification ratio~$R_{pA}$ in p+Pb collisions. In our study, we use the unintegrated gluon density derived from the BFKL and BK equations supplemented with subleading corrections and a hard scale dependence. The latter is introduced at the final step of the calculation by reweighting the Monte Carlo generated events using suitable Sudakov form factors, without changing the total cross section. We achieve a good description of data in the whole region of the azimuthal angle.

### Monte Carlo Radiation Hydrodynamics with Implicit Methods

We explore the application of Monte Carlo transport methods to solving coupled radiation-hydrodynamics problems. We use a time-dependent, frequency-dependent, 3-dimensional radiation transport code that is special relativistic and includes some detailed microphysical interactions such as resonant line scattering. We couple the transport code to two different 1-dimensional (non-relativistic) hydrodynamics solvers: a spherical Lagrangian scheme and an Eulerian Godunov solver. The gas-radiation energy coupling is treated implicitly, allowing us to take hydrodynamical time-steps that are much longer than the radiative cooling time. We validate the code and assess its performance using a suite of radiation hydrodynamical test problems, including ones in the radiation energy dominated regime. We also develop techniques that reduce the noise of the Monte Carlo estimated radiation force by using the spatial divergence of the radiation pressure tensor. The results suggest that Monte Carlo techniques hold promise for simulating the multi-dimensional radiation hydrodynamics of astrophysical systems.

### (MC)**3 -- a Multi-Channel Markov Chain Monte Carlo algorithm for phase-space sampling

A new Monte Carlo algorithm for phase-space sampling, named (MC)**3, is presented. It is based on Markov Chain Monte Carlo techniques but at the same time incorporates prior knowledge about the target distribution in the form of suitable phase-space mappings from a corresponding multi-channel importance sampling Monte Carlo. The combined approach inherits the benefits of both techniques, while the typical drawbacks of either solution are ameliorated.
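A minimal sketch of the core idea — a Markov chain whose independence proposals come from a multi-channel-style mapping, so that good prior knowledge of the target yields high acceptance — using an invented one-dimensional target and channel density (not the actual (MC)**3 algorithm):

```python
import math
import random

A = 0.1  # channel parameter (illustrative)

def channel_sample():
    """Draw x in (0, 1) from the channel density g(x) ~ 1/(x + A)
    by inverting its cumulative distribution."""
    u = random.random()
    return (A + 1.0) ** u * A ** (1.0 - u) - A

def g(x):
    """Normalised channel density on (0, 1)."""
    return 1.0 / ((x + A) * math.log((A + 1.0) / A))

def f(x):
    """Toy target: the channel shape times a slowly varying correction."""
    return (1.0 + 0.5 * math.sin(6.0 * x)) / (x + A)

def mc3_chain(n=20000):
    """Independence Metropolis-Hastings using the channel mapping as proposal:
    accept y with probability min(1, f(y) g(x) / (f(x) g(y)))."""
    x = channel_sample()
    samples = []
    for _ in range(n):
        y = channel_sample()
        if random.random() < min(1.0, f(y) * g(x) / (f(x) * g(y))):
            x = y
        samples.append(x)
    return samples
```

The normalisation of the channel density cancels in the acceptance ratio, so only its shape matters; the closer g tracks f, the higher the acceptance rate.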

### Heavy dense QCD and nuclear matter from an effective lattice theory [Cross-Listing]

A three-dimensional effective lattice theory of Polyakov loops is derived from QCD by expansions in the fundamental character of the gauge action, u, and the hopping parameter, \kappa, whose action is correct to \kappa^n u^m with n+m=4. At finite baryon density, the effective theory has a sign problem which meets all criteria to be simulated by complex Langevin as well as by Monte Carlo on small volumes. The theory is valid for the thermodynamics of heavy quarks, where its predictions agree with simulations of full QCD at zero and imaginary chemical potential. In its region of convergence, it is moreover amenable to perturbative calculations in the small effective couplings. In this work we study the challenging cold and dense regime. We find unambiguous evidence for the nuclear liquid gas transition once the baryon chemical potential approaches the baryon mass, and calculate the nuclear equation of state. In particular, we find a negative binding energy per nucleon causing the condensation, whose absolute value decreases exponentially as mesons get heavier. For decreasing meson mass, we observe a first order liquid gas transition with an endpoint at some finite temperature, as well as a gap between the onset of isospin and baryon condensation.

### Validation of Compton Scattering Monte Carlo Simulation Models [Cross-Listing]

Several models for the Monte Carlo simulation of Compton scattering on electrons are quantitatively evaluated with respect to a large collection of experimental data retrieved from the literature. Some of these models are currently implemented in general purpose Monte Carlo systems; others have been implemented and evaluated for possible use in Monte Carlo particle transport for the first time in this study. Here we present preliminary results concerning total and differential Compton scattering cross sections.

### The Influence of Galactic Cosmic Rays on Ion-Neutral Hydrocarbon Chemistry in the Upper Atmospheres of Free-Floating Exoplanets

Cosmic rays may be linked to the formation of volatiles necessary for prebiotic chemistry. We explore the effect of cosmic rays in a hydrogen-dominated atmosphere, as a proof-of-concept that ion-neutral chemistry may be important for modelling hydrogen-dominated atmospheres. In order to accomplish this, we utilize Monte Carlo cosmic ray transport models with particle energies of $10^6$ eV $< E < 10^{12}$ eV in order to investigate the cosmic ray enhancement of free electrons in substellar atmospheres. Ion-neutral chemistry is then applied to a Drift-Phoenix model of a free-floating giant gas planet. Our results suggest that the activation of ion-neutral chemistry in the upper atmosphere significantly enhances formation rates for various species, and we find that C$_2$H$_2$, C$_2$H$_4$, NH$_3$, C$_6$H$_6$ and possibly C$_{10}$H are enhanced in the upper atmospheres because of cosmic rays. Our results suggest a potential connection between cosmic ray chemistry and the hazes observed in the upper atmospheres of various extrasolar planets. Chemi-ionization reactions are briefly discussed, as they may enhance the degree of ionization in the cloud layer.

### First determination of $f_+(0) |V_{us}|$ from a combined analysis of $\tau\to K\pi \nu_\tau$ decay and $\pi K$ scattering with constraints from $K_{\ell3}$ decays [Replacement]

We perform a combined analysis of $\tau\to K\pi \nu_\tau$ decay and $\pi K$ scattering with constraints from $K_{\ell3}$ data using an $N/D$ approach that fulfills requirements from unitarity and analyticity. We obtain a good fit of the $I=1/2$ $\pi K$ amplitude in the $P$ wave using the LASS data above the elastic region, while in this region data are generated via Monte Carlo using the FOCUS results based on $D_{\ell 4}$ decay. The spectrum and branching ratio of $\tau\to K\pi \nu_\tau$ constrained by $K_{\ell3}$ decays are also well reproduced, leading to $f_+(0) |V_{us}|= 0.2163 \pm 0.0014$. Furthermore, we obtain the slope of the vector form factor $\lambda_+=(25.56 \pm 0.40) \times 10^{-3}$, while the value of the scalar form factor at the Callan-Treiman point is $\ln C=0.2062 \pm 0.0089$. Given the experimental precision, our results are compatible with the Standard Model.

### Short-Term Variability of X-rays from Accreting Neutron Star Vela X-1: II. Monte-Carlo Modeling

We develop a Monte Carlo Comptonization model for the X-ray spectrum of accretion-powered pulsars. Simple, spherical, thermal Comptonization models give harder spectra for higher optical depth, while the observational data from Vela X-1 show that the spectra are harder at higher luminosity. This suggests a physical interpretation where the optical depth of the accreting plasma increases with mass accretion rate. We develop a detailed Monte-Carlo model of the accretion flow, including the effects of the strong magnetic field ($\sim 10^{12}$ G) both in geometrically constraining the flow into an accretion column, and in reducing the cross section. We treat bulk-motion Comptonization of the infalling material as well as thermal Comptonization. These model spectra can match the observed broad-band {\it Suzaku} data from Vela X-1 over a wide range of mass accretion rates. The model can also explain the so-called "low state", in which the luminosity decreases by an order of magnitude. Here, thermal Comptonization should be negligible, so the spectrum instead is dominated by bulk-motion Comptonization.

### Hierarchical octree and k-d tree grids for 3D radiative transfer simulations

A crucial ingredient for numerically solving the 3D radiative transfer problem is the choice of the grid that discretizes the transfer medium. Many modern radiative transfer codes, whether using Monte Carlo or ray tracing techniques, are equipped with hierarchical octree-based grids to accommodate a wide dynamic range in densities. We critically investigate two different aspects of octree grids in the framework of Monte Carlo dust radiative transfer. First, inspired by their common use in computer graphics applications, we test hierarchical k-d tree grids as an alternative to octree grids. Second, we investigate which node subdivision-stopping criteria are optimal for constructing hierarchical grids. We implemented a k-d tree grid in the 3D radiative transfer code SKIRT and compared it with the previously implemented octree grid. We also considered three different node subdivision-stopping criteria (based on mass, optical depth, and density gradient thresholds). Based on a small suite of test models, we compared the efficiency and accuracy of the different grids according to various quality metrics. For a given set of requirements, the k-d tree grids require only half the number of cells of the corresponding octree. Moreover, for the same number of grid cells, the k-d tree is characterized by higher discretization accuracy. Concerning the subdivision-stopping criteria, we find that an optical depth criterion is not a useful alternative to the more standard mass threshold, since the resulting grids show poor accuracy. Both criteria can be combined; however, in the optimal combination, for which we provide a simple approximate recipe, this can lead to a 20% reduction in the number of cells needed to reach a given grid quality. An additional density gradient threshold criterion can be added that solves the problem of poorly resolving sharp edges and… (abridged).
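A hypothetical sketch of the mass-threshold subdivision-stopping criterion for a k-d tree (midpoint splits cycling through the axes; SKIRT's actual C++ implementation and split-position choices may differ):

```python
class Node:
    """A k-d tree node covering the axis-aligned box [lo, hi] in 3D."""
    def __init__(self, lo, hi, depth=0):
        self.lo, self.hi, self.depth = lo, hi, depth
        self.children = []

def build_kdtree(node, mass_in_box, max_mass, max_depth=20):
    """Bisect nodes at the midpoint of alternating axes until every leaf
    holds at most max_mass -- the mass-threshold stopping criterion."""
    if node.depth >= max_depth or mass_in_box(node.lo, node.hi) <= max_mass:
        return node                    # leaf: below threshold (or too deep)
    axis = node.depth % 3              # cycle x, y, z; one split per level
    mid = 0.5 * (node.lo[axis] + node.hi[axis])
    hi_left, lo_right = list(node.hi), list(node.lo)
    hi_left[axis] = mid
    lo_right[axis] = mid
    node.children = [
        build_kdtree(Node(node.lo, hi_left, node.depth + 1),
                     mass_in_box, max_mass, max_depth),
        build_kdtree(Node(lo_right, node.hi, node.depth + 1),
                     mass_in_box, max_mass, max_depth),
    ]
    return node

def leaves(node):
    """Collect the leaf cells of the tree."""
    if not node.children:
        return [node]
    return [leaf for child in node.children for leaf in leaves(child)]
```

Because a k-d tree splits one axis per level instead of all three at once, it can stop refining sooner than an octree, which is one intuition for the factor-of-two cell count reported above.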

### Comparative Analysis of Numerical Methods for Parameter Determination

We present a comparative analysis of numerical methods for multidimensional optimization. The main figure of merit is the number of evaluations of the test function needed to reach the required accuracy, since each evaluation is computationally expensive. For complex functions, analytic differentiation with respect to many parameters can cause problems associated with a significant complication of the program, thus slowing its operation. For comparison, we used the following methods: "brute force" (minimization on a regular grid), Monte Carlo, steepest descent, conjugate gradients, Brent's method (golden section search), parabolic interpolation, etc. The Monte Carlo method was applied to the eclipsing binary system AM Leo.

### Calculations of the Propagated LIS Electron Spectrum Which Describe the Cosmic Ray Electron Spectrum below ~100 MeV Measured Beyond 122 AU at Voyager 1 and its Relationship to the PAMELA Electron Spectrum above 200 MeV [Cross-Listing]

The new Voyager measurements of cosmic ray electrons between 6-60 MeV beyond 122 AU are very sensitive indicators of cosmic ray propagation and acceleration in the galaxy at a very low modulation level. Using a Monte Carlo diffusion model with a source spectrum with a single spectral index of -2.2 at all energies, we are able to fit this observed Voyager spectrum and the contemporary PAMELA electron spectrum over an energy range from 6 MeV to ~200 GeV. This spectrum has a break in it, but this break is due to propagation effects, not changes in the primary spectrum. The break is gradual, starting at > 2 GeV where the spectrum is ~E^-3.2 and continuing down to ~100 MeV or below, where the spectrum becomes ~E^-1.5. At the higher energies the loss terms due to synchrotron radiation and inverse Compton effects, which are ~E^2.0, steepen the exponent of the source spectrum by 1.0. At lower energies these terms become unimportant and the loss is governed by diffusion and escape from the galaxy. A diffusion term which is proportional to beta^-1 below ~0.32 GV (which also fits the H and He spectra measured at Voyager) and has a value of 3×10^28 cm^2 s^-1 at 1 GV, with a boundary at ±1 kpc, will fit the Voyager or other similar spectra at low energies.
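The steepening of the source index by exactly one unit at high energies follows from the textbook loss-dominated steady state, $N(E) \propto b(E)^{-1}\int_E^\infty Q(E')\,dE'$, with $Q \propto E^{-2.2}$ and losses $b \propto E^2$. A quick numerical check of this argument (a sketch of the standard derivation, not the authors' Monte Carlo diffusion model):

```python
import math

def steady_state_index(gamma=2.2, E=10.0, dlog=1e-3):
    """Local spectral index -dlnN/dlnE of the loss-dominated steady state
    N(E) = (1/b(E)) * integral_E^inf Q(E') dE', with source Q ~ E^-gamma
    and radiative losses b(E) ~ E^2 (synchrotron + inverse Compton)."""
    def N(energy):
        # integral_E^inf E'^-gamma dE' = E^(1-gamma) / (gamma - 1)
        return energy ** (1.0 - gamma) / (gamma - 1.0) / energy ** 2
    return -(math.log(N(E * (1.0 + dlog))) - math.log(N(E))) / math.log(1.0 + dlog)
```

For gamma = 2.2 this returns an index of 3.2, the high-energy slope quoted above; at low energies the loss term drops out and the slope is instead set by diffusion and escape.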

### The First Circumstellar Disk Imaged in Silhouette with Adaptive Optics: MagAO Imaging of Orion 218-354

We present high resolution adaptive optics (AO) corrected images of the silhouette disk Orion 218-354 taken with Magellan AO (MagAO) and its visible light camera, VisAO, in simultaneous differential imaging (SDI) mode at H-alpha. This is the first image of a circumstellar disk seen in silhouette with adaptive optics and is among the first visible light adaptive optics results in the literature. We derive the disk extent, geometry, intensity and extinction profiles and find, in contrast with previous work, that the disk is likely optically-thin at H-alpha. Our data provide an estimate of the column density in primitive, ISM-like grains as a function of radius in the disk. We estimate that only ~10% of the total sub-mm derived disk mass lies in primitive, unprocessed grains. We use our data, Monte Carlo radiative transfer modeling and previous results from the literature to make the first self-consistent multiwavelength model of Orion 218-354. We find that we are able to reproduce the 1-1000 micron SED with a ~2-540 AU disk of the size, geometry, small vs. large grain proportion and radial mass profile indicated by our data. This inner radius is a factor of ~15 larger than the sublimation radius of the disk, suggesting that it is likely cleared in the very interior.
