Posts Tagged methodology

Recent Postings from methodology

An Unbiased Hessian Representation for Monte Carlo PDFs

We develop a methodology for the construction of a Hessian representation of Monte Carlo sets of parton distributions, based on the use of a subset of the Monte Carlo PDF replicas as an unbiased linear basis, and of a genetic algorithm for the determination of the optimal basis. We validate the methodology by first showing that it faithfully reproduces a native Monte Carlo PDF set (NNPDF3.0), and then that, when applied to a Hessian PDF set (MMHT14) that has been transformed into a Monte Carlo set, it gives back the starting PDFs with minimal information loss. We then show that, when applied to a large Monte Carlo PDF set obtained as a combination of several underlying sets, the methodology leads to a Hessian representation in terms of a much smaller set of parameters (CMC-H PDFs), thereby providing an alternative implementation of the recently suggested Meta-PDF idea and a Hessian version of the recently suggested PDF compression algorithm (CMC-PDFs). The mc2hessian conversion code is made publicly available, together with (through LHAPDF6) Hessian representations of the NNPDF3.0 set and the CMC-H PDF set.
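
In spirit, such a conversion replaces the replica ensemble by a small set of symmetric Hessian error directions that reproduce the Monte Carlo uncertainty. Below is a minimal numpy sketch using a synthetic low-rank replica set; all inputs are hypothetical, and the covariance eigenvectors stand in for the paper's genetic-algorithm selection of replicas as the basis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "Monte Carlo PDF replicas" on an x-grid: a central value plus a
# low-rank spread built from a few random modes (all names hypothetical).
x = np.linspace(0.01, 0.9, 50)
central = x**-0.5 * (1.0 - x)**3
n_rep, n_modes = 100, 5
modes = 0.05 * rng.standard_normal((n_modes, x.size)) * central
coeffs = rng.standard_normal((n_rep, n_modes))
replicas = central + coeffs @ modes

# Hessian-style representation: eigenvectors of the replica covariance give
# symmetric error directions (a stand-in for the paper's unbiased replica
# basis selected by a genetic algorithm).
cov = np.cov(replicas, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)
directions = eigvec[:, -n_modes:] * np.sqrt(eigval[-n_modes:])

# Validation in the spirit of the paper: the Hessian error band must
# reproduce the Monte Carlo standard deviation.
sigma_mc = replicas.std(axis=0, ddof=1)
sigma_hess = np.sqrt((directions**2).sum(axis=1))
```

Because the toy replicas are exactly low-rank, a handful of directions reproduces the Monte Carlo uncertainty; for real PDF sets the number of retained directions controls the (small) information loss.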

High redshift galaxies in the ALHAMBRA survey: I. selection method and number counts based on redshift PDFs

Context. Most observational results on high-redshift rest-frame UV-bright galaxies are based on samples pinpointed using the so-called dropout technique or Ly-alpha selection. However, the availability of multifilter data now makes it possible to replace dropout selections with direct methods based on photometric redshifts. In this paper we present the methodology to select and study the population of high-redshift galaxies in the ALHAMBRA survey data. Aims. Our aim is to develop a less biased methodology than the traditional dropout technique to study the high-redshift galaxies in ALHAMBRA and other multifilter data. Thanks to the wide area ALHAMBRA covers, we aim especially to contribute to the study of the brightest, rarest high-redshift galaxies. Methods. The methodology is based on redshift probability distribution functions (zPDFs). We show how a clean galaxy sample can be obtained by selecting the galaxies with a high integrated probability of being within a given redshift interval. However, reaching both a complete and a clean sample with this method is challenging. Hence, we introduce a method to derive statistical properties by summing the zPDFs of all the galaxies in the redshift bin of interest. Results. Using this methodology we derive the galaxy rest-frame UV number counts in five redshift bins centred at z = 2.5, 3.0, 3.5, 4.0, and 4.5, complete up to the limiting magnitude m_UV(AB) = 24. With the wide-field ALHAMBRA data we contribute especially to the study of the bright ends of these counts, sampling the surface densities well down to m_UV(AB) = 21-22. Conclusions. We show that the zPDFs make it easy to select a clean sample of high-redshift galaxies. We also show that a statistical analysis of galaxy properties is better done with a probabilistic approach, which takes both incompleteness and contamination into account in a natural way.
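
The two uses of the zPDFs described above — cleaning via integrated probability, and statistical counts via summed PDFs — can be sketched with toy Gaussian zPDFs. The grid, widths, and the 0.8 threshold below are illustrative choices, not the survey's:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy zPDFs: each galaxy gets a normalised redshift PDF on a common grid.
z = np.linspace(0.0, 6.0, 601)
dz = z[1] - z[0]
n_gal = 200
centres = rng.uniform(0.5, 5.5, n_gal)
widths = rng.uniform(0.1, 0.5, n_gal)
pdfs = np.exp(-0.5 * ((z - centres[:, None]) / widths[:, None])**2)
pdfs /= pdfs.sum(axis=1, keepdims=True) * dz   # normalise each PDF

# Clean-sample selection: integrated probability in the bin above a threshold.
z_lo, z_hi = 2.25, 2.75
in_bin = (z >= z_lo) & (z < z_hi)
p_bin = pdfs[:, in_bin].sum(axis=1) * dz
clean = p_bin > 0.8

# Statistical counts: summing the zPDFs of all galaxies over the bin handles
# incompleteness and contamination probabilistically.
expected_count = p_bin.sum()
```

The hard cut gives a clean but incomplete sample; the summed-PDF count keeps the fractional contribution of every galaxy, which is the probabilistic accounting the abstract advocates.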

Parton distributions for the LHC Run II

We present NNPDF3.0, the first set of parton distribution functions (PDFs) determined with a methodology validated by a closure test. NNPDF3.0 uses a global dataset including HERA-II deep-inelastic inclusive cross-sections, the combined HERA charm data, jet production from ATLAS and CMS, vector boson rapidity and transverse momentum distributions from ATLAS, CMS and LHCb, W+c data from CMS and top quark pair production total cross sections from ATLAS and CMS. Results are based on LO, NLO and NNLO QCD theory and also include electroweak corrections. To validate our methodology, we show that PDFs determined from pseudo-data generated from a known underlying law correctly reproduce the statistical distributions expected on the basis of the assumed experimental uncertainties. This closure test ensures that our methodological uncertainties are negligible in comparison to the generic theoretical and experimental uncertainties of PDF determination. This enables us to determine with confidence PDFs at different perturbative orders and using a variety of experimental datasets ranging from HERA-only up to a global set including the latest LHC results, all using precisely the same validated methodology. We explore some of the phenomenological implications of our results for the upcoming 13 TeV Run of the LHC, in particular for Higgs production cross-sections.
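
The closure-test logic can be illustrated in miniature: generate pseudo-data from a known underlying law with the assumed experimental uncertainties, fit it many times, and check that the parameter pulls are unit normal. Everything here (a linear law, a single sigma, a polynomial fit) is a toy stand-in for the NNPDF fitting machinery:

```python
import numpy as np

rng = np.random.default_rng(2)

# Known underlying "law" and assumed experimental uncertainty (hypothetical).
true_slope, true_intercept = 2.0, 1.0
x = np.linspace(0.0, 1.0, 20)
sigma = 0.1

pulls = []
for _ in range(500):
    # Pseudo-data fluctuated according to the assumed uncertainties.
    y = true_slope * x + true_intercept + sigma * rng.standard_normal(x.size)
    # Weighted least-squares fit; with w = 1/sigma, cov="unscaled" returns
    # the exact parameter covariance.
    coeffs, cov = np.polyfit(x, y, 1, w=np.full(x.size, 1.0 / sigma),
                             cov="unscaled")
    pulls.append((coeffs[0] - true_slope) / np.sqrt(cov[0, 0]))
pulls = np.asarray(pulls)

# Closure: faithful uncertainties imply pulls with mean ~0 and std ~1.
```

If the pull distribution deviated from a standard normal, the methodological uncertainties would not be negligible — which is exactly what the closure test is designed to detect.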

The SAMI Galaxy Survey: Cubism and covariance, putting round pegs into square holes

We present a methodology for the regularisation and combination of sparsely sampled and irregularly gridded observations from fibre-optic multi-object integral-field spectroscopy. The approach minimises interpolation and retains image resolution when combining sub-pixel dithered data. We discuss the methodology in the context of the Sydney-AAO Multi-object Integral-field spectrograph (SAMI) Galaxy Survey underway at the Anglo-Australian Telescope. The SAMI instrument uses 13 fibre bundles to perform high-multiplex integral-field spectroscopy across a one-degree-diameter field of view. The SAMI Galaxy Survey is targeting 3000 galaxies drawn from the full range of galaxy environments. We demonstrate that the subcritical sampling of the seeing and the incomplete fill factor of the integral-field bundles result in only a 10% degradation in the recovered image resolution. We also implement a new methodology for tracking covariance between elements of the resulting datacubes, which retains 90% of the covariance information while incurring only a modest increase in the survey data volume.
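
The covariance-tracking idea rests on the fact that regridding fibre measurements onto output pixels is linear, so covariance propagates exactly. A numpy sketch with an illustrative one-dimensional geometry (the kernels and the sparsification threshold are toy choices, not SAMI's):

```python
import numpy as np

rng = np.random.default_rng(3)

# Regridding is a linear map, out = W @ in, so C_out = W C_in W^T exactly.
n_pix, n_fib = 12, 8
pix = np.linspace(0.0, 1.0, n_pix)
fib = rng.random(n_fib)
W = np.exp(-0.5 * ((pix[:, None] - fib[None, :]) / 0.1) ** 2)
W /= W.sum(axis=1, keepdims=True)        # drizzle-like normalised weights

var_in = rng.uniform(0.5, 2.0, n_fib)    # independent fibre variances
C_in = np.diag(var_in)
C_out = W @ C_in @ W.T                   # full output covariance

# Compact storage: drop covariances below a small fraction of the local
# variance scale, keeping most of the covariance information.
scale = np.sqrt(np.outer(np.diag(C_out), np.diag(C_out)))
C_sparse = np.where(np.abs(C_out) >= 0.05 * scale, C_out, 0.0)
retained = np.abs(C_sparse).sum() / np.abs(C_out).sum()
```

Thresholding on the correlation coefficient keeps the variances and the strong neighbour covariances while discarding the many near-zero long-range entries — the trade between retained information and data volume the abstract quantifies.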

General parity-odd CMB bispectrum estimation [Replacement]

We develop a methodology for estimating parity-odd bispectra in the cosmic microwave background (CMB). This is achieved through the extension of the original separable modal methodology to parity-odd bispectrum domains ($\ell_1 + \ell_2 + \ell_3 = {\rm odd}$). Through numerical tests of the parity-odd modal decomposition with some theoretical bispectrum templates, we verify that the parity-odd modal methodology can successfully reproduce the CMB bispectrum, without numerical instabilities. We also present simulated non-Gaussian maps produced by modal-decomposed parity-odd bispectra, and show the consistency with the exact results. Our new methodology is applicable to all types of parity-odd temperature and polarization bispectra.
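
The modal idea — expand in a separable orthogonal basis, then reconstruct — can be sketched in one dimension with an odd test function as a parity-odd analogue. The Legendre basis and grid below are illustrative, not the paper's actual mode functions:

```python
import numpy as np

# Odd "bispectrum-like" test signal on [-1, 1] (a parity-odd analogue).
x = np.linspace(-1.0, 1.0, 2001)
signal = x**3 - 0.5 * x

# Legendre polynomials are orthogonal on [-1, 1]; use them as the modes.
n_modes = 6
basis = np.array([np.polynomial.legendre.Legendre.basis(k)(x)
                  for k in range(n_modes)])

# Modal coefficients via the orthogonality relation (trapezoid weights).
dx = x[1] - x[0]
w_int = np.full(x.size, dx)
w_int[[0, -1]] = dx / 2.0
norms = (basis * basis * w_int).sum(axis=1)
coeffs = (basis * signal * w_int).sum(axis=1) / norms

# Reconstruction from the modal expansion; for an odd signal the even-mode
# coefficients vanish, mirroring the parity split of the bispectrum domains.
reconstruction = coeffs @ basis
```

Stable reconstruction of the input from its modal coefficients is the one-dimensional analogue of the numerical check the paper performs on full CMB bispectra.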

Asteroseismic Investigation of two Algol-type systems V1241 Tau and GQ Dra

We present new photometric observations of the eclipsing binary systems V1241 Tau and GQ Dra. Our methodology is as follows: first, the Wilson-Devinney (WD) code is applied to the light curves in order to determine the photometric elements of the systems; then the residuals are analysed using Fourier transform techniques. The results show that only one frequency can be tentatively attributed to the residual light variation of V1241 Tau, while there is no evidence of pulsation in the light curve of GQ Dra.

A Robust Determination of Milky Way Satellite Properties using Hierarchical Mass Modeling

We introduce a new methodology to robustly determine the mass profiles, as well as the overall distribution, of Local Group satellite galaxies. Specifically, we employ a multi-level statistical technique, Bayesian hierarchical modeling, to simultaneously constrain the properties of individual Milky Way satellite galaxies and the characteristics of the Milky Way satellite population. We show that this methodology reduces the uncertainty in individual dwarf galaxy mass measurements by up to a factor of a few for the faintest galaxies. We find that the distribution of Milky Way satellites inferred by this analysis, with the exception of the apparent lack of high-mass halos, is fully consistent with the Lambda-CDM paradigm. In particular, we find that both the measured relationship between the maximum circular velocity and the radius at this velocity and the inferred relationship between the mass within 300 pc and luminosity match the values predicted by Lambda-CDM simulations for halos with maximum circular velocities below 20 km/s. Perhaps more striking is that this analysis yields a cusped "average" halo shape shared by these galaxies. While this study reconciles many of the observed properties of the Milky Way satellite distribution with those of Lambda-CDM simulations, we find that there is still a deficit of satellites with maximum circular velocities of 20-40 km/s.
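
The uncertainty reduction comes from partial pooling: each galaxy's estimate is shrunk toward the population, most strongly for the noisiest measurements. A toy normal-normal version (fixed, hypothetical population parameters; the paper's model is far richer and infers the population too):

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy hierarchy: satellite "log-masses" mu_i drawn from a population
# N(mu_pop, tau^2), each observed with its own noise sigma_i.
mu_pop, tau = 9.0, 0.4                 # illustrative population parameters
n_sat = 30
truth = rng.normal(mu_pop, tau, n_sat)
sigma = rng.uniform(0.2, 1.0, n_sat)   # faintest galaxies: largest errors
obs = truth + sigma * rng.standard_normal(n_sat)

# Normal-normal conjugacy: the conditional posterior for each mu_i is a
# precision-weighted average of its measurement and the population mean.
w = (1.0 / sigma**2) / (1.0 / sigma**2 + 1.0 / tau**2)
post_mean = w * obs + (1.0 - w) * mu_pop
post_sd = np.sqrt(1.0 / (1.0 / sigma**2 + 1.0 / tau**2))
```

The posterior widths are always below both the individual sigma_i and the population scatter tau, with the largest gain for the noisiest objects — the same mechanism behind the "factor of a few" improvement for the faintest galaxies.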

Towards reconstruction of unlensed, intrinsic CMB power spectra from lensed map

We propose a method to extract the unlensed, intrinsic CMB temperature and polarization power spectra from the observed (i.e., lensed) spectra. Using a matrix inversion technique, we demonstrate how one can reconstruct the intrinsic CMB power spectra directly from lensed data for both flat-sky and full-sky analyses. The delensed spectra obtained by the technique are calibrated against the Code for Anisotropies in the Microwave Background (CAMB) using the WMAP 7-year best-fit data, and the method is also applied to the WMAP 9-year unbinned data. In principle, our methodology may help subtract the E-mode lensing contribution in order to obtain the intrinsic B-mode power, thereby resolving the degeneracy between CMB E and B polarizations once a more realistic situation is taken into account.
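
Schematically, the method treats lensing as a linear map from intrinsic to observed spectra and inverts it. A minimal sketch with a toy near-identity kernel standing in for the true lensing matrix (which would be computed with CAMB):

```python
import numpy as np

rng = np.random.default_rng(5)

# Lensing as a linear transform of the intrinsic spectrum:
# c_lensed = M @ c_unlensed, with M close to the identity (toy kernel).
n_ell = 40
M = np.eye(n_ell) + 0.02 * rng.random((n_ell, n_ell))
c_unlensed = 1000.0 / np.arange(2, n_ell + 2) ** 2   # toy falling spectrum
c_lensed = M @ c_unlensed

# "Delensing" is then a matrix inversion; solve() is used for stability
# rather than forming the explicit inverse.
c_recovered = np.linalg.solve(M, c_lensed)
```

In the noise-free toy the inversion is exact; with real data the conditioning of the kernel and the noise in the observed spectra set the accuracy of the reconstruction.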

The Swift/BAT Hard X-ray Transient Monitor

The Swift/Burst Alert Telescope (BAT) hard X-ray transient monitor provides near real-time coverage of the X-ray sky in the energy range 15-50 keV. The BAT observes 88% of the sky each day with a detection sensitivity of 5.3 mCrab for a full-day observation and a time resolution as fine as 64 seconds. The three main purposes of the monitor are (1) the discovery of new transient X-ray sources, (2) the detection of outbursts or other changes in the flux of known X-ray sources, and (3) the generation of light curves of more than 900 sources spanning over eight years. The primary interface for the BAT transient monitor is a public web page. Between 2005 February 12 and 2013 April 30, 245 sources have been detected in the monitor, 146 of them persistent and 99 detected only in outburst. Among these sources, 17 were previously unknown and were discovered in the transient monitor. In this paper, we discuss the methodology and the data processing and filtering for the BAT transient monitor and review its sensitivity and exposure. We provide a summary of the source detections and classify them according to the variability of their light curves. Finally, we review all new BAT monitor discoveries; for the new sources that are previously unpublished, we present basic data analysis and interpretations.

An empirical formula for the distribution function of a thin exponential disc

An empirical formula for a Shu distribution function that reproduces a thin disc with an exponential surface density to good accuracy is presented. The formula has two free parameters that specify the functional form of the velocity dispersion. Conventionally, constructing such a distribution function requires an iterative algorithm to produce the correct solution, which is computationally taxing for applications like Markov Chain Monte Carlo (MCMC) model fitting. The formula has been shown to work for flat, rising, and falling rotation curves. An application of this methodology to one of the Dehnen distribution functions is also shown. Finally, an extension of the formula that reproduces velocity-dispersion profiles that are an exponential function of radius is also presented. Our empirical formula should greatly aid the efficient comparison of disc models with large stellar surveys or N-body simulations.

Calculating the Habitable Zone of Binary Star Systems II: P-Type Binaries [Replacement]

We have developed a comprehensive methodology for calculating the circumbinary habitable zone (HZ) in planet-hosting P-type binary star systems. We present a general formalism for determining the contribution of each star of the binary to the total flux received at the top of the atmosphere of an Earth-like planet, and use the Sun's HZ to calculate the locations of the inner and outer boundaries of the HZ around a binary star system. We apply our calculations to the currently known Kepler circumbinary planetary systems and show the combined stellar flux that determines the boundaries of their HZs. We also show that the HZ in P-type systems is dynamic: depending on the luminosities of the binary stars, their spectral types, and the binary eccentricity, its boundaries vary as the stars of the binary undergo their orbital motion. We present the details of our calculations and discuss the implications of the results.
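
A minimal sketch of the flux argument, with placeholder luminosities and HZ flux limits (not the paper's calibrated, spectrally weighted values):

```python
import numpy as np

# Combined flux from both stars of a P-type (circumbinary) system; the HZ
# boundaries sit where the total flux matches solar-calibrated effective
# fluxes S_eff. All numbers here are illustrative.
L1, L2 = 1.0, 0.5                     # luminosities in solar units
S_eff_inner, S_eff_outer = 1.1, 0.36  # effective fluxes at the HZ edges

def total_flux(r1, r2):
    """Total stellar flux (solar units) at distances r1, r2 [AU] to each star."""
    return L1 / r1**2 + L2 / r2**2

# Far from a close binary, r1 ~ r2 ~ d, so each boundary sits where the
# combined flux equals the corresponding S_eff: d = sqrt((L1 + L2) / S_eff).
d_inner = np.sqrt((L1 + L2) / S_eff_inner)
d_outer = np.sqrt((L1 + L2) / S_eff_outer)

# As the stars orbit, r1 and r2 change, so the flux -- and hence the HZ
# boundaries -- breathe with the binary phase: the "dynamic HZ".
```

Evaluating `total_flux` along the binary orbit, with each star's flux weighted by its spectral type as in the paper, is what makes the boundaries time-dependent.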

Calculating the Habitable Zone of Binary Star Systems I: S-Type Binaries [Replacement]

We have developed a comprehensive methodology for calculating the boundaries of the habitable zone (HZ) of planet-hosting S-type binary star systems. Our approach is general and takes into account the contribution of both stars to the location and extent of the binary HZ for different stellar spectral types. We have studied how the binary eccentricity and the stellar energy distribution affect the extent of the habitable zone. Results indicate that in binaries where the combination of mass ratio and orbital eccentricity allows planet formation around a star of the system to proceed successfully, the effect of a less luminous secondary on the location of the primary's habitable zone is generally negligible. However, when the secondary is more luminous, it can influence the extent of the HZ. We present the details of the derivations of our methodology and discuss its application to the binary HZ around the primary and secondary main-sequence stars of an FF, MM, and FM binary, as well as to the two known planet-hosting binaries alpha Cen AB and HD 196885.
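
For the S-type case the planet orbits one star and the companion adds a spectrally weighted perturbation to the flux. The weighting function below is a hypothetical stand-in for the paper's SED-dependent treatment, and all numbers are illustrative:

```python
import numpy as np

# S-type sketch: planet orbits the primary; the secondary contributes a
# spectrally weighted flux (toy weighting, not the paper's calibration).
L_pri, L_sec = 1.0, 0.3      # luminosities in solar units (illustrative)
a_bin = 20.0                 # binary separation [AU]

def sed_weight(teff):
    # Toy weighting: cooler (redder) stars warm a planet more per unit flux.
    return 1.0 + 0.2 * (5780.0 - teff) / 5780.0

def weighted_flux(d, teff_sec=4000.0):
    """Weighted flux at distance d [AU] from the primary (secondary opposite)."""
    return L_pri / d**2 + sed_weight(teff_sec) * L_sec / (a_bin - d)**2

# Compare to the single-star HZ edge at d = sqrt(L_pri / S_eff): for a
# distant, less luminous secondary the shift is small, as concluded above.
S_eff = 1.0
d_single = np.sqrt(L_pri / S_eff)
correction = (weighted_flux(d_single) - S_eff) / S_eff
```

With these numbers the secondary perturbs the flux at the HZ edge by well under a percent; a more luminous or closer secondary makes `correction` non-negligible, which is the regime where the companion reshapes the HZ.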

The metallicity signature of evolved stars with planets

We determine in a homogeneous way the metallicities and individual abundances of a large sample of evolved stars, with and without known planetary companions. Our methodology is based on the analysis of high-resolution echelle spectra. The metallicity distributions show that giant stars hosting planets are not preferentially metal-rich, having abundance patterns similar to those of giant stars without known planetary companions. We have found, however, a very strong relation between the metallicity distribution and the stellar mass within this sample. We show that the less massive giant stars with planets (M < 1.5 Msun) are not metal-rich, but the metallicity of the sample of massive (M > 1.5 Msun), young (age < 2 Gyr) giant stars with planets is higher than that of a similar sample of stars without planets. Regarding other chemical elements, giant stars with and without planets in the mass domain M < 1.5 Msun show similar abundance patterns. However, planet and non-planet hosts with masses M > 1.5 Msun show differences in the abundances of some elements, especially Na, Co, and Ni. In addition, we find the sample of subgiant stars with planets to be metal-rich, with metallicities similar to those of main-sequence planet hosts. The fact that giant planet hosts in the mass domain M < 1.5 Msun do not show metal enrichment is difficult to explain. Given that these stars have stellar parameters similar to those of subgiant and main-sequence planet hosts, the lack of the metal-rich signature in low-mass giants could be explained if it originated from a pollution scenario on the main sequence that is erased as the star becomes fully convective. However, there is no physical reason why such erasure should operate in giants with masses M < 1.5 Msun but not in giants with M > 1.5 Msun.

The First Fermi LAT Gamma-Ray Burst Catalog

In three years of observations since the beginning of nominal science operations in August 2008, the Large Area Telescope (LAT) on board the Fermi Gamma-ray Space Telescope has observed high-energy (>20 MeV) gamma-ray emission from 35 gamma-ray bursts (GRBs). Among these, 28 GRBs have been detected above 100 MeV and 7 GRBs above ~20 MeV. The first Fermi-LAT catalog of GRBs is a compilation of these detections and provides a systematic study of high-energy emission from GRBs for the first time. To generate the catalog, we examined 733 GRBs detected by the Gamma-Ray Burst Monitor (GBM) on Fermi and processed each of them using the same analysis sequence. Details of the methodology followed by the LAT collaboration for GRB analysis are provided. We summarize the temporal and spectral properties of the LAT-detected GRBs. We also discuss characteristics of LAT-detected emission such as its delayed onset and longer duration compared to emission detected by the GBM, its power-law temporal decay at late times, and the fact that it is dominated by a power-law spectral component that appears in addition to the usual Band model.

Excursion Set Theory for Correlated Random Walks [Replacement]

We present a new method to compute the first crossing distribution in excursion set theory for the case of correlated random walks. We use a combination of the path integral formalism of Maggiore & Riotto, and the integral equation solution of Zhang & Hui, and Benson et al. to find a numerically robust and convenient algorithm to derive the first crossing distribution in terms of a perturbative expansion around the limit of an uncorrelated random walk. We apply this methodology to the specific case of a Gaussian random density field filtered with a Gaussian smoothing function. By comparing our solutions to results from Monte Carlo calculations of the first crossing distribution we demonstrate that our method is accurate for power spectra $P(k)\propto k^n$ for $n=1$, becoming less accurate for smaller values of $n$. It is therefore complementary to the method of Musso & Sheth, which will be more useful for standard $\Lambda$CDM power spectra. Our approach is quite general, and can be adapted to other smoothing functions, and also to non-Gaussian density fields.

Excursion Set Theory for Correlated Random Walks

We present a new method to compute the first crossing distribution in excursion set theory for the case of correlated random walks. We use a combination of the path integral formalism of Maggiore & Riotto, and the integral equation solution of Zhang & Hui, and Benson et al. to find a numerically robust and convenient algorithm to derive the first crossing distribution in terms of a perturbative expansion around the limit of an uncorrelated random walk. We apply this methodology to the specific case of a Gaussian random density field filtered with a Gaussian smoothing function. By comparing our solutions to results from Monte Carlo calculations of the first crossing distribution we demonstrate that our method is accurate for power spectra $P(k)\propto k^n$ for $n=1$, becoming less accurate for smaller values of $n$. It is therefore complementary to the method of Musso & Sheth, which will be more useful for standard $\Lambda$CDM power spectra. Our approach is quite general, and can be adapted to other smoothing functions, and also to non-Gaussian density fields.
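The uncorrelated (Markov) limit around which the expansion is built is easy to reproduce by direct Monte Carlo; a minimal sketch, assuming a constant barrier and independent Gaussian steps (all parameter values here are illustrative):

```python
import numpy as np
from math import erfc

rng = np.random.default_rng(42)

def first_crossing_mc(n_walks=5000, n_steps=1600, dS=0.0025, barrier=1.0):
    """Monte Carlo first-crossing 'times' (in smoothing variance S) for
    uncorrelated random walks against a constant barrier -- the Markov
    limit that the perturbative expansion starts from."""
    steps = rng.normal(0.0, np.sqrt(dS), size=(n_walks, n_steps))
    delta = np.cumsum(steps, axis=1)
    crossed = delta >= barrier
    # S at the first up-crossing; np.inf for walks that never cross
    return np.where(crossed.any(axis=1),
                    (crossed.argmax(axis=1) + 1) * dS, np.inf)

S_max = 1600 * 0.0025                    # walks are followed up to S = 4
S_first = first_crossing_mc()
frac = np.mean(np.isfinite(S_first))

# Continuum expectation for an uncorrelated walk: erfc(B / sqrt(2 S_max));
# the discrete sampling slightly underestimates it
print(f"crossed fraction: {frac:.3f}"
      f"  (continuum: {erfc(1.0 / np.sqrt(2 * S_max)):.3f})")
```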

Hemispherical power asymmetries in the WMAP 7-year low-resolution temperature and polarization maps [Replacement]

We test the hemispherical power asymmetry of the WMAP 7-year low-resolution temperature and polarization maps. We consider two natural estimators for such an asymmetry and exploit our implementation of an optimal angular power spectrum estimator for all the six CMB spectra. By scanning the whole sky through a sample of 24 directions, we search for asymmetries in the power spectra of the two hemispheres, comparing the results with Monte Carlo simulations drawn from the WMAP 7-year best-fit model. Our analysis extends previous results to the polarization sector. The level of asymmetry on the ILC temperature map is found to be compatible with previous results, whereas no significant asymmetry on the polarized spectra is detected. Moreover, we show that our results are only weakly affected by the a posteriori choice of the maximum multipole considered for the analysis. We also forecast the capability to detect dipole modulation by our methodology at Planck sensitivity.

Image Registration for Stability Testing of MEMS [Cross-Listing]

Image registration, or alignment of two or more images covering the same scenes or objects, is of great interest in many disciplines such as remote sensing, medical imaging, astronomy, and computer vision. In this paper, we introduce a new application of image registration algorithms. We demonstrate how through a wavelet based image registration algorithm, engineers can evaluate stability of Micro-Electro-Mechanical Systems (MEMS). In particular, we applied image registration algorithms to assess alignment stability of the MicroShutters Subsystem (MSS) of the Near Infrared Spectrograph (NIRSpec) instrument of the James Webb Space Telescope (JWST). This work introduces a new methodology for evaluating stability of MEMS devices to engineers as well as a new application of image registration algorithms to computer scientists.
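The paper's algorithm is wavelet-based; as a generic illustration of the registration core, a phase-correlation sketch recovers an integer-pixel shift between two frames (the frames and shift below are invented):

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer-pixel shift s such that b == np.roll(a, s),
    from the peak of the phase-correlation surface."""
    cross = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    cross /= np.abs(cross) + 1e-12            # keep only the phase
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap peak coordinates to signed shifts
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(0)
frame1 = rng.random((64, 64))                 # stand-in for an image of the MSS
frame2 = np.roll(frame1, shift=(3, -5), axis=(0, 1))
shift = phase_correlation_shift(frame1, frame2)
print(shift)
```

A stability test would then track how `shift` drifts across repeated exposures.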

Automated classification of Hipparcos unsolved variables

We present an automated classification of stars exhibiting periodic, non-periodic and irregular light variations. The Hipparcos catalogue of unsolved variables is employed to complement the training set of periodic variables of Dubath et al. with irregular and non-periodic representatives, leading to 3881 sources in total which describe 24 variability types. The attributes employed to characterize light-curve features are selected according to their relevance for classification. Classifier models are produced with random forests and a multistage methodology based on Bayesian networks, achieving overall misclassification rates under 12 per cent. Both classifiers are applied to predict variability types for 6051 Hipparcos variables associated with uncertain or missing types in the literature.
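As an illustration of the attribute-extraction step (the paper's actual classifiers are random forests and Bayesian networks over many attributes), a single crude "periodicity" feature already separates a periodic from an irregular toy light curve; the feature and threshold below are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

def peak_power_fraction(flux):
    """Fraction of the non-DC spectral power held by the single strongest
    frequency bin -- a crude 'periodicity' attribute of a light curve."""
    power = np.abs(np.fft.rfft(flux - flux.mean())[1:]) ** 2
    return power.max() / power.sum()

t = np.arange(500.0)
periodic = np.sin(2 * np.pi * t / 25.0) + 0.3 * rng.normal(size=t.size)
irregular = rng.normal(size=t.size)              # aperiodic scatter only

feat_p = peak_power_fraction(periodic)
feat_i = peak_power_fraction(irregular)
for name, feat in [("sinusoid", feat_p), ("noise", feat_i)]:
    verdict = "periodic" if feat > 0.2 else "not periodic"
    print(f"{name}: peak fraction {feat:.2f} -> {verdict}")
```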

Towards efficient and optimal analysis of CMB anisotropies on a masked sky

Strong foreground contamination in high resolution CMB data requires masking which introduces statistical anisotropies and renders a full maximum likelihood analysis numerically intractable. Standard analysis methods like the pseudo-C_l framework lead to information loss due to estimator suboptimalities. We set out and validate a methodology for numerically efficient estimators for a masked sky that recover nearly as much information as a full maximum likelihood procedure. In addition to the standard pseudo-C_l observables, the approach introduces an augmented basis designed to account for the mode coupling due to the masking of the sky. We motivate the choice of this basis by describing the basic structure of the covariance matrix. We demonstrate that the augmented estimator can achieve near-optimal results in the presence of a WMAP-realistic mask.
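The bias that pseudo-C_l methods must correct can be seen in a one-dimensional toy analog (an illustration of the masking problem, not the augmented estimator of the paper): masking a white map suppresses the naive spectrum by roughly the unmasked fraction f_sky:

```python
import numpy as np

rng = np.random.default_rng(2)
n, trials = 2048, 200

mask = np.ones(n)
mask[: n // 4] = 0.0            # 'galactic cut': a quarter of the sky lost
f_sky = mask.mean()

full_power, masked_power = [], []
for _ in range(trials):
    x = rng.normal(size=n)      # white map: flat underlying spectrum
    full_power.append(np.mean(np.abs(np.fft.rfft(x)) ** 2))
    masked_power.append(np.mean(np.abs(np.fft.rfft(x * mask)) ** 2))

ratio = np.mean(masked_power) / np.mean(full_power)
print(f"pseudo-spectrum suppression: {ratio:.3f} (f_sky = {f_sky})")
```

Beyond this overall suppression, the mask also couples neighbouring modes, which is what the augmented basis is designed to capture.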

NICHE: The Non-Imaging CHErenkov Array

The accurate measurement of the Cosmic Ray (CR) nuclear composition around and above the Knee (~ 10^15.5 eV) has been difficult due to uncertainties inherent to the measurement techniques and/or dependence on hadronic Monte Carlo simulation models required to interpret the data. Measurement of the Cherenkov air shower signal, calibrated with air fluorescence measurements, offers a methodology to provide an accurate measurement of the nuclear composition evolution over a large energy range. NICHE will use an array of widely-spaced, non-imaging Cherenkov counters to measure the amplitude and time-spread of the air shower Cherenkov signal to extract CR nuclear composition measurements and to cross-calibrate the Cherenkov energy and composition measurements with TA/TALE fluorescence and surface detector measurements.

Analysis of 3 years of data from the gravitational wave detectors EXPLORER and NAUTILUS [Cross-Listing]

We performed a search for short gravitational wave bursts using about 3 years of data of the resonant bar detectors Nautilus and Explorer. Two types of analysis were performed: a search for coincidences with a low background of accidentals (0.1 over the entire period), and the calculation of upper limits on the rate of gravitational wave bursts. Here we give a detailed account of the methodology and we report the results: a null search for coincident events and an upper limit that improves over all previous limits from resonant antennas, and is competitive, in the range h_rss ~ 10^{-19}, with limits from interferometric detectors. Some new methodological features are introduced that have proven successful in the upper limits evaluation.

Analysis of 3 years of data from the gravitational wave detectors EXPLORER and NAUTILUS [Replacement]

We performed a search for short gravitational wave bursts using about 3 years of data of the resonant bar detectors Nautilus and Explorer. Two types of analysis were performed: a search for coincidences with a low background of accidentals (0.1 over the entire period), and the calculation of upper limits on the rate of gravitational wave bursts. Here we give a detailed account of the methodology and we report the results: a null search for coincident events and an upper limit that improves over all previous limits from resonant antennas, and is competitive, in the range h_rss ~1E-19, with limits from interferometric detectors. Some new methodological features are introduced that have proven successful in the upper limits evaluation.
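A zero-coincidence Poisson upper limit of the kind that underlies such searches can be sketched as follows (classical construction with an assumed background and unit efficiency; the paper's actual procedure is more involved):

```python
import math

def poisson_zero_event_ul(background, cl=0.90):
    """Classical upper limit on the signal mean when 0 events are observed:
    solve exp(-(s + b)) = 1 - CL for s."""
    return max(0.0, -math.log(1.0 - cl) - background)

s_up = poisson_zero_event_ul(background=0.1)   # 0.1 accidentals, as quoted
live_time_yr = 3.0                             # ~3 years of data
rate_ul = s_up / live_time_yr                  # events / year, unit efficiency
print(f"s_up = {s_up:.2f} events  ->  rate < {rate_ul:.2f} per year")
```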

Low Mach Number Modeling of Convection in Helium Shells on Sub-Chandrasekhar White Dwarfs. I. Methodology

We assess the robustness of a low Mach number hydrodynamics algorithm for modeling helium shell convection on the surface of a white dwarf in the context of the sub-Chandrasekhar model for Type Ia supernovae. We use the low Mach number stellar hydrodynamics code, MAESTRO, to perform three-dimensional, spatially-adaptive simulations of convection leading up to the point of the ignition of a burning front. We show that the low Mach number hydrodynamics model provides a robust description of the system.

The Mass Function of Primordial Rogue Planet MACHOs in quasar nanolensing

The recent Sumi et al. (2010, 2011) detection of free-roaming planet-mass MACHOs in cosmologically significant numbers recalls their original detection in quasar microlensing studies (Schild 1996, Colley and Schild 2003). We consider the microlensing signature of such a population, and find that the nano-lensing (microlensing) would be well characterized by a statistical microlensing theory published previously by Refsdal and Stabell (1991). Comparison of the observed First Lens microlensing amplitudes with the theoretical prediction gives close agreement and a methodology for determining the slope of the mass function describing the population. Our provisional estimate of the power-law exponent in an exponential approximation to this distribution is $2.98^{+1.0}_{-0.5}$, where a Salpeter slope is 2.35.
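Given a sample of lens masses, the exponent of a pure power-law mass function can be recovered with the standard maximum-likelihood estimator; a self-contained sketch on synthetic masses (parameter values are illustrative, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_power_law(alpha, m_min, size):
    """Draw masses from dN/dm ∝ m^-alpha above m_min (inverse-CDF sampling)."""
    u = rng.random(size)
    return m_min * (1.0 - u) ** (-1.0 / (alpha - 1.0))

def mle_slope(m, m_min):
    """Maximum-likelihood exponent for a pure power law."""
    return 1.0 + m.size / np.log(m / m_min).sum()

masses = sample_power_law(alpha=2.35, m_min=0.01, size=50000)  # Salpeter-like
slope = mle_slope(masses, 0.01)
print(f"recovered slope: {slope:.2f}")
```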

Subhaloes gone Notts: Spin across subhaloes and finders

We present a comparative study of the spin distributions of subhaloes associated with a host halo. The subhaloes are found within two cosmological simulation families of Milky Way-like galaxies, namely the Aquarius and GHALO simulations. These two simulations use different gravity codes and cosmologies. We employ ten different substructure finders, which span a wide range of methodologies from simple overdensity in configuration space to full 6-d phase-space analysis of particles. We subject the results to a common post-processing pipeline to analyse the results in a consistent manner, recovering the dimensionless spin parameter. We find that the spin distribution is an excellent indicator of how well the removal of background particles (unbinding) has been carried out. We also find that the spin parameter of substructures decreases the nearer they are to the host halo’s centre, and that the value of the spin parameter rises with enclosed mass towards the edge of the substructure. Finally, subhaloes are less rotationally supported than field haloes, with the peak of the spin distribution having a lower spin parameter.
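A common choice for the dimensionless spin parameter recovered by such pipelines is the Bullock et al. form lambda' = J / (sqrt(2) M V R); a toy computation for a maximally coherent particle set (G = 1 units, configuration invented for illustration, not any finder's actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(4)

def bullock_spin(pos, vel, mass, radius, G=1.0):
    """Dimensionless spin lambda' = J / (sqrt(2) M V R) (Bullock et al. form),
    with J the total angular momentum inside `radius` and V the circular
    velocity at `radius`."""
    J = np.linalg.norm(np.sum(mass[:, None] * np.cross(pos, vel), axis=0))
    M = mass.sum()
    V = np.sqrt(G * M / radius)
    return J / (np.sqrt(2.0) * M * V * radius)

# Toy 'subhalo': equal-mass particles on circular orbits in a thin disc
n = 1000
r = rng.random(n) ** 0.5           # uniform surface density on a unit disc
phi = rng.random(n) * 2 * np.pi
pos = np.stack([r * np.cos(phi), r * np.sin(phi), np.zeros(n)], axis=1)
vel = 0.3 * np.stack([-np.sin(phi), np.cos(phi), np.zeros(n)], axis=1)
mass = np.full(n, 1.0 / n)

spin = bullock_spin(pos, vel, mass, radius=1.0)
print(f"spin parameter: {spin:.3f}")
```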

A Tale Of 160 Scientists, Three Applications, A Workshop and A Cloud

The NASA Exoplanet Science Institute (NExScI) hosts the annual Sagan Workshops, thematic meetings aimed at introducing researchers to the latest tools and methodologies in exoplanet research. The theme of the Summer 2012 workshop, held from July 23 to July 27 at Caltech, was to explore the use of exoplanet light curves to study planetary system architectures and atmospheres. A major part of the workshop was to use hands-on sessions to instruct attendees in the use of three open source tools for the analysis of light curves, especially from the Kepler mission. Each hands-on session involved the 160 attendees using their laptops to follow step-by-step tutorials given by experts. We describe how we used the Amazon Elastic Compute Cloud (EC2) to run these applications.

Satellite Characterization of four candidate sites for the Cherenkov Telescope Array telescope

In this paper we have evaluated the amount of available telescope time at four sites that are candidates to host the future Cherenkov Telescope Array (CTA). We use the GOES 12 data for the years 2008 and 2009. We use a homogeneous methodology presented in several previous papers to classify the nights as clear (completely cloud-free), mixed (partially cloud-covered), and covered. Additionally, for the clear nights, we have evaluated the amount of satellite stable nights, which corresponds to the amount of ground-based photometric nights, and the clear nights corresponding to the spectroscopic nights. We have applied this model to two sites in the Northern Hemisphere (San Pedro Martir (SPM), Mexico; Izana, Canary Islands) and to two sites in the Southern Hemisphere (El Leoncito, Argentina; San Antonio de Los Cobres (SAC), Argentina). We have obtained, from the two years considered, a mean amount of cloud-free nights of 68.6% at Izana, 76.0% at SPM, 70.6% at Leoncito and 70.0% at SAC. We have evaluated, among the cloud-free nights, an amount of stable nights of 62.6% at Izana, 69.6% at SPM, 64.9% at Leoncito, and 59.7% at SAC.

Satellite Characterization of four candidate sites for the Cherenkov Telescope Array telescope [Replacement]

In this paper we have evaluated the amount of available telescope time at four sites that are candidates to host the future Cherenkov Telescope Array (CTA). We use the GOES 12 data for the years 2008 and 2009. We use a homogeneous methodology presented in several previous papers to classify the nights as clear (completely cloud-free), mixed (partially cloud-covered), and covered. Additionally, for the clear nights, we have evaluated the amount of satellite stable nights, which corresponds to the amount of ground-based photometric nights, and the clear nights corresponding to the spectroscopic nights. We have applied this model to two sites in the Northern Hemisphere (San Pedro Martir (SPM), Mexico; Izana, Canary Islands) and to two sites in the Southern Hemisphere (El Leoncito, Argentina; San Antonio de Los Cobres (SAC), Argentina). We have obtained, from the two years considered, a mean amount of cloud-free nights of 68.6% at Izana, 76.0% at SPM, 70.6% at Leoncito and 70.0% at SAC. We have evaluated, among the cloud-free nights, an amount of stable nights of 62.6% at Izana, 69.6% at SPM, 64.9% at Leoncito, and 59.7% at SAC.

Satellite characterization of four interesting sites for astronomical instrumentation [Replacement]

In this paper we have evaluated the amount of available telescope time at four interesting sites for astronomical instrumentation. We use the GOES 12 data for the years 2008 and 2009. We use a homogeneous methodology presented in several previous papers to classify the nights as clear (completely cloud-free), mixed (partially cloud-covered), and covered. Additionally, for the clear nights, we have evaluated the amount of satellite stable nights, which corresponds to the amount of ground-based photometric nights, and the clear nights corresponding to the spectroscopic nights. We have applied this model to two sites in the Northern Hemisphere (San Pedro Martir (SPM), Mexico; Izana, Canary Islands) and to two sites in the Southern Hemisphere (El Leoncito, Argentina; San Antonio de Los Cobres (SAC), Argentina). We have obtained, from the two years considered, a mean amount of cloud-free nights of 68.6% at Izana, 76.0% at SPM, 70.6% at Leoncito and 70.0% at SAC. We have evaluated, among the cloud-free nights, an amount of stable nights of 62.6% at Izana, 69.6% at SPM, 64.9% at Leoncito, and 59.7% at SAC.

Signature of Differential Rotation in Sun-as-a-Star Ca II K Measurements

The characterization of solar surface differential rotation (SDR) from disk-integrated chromospheric measurements has important implications for the study of differential rotation and dynamo processes in other stars. Some chromospheric lines, such as Ca II K, are very sensitive to the presence of activity on the disk and are an ideal choice for investigating SDR in Sun-as-a-star observations. Past studies indicate that when the activity is low, the determination of the Sun’s differential rotation from integrated-sunlight measurements becomes uncertain. However, our study shows that, using the proper technique, SDR can be detected from this type of measurement even during periods of extended solar minima. This paper describes results from the analysis of the temporal variations of Ca II K line profiles observed by the Integrated Sunlight Spectrometer (ISS) during the declining phase of Cycle 23 and the rising phase of Cycle 24, and discusses the signature of SDR in the power spectra computed from time series of parameters derived from these profiles. The described methodology is quite general, and could be applied to photometric time series of other main-sequence stars for detecting differential rotation.
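The basic signal extraction can be sketched with a plain periodogram on a synthetic Sun-as-a-star series (a toy with an assumed 27-day rotational modulation, not ISS data):

```python
import numpy as np

rng = np.random.default_rng(5)

# Daily disk-integrated index: rotational modulation from an active region
# carried across the disc at an assumed 27-day synodic period, plus noise.
days = np.arange(540.0)                      # ~20 rotations of daily sampling
p_rot = 27.0
series = 1.0 + 0.05 * np.cos(2 * np.pi * days / p_rot) \
             + 0.02 * rng.normal(size=days.size)

power = np.abs(np.fft.rfft(series - series.mean())) ** 2
freqs = np.fft.rfftfreq(days.size, d=1.0)    # cycles per day
best_period = 1.0 / freqs[1:][np.argmax(power[1:])]
print(f"recovered period: {best_period:.1f} d")
```

Splitting such series by activity latitude (or, for stars, by epoch) and comparing the recovered periods is what reveals differential rotation.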

Simulated histories of reionization with merger tree of HII regions

We describe a new methodology to analyze the reionization process in numerical simulations: the evolution of reionization is investigated by focusing on the merger histories of individual HII regions. From the merger tree of ionized patches, one can track the individual evolution of the regions, such as, e.g., their size, or investigate the properties of the percolation process by looking at the formation rate, the frequency of mergers and the number of individual HII regions involved in the mergers. By applying this technique to cosmological simulations with radiative transfer, we show how this methodology is a good candidate to quantify the impact of the adopted star formation model on the history of reionization. As an application, we show how different source models result in different evolution and geometry of reionization even though they produce, e.g., similar ionized fractions or optical depths.
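The linking step behind such a merger tree can be sketched by connecting ionized regions of consecutive snapshots through cell overlap (schematic: the regions here are hand-made sets of cell indices rather than simulation output):

```python
from collections import defaultdict

def link_regions(regions_prev, regions_next):
    """Build parent links between ionized regions of consecutive snapshots.

    Each region is a set of ionized cell indices; a region at the later
    snapshot inherits every earlier region it overlaps, so more than one
    parent marks a merger of HII regions.
    """
    parents = defaultdict(set)
    for name_n, cells_n in regions_next.items():
        for name_p, cells_p in regions_prev.items():
            if cells_n & cells_p:
                parents[name_n].add(name_p)
    return dict(parents)

# Two small ionized bubbles at snapshot 1 that percolate into one by snapshot 2
snap1 = {"A": {1, 2, 3}, "B": {10, 11}}
snap2 = {"C": {1, 2, 3, 4, 10, 11, 12}}
tree = link_regions(snap1, snap2)
mergers = [region for region, p in tree.items() if len(p) > 1]
print(tree, mergers)
```

Repeating this over all snapshot pairs yields the merger tree from which formation rates and merger frequencies are read off.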

The Shortest Known Period Star Orbiting our Galaxy's Supermassive Black Hole

Stars with short orbital periods at the center of our galaxy offer a powerful and unique probe of a supermassive black hole. Over the past 17 years, the W. M. Keck Observatory has been used to image the Galactic center at the highest angular resolution possible today. By adding to this data set and advancing methodologies, we have detected S0-102, a star orbiting our galaxy’s supermassive black hole with a period of just 11.5 years. S0-102 doubles the number of stars with full phase coverage and periods less than 20 years. It thereby provides the opportunity with future measurements to resolve degeneracies in the parameters describing the central gravitational potential and to test Einstein’s theory of General Relativity in an unexplored regime.
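For orientation, Kepler's third law fixes the size of such an orbit once a black-hole mass is assumed (the mass below is an illustrative value, not a measurement from the paper):

```python
# Kepler's third law in solar units: a^3 [AU^3] = M [Msun] * P^2 [yr^2]
def semi_major_axis_au(period_yr, mass_msun):
    return (mass_msun * period_yr**2) ** (1.0 / 3.0)

M_BH = 4.1e6                      # assumed black-hole mass in Msun
a = semi_major_axis_au(11.5, M_BH)
print(f"11.5-yr orbit around a {M_BH:.1e} Msun black hole: a ~ {a:.0f} AU")
```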

Using Machine Learning for Discovery in Synoptic Survey Imaging

Modern time-domain surveys continuously monitor large swaths of the sky to look for astronomical variability. Astrophysical discovery in such data sets is complicated by the fact that detections of real transient and variable sources are highly outnumbered by bogus detections caused by imperfect subtractions, atmospheric effects and detector artefacts. In this work we present a machine learning (ML) framework for discovery of variability in time-domain imaging surveys. Our ML methods provide probabilistic statements, in near real time, about the degree to which each newly observed source is an astrophysically relevant source of variable brightness. We provide details about each of the analysis steps involved, including compilation of the training and testing sets, construction of descriptive image-based and contextual features, and optimization of the feature subset and model tuning parameters. Using a validation set of nearly 30,000 objects from the Palomar Transient Factory, we demonstrate a missed detection rate of at most 7.7% at our chosen false-positive rate of 1% for an optimized ML classifier of 23 features, selected to avoid feature correlation and over-fitting from an initial library of 42 attributes. Importantly, we show that our classification methodology is insensitive to mis-labelled training data up to a contamination of nearly 10%, making it easier to compile sufficient training sets for accurate performance in future surveys. This ML framework, if so adopted, should enable the maximization of scientific gain from future synoptic surveys and fast follow-up decisions on the vast amounts of streaming data produced by such experiments.
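The "missed detections at a fixed 1% false-positive rate" figure of merit can be sketched on synthetic classifier scores (the score distributions below are invented, not PTF results):

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy classifier scores: 'bogus' detections score low, 'real' sources high
bogus = rng.normal(0.30, 0.15, size=20000).clip(0, 1)
real = rng.normal(0.85, 0.12, size=2000).clip(0, 1)

# Threshold chosen so only 1% of bogus detections pass (false-positive rate)
threshold = np.quantile(bogus, 0.99)
missed = np.mean(real < threshold)       # missed-detection rate at that FPR
print(f"threshold = {threshold:.3f}, missed detection rate = {missed:.1%}")
```

Sweeping the target false-positive rate traces out the full ROC curve on which such classifiers are compared.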

Active optics: deformable mirrors with a minimum number of actuators

We present two concepts of deformable mirror to compensate for first order optical aberrations. Deformation systems are designed using both elasticity theory and Finite Element Analysis in order to minimize the number of actuators. Starting from instrument specifications, we explain the methodology to design dedicated deformable mirrors. The work presented here leads to correcting devices optimized for specific functions. The Variable Off-Axis paraboLA concept is a 3-actuator, 3-mode system able to generate Focus, Astigmatism and Coma independently. The Correcting Optimized Mirror with a Single Actuator is a 1-actuator system able to generate a given combination of optical aberrations.
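The three modes named for the first concept are the familiar low-order aberrations; a sketch of the surface a 3-mode combination spans, using standard Zernike-like polar forms on a unit pupil (amplitudes are illustrative, and this is not the paper's finite-element model):

```python
import numpy as np

# Unit pupil on a grid
n = 128
y, x = np.mgrid[-1:1:1j * n, -1:1:1j * n]
rho, theta = np.hypot(x, y), np.arctan2(y, x)
pupil = rho <= 1.0

# Low-order aberration modes (Zernike-like polar forms, unnormalized)
focus = 2 * rho**2 - 1
astig = rho**2 * np.cos(2 * theta)
coma = (3 * rho**3 - 2 * rho) * np.cos(theta)

# A 3-actuator system spans any combination a*focus + b*astig + c*coma
surface = (0.5 * focus + 0.2 * astig - 0.1 * coma) * pupil
print(f"peak-to-valley: {surface.max() - surface.min():.3f}")
```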
