Posts Tagged probability

Recent Postings from probability

Temperature dependence of the probability of "small heating" and spectrum of UCNs up-scattered on the surface of Fomblin oil Y-HVAC 18/8 [Cross-Listing]

We performed precision measurements of the probability of small heating and the spectrum of UCNs up-scattered on the surface of the hydrogen-free oil Fomblin Y-HVAC 18/8 as a function of temperature. The probability is well reproducible, does not depend on sample thickness, and does not evolve in time; it equals $(9.8\pm0.2)\times10^{-6}$ at ambient temperature. The spectrum coincides with those measured with solid-surface and nanoparticle samples. Indirect arguments indicate that the spectrum shape depends only weakly on temperature. The measured data can be satisfactorily described both within the model of near-surface nanodroplets and within the model of capillary waves.

Probability of coincidental similarity among the orbits of small bodies - I. Pairing

The probability of coincidental clustering among the orbits of comets, asteroids and meteoroids depends on many factors, such as the size of the orbital sample searched for clusters and the size of the identified group; it is different for groups of 2, 3, 4, ... members. Because the probability of coincidental clustering is assessed by numerical simulation, it also depends on the method used to generate the synthetic orbits. We tested the impact of some of these factors. For a given size of the orbital sample, we assessed the probability of random pairing among several orbital populations of different sizes, and we found how these probabilities vary with sample size. Finally, keeping the size of the orbital sample fixed, we showed that the probability of random pairing can differ significantly between orbital samples obtained by different observation techniques. For the user's convenience, we also obtained several formulae which, for a given size of the orbital sample, can be used to calculate the similarity threshold corresponding to a small probability of coincidental similarity between two orbits.
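
The pairing statistics described above can be illustrated with a toy Monte Carlo: draw synthetic orbital elements from assumed distributions, evaluate a (here deliberately simplified) Southworth-Hawkins-style distance for every pair, and count how often at least one pair falls below a similarity threshold. The element distributions, the truncated D-function and the threshold are illustrative assumptions, not the paper's actual population model.

```python
import numpy as np

rng = np.random.default_rng(42)

def min_pair_distance(orb):
    """Smallest pairwise orbit distance in a sample.
    Truncated Southworth-Hawkins-style metric over (q, e, i);
    node/perihelion terms are omitted for brevity."""
    q, e, i = orb[:, 0], orb[:, 1], orb[:, 2]
    D = np.sqrt((q[:, None] - q[None, :])**2
                + (e[:, None] - e[None, :])**2
                + (2 * np.sin((i[:, None] - i[None, :]) / 2))**2)
    iu = np.triu_indices(len(q), k=1)
    return D[iu].min()

def synthetic_orbits(n):
    # Placeholder element distributions standing in for a realistic model.
    q = rng.uniform(0.1, 1.0, n)            # perihelion distance [AU]
    e = rng.uniform(0.3, 1.0, n)            # eccentricity
    i = np.radians(rng.uniform(0, 40, n))   # inclination [rad]
    return np.stack([q, e, i], axis=1)

def prob_random_pair(n_orbits, threshold, trials=1000):
    """P(at least one pair with D < threshold) in a random sample."""
    hits = sum(min_pair_distance(synthetic_orbits(n_orbits)) < threshold
               for _ in range(trials))
    return hits / trials

for n in (50, 100, 200):
    print(n, prob_random_pair(n, threshold=0.05))
```

As in the paper, the estimated probability of coincidental pairing grows with sample size, and one can invert prob_random_pair numerically to find the similarity threshold corresponding to a chosen small probability.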

Derivation of Capture Probabilities for the Corotation Eccentric Mean Motion Resonances

We study in this paper the capture of a massless particle into an isolated, first-order Corotation Eccentric Resonance (CER), in the framework of the Planar, Eccentric and Restricted Three-Body problem near an m+1:m mean motion commensurability (m integer). While capture into Lindblad Eccentric Resonances (where the perturber's orbit is circular) was investigated years ago, capture into CERs (where the perturber's orbit is elliptic) has not yet been investigated in detail. Here, we derive the generic equations of motion near a CER in the general case where both the perturber and the test particle migrate. We derive the probability of capture in that context and examine two particular cases more closely: (i) if only the perturber is migrating, capture is possible only if the migration is outward from the primary. Notably, the probability of capture is independent of the way the perturber migrates outward; (ii) if only the test particle is migrating, then capture is possible only if the algebraic value of its migration rate is a decreasing function of orbital radius. In this case, the probability of capture is proportional to the radial gradient of migration. These results differ from capture into a Lindblad Eccentric Resonance (LER), where the orbits of the perturber and the test particle must converge for capture to be possible. Possible applications to planetary satellites are discussed.

How Often Do Diquarks Form? A Very Simple Model

Starting from a textbook result, the nearest-neighbor distribution of particles in an ideal gas, we develop estimates for the probability with which quarks $q$ in a mixed $q$, $\bar q$ gas are more strongly attracted to the nearest $q$, potentially forming a diquark, than to the nearest $\bar q$. Generic probabilities lie in the range of tens of percent, with values in the several percent range even under extreme assumptions favoring $q\bar q$ over $qq$ attraction.
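
The quoted estimate can be reproduced at toy level. In an ideal gas the cube of the nearest-neighbour distance is exponentially distributed, so one can sample distances to the nearest $q$ and nearest $\bar q$ and compare Coulomb-like attractions. The only physics input here is the one-gluon-exchange colour factor, by which the $qq$ anti-triplet attraction is half the $\bar qq$ singlet one; everything else is a simplifying assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 1_000_000

# Nearest-neighbour distance r in an ideal gas of density n obeys
# P(r > R) = exp(-(4/3) pi n R^3), i.e. r^3 is exponentially distributed.
# Work in units where (4/3) pi n = 1 for both the q and qbar populations.
r3_q    = rng.exponential(1.0, n_samples)   # cube of distance to nearest q
r3_qbar = rng.exponential(1.0, n_samples)   # cube of distance to nearest qbar

# Assume Coulomb-like attraction V ~ -kappa/r, with the qq (anti-triplet)
# colour factor half the qbar-q (singlet) one: kappa_qq / kappa_qqbar = 1/2.
k = 0.5
stronger_to_q = (k / np.cbrt(r3_q)) > (1.0 / np.cbrt(r3_qbar))
print("P(nearest q binds more strongly):", stronger_to_q.mean())
# Analytic check for this toy model: k^3 / (1 + k^3) = 1/9 ~ 0.111
```

The toy answer of roughly 11% sits within the several-to-tens of percent range quoted above.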

Decoupling of the re-parametrization degree of freedom and a generalized probability in quantum cosmology

The high degree of symmetry renders the dynamics of cosmological, as well as some black hole, spacetimes describable by a system of finitely many degrees of freedom. These systems are generally known as minisuperspace models. One of their key features is the invariance of the corresponding reduced actions under reparametrizations of the independent variable, a fact that can be seen as the remnant of the general covariance of the full theory. In the case of a system of $n$ degrees of freedom, described by a Lagrangian quadratic in velocities, one can treat the lapse either by gauge fixing it or by letting it be defined by the constraint and then substituting it into the rest of the equations. In the first case, the system is solvable for $n$ accelerations and the constraint becomes a restriction among constants. In the second case, the system can only be solved for $n-1$ accelerations and the "gauge" freedom is transferred to the choice of one of the scalar degrees of freedom. In this paper, we take the second path and express all $n-1$ scalar degrees of freedom in terms of the remaining one, say $q$. By considering these $n-1$ degrees of freedom as arbitrary but given functions of $q$, we manage to extract a two-dimensional pure gauge system consisting of the lapse $N$ and the arbitrary $q$: in a way, we decouple the reparametrization invariance from the rest of the equations of motion. The solution of the corresponding quantum two-dimensional system is used for the definition of a generalized probability for every configuration $f^i (q)$, be it classical or not. The main result is that, interestingly enough, this probability attains its extrema on the classical solution of the initial $n$-dimensional system.

Searching for intermediate-mass black holes in globular clusters with gravitational microlensing

We discuss the potential of the gravitational microlensing method as a unique tool to detect unambiguous signals caused by intermediate-mass black holes (IMBHs) in globular clusters. We select clusters near the line of sight to the Galactic Bulge and the Small Magellanic Cloud, estimate the density of background stars for each of them, and carry out simulations in order to estimate the probabilities of detecting the astrometric signatures caused by black hole lensing. We find that for several clusters, the probability of detecting such an event is significant with available archival data from the Hubble Space Telescope. Specifically, we find that M 22 is the cluster with the best chances of yielding an IMBH detection via astrometric microlensing. If M 22 hosts an IMBH of mass $10^5M_\odot$, then the probability that at least one star will yield a detectable signal over an observational baseline of 20 years is $\sim 86\%$, while the probability of a null result is around $14\%$. For an IMBH of mass $10^6M_\odot$, the detection probability rises to $>99\%$. Future observing facilities will also extend the available time baseline, improving the chance of detections for the clusters we consider.
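
For orientation, the size of the astrometric signal can be estimated from the angular Einstein radius of the lens. The sketch below computes $\theta_E = \sqrt{(4GM/c^2)\, D_{LS}/(D_L D_S)}$ and the asymptotic astrometric shift $\delta\theta \approx \theta_E^2/\Delta\theta$ for a background star at angular separation $\Delta\theta$; the IMBH mass and the distances are illustrative values only, not the paper's cluster-by-cluster inputs.

```python
import numpy as np
from astropy import units as u
from astropy.constants import G, c

def einstein_radius(M, D_L, D_S):
    """Angular Einstein radius of a lens of mass M at distance D_L,
    for a source at distance D_S (> D_L)."""
    theta2 = (4 * G * M / c**2) * (D_S - D_L) / (D_L * D_S)
    return (np.sqrt(theta2.decompose().value) * u.rad).to(u.mas)

# Illustrative numbers: a 1e5 M_sun IMBH in a cluster at 3 kpc,
# lensing a bulge star at 8 kpc.
theta_E = einstein_radius(1e5 * u.Msun, 3 * u.kpc, 8 * u.kpc)
print("theta_E =", theta_E)

# Astrometric shift of a star 10 arcsec away (far-field approximation).
sep = (10 * u.arcsec).to(u.mas)
print("shift at 10 arcsec:", (theta_E**2 / sep).to(u.mas))
```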

Robo-AO Kepler Planetary Candidate Survey II: Adaptive Optics Imaging of 969 Kepler Exoplanet Candidate Host Stars

We initiated the Robo-AO Kepler Planetary Candidate Survey in 2012 to observe each Kepler exoplanet candidate host star with high-angular-resolution, visible-light, laser-adaptive-optics imaging. Our goal is to find nearby stars lying in Kepler's photometric apertures that are responsible for the relatively high probability of false-positive exoplanet detections and that cause transit radii to be underestimated. Our comprehensive survey will also shed light on the effects of stellar multiplicity on exoplanet properties and will identify rare exoplanetary architectures. In this second part of our ongoing survey, we observed an additional 969 Kepler planet candidate hosts and report blended stellar companions up to $\Delta m \approx 6$ that contribute to Kepler's measured light curves. We found 203 companions within $\sim$4" of 181 of the Kepler stars, of which 141 are new discoveries. We measure the nearby-star probability for this sample of Kepler planet candidate host stars to be 10.6% $\pm$ 1.1% at angular separations up to 2.5", significantly higher than the 7.4% $\pm$ 1.0% probability found in our initial sample of 715 stars; the probability increases to 17.6% $\pm$ 1.5% out to a separation of 4.0". The median position of the KOIs observed in this survey is 1.1$^{\circ}$ closer to the Galactic plane, which may account for some of the nearby-star probability enhancement. We additionally detail 50 Keck adaptive optics images of Robo-AO-observed KOIs, confirming 37 companions detected at a $<5\sigma$ significance level and obtaining additional infrared photometry for companions detected at higher significance.
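
The quoted nearby-star probabilities and their uncertainties behave like simple binomial point estimates. The snippet below reproduces the arithmetic for illustrative counts; the paper's separation cuts and weighting differ in detail.

```python
import numpy as np

def binomial_fraction(k, n):
    """Companion fraction and its 1-sigma binomial uncertainty."""
    p = k / n
    return p, np.sqrt(p * (1 - p) / n)

# Illustrative counts only: 181 of the 969 observed hosts had a detected
# companion within ~4" (the published 17.6% +/- 1.5% uses a weighted,
# separation-cut analysis, so the numbers differ slightly).
p, dp = binomial_fraction(181, 969)
print(f"nearby-star fraction: {100*p:.1f}% +/- {100*dp:.1f}%")
```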

Discrete stochastic Laplacian growth: classical limit [Cross-Listing]

Many tiny particles, emitted incessantly from stochastic sources on a plane, perform Brownian motion until they stick to a cluster, thereby driving its growth. The growth probability is presented as a sum over all possible "scenarios" leading to the same final complex shape. The logarithm of the probability (negative action) has the familiar entropy form, and its global maximum is shown to be exactly the deterministic equation of Laplacian growth with many sources. The full growth probability, which includes the probability of creation of random sources, is presented in two forms of electrostatic energy. It is also found to be factorizable in a complex plane, where the exterior of the unit disk maps conformally to the exterior of the growing cluster. The obtained action is analyzed from a potential-theoretical point of view, and its connections with the tau-function of the integrable Toda hierarchy and with the Liouville theory for non-critical strings are established.
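
The growth process in the first sentence is that of diffusion-limited aggregation (DLA). A minimal on-lattice sketch, with a single central seed instead of the paper's distributed stochastic sources, shows the mechanism:

```python
import numpy as np

rng = np.random.default_rng(1)
L = 201                          # lattice size (odd, so there is a centre cell)
grid = np.zeros((L, L), dtype=bool)
c = L // 2
grid[c, c] = True                # seed particle
rmax = 1.0                       # current cluster radius

def launch(r):
    """Start a walker on a circle of radius r around the centre."""
    phi = rng.uniform(0, 2 * np.pi)
    return int(c + r * np.cos(phi)), int(c + r * np.sin(phi))

for _ in range(400):             # grow 400 particles
    x, y = launch(rmax + 5)
    while True:
        dx, dy = ((1, 0), (-1, 0), (0, 1), (0, -1))[rng.integers(4)]
        x += dx; y += dy
        if (x - c)**2 + (y - c)**2 > (rmax + 20)**2:   # wandered off: relaunch
            x, y = launch(rmax + 5)
            continue
        if grid[x - 1:x + 2, y - 1:y + 2].any():        # touched cluster: stick
            grid[x, y] = True
            rmax = max(rmax, np.hypot(x - c, y - c))
            break

print("cluster radius:", rmax, "particles:", int(grid.sum()))
```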

The Vertex Expansion in the Consistent Histories Formulation of Spin Foam Loop Quantum Cosmology

Assigning consistent quantum probabilities to events in a quantum universe is a fundamental challenge that every quantum cosmology/gravity framework must overcome. In loop quantum cosmology, this issue leads to a fundamental question: What is the probability that the universe undergoes a non-singular bounce? Using the consistent histories formulation, this question was recently answered by the authors for a spatially flat FRW model in the canonical approach. In this manuscript, we obtain a covariant generalization of that result. Our analysis is based on expressing loop quantum cosmology in the spin foam paradigm and using histories defined via volume transitions to compute the amplitudes of transitions obtained using a vertex expansion. We show that the probability of a bounce turns out to be unity.

Fermion production in a magnetic field in a de Sitter Universe

The process of fermion production in the field of a magnetic dipole in a de Sitter expanding universe is analyzed. The amplitude and probability for production of massive fermions are obtained using the exact solution of the Dirac equation written in the momentum-helicity basis. We find that the most probable transitions are those that generate the fermion pair perpendicular to the direction of the magnetic field. The behavior of the probability is studied graphically for large/small values of the expansion factor, and a detailed analysis of the probability in terms of the angle between the momentum vectors of the particle and antiparticle is performed. The phenomenon of fermion production is significant only at large expansion, which corresponds to conditions in the early Universe. When the expansion factor vanishes we recover the Minkowski limit, where this process is forbidden by simultaneous energy-momentum conservation.

Gravitationally induced quantum transitions

In this letter, we calculate the probability of resonantly induced transitions between quantum states due to time-dependent gravitational perturbations. Contrary to common wisdom, the probability of inducing transitions is not infinitesimally small. We consider a system of ultra cold neutrons (UCN), which are organized according to the energy levels of the Schr\"odinger equation in the presence of the Earth's gravitational field. Transitions between energy levels are induced by an oscillating driving force of frequency $\omega$, created by oscillating a macroscopic mass in the neighbourhood of the system of neutrons. The neutrons decay in 880 seconds, while the probability of transitions increases as $t^2$; hence the optimal strategy is to drive the system for about two lifetimes. The transition amplitude is then of the order of $1.06\times 10^{-5}$, so with a million ultra cold neutrons one should be able to observe transitions.

Particle creation rate for general black holes [Replacement]

We present the particle creation probability rate around a general black hole as an outcome of quantum fluctuations. Using the uncertainty principle for these fluctuations, we derive a new ultraviolet frequency cutoff for the radiation spectrum of a dynamical black hole. Using this frequency cutoff, we define the probability creation rate function for such black holes. We consider a dynamical Vaidya model and calculate the probability creation rate when its horizon is in a slowly evolving phase. Our results show that one can expect the usual Hawking radiation emission process from a dynamical black hole when it has a slowly evolving horizon. Moreover, calculating the probability rate for a dynamical black hole gives a measure of when Hawking radiation can be killed off by an incoming flux of matter or radiation. Our result strongly suggests that the Hawking radiation expectation must be revised for primordial black holes that have grown substantially since they were created in the early universe. We also infer that this frequency cutoff can serve as a parameter that tracks primordial black hole growth at the moment of emission.

Horizon of quantum black holes in various dimensions [Replacement]

We adapt the horizon wave-function formalism to describe massive static spherically symmetric sources in a general $(1+D)$-dimensional space-time, for $D>3$ and including the $D=1$ case. We find that the probability $P_{\rm BH}$ that such objects are (quantum) black holes behaves similarly to the probability in the $(3+1)$ framework for $D> 3$. In fact, for $D\ge 3$, the probability increases towards unity as the mass grows above the relevant $D$-dimensional Planck scale $m_D$. At fixed mass, however, $P_{\rm BH}$ decreases with increasing $D$, so that a particle with mass $m\simeq m_D$ has just about $10\%$ probability to be a black hole in $D=5$, and smaller for larger $D$. This result has a potentially strong impact on estimates of black hole production in colliders. In contrast, for $D=1$, we find the probability is comparably larger for smaller masses, but $P_{\rm BH} < 0.5$, suggesting that such lower dimensional black holes are purely quantum and not classical objects. This result is consistent with recent observations that sub-Planckian black holes are governed by an effective two-dimensional gravitation theory. Lastly, we derive Generalised Uncertainty Principle relations for the black holes under consideration, and find a minimum length corresponding to a characteristic energy scale of the order of the fundamental gravitational mass $m_D$ in $D>3$. For $D=1$ we instead find the uncertainty due to the horizon fluctuations has the same form as the usual Heisenberg contribution, and therefore no fundamental scale exists.

Quasar Probabilities and Redshifts from WISE mid-IR through GALEX UV Photometry

Extreme deconvolution (XD) of broad-band photometric data can both separate stars from quasars and generate probability density functions for quasar redshifts, while incorporating flux uncertainties and missing data. Mid-infrared photometric colors are now widely used to identify hot dust intrinsic to quasars, and the release of all-sky WISE data has led to a dramatic increase in the number of IR-selected quasars. Using forced-photometry on public WISE data at the locations of SDSS point sources, we incorporate this all-sky data into the training of the XDQSOz models originally developed to select quasars from optical photometry. The combination of WISE and SDSS information is far more powerful than SDSS alone, particularly at $z>2$. The use of SDSS$+$WISE photometry is comparable to the use of SDSS$+$ultraviolet$+$near-IR data. We release a new public catalogue of 5,537,436 (total; 3,874,639 weighted by probability) potential quasars with probability $P_{\textrm{QSO}} > 0.2$. The catalogue includes redshift probabilities for all objects. We also release an updated version of the publicly available set of codes to calculate quasar and redshift probabilities for various combinations of data. Finally, we demonstrate that this method of selecting quasars using WISE data is both more complete and efficient than simple WISE color-cuts, especially at high redshift. Our fits verify that above $z \sim 3$ WISE colors become bluer than the standard cuts applied to select quasars. Currently, the analysis is limited to quasars with optical counterparts, and thus cannot be used to find highly obscured quasars that WISE color-cuts identify in significant numbers.
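
A hypothetical usage sketch for the released catalogue is shown below. The file name and column names are assumptions standing in for the real schema, which should be taken from the catalogue documentation.

```python
# Minimal sketch of filtering the published probability catalogue.
# File name and column name ("PQSO") are assumed, not the real schema.
from astropy.table import Table

cat = Table.read("xdqsoz_wise_catalog.fits")   # hypothetical file name
likely = cat[cat["PQSO"] > 0.8]                # high-confidence quasars
print(len(likely), "objects with P_QSO > 0.8")

# Probability-weighted number count, mirroring the paper's
# "weighted by probability" total, as opposed to a hard cut.
print("probability-weighted count:", cat["PQSO"].sum())
```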

Annihilation of the scalar pair into a photon on de Sitter spacetime [Replacement]

The annihilation of massive scalar particles into one photon in a de Sitter expanding universe is studied using perturbation theory. The amplitude and probability corresponding to this process are computed using the exact solutions of the Klein-Gordon and Maxwell equations in de Sitter geometry. Our results show that the total probability of photon emission depends on the ratio of the particle mass to the expansion factor. We perform a graphical study of the total probability in terms of this ratio, showing that the effect is significant only in strong gravitational fields. We also find that the total probability for this process vanishes in the Minkowski limit.

Quantum corrected non-thermal radiation spectrum from the tunnelling mechanism

The tunnelling mechanism is today a popular and widely used method for describing Hawking radiation. However, in relation to black hole (BH) emission, this mechanism is mostly used to obtain the Hawking temperature by comparing the probability of emission of an outgoing particle with the Boltzmann factor. On the other hand, Banerjee and Majhi reformulated the tunnelling framework, deriving a black body spectrum through the density matrix for the outgoing modes, for both the Bose-Einstein and the Fermi-Dirac distributions. In contrast, Parikh and Wilczek introduced a correction term by performing an exact calculation of the action for a tunnelling spherically symmetric particle; as a result, the probability of emission of an outgoing particle corresponds to a non-strictly thermal radiation spectrum. Recently, one of us (C. Corda) introduced a BH effective state and was able to obtain a non-strictly black body spectrum from the tunnelling mechanism corresponding to the probability of emission found by Parikh and Wilczek. The present work introduces the quantum corrected effective temperature, and the corresponding quantum corrected effective metric is written using Hawking's periodicity arguments. We thus obtain further corrections to the non-strictly thermal BH radiation spectrum, as the final distributions take into account both the BH dynamical geometry during the emission of the particle and the quantum corrections to the semiclassical Hawking temperature.
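
For reference, the Parikh-Wilczek emission probability referred to above has the well-known form $\Gamma \sim \exp(\Delta S_{BH}) = \exp[-8\pi\omega M(1-\omega/2M)]$ in Planck units, which reduces to the Boltzmann factor $\exp(-\omega/T_H)$ with $T_H = 1/8\pi M$ when the $\omega^2$ back-reaction term is dropped. A small numerical comparison of the two (this is the textbook formula, not the paper's quantum-corrected spectrum):

```python
import numpy as np

def gamma_parikh_wilczek(omega, M):
    """Emission probability (up to the pre-factor) for a quantum of energy
    omega from a Schwarzschild BH of mass M, in Planck units:
    Gamma ~ exp(Delta S_BH) = exp(-8 pi omega M (1 - omega / (2 M)))."""
    return np.exp(-8 * np.pi * omega * M * (1 - omega / (2 * M)))

def gamma_thermal(omega, M):
    """Strictly thermal Boltzmann factor with T_H = 1 / (8 pi M)."""
    return np.exp(-8 * np.pi * omega * M)

M = 10.0    # BH mass in Planck masses; T_H = 1/(8 pi M) ~ 0.004
for omega in (0.001, 0.01, 0.05):
    print(omega, gamma_parikh_wilczek(omega, M), gamma_thermal(omega, M))
```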

Circumbinary planets - why they are so likely to transit

Transits on single stars are rare: the probability rarely exceeds a few per cent, and it rapidly approaches zero with increasing orbital period. Transit surveys have therefore been predominantly limited to the inner parts of exoplanetary systems. Here we demonstrate how circumbinary planets allow us to beat these unfavourable odds. By incorporating the geometry and the three-body dynamics of circumbinary systems, we analytically derive the probability of transitability, a configuration where the binary and planet orbits overlap on the sky. We then show that this is equivalent to the transit probability, but at an unspecified point in time. This probability, at its minimum, is always higher than in the single-star case, and it is an increasing function of mutual inclination. By applying our analytical development to eclipsing binaries, we deduce that transits are highly probable, and in some cases guaranteed. For example, a circumbinary planet revolving at 1 AU around a 0.3 AU eclipsing binary is certain to eventually transit - a 100% probability - if its mutual inclination is greater than 0.6 deg. We show that the transit probability is generally only a weak function of the planet's orbital period; circumbinary planets may therefore be used as practical tools for probing the outer regions of exoplanetary systems in searches for warm to cold transiting planets.
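
The single-star numbers in the opening sentences follow from the standard geometric transit probability $p \approx R_*/a$ for a circular orbit; a quick sketch with Kepler's third law shows how fast it falls with period:

```python
import numpy as np

R_SUN_AU = 0.00465   # solar radius in AU

def transit_probability(period_days, m_star=1.0, r_star=1.0):
    """Geometric transit probability p ~ R*/a for a circular orbit around
    a single star (mass and radius in solar units, planet radius neglected)."""
    a = (m_star * (period_days / 365.25)**2) ** (1.0 / 3.0)   # Kepler III [AU]
    return r_star * R_SUN_AU / a

for P in (3, 10, 100, 365):
    print(f"P = {P:4d} d -> p = {100 * transit_probability(P):.2f}%")
```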

On the abundance of extraterrestrial life after the Kepler mission

The data recently accumulated by the Kepler mission have demonstrated that small planets are quite common and that a significant fraction of all stars may have an Earth-like planet within their Habitable Zone. These results are combined with a Drake-equation formalism to derive the space density of biotic planets as a function of the relatively modest uncertainty in the astronomical data and of the (yet unknown) probability for the evolution of biotic life, Fb. I suggest that Fb may be estimated by future spectral observations of exoplanet biomarkers. If Fb is in the range 0.001 -- 1, then a biotic planet may be expected within 10 -- 100 light years from Earth. Extending the biotic results to advanced life, I derive expressions for the distance to putative civilizations in terms of two additional Drake parameters - the probability for the evolution of a civilization, Fc, and its average longevity. For instance, assuming optimistic probability values (Fb ~ Fc ~ 1) and a broadcasting longevity of a few thousand years, the likely distance to the nearest civilizations detectable by SETI is of the order of a few thousand light years. The probability of detecting intelligent signals with present and future radio telescopes is calculated as a function of the Drake parameters. Finally, I describe how the detection of intelligent signals would constrain the Drake parameters.
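
The quoted 10 -- 100 light-year range can be reproduced with a simple number-density argument, $d = (3/4\pi n)^{1/3}$ with $n = n_* f_{HZ} F_b$. The stellar density and HZ fraction below are assumed round numbers, not the paper's calibrated values:

```python
import numpy as np

# Assumed round numbers for the solar neighbourhood:
N_STAR = 0.1        # stars per cubic parsec
F_HZ   = 0.1        # fraction of stars with an Earth-like planet in the HZ
LY_PER_PC = 3.26

def distance_to_nearest(F_b):
    """Expected distance (light years) to the nearest biotic planet,
    from d = (3 / (4 pi n))^(1/3) with n = N_STAR * F_HZ * F_b."""
    n = N_STAR * F_HZ * F_b
    return (3.0 / (4.0 * np.pi * n))**(1.0 / 3.0) * LY_PER_PC

for F_b in (1.0, 0.1, 0.01, 0.001):
    print(f"F_b = {F_b:5.3f} -> ~{distance_to_nearest(F_b):.0f} ly")
```

With these inputs the distance runs from roughly 10 ly at Fb = 1 to roughly 100 ly at Fb = 0.001, matching the range quoted above.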

BGLS: A Bayesian formalism for the generalised Lomb-Scargle periodogram

Context. Frequency analyses are very important in astronomy today, not least in the ever-growing field of exoplanets, where short-period signals in stellar radial velocity data are investigated. Periodograms are the main (and powerful) tools for this purpose. However, recovering the correct frequencies and assessing the probability of each frequency is not straightforward. Aims. We provide a formalism that is easy to implement in a code, to describe a Bayesian periodogram that includes weights and a constant offset in the data. The relative probability between peaks can be easily calculated with this formalism. We discuss the differences and agreements between the various periodogram formalisms with simulated examples. Methods. We used Bayesian probability theory to describe the probability that a full sine function (including weights derived from the errors on the data values and a constant offset) with a specific frequency is present in the data. Results. From the expression for our Bayesian generalised Lomb-Scargle periodogram (BGLS), we can easily recover the expression for the non-Bayesian version. In the simulated examples we show that this new formalism recovers the underlying periods better than previous versions. A Python-based code is available for the community.
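
The paper's own Python code should be used for the BGLS itself; as a generic illustration, the weighted, floating-mean (non-Bayesian) generalised Lomb-Scargle that BGLS extends is available in astropy:

```python
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0, 200, 80))       # irregular observation times [d]
err = rng.uniform(0.5, 1.5, 80)            # per-point uncertainties
y = 3.0 * np.sin(2 * np.pi * t / 17.0) + 1.0 + rng.normal(0, err)  # signal + offset + noise

# Weighted GLS with a floating mean (constant offset), the non-Bayesian
# counterpart of the BGLS described above.
freq, power = LombScargle(t, y, err).autopower()
print("best period: %.2f d" % (1.0 / freq[np.argmax(power)]))
```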

Varying constants quantum cosmology [Replacement]

We discuss minisuperspace models within the framework of varying physical constants theories, including a $\Lambda$-term. In particular, we consider the varying speed of light (VSL) theory and the varying gravitational constant (VG) theory, using the specific ans\"atze for the variability of the constants: $c(a) = c_0 a^n$ and $G(a)=G_0 a^q$. We find that most of the varying-$c$ and varying-$G$ minisuperspace potentials are of the tunneling type, which allows one to use the WKB approximation of quantum mechanics. Using this method, we show that the probability of tunneling of the universe "from nothing" ($a=0$) to a Friedmann geometry with scale factor $a_t$ is large for growing-$c$ models and strongly suppressed for diminishing-$c$ models. As for varying $G$, the probability of tunneling is large for diminishing $G$ and small for increasing $G$. In general, both varying $c$ and varying $G$ change the probability of tunneling in comparison to universe models with standard matter content (cosmological term, dust, radiation).
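
The WKB estimate used here is $P \sim \exp(-2\int_0^{a_t}|p|\,da)$ with $|p| = \sqrt{2V(a)}$ for a zero-energy trajectory under the barrier. A numerical sketch for a toy barrier (the varying-$c$/varying-$G$ ans\"atze reshape $V(a)$ and hence the action; the potential below is illustrative only):

```python
import numpy as np
from scipy.integrate import quad

def wkb_tunneling_probability(V, a_t):
    """P ~ exp(-2 * integral_0^{a_t} |p| da), |p| = sqrt(2 V(a)),
    for a zero-energy minisuperspace trajectory under the barrier."""
    integrand = lambda a: np.sqrt(max(2.0 * V(a), 0.0))
    action, _ = quad(integrand, 0.0, a_t)
    return np.exp(-2.0 * action)

# Toy barrier resembling the standard minisuperspace potential,
# vanishing at a = 0 and at the turning point a_t.
a_t = 1.0
V = lambda a: a**2 * (1.0 - (a / a_t)**2)
print("tunneling probability ~", wkb_tunneling_probability(V, a_t))
```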

Precise model of Hawking radiation from the tunnelling mechanism [Replacement]

We recently improved the famous result of Parikh and Wilczek, who found a probability of emission of Hawking radiation compatible with a non-strictly thermal spectrum, by showing that this probability of emission is really associated with two non-strictly thermal distributions, for bosons and for fermions. Here we finalize the model by finding the correct value of the pre-factor of the Parikh and Wilczek probability of emission. In fact, that expression carries an "of order" sign instead of an equality. In general, in this kind of leading-order tunnelling calculation, the exponent arises from the classical action and the pre-factor is a correction of order the Planck constant. But in the case of emission of Hawking quanta, the variation of the Bekenstein-Hawking entropy is of order 1 for an emitted particle with energy of order the Hawking temperature. As a consequence, the exponent in the Parikh and Wilczek probability of emission is of order unity, and one may ask what the real significance of that scaling is if the pre-factor is unknown. Here we solve the problem by assuming the unitarity of black hole (BH) quantum evaporation and considering the natural correspondence between Hawking radiation and the quasi-normal modes (QNMs) of excited BHs, in a "Bohr-like model" that we recently discussed in a series of papers. In those papers, QNMs are interpreted as natural BH quantum levels (the "electron states" in the "Bohr-like model"). Here we find the intriguing result that, although in general it is well approximated by 1, the pre-factor of the Parikh and Wilczek probability of emission depends on the BH quantum level n. We also write down an elegant expression for the probability of emission in terms of the BH quantum levels.

The effect of uu diquark suppression in proton splitting in Monte Carlo event generators

Monte Carlo event generators assume that protons split into (uu)-diquarks and d-quarks with a probability of 1/3 in strong interactions. It is shown in this paper that using a value of 1/6 for the probability allows one to describe at a semi-quantitative level the NA49 Collaboration data for $p+p\rightarrow p+X$ reactions at 158 GeV/c. The Fritiof (FTF) model of Geant4 was used to simulate the reactions. The reduced weight of the (uu)-diquarks in protons is expected in the instanton model.
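
A toy sampler makes the role of the splitting probability concrete; p_uu is the configurable weight whose default value of 1/3 the paper proposes lowering to 1/6:

```python
import numpy as np

rng = np.random.default_rng(7)

def split_proton(p_uu=1/3):
    """Split a proton (uud) into a diquark + quark.
    Default p_uu = 1/3 is the usual generator assumption; the paper
    argues that p_uu = 1/6 describes the NA49 pp data better."""
    if rng.random() < p_uu:
        return ("uu", "d")
    return ("ud", "u")

for p in (1/3, 1/6):
    n_uu = sum(split_proton(p)[0] == "uu" for _ in range(100_000))
    print(f"p_uu = {p:.3f}: sampled uu-diquark fraction = {n_uu / 1e5:.3f}")
```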

What is the probability that direct detection experiments have observed Dark Matter? [Cross-Listing]

In Dark Matter direct detection we face the situation that some experiments report positive signals which conflict with limits from other experiments. Such conclusions are subject to large uncertainties introduced by the poorly known local Dark Matter distribution. We present a method to calculate an upper bound on the joint probability of obtaining the outcomes of two potentially conflicting experiments under the assumption that the Dark Matter hypothesis is correct, but completely independently of assumptions about the Dark Matter distribution. In this way we can quantify the compatibility of two experiments in an astrophysics-independent way. We illustrate our method by testing the compatibility of the hints reported by DAMA and CDMS-Si with the limits from the LUX and SuperCDMS experiments. The method does not require Monte Carlo simulations but is mostly based on Poisson statistics. In order to deal with signals of few events we introduce the so-called "signal length" to take energy information into account without the need of binning the data. The signal length method provides a simple way to calculate the probability of obtaining a given experimental outcome under a specified Dark Matter and background hypothesis.
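
The Poisson building blocks of the method are elementary: the probability of seeing at least (or at most) a given number of events for a given expectation, and a naive joint product for two experiments. The paper's actual bound additionally optimises over halo-independent signal shapes; the counts and expectations below are purely illustrative.

```python
from scipy.stats import poisson

def p_at_least(k, mu):
    """Poisson probability of observing k or more events given mean mu."""
    return poisson.sf(k - 1, mu)

def p_at_most(k, mu):
    """Poisson probability of observing k or fewer events given mean mu."""
    return poisson.cdf(k, mu)

# Illustrative only: experiment A sees 3 events where a given DM hypothesis
# predicts mu_A; experiment B sees 0 events where it predicts mu_B.
mu_A, mu_B = 0.5, 5.0
joint = p_at_least(3, mu_A) * p_at_most(0, mu_B)
print(f"joint probability of both outcomes: {joint:.2e}")
```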

Photon-ALP conversions inside AGN

An intriguing possibility for partially circumventing extragalactic background light (EBL) absorption in very-high-energy (VHE) observations of blazars is that photons convert into axion-like particles (ALPs), $\gamma \to a$, inside or close to a blazar and reconvert into photons, $a \to \gamma$, in the Milky Way magnetic field. This idea was put forward in 2008 and has attracted considerable interest. However, while the probability of the back-conversion $a \to \gamma$ has been computed in detail (using maps of the Galactic magnetic field), no realistic estimate of the probability of the conversion $\gamma \to a$ inside a blazar has been performed, even though present-day knowledge allows this task to be accomplished in a reliable fashion. We present a detailed calculation that fills this gap, considering both types of blazars, namely BL Lac objects (BL Lacs) and flat spectrum radio quasars (FSRQs), with their specific structural and environmental properties. We also take the host elliptical galaxy into account. Our somewhat surprising results show that the conversion probability in BL Lacs is strongly dependent on the source parameters -- like the position of the emission region along the jet and the strength of the magnetic field therein -- making it effectively unpredictable. On the other hand, the lobes at the termination of FSRQ jets lead to an effective "equipartition" between photons and ALPs due to their chaotic nature, thereby allowing us to make a clear-cut prediction. These results are quite important in view of planned VHE detectors like the CTA, HAWC and HiSCORE.
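For orientation (these are standard expressions from the photon-ALP mixing literature, not results derived in the paper): in a single homogeneous magnetic-field domain of size $l$, the two-level conversion probability is

$$P_{\gamma\to a} = \sin^2 2\theta \, \sin^2\!\left(\frac{\Delta_{\rm osc}\, l}{2}\right),$$

where $\theta$ is the photon-ALP mixing angle and $\Delta_{\rm osc}$ the oscillation wavenumber. After propagation through many domains with randomly oriented fields in the strong-mixing regime, the beam approaches equipartition among the two photon polarizations and the ALP state, so that on average $P_{\gamma\to a} \to 1/3$; this is the "equipartition" behaviour invoked above for the FSRQ lobes.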

On the role of GRBs on life extinction in the Universe

As a copious source of gamma-rays, a nearby Galactic Gamma-Ray Burst (GRB) can be a threat to life. Using recent determinations of the rate of GRBs, their luminosity function and the properties of their host galaxies, we estimate the probability that a life-threatening (lethal) GRB would take place. Amongst the different kinds of GRBs, long ones are the most dangerous. There is a very good chance (but no certainty) that at least one lethal GRB took place during the past 5 Gyr close enough to Earth to significantly damage life. There is a 50% chance that such a lethal GRB took place during the last 500 Myr, causing one of the major mass extinction events. Assuming that a similar level of radiation would be lethal to life on exoplanets hosting life, we explore the potential effects of GRBs on life elsewhere in the Galaxy and the Universe. We find that the probability of a lethal GRB is much larger in the inner Milky Way (95% within a radius of 4 kpc from the galactic center), making it inhospitable to life. Only in the outskirts of the Milky Way, more than 10 kpc from the galactic center, does this probability drop below 50%. When considering the Universe as a whole, the safest environments for life (similar to that on Earth) are the lowest-density regions in the outskirts of large galaxies, and life can exist in only ~ 10% of galaxies. Remarkably, a cosmological constant is essential for such systems to exist. Furthermore, because of both the higher GRB rate and galaxies being smaller, life as it exists on Earth could not have taken place at $z > 0.5$. Early life forms must have been much more resilient to radiation.
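The quoted figures can be checked with elementary Poisson arithmetic (our back-of-the-envelope illustration, not a calculation from the paper): if nearby lethal GRBs occur at a constant rate $\lambda$, the probability of at least one event in a time $T$ is

$$P(\geq 1) = 1 - e^{-\lambda T}.$$

A 50% chance over $T = 0.5$ Gyr corresponds to $\lambda = \ln 2 / 0.5\,\mathrm{Gyr} \approx 1.4\,\mathrm{Gyr}^{-1}$, which over 5 Gyr gives $1 - e^{-6.9} \approx 0.999$, consistent with the "very good chance" quoted for the longer interval.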

What is Generic Structure of the Three-dimensional Magnetic Reconnection? [Cross-Listing]

The probability of occurrence of various topological configurations of three-dimensional reconnection in a random magnetic field is studied. It is found that a specific six-tail spatial configuration should play the dominant role, while all other types of reconnection (in particular, the axially symmetric fan-like structures) are realized with much lower probability. A characteristic feature of the six-tail configuration is that at sufficiently large scales it reduces approximately to the well-known two-dimensional X-type structure; this explains why two-dimensional models of reconnection usually work quite well.
