Posts Tagged probability

Recent Postings from probability

A Study of Central Galaxy Rotation with Stellar Mass and Environment

We present a pilot analysis of the influence of galaxy stellar mass and cluster environment on the probability of slow rotation in 22 central galaxies at mean redshift $z=0.07$. This includes new integral-field observations of 5 central galaxies selected from the Sloan Digital Sky Survey, observed with the SPIRAL integral-field spectrograph on the Anglo-Australian Telescope. The composite sample presented here spans a wide range of stellar masses, $10.9<\log(\mathrm{M}_{*}/\mathrm{M}_{\odot})<12.0$, and is embedded in halos ranging from groups to clusters, $12.9<\log(\mathrm{M}_{200}/\mathrm{M}_{\odot})<15.6$. We find a mean probability of slow rotation in our sample of P(SR)$=54\pm7$ percent. Our results show an increasing probability of slow rotation in central galaxies with increasing stellar mass. However, when we examine the dependence of slow rotation on host cluster halo mass, we do not see a significant relationship. We also explore the influence of cluster dominance on slow rotation in central galaxies; clusters with low dominance are associated with dynamically younger systems. We find that cluster dominance has no significant effect on the probability of slow rotation in central galaxies. These results conflict with a paradigm in which halo mass alone predetermines central galaxy properties.

Deuteron D-state probability from finite size effects in muonic deuterium [Cross-Listing]

Recent analyses of the Lamb shift data in muonic deuterium ($\mu^- d$) have shown that precision atomic spectroscopy determines a more accurate radius of the deuteron than scattering experiments do. This precision can be used to determine the D-state probability, $P_D$, in the deuteron accurately. To demonstrate the method, we evaluate the nuclear structure corrections of order $\alpha^4$ within a few-body formalism for the $\mu^- p n$ system in muonic deuterium using different values of $P_D$, and find $P_D$ close to 2\% to be most favoured by the $\mu^- d$ data.

On neutrinoless double electron capture [Replacement]

We found the probability of neutrinoless double electron capture in the case of $KK$ capture. We clarified the mechanism of the energy transfer from the nucleus to the bound electrons. This enabled us to obtain equations for the probability of the $2EC0\nu$ process which modify those obtained earlier. We found that the probability of the process is three orders of magnitude larger than previously supposed.

Stable laws and cosmic ray physics

In the new precision era for cosmic ray astrophysics, theoretical predictions cannot content themselves with average trends, but need to correctly take into account intrinsic uncertainties. The space-time discreteness of the cosmic ray sources, combined with substantial ignorance of their precise epochs and locations (with the possible exception of the most recent and close ones), plays an important role in this sense. We elaborate a statistical theory to deal with this problem, relating the composite probability $P(\Psi)$ to obtain a flux $\Psi$ at the Earth to the single-source probability $p(\psi)$ to contribute with a flux $\psi$. The main difficulty arises because $p(\psi)$ is a fat-tailed distribution, characterized by power-law or broken power-law behaviour up to very large fluxes, for which the central limit theorem does not hold, leading to the well-known stable laws as opposed to Gaussian distributions. We find that relatively simple recipes provide a satisfactory description of the probability $P(\Psi)$. We also find that a naive Gaussian fit to simulation results would underestimate the probability of very large fluxes, i.e. several times above the average, while overestimating the probability of relatively milder excursions. At large energies, large flux fluctuations are prevented by causal considerations, while at low energies partial knowledge of the recent and nearby population of sources plays an important role. A few proposals have been discussed in the literature to account for spectral breaks recently reported in cosmic ray data in terms of local contributions. We apply our newly developed theory to assess their probabilities, finding that they are relatively small.
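The tail mismatch described here can be illustrated with a minimal Monte Carlo sketch (not the authors' formalism): single-source fluxes are drawn from an assumed Pareto distribution, with the power-law index, source count, and trial count all being illustrative choices, and the tail probability of the composite flux is compared against what a naive Gaussian fit would predict.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(42)
n_sources, n_trials = 200, 20_000

# Illustrative fat-tailed single-source flux p(psi): classical Pareto, index 2.5
# (finite variance, but a power-law tail where Gaussian convergence is slow)
psi = rng.pareto(2.5, size=(n_trials, n_sources)) + 1.0
total = psi.sum(axis=1)              # composite flux Psi at Earth, one per trial

mu, sigma = total.mean(), total.std()
threshold = mu + 5.0 * sigma         # "several times above the average"

p_sim = (total > threshold).mean()            # simulated tail probability
p_gauss = 0.5 * (1.0 - erf(5.0 / sqrt(2.0)))  # what a naive Gaussian fit predicts
```

With these illustrative numbers, the simulated probability of a large excursion substantially exceeds the Gaussian prediction, because the tail is dominated by single bright sources rather than by collective fluctuations.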

Causal evolution of wave packets [Cross-Listing]

Drawing on optimal transport theory adapted to the relativistic setting, we formulate the principle of a causal flow of probability and apply it in the wave packet formalism. We demonstrate that whereas the Dirac system is causal, the relativistic-Schr\"odinger Hamiltonian impels a superluminal evolution of probabilities. We quantify the causality breakdown in the latter system and argue that, in contrast to the popular viewpoint, it is not related to the localisation properties of the states.

Temperature dependence of the probability of "small heating" and spectrum of UCNs up-scattered on the surface of Fomblin oil Y-HVAC 18/8 [Cross-Listing]

We performed precision measurements of the probability of small heating and the spectrum of UCNs up-scattered on the surface of the hydrogen-free oil Fomblin Y-HVAC 18/8 as a function of temperature. The probability is well reproducible, does not depend on sample thickness and does not evolve in time. It is equal to $(9.8\pm0.2)\times10^{-6}$ at ambient temperature. The spectrum coincides with those measured with solid-surface and nanoparticle samples. Indirect arguments indicate that the spectrum shape depends only weakly on temperature. The measured experimental data can be satisfactorily described both within the model of near-surface nanodroplets and within the model of capillary waves.

Probability of coincidental similarity among the orbits of small bodies - I. Pairing

The probability of coincidental clustering among the orbits of comets, asteroids and meteoroids depends on many factors, such as the size of the orbital sample searched for clusters or the size of the identified group; it is different for groups of 2, 3, 4, ... members. The probability of coincidental clustering is assessed by numerical simulation, so it also depends on the method used to generate the synthetic orbits. We tested the impact of some of these factors. For a given size of the orbital sample, we assessed the probability of random pairing among several orbital populations of different sizes. We found how these probabilities vary with the size of the orbital samples. Finally, keeping the size of the orbital sample fixed, we showed that the probability of random pairing can be significantly different for orbital samples obtained by different observation techniques. For the user's convenience, we also provide several formulae which, for a given size of the orbital sample, can be used to calculate the similarity threshold corresponding to a small probability of coincidental similarity between two orbits.
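As a toy illustration of how such probabilities are assessed by numerical simulation, the sketch below draws synthetic orbital elements uniformly and counts samples containing at least one "similar" pair. The metric, element ranges, and threshold are all hypothetical illustrative choices, not the paper's similarity criterion; the point is only that the random-pairing probability grows rapidly with sample size.

```python
import numpy as np

rng = np.random.default_rng(1)

def toy_distance(o1, o2):
    """Toy Euclidean metric in (e, q, i) space; NOT the paper's criterion."""
    return np.sqrt(((o1 - o2) ** 2).sum(axis=-1))

def random_pairing_probability(n_orbits, threshold, n_repeats=300):
    """Fraction of synthetic samples containing at least one 'similar' pair."""
    hits = 0
    for _ in range(n_repeats):
        # synthetic orbits: eccentricity, perihelion distance (AU), inclination (rad)
        orbits = np.column_stack([
            rng.uniform(0.0, 1.0, n_orbits),
            rng.uniform(0.1, 5.0, n_orbits),
            rng.uniform(0.0, np.pi, n_orbits),
        ])
        d = toy_distance(orbits[:, None, :], orbits[None, :, :])
        iu = np.triu_indices(n_orbits, k=1)   # each pair counted once
        if (d[iu] < threshold).any():
            hits += 1
    return hits / n_repeats

p_small = random_pairing_probability(20, 0.05)   # small sample: rare
p_large = random_pairing_probability(200, 0.05)  # large sample: common
```

Because the number of pairs grows quadratically with sample size, the same similarity threshold that is "safe" for 20 orbits produces frequent coincidental pairs among 200.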

Derivation of Capture Probabilities for the Corotation Eccentric Mean Motion Resonances

We study in this paper the capture of a massless particle into an isolated, first-order Corotation Eccentric Resonance (CER), in the framework of the Planar, Eccentric and Restricted Three-Body problem near a m+1:m mean motion commensurability (m integer). While capture into Lindblad Eccentric Resonances (where the perturber's orbit is circular) was investigated years ago, capture into a CER (where the perturber's orbit is elliptic) has not yet been investigated in detail. Here, we derive the generic equations of motion near a CER in the general case where both the perturber and the test particle migrate. We derive the probability of capture in that context, and we examine more closely two particular cases: (i) if only the perturber is migrating, capture is possible only if the migration is outward from the primary; notably, the probability of capture is independent of the way the perturber migrates outward; (ii) if only the test particle is migrating, then capture is possible only if the algebraic value of its migration rate is a decreasing function of orbital radius. In this case, the probability of capture is proportional to the radial gradient of migration. These results differ from capture into a Lindblad Eccentric Resonance (LER), where it is necessary that the orbits of the perturber and the test particle converge for capture to be possible. Possible applications to planetary satellites are discussed.

How Often Do Diquarks Form? A Very Simple Model

Starting from a textbook result, the nearest-neighbor distribution of particles in an ideal gas, we develop estimates for the probability with which quarks $q$ in a mixed $q$, $\bar q$ gas are more strongly attracted to the nearest $q$, potentially forming a diquark, than to the nearest $\bar q$. Generic probabilities lie in the range of tens of percent, with values in the several percent range even under extreme assumptions favoring $q\bar q$ over $qq$ attraction.
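The textbook starting point here, the nearest-neighbour distribution in an ideal gas, implies the cumulative form $P(d<r) = 1 - \exp(-4\pi n r^3/3)$ for number density $n$. A short Monte Carlo sketch can check it (particle number and trial count are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)
n_particles, trials = 1000, 3000
density = n_particles            # unit box, so n = N / V = N

# Median nearest-neighbour distance from P(d < r) = 1 - exp(-4*pi*n*r^3/3)
r_med = (3.0 * np.log(2.0) / (4.0 * np.pi * density)) ** (1.0 / 3.0)

below = 0
for _ in range(trials):
    pts = rng.random((n_particles, 3))            # one ideal-gas snapshot
    probe = np.array([0.5, 0.5, 0.5])             # probe point far from the walls
    d_nn = np.linalg.norm(pts - probe, axis=1).min()
    below += d_nn < r_med

p_emp = below / trials   # should be near 0.5 if the textbook formula holds
```

Since `r_med` is much smaller than the box, boundary effects are negligible, and the empirical fraction of nearest-neighbour distances below the predicted median comes out close to one half.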

Decoupling of the re-parametrization degree of freedom and a generalized probability in quantum cosmology [Replacement]

The high degree of symmetry renders the dynamics of cosmological as well as some black hole spacetimes describable by a system of finite degrees of freedom. These systems are generally known as minisuperspace models. One of their important key features is the invariance of the corresponding reduced actions under reparametrizations of the independent variable, a fact that can be seen as the remnant of the general covariance of the full theory. In the case of a system of $n$ degrees of freedom, described by a Lagrangian quadratic in velocities, one can use the lapse by either gauge fixing it or letting it be defined by the constraint and subsequently substitute into the rest of the equations. In the first case, the system is solvable for $n$ accelerations and the constraint becomes a restriction among constants. In the second case, the system can only be solved for $n-1$ accelerations and the "gauge" freedom is transferred to the choice of one of the scalar degrees of freedom. In this paper, we take the second path and express all $n-1$ scalar degrees of freedom in terms of the remaining one, say $q$. By considering these $n-1$ degrees of freedom as arbitrary but given functions of $q$, we manage to extract a two dimensional pure gauge system consisting of the lapse $N$ and the arbitrary $q$: in a way, we decouple the reparametrization invariance from the rest of the equations of motion. The solution of the corresponding quantum two dimensional system is used for the definition of a generalized probability for every configuration $f^i (q)$, be it classical or not. The main result is that, interestingly enough, this probability attains its extrema on the classical solution of the initial $n$-dimensional system.

Searching for intermediate-mass black holes in globular clusters with gravitational microlensing

We discuss the potential of the gravitational microlensing method as a unique tool to detect unambiguous signals caused by intermediate-mass black holes (IMBHs) in globular clusters. We select clusters near the line of sight to the Galactic Bulge and the Small Magellanic Cloud, estimate the density of background stars for each of them, and carry out simulations in order to estimate the probabilities of detecting the astrometric signatures caused by black hole lensing. We find that for several clusters, the probability of detecting such an event is significant with available archival data from the Hubble Space Telescope. Specifically, we find that M 22 is the cluster with the best chances of yielding an IMBH detection via astrometric microlensing. If M 22 hosts an IMBH of mass $10^5M_\odot$, then the probability that at least one star will yield a detectable signal over an observational baseline of 20 years is $\sim 86\%$, while the probability of a null result is around $14\%$. For an IMBH of mass $10^6M_\odot$, the detection probability rises to $>99\%$. Future observing facilities will also extend the available time baseline, improving the chance of detection for the clusters we consider.
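For a sense of scale, the angular Einstein radius $\theta_E = \sqrt{(4GM/c^2)\,D_{ls}/(D_l D_s)}$ sets the size of the lensing signature. The sketch below evaluates it for an IMBH lens; the lens distance (3 kpc) and bulge source distance (8 kpc) are illustrative guesses, not the paper's configuration.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
KPC = 3.0857e19      # kiloparsec, m
RAD_TO_ARCSEC = 206_265.0

def einstein_radius_arcsec(m_solar, d_lens_kpc, d_source_kpc):
    """Angular Einstein radius theta_E = sqrt(4GM/c^2 * D_ls / (D_l * D_s))."""
    d_l, d_s = d_lens_kpc * KPC, d_source_kpc * KPC
    d_ls = d_s - d_l
    theta = math.sqrt(4.0 * G * m_solar * M_SUN / C**2 * d_ls / (d_l * d_s))
    return theta * RAD_TO_ARCSEC

# Illustrative: 1e5 M_sun lens in a cluster at 3 kpc, bulge source at 8 kpc
theta_e = einstein_radius_arcsec(1e5, 3.0, 8.0)
```

For these assumed distances the Einstein radius is a few tenths of an arcsecond, comfortably within the astrometric reach of Hubble Space Telescope imaging over a multi-year baseline.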

Robo-AO Kepler Planetary Candidate Survey II: Adaptive Optics Imaging of 969 Kepler Exoplanet Candidate Host Stars

We initiated the Robo-AO Kepler Planetary Candidate Survey in 2012 to observe each Kepler exoplanet candidate host star with high-angular-resolution visible-light laser-adaptive-optics imaging. Our goal is to find nearby stars lying in Kepler's photometric apertures that are responsible for the relatively high probability of false-positive exoplanet detections and that cause underestimates of transit radii. Our comprehensive survey will also shed light on the effects of stellar multiplicity on exoplanet properties and will identify rare exoplanetary architectures. In this second part of our ongoing survey, we observed an additional 969 Kepler planet candidate hosts, and we report blended stellar companions up to $\Delta m \approx 6$ that contribute to Kepler's measured light curves. We found 203 companions within $\sim$4" of 181 of the Kepler stars, of which 141 are new discoveries. We measure the nearby-star probability for this sample of Kepler planet candidate host stars to be 10.6% $\pm$ 1.1% at angular separations up to 2.5", significantly higher than the 7.4% $\pm$ 1.0% probability found in our initial sample of 715 stars; we find the probability increases to 17.6% $\pm$ 1.5% out to a separation of 4.0". The median position of KOIs observed in this survey is 1.1$^{\circ}$ closer to the Galactic plane, which may account for some of the nearby-star probability enhancement. We additionally detail 50 Keck adaptive optics images of Robo-AO-observed KOIs in order to confirm 37 companions detected at a $<5\sigma$ significance level and to obtain additional infrared photometry on higher-significance detected companions.
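The quoted uncertainties are roughly what simple binomial counting errors, $\sqrt{p(1-p)/n}$, would give; a quick sanity check under that assumption (the survey's actual error treatment may well differ):

```python
import math

def binomial_error(p, n):
    """Standard binomial error on a measured fraction p from n objects."""
    return math.sqrt(p * (1.0 - p) / n)

# quoted: nearby-star probability 10.6% +/- 1.1% from 969 observed hosts
err = binomial_error(0.106, 969)
```

The result is about 1.0%, consistent in magnitude with the quoted $\pm$ 1.1%.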

Discrete stochastic Laplacian growth: classical limit [Cross-Listing]

Many tiny particles, emitted incessantly from stochastic sources on a plane, perform Brownian motion until they stick to a cluster, thereby driving its growth. The growth probability is presented as a sum over all possible "scenarios" leading to the same final complex shape. The logarithm of the probability (the negative action) has the familiar entropy form, and its global maximum is shown to reproduce exactly the deterministic equation of Laplacian growth with many sources. The full growth probability, which includes the probability of creation of random sources, is presented in two forms of electrostatic energy. It is also found to be factorizable in a complex plane, where the exterior of the unit disk maps conformally to the exterior of the growing cluster. The obtained action is analyzed from a potential-theoretical point of view, and its connections with the tau-function of the integrable Toda hierarchy and with the Liouville theory for non-critical strings are established.

Stochastic Laplacian growth [Replacement]

A point source on a plane constantly emits particles which rapidly diffuse and then stick to a growing cluster. The growth probability of a cluster is presented as a sum over all possible scenarios leading to the same final shape. The classical point for the action, defined as minus the logarithm of the growth probability, describes the most probable scenario and reproduces the Laplacian growth equation, which embraces numerous fundamental free boundary dynamics in non-equilibrium physics. For non-classical scenarios we introduce virtual point sources, in whose presence the action becomes the Kullback-Leibler entropy. Strikingly, this entropy is shown to be the sum of the electrostatic energies of layers grown per elementary time unit. Hence the growth probability of the presented non-equilibrium process obeys Gibbs-Boltzmann statistics, which, as a rule, do not apply out of equilibrium. Each layer's probability is expressed as a product of simple factors in an auxiliary complex plane after a properly chosen conformal map. The action in this plane is a sum of Robin functions, which solve the Liouville equation. Finally, we establish connections of our theory with the tau-function of the integrable Toda hierarchy and with the Liouville theory for non-critical quantum strings.

The Vertex Expansion in the Consistent Histories Formulation of Spin Foam Loop Quantum Cosmology

Assignment of consistent quantum probabilities to events in a quantum universe is a fundamental challenge which every quantum cosmology/gravity framework must overcome. In loop quantum cosmology, this issue leads to a fundamental question: What is the probability that the universe undergoes a non-singular bounce? Using the consistent histories formulation, this question was successfully answered recently by the authors for a spatially flat FRW model in the canonical approach. In this manuscript, we obtain a covariant generalization of this result. Our analysis is based on expressing loop quantum cosmology in the spin foam paradigm and using histories defined via volume transitions to compute the amplitudes of transitions obtained using a vertex expansion. We show that the probability of a bounce turns out to be unity.

Prefactor in the dynamically assisted Sauter-Schwinger effect [Replacement]

The probability of creating an electron-positron pair out of the quantum vacuum by a strong electric field can be enhanced tremendously via an additional weaker time-dependent field. This dynamically assisted Sauter-Schwinger effect has already been studied in several works. It has been found that the enhancement mechanism depends on the shape of the weaker field. For example, a Sauter pulse $1/\cosh(\omega t)^2$ and a Gaussian profile $\exp(-\omega^2 t^2)$ exhibit significant, qualitative differences. However, so far most of the analytical studies were focused on the exponent entering the pair-creation probability. Here, we study the subleading prefactor in front of the exponential using the worldline instanton method. We find that the main features of the dynamically assisted Sauter-Schwinger effect, including the dependence on the shape of the weaker field, are basically unaffected by the prefactor. To test the validity of the instanton approximation, we compare the number of produced pairs to a numerical integration of the full Riccati equation.
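For context, the scale of the exponent in the Sauter-Schwinger probability, $P \sim \exp(-\pi E_S/E)$, is set by the critical field $E_S = m_e^2 c^3/(e\hbar)$. This is the standard textbook formula, not the paper's prefactor calculation; a quick evaluation shows why an assisting weaker field matters so much:

```python
import math

M_E = 9.109e-31      # electron mass, kg
C = 2.998e8          # speed of light, m/s
E_CHARGE = 1.602e-19 # elementary charge, C
HBAR = 1.055e-34     # reduced Planck constant, J s

# Schwinger critical field E_S = m_e^2 c^3 / (e * hbar): the scale at which
# the pair-creation exponent exp(-pi * E_S / E) becomes of order one
E_S = M_E**2 * C**3 / (E_CHARGE * HBAR)   # V/m

# suppression exponent for a strong field at 1% of the critical strength
suppression = math.exp(-math.pi * 100.0)
```

The critical field is about $1.3\times10^{18}$ V/m, and even at 1% of that strength the exponential suppression is astronomically severe, which is why the dynamically assisted enhancement is of such interest.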

Fermion production in a magnetic field in a de Sitter Universe

The process of fermion production in the field of a magnetic dipole in a de Sitter expanding universe is analyzed. The amplitude and probability for production of massive fermions are obtained using the exact solution of the Dirac equation written in the momentum-helicity basis. We found that the most probable transitions are those that generate the fermion pair perpendicular to the direction of the magnetic field. The behavior of the probability is studied graphically for large/small values of the expansion factor, and a detailed analysis of the probability in terms of the angle between the momentum vectors of the particle and antiparticle is performed. The phenomenon of fermion production is significant only at large expansion, corresponding to conditions in the early Universe. When the expansion factor vanishes we recover the Minkowski limit, where this process is forbidden by simultaneous energy-momentum conservation.

Gravitationally induced quantum transitions

In this letter, we calculate the probability of resonantly induced transitions in quantum states due to time-dependent gravitational perturbations. Contrary to common wisdom, the probability of inducing transitions is not infinitesimally small. We consider a system of ultra-cold neutrons (UCN), which are organized according to the energy levels of the Schr\"odinger equation in the presence of the Earth's gravitational field. Transitions between energy levels are induced by an oscillating driving force of frequency $\omega$. The driving force is created by oscillating a macroscopic mass in the neighbourhood of the system of neutrons. The neutrons decay in 880 seconds, while the probability of transitions increases as $t^2$; hence the optimal strategy is to drive the system for two lifetimes. The transition amplitude is then of the order of $1.06\times 10^{-5}$, so with a million ultra-cold neutrons one should be able to observe transitions.
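The stated optimum of driving for two lifetimes follows from maximizing the product of the $t^2$ transition growth and the $e^{-t/\tau}$ survival factor, since $\frac{d}{dt}\,t^2 e^{-t/\tau} = 0$ gives $t = 2\tau$. A quick numerical check with $\tau = 880$ s from the abstract:

```python
import numpy as np

TAU = 880.0                               # free-neutron lifetime in seconds
t = np.linspace(1.0, 8.0 * TAU, 200_000)
signal = t**2 * np.exp(-t / TAU)          # t^2 transition growth times survival
t_opt = t[np.argmax(signal)]              # analytic optimum is t = 2 * TAU
```

The grid maximum lands at 1760 s, i.e. two neutron lifetimes, as claimed.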

Particle creation rate for general black holes [Cross-Listing]

We present the particle creation probability rate around a general black hole as an outcome of quantum fluctuations. Using the uncertainty principle for these fluctuations, we derive a new ultraviolet frequency cutoff for the radiation spectrum of a dynamical black hole. Using this frequency cutoff, we define the probability creation rate function for such black holes. We consider a dynamical Vaidya model and calculate the probability creation rate for this case when its horizon is in a slowly evolving phase. Our results show that one can expect the usual Hawking radiation emission process in the case of a dynamical black hole when it has a slowly evolving horizon. Moreover, calculating the probability rate for a dynamical black hole gives a measure of when Hawking radiation can be killed off by an incoming flux of matter or radiation. Our result strongly suggests that we have to revise the Hawking radiation expectation for primordial black holes that have grown substantially since they were created in the early universe.
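For background scale (this is the standard Hawking formula, not the paper's modified cutoff), the temperature $T = \hbar c^3/(8\pi G M k_B)$ shows why emission is negligible for stellar-mass holes but significant for light primordial ones; the $10^{12}$ kg primordial mass below is an illustrative choice:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
HBAR = 1.055e-34     # reduced Planck constant, J s
K_B = 1.381e-23      # Boltzmann constant, J/K
M_SUN = 1.989e30     # solar mass, kg

def hawking_temperature(mass_kg):
    """Standard Hawking temperature T = hbar c^3 / (8 pi G M k_B)."""
    return HBAR * C**3 / (8.0 * math.pi * G * mass_kg * K_B)

t_solar = hawking_temperature(M_SUN)       # ~6e-8 K: utterly negligible
t_primordial = hawking_temperature(1e12)   # illustrative 1e12 kg primordial hole
```

The eighteen-orders-of-magnitude gap between the two temperatures is why any revision of the emission spectrum matters primarily for primordial black holes.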

Particle creation rate for dynamical black holes [Replacement]

We present the particle creation probability rate around a general black hole as an outcome of quantum fluctuations. Using the uncertainty principle for these fluctuations, we derive a new ultraviolet frequency cutoff for the radiation spectrum of a dynamical black hole. Using this frequency cutoff, we define the probability creation rate function for such black holes. We consider a dynamical Vaidya model and calculate the probability creation rate for this case when its horizon is in a slowly evolving phase. Our results show that one can expect the usual Hawking radiation emission process in the case of a dynamical black hole when it has a slowly evolving horizon. Moreover, calculating the probability rate for a dynamical black hole gives a measure of when Hawking radiation can be killed off by an incoming flux of matter or radiation. Our result strongly suggests that we have to revise the Hawking radiation expectation for primordial black holes that have grown substantially since they were created in the early universe. We also infer that this frequency cutoff can serve as a parameter that tracks primordial black hole growth at the moment of emission.

Horizon of quantum black holes in various dimensions [Replacement]

We adapt the horizon wave-function formalism to describe massive static spherically symmetric sources in a general $(1+D)$-dimensional space-time, for $D>3$ and including the $D=1$ case. We find that the probability $P_{\rm BH}$ that such objects are (quantum) black holes behaves similarly to the probability in the $(3+1)$ framework for $D>3$. In fact, for $D\ge 3$, the probability increases towards unity as the mass grows above the relevant $D$-dimensional Planck scale $m_D$. At fixed mass, however, $P_{\rm BH}$ decreases with increasing $D$, so that a particle with mass $m\simeq m_D$ has only about a $10\%$ probability of being a black hole in $D=5$, and a smaller one for larger $D$. This result has a potentially strong impact on estimates of black hole production in colliders. In contrast, for $D=1$, we find the probability is comparatively larger for smaller masses, but $P_{\rm BH} < 0.5$, suggesting that such lower-dimensional black holes are purely quantum, not classical, objects. This result is consistent with recent observations that sub-Planckian black holes are governed by an effective two-dimensional gravitational theory. Lastly, we derive Generalised Uncertainty Principle relations for the black holes under consideration and find a minimum length corresponding to a characteristic energy scale of the order of the fundamental gravitational mass $m_D$ in $D>3$. For $D=1$ we instead find that the uncertainty due to the horizon fluctuations has the same form as the usual Heisenberg contribution, and therefore no fundamental scale exists.

Quasar Probabilities and Redshifts from WISE mid-IR through GALEX UV Photometry

Extreme deconvolution (XD) of broad-band photometric data can both separate stars from quasars and generate probability density functions for quasar redshifts, while incorporating flux uncertainties and missing data. Mid-infrared photometric colors are now widely used to identify hot dust intrinsic to quasars, and the release of all-sky WISE data has led to a dramatic increase in the number of IR-selected quasars. Using forced photometry on public WISE data at the locations of SDSS point sources, we incorporate this all-sky data into the training of the XDQSOz models originally developed to select quasars from optical photometry. The combination of WISE and SDSS information is far more powerful than SDSS alone, particularly at $z>2$. The use of SDSS$+$WISE photometry is comparable to the use of SDSS$+$ultraviolet$+$near-IR data. We release a new public catalogue of 5,537,436 (total; 3,874,639 weighted by probability) potential quasars with probability $P_{\textrm{QSO}} > 0.2$. The catalogue includes redshift probabilities for all objects. We also release an updated version of the publicly available set of codes to calculate quasar and redshift probabilities for various combinations of data. Finally, we demonstrate that this method of selecting quasars using WISE data is both more complete and more efficient than simple WISE color cuts, especially at high redshift. Our fits verify that above $z \sim 3$ WISE colors become bluer than the standard cuts applied to select quasars. Currently, the analysis is limited to quasars with optical counterparts, and thus cannot be used to find the highly obscured quasars that WISE color cuts identify in significant numbers.
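The classification step behind such probabilities can be sketched with a toy two-class density model: each class (quasar, star) gets a density in color space, here a one-dimensional Gaussian mixture, and the quasar probability follows from Bayes' rule. This is a minimal stand-in for the actual XDQSOz machinery; the mixture parameters, prior, and "colors" below are invented for illustration:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class GaussianMixture1D:
    weights: np.ndarray   # mixture weights, summing to 1
    means: np.ndarray
    sigmas: np.ndarray

    def pdf(self, x):
        comps = self.weights / (np.sqrt(2 * np.pi) * self.sigmas) * \
                np.exp(-0.5 * ((x - self.means) / self.sigmas) ** 2)
        return comps.sum()

# Invented 1-D "color" densities for the two classes.
quasars = GaussianMixture1D(np.array([0.6, 0.4]), np.array([0.2, 1.0]), np.array([0.3, 0.5]))
stars = GaussianMixture1D(np.array([1.0]), np.array([1.5]), np.array([0.4]))
prior_qso = 0.1  # invented prior fraction of quasars among point sources

def p_qso(color):
    """Posterior quasar probability from Bayes' rule over the two class densities."""
    num = prior_qso * quasars.pdf(color)
    return num / (num + (1 - prior_qso) * stars.pdf(color))

print(p_qso(0.2), p_qso(1.5))  # high in the quasar locus, low in the stellar locus
```

The full XD approach additionally fits these class densities from data while deconvolving the photometric errors, and extends the same logic to redshift probability densities.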

Annihilation of the scalar pair into a photon on de Sitter spacetime

The annihilation of massive scalar particles into one photon in the de Sitter expanding universe is studied using perturbation theory. The amplitude and probability corresponding to this process are computed using the exact solutions of the Klein-Gordon and Maxwell equations in de Sitter geometry. Our results show that the total number of created photons depends on the ratio of the particle mass to the expansion factor. We perform a graphical study of the total number of photons in terms of this mass/expansion-factor parameter, showing that the effect is important only in strong gravitational fields. We also find that the probability for this process vanishes in the Minkowski limit.

Annihilation of the scalar pair into a photon on de Sitter spacetime [Replacement]

The annihilation of massive scalar particles into one photon in the de Sitter expanding universe is studied using perturbation theory. The amplitude and probability corresponding to this process are computed using the exact solutions of the Klein-Gordon and Maxwell equations in de Sitter geometry. Our results show that the total probability of photon emission depends on the ratio of the particle mass to the expansion factor. We perform a graphical study of the total probability in terms of this mass/expansion-factor parameter, showing that the effect is significant only in strong gravitational fields. We also find that the total probability for this process vanishes in the Minkowski limit.

Quantum corrected non-thermal radiation spectrum from the tunnelling mechanism [Replacement]

The tunnelling mechanism is today a popular and widely used method of describing Hawking radiation. However, in relation to black hole (BH) emission, this mechanism is mostly used to obtain the Hawking temperature by comparing the probability of emission of an outgoing particle with the Boltzmann factor. Banerjee and Majhi reformulated the tunnelling framework, deriving a black-body spectrum through the density matrix for the outgoing modes, for both the Bose-Einstein and the Fermi-Dirac distributions. In contrast, Parikh and Wilczek introduced a correction term by performing an exact calculation of the action for a tunnelling spherically symmetric particle; as a result, the probability of emission of an outgoing particle corresponds to a non-strictly thermal radiation spectrum. Recently, one of us (C. Corda) introduced a BH effective state and obtained a non-strictly black-body spectrum from the tunnelling mechanism corresponding to the probability of emission found by Parikh and Wilczek. The present work introduces the quantum corrected effective temperature, and the corresponding quantum corrected effective metric is written using Hawking's periodicity arguments. We thus obtain further corrections to the non-strictly thermal BH radiation spectrum, as the final distributions take into account both the BH dynamical geometry during the emission of the particle and the quantum corrections to the semiclassical Hawking temperature.
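The non-strictly thermal character of the Parikh-Wilczek result can be checked numerically. For a Schwarzschild BH in Planck units their emission probability scales as $\Gamma \sim \exp[-8\pi\omega M(1-\omega/2M)]$, which differs from the Boltzmann factor $\exp(-8\pi\omega M)$ at Hawking temperature $T_H = 1/(8\pi M)$ by the back-reaction term $\omega/2M$. A quick comparison with toy values (the pre-factor is dropped):

```python
import math

def gamma_pw(M, omega):
    """Parikh-Wilczek emission probability (up to the pre-factor), Planck units."""
    return math.exp(-8 * math.pi * omega * M * (1 - omega / (2 * M)))

def gamma_boltzmann(M, omega):
    """Strictly thermal Boltzmann factor at the Hawking temperature T_H = 1/(8 pi M)."""
    return math.exp(-8 * math.pi * omega * M)

M = 10.0                          # toy black hole mass
omega = 1 / (8 * math.pi * M)     # quantum with energy equal to T_H
print(gamma_pw(M, omega) / gamma_boltzmann(M, omega))  # > 1: back-reaction enhances emission
```

For $\omega = T_H$ the thermal exponent is exactly $-1$, so the non-thermal correction, while small for a large BH, is a correction to an order-unity exponent rather than a negligible tail effect.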

Circumbinary planets - why they are so likely to transit

Transits on single stars are rare: the probability rarely exceeds a few per cent, and it rapidly approaches zero with increasing orbital period. Transit surveys have therefore been predominantly limited to the inner parts of exoplanetary systems. Here we demonstrate how circumbinary planets allow us to beat these unfavourable odds. By incorporating the geometry and the three-body dynamics of circumbinary systems, we analytically derive the probability of transitability, a configuration in which the binary and planet orbits overlap on the sky. We then show that this is equivalent to the transit probability, but at an unspecified point in time. This probability, at its minimum, is always higher than in the single-star case, and it increases with mutual inclination. By applying our analytical development to eclipsing binaries, we deduce that transits are highly probable and, in some cases, guaranteed. For example, a circumbinary planet revolving at 1 AU around a 0.3 AU eclipsing binary is certain to eventually transit - a 100% probability - if its mutual inclination is greater than 0.6 deg. We show that the transit probability is generally only a weak function of the planet's orbital period; circumbinary planets may therefore be used as practical tools for probing the outer regions of exoplanetary systems to search for and detect warm to cold transiting planets.
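The unfavourable single-star odds follow from standard geometry: for a circular orbit the transit probability is roughly $p \approx R_*/a$, which is a few per cent only for close-in planets and falls off as $a^{-1}$ (hence $P^{-2/3}$ with period). A quick check of that scaling (the standard single-star formula, not the paper's circumbinary derivation):

```python
# Geometric transit probability p ~ R_star / a for a circular orbit (standard result).
R_SUN_AU = 0.00465  # solar radius in AU

def transit_probability(a_au, r_star_au=R_SUN_AU):
    """Probability that a randomly oriented circular orbit of radius a transits the star."""
    return r_star_au / a_au

for a in (0.05, 0.1, 1.0, 5.0):
    print(a, transit_probability(a))  # ~9% at 0.05 AU, under 0.5% at 1 AU
```

At 1 AU around a Sun-like star the probability is already below half a per cent, which is why the circumbinary geometry, with its near-guaranteed transitability, is so valuable for the outer regions of systems.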

On the abundance of extraterrestrial life after the Kepler mission

The data recently accumulated by the Kepler mission have demonstrated that small planets are quite common and that a significant fraction of all stars may have an Earth-like planet within their Habitable Zone. These results are combined with a Drake-equation formalism to derive the space density of biotic planets as a function of the relatively modest uncertainty in the astronomical data and of the (yet unknown) probability for the evolution of biotic life, Fb. I suggest that Fb may be estimated by future spectral observations of exoplanet biomarkers. If Fb is in the range 0.001 -- 1, then a biotic planet may be expected within 10 -- 100 light years from Earth. Extending the biotic results to advanced life, I derive expressions for the distance to putative civilizations in terms of two additional Drake parameters: the probability for evolution of a civilization, Fc, and its average longevity. For instance, assuming optimistic probability values (Fb ~ Fc ~ 1) and a broadcasting longevity of a few thousand years, the likely distance to the nearest civilizations detectable by SETI is of the order of a few thousand light years. The probability of detecting intelligent signals with present and future radio telescopes is calculated as a function of the Drake parameters. Finally, I describe how the detection of intelligent signals would constrain the Drake parameters.
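The order-of-magnitude distance estimate can be reproduced with a back-of-the-envelope Drake-style calculation: given a space density $n$ of currently broadcasting civilizations, the expected distance to the nearest one is $d \approx (3/4\pi n)^{1/3}$. The parameter values below are illustrative placeholders, not the paper's fitted numbers:

```python
import math

def nearest_distance_ly(n_stars_per_ly3, f_hab, Fb, Fc, L_years, star_age_years=5e9):
    """Expected distance (light years) to the nearest broadcasting civilization.
    Density n = stellar density * fraction with habitable planets * P(biotic)
                * P(civilization) * duty cycle (L / stellar age)."""
    n = n_stars_per_ly3 * f_hab * Fb * Fc * (L_years / star_age_years)
    return (3 / (4 * math.pi * n)) ** (1 / 3)

# Optimistic illustrative values: local stellar density ~0.004 stars/ly^3,
# 10% with a habitable-zone planet, Fb = Fc = 1, broadcasting for 3000 years.
print(nearest_distance_ly(0.004, 0.1, 1.0, 1.0, 3000))
```

With these optimistic inputs the nearest broadcaster lands of order a thousand light years away, consistent with the scale quoted in the abstract; pessimistic Fb or Fc pushes the distance up by the cube root of the suppression.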

BGLS: A Bayesian formalism for the generalised Lomb-Scargle periodogram

Context. Frequency analyses are very important in astronomy today, not least in the ever-growing field of exoplanets, where short-period signals in stellar radial velocity data are investigated. Periodograms are the main (and powerful) tools for this purpose. However, recovering the correct frequencies and assessing the probability of each frequency is not straightforward. Aims. We provide a formalism that is easy to implement in code, describing a Bayesian periodogram that includes weights and a constant offset in the data. The relative probability between peaks can be easily calculated with this formalism. We discuss the differences and agreements between the various periodogram formalisms with simulated examples. Methods. We used Bayesian probability theory to describe the probability that a full sine function (including weights derived from the errors on the data values and a constant offset) with a specific frequency is present in the data. Results. From the expression for our Bayesian generalised Lomb-Scargle periodogram (BGLS), we can easily recover the expression for the non-Bayesian version. In the simulated examples we show that this new formalism recovers the underlying periods better than previous versions. A Python-based code is available for the community.
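The core of such a periodogram is, at each trial frequency, a weighted least-squares fit of a sinusoid plus a constant offset, with relative (log-)probabilities between frequencies following from the residuals. A minimal numpy sketch under flat priors, ignoring the determinant terms that the full BGLS marginalization produces (this is not the released BGLS code):

```python
import numpy as np

def sine_fit_logprob(t, y, err, freqs):
    """For each trial frequency f, fit y = A cos(2 pi f t) + B sin(2 pi f t) + C
    by weighted least squares and return -chi^2/2 as a crude log-probability."""
    w = 1.0 / err
    logp = np.empty(len(freqs))
    for i, f in enumerate(freqs):
        X = np.column_stack([np.cos(2 * np.pi * f * t),
                             np.sin(2 * np.pi * f * t),
                             np.ones_like(t)])
        coef, *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)
        logp[i] = -0.5 * np.sum(((y - X @ coef) * w) ** 2)
    return logp

# Simulated radial-velocity-like data: a 7.5-time-unit period plus noise.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 100, 80))
err = np.full_like(t, 0.5)
y = 2.0 * np.sin(2 * np.pi * t / 7.5) + 1.0 + rng.normal(0, err)
freqs = np.linspace(0.01, 0.5, 2000)
best = freqs[np.argmax(sine_fit_logprob(t, y, err, freqs))]
print(best)  # near the injected frequency 1/7.5 ~ 0.133
```

Differences of `logp` between two peaks give the relative peak probabilities the abstract refers to; the Bayesian treatment adds the amplitude-marginalization factors omitted here.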

Varying constants quantum cosmology

We discuss minisuperspace models within the framework of varying-physical-constants theories including a $\Lambda$-term. In particular, we consider the varying speed of light (VSL) theory and the varying gravitational constant (VG) theory, using the specific ans\"atze for the variability of constants: $c(a) = c_0 a^n$ and $G(a)=G_0 a^q$. We find that most of the varying-$c$ and varying-$G$ minisuperspace potentials are of the tunneling type, which allows us to use the WKB approximation of quantum mechanics. Using this method, we show that the probability of tunneling of the universe "from nothing" ($a=0$) to a Friedmann geometry with scale factor $a_t$ is large for growing-$c$ models and strongly suppressed for diminishing-$c$ models. As for varying $G$, the probability of tunneling is large for diminishing $G$, while it is small for increasing $G$. In general, both varying $c$ and varying $G$ change the probability of tunneling in comparison to universe models with the standard matter content (cosmological term, dust, radiation).
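The WKB step in such calculations reduces to a barrier-penetration integral between the turning points, here taken in the schematic form $P \approx \exp[-2\int_0^{a_t}\sqrt{V(a)}\,da]$ with units absorbed into $V$. The potential below is a toy barrier standing in for the paper's varying-constants minisuperspace potentials, just to illustrate the numerics:

```python
import numpy as np

def wkb_tunneling_prob(V, a_t, n=10_000):
    """WKB estimate exp(-2 * integral_0^{a_t} sqrt(V(a)) da) for tunneling
    'from nothing' (a = 0) through a barrier to the turning point a_t."""
    a = np.linspace(0.0, a_t, n)
    integrand = np.sqrt(np.clip(V(a), 0.0, None))
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(a))
    return np.exp(-2.0 * integral)

# Toy barrier V(a) = a^2 (1 - (a/a_t)^2), vanishing at a = 0 and at a = a_t.
a_t = 3.0
print(wkb_tunneling_prob(lambda a: a**2 * (1 - (a / a_t)**2), a_t))
```

For this toy barrier the integral evaluates to 3, giving $P \approx e^{-6}$; in the varying-constants models the $c(a)$ and $G(a)$ ans\"atze reshape $V(a)$ and hence raise or suppress this exponent.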

Precise model of Hawking radiation from the tunnelling mechanism [Replacement]

We recently improved the famous result of Parikh and Wilczek, who found a probability of emission of Hawking radiation compatible with a non-strictly thermal spectrum, by showing that this probability of emission is really associated with two non-strictly thermal distributions, for bosons and for fermions. Here we finalize the model by finding the correct value of the pre-factor in the Parikh and Wilczek probability of emission. In fact, their expression carries an "of order" sign instead of an equality. In general, in this kind of leading-order tunnelling calculation, the exponent arises from the classical action and the pre-factor is a correction of order the Planck constant. But in the case of emission of Hawking quanta, the variation of the Bekenstein-Hawking entropy is of order 1 for an emitted particle with energy of order the Hawking temperature. As a consequence, the exponent in the Parikh and Wilczek probability of emission is of order unity, and one may ask what the real significance of that scaling is if the pre-factor is unknown. Here we solve the problem by assuming the unitarity of black hole (BH) quantum evaporation and considering the natural correspondence between Hawking radiation and the quasi-normal modes (QNMs) of excited BHs, in a "Bohr-like model" that we recently discussed in a series of papers. In those papers, QNMs are interpreted as natural BH quantum levels (the "electron states" of the "Bohr-like model"). Here we find the intriguing result that, although in general it is well approximated by 1, the pre-factor of the Parikh and Wilczek probability of emission depends on the BH quantum level n. We also write down an elegant expression of the probability of emission in terms of the BH quantum levels.
