
# Posts Tagged lsst

## Recent Postings from lsst

### Optical selection of quasars: SDSS and LSST

Over the last decade, quasar sample sizes have increased from several thousand to several hundred thousand, thanks mostly to SDSS imaging and spectroscopic surveys. LSST, the next-generation optical imaging survey, will provide hundreds of detections per object for a sample of more than ten million quasars with redshifts of up to about seven. We briefly review optical quasar selection techniques, with emphasis on methods based on colors, variability properties and astrometric behavior.
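The color-based selection reviewed above can be illustrated with a toy ultraviolet-excess cut. The 0.6 mag threshold and the sample objects below are illustrative placeholders, not the SDSS production criteria:

```python
def is_uvx_candidate(u, g, threshold=0.6):
    """Flag objects with a strong ultraviolet excess (u - g below the
    threshold) as low-redshift quasar candidates."""
    return (u - g) < threshold

# Unobscured quasars are bluer in u - g than main-sequence stars, so even
# a single colour cut separates the bulk of the two populations.
objects = [
    {"name": "qso-like", "u": 18.9, "g": 18.5},   # u - g = 0.4
    {"name": "star-like", "u": 20.1, "g": 18.9},  # u - g = 1.2
]
candidates = [o["name"] for o in objects if is_uvx_candidate(o["u"], o["g"])]
```

Real pipelines combine several colors with variability statistics and astrometry, but the core idea is the same: quasars occupy a different region of color space than the stellar locus.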

### Cosmic shear without shape noise

We describe a new method for reducing the shape noise in weak lensing measurements by an order of magnitude. Our method relies on spectroscopic measurements of disk galaxy rotation and makes use of the Tully-Fisher (TF) relation in order to control for the intrinsic orientations of galaxy disks. For this new proposed experiment, the shape noise ceases to be an important source of statistical error. Using CosmoLike, a new cosmological analysis software package, we simulate likelihood analyses for two spectroscopic weak lensing survey concepts (roughly similar in scale to Dark Energy Task Force Stage III and Stage IV missions) and compare their constraining power to a cosmic shear survey from the Large Synoptic Survey Telescope (LSST). Our forecasts in seven-dimensional cosmological parameter space include statistical uncertainties resulting from shape noise, cosmic variance, halo sample variance, and higher-order moments of the density field. We marginalize over systematic uncertainties arising from photometric redshift errors and shear calibration biases considering both optimistic and conservative assumptions about LSST systematic errors. We find that even the TF-Stage III is highly competitive with the optimistic LSST scenario, while evading the most important sources of theoretical and observational systematic error inherent in traditional weak lensing techniques. Furthermore, the TF technique enables a narrow-bin cosmic shear tomography approach to tightly constrain time-dependent signatures in the dark energy phenomenon.
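The order-of-magnitude gain claimed above follows from the basic shot-noise scaling of a mean-shear estimate. A minimal sketch, with illustrative dispersion values rather than the paper's numbers:

```python
import math

def shear_error(sigma_int, n_gal):
    """Statistical error on a mean shear estimate from n_gal galaxies
    with intrinsic ellipticity dispersion sigma_int."""
    return sigma_int / math.sqrt(n_gal)

# Illustrative dispersions: ~0.25 for conventional shape measurement,
# ~0.025 when Tully-Fisher velocities predict each disk's inclination
# (the order-of-magnitude reduction described in the abstract).
conventional = shear_error(0.25, 1_000_000)
tully_fisher = shear_error(0.025, 1_000_000)
gain = conventional / tully_fisher  # tenfold smaller error per galaxy count
```

Equivalently, the TF survey needs roughly 100 times fewer galaxies for the same statistical precision, which is why a Stage III spectroscopic sample can compete with a much larger imaging survey.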

### Strong Lens Time Delay Challenge: I. Experimental Design

The time delays between point-like images in gravitational lens systems can be used to measure cosmological parameters as well as probe the dark matter (sub-)structure within the lens galaxy. The number of lenses with measured time delays is growing rapidly as a result of some dedicated efforts; the upcoming Large Synoptic Survey Telescope (LSST) will monitor ~1000 lens systems consisting of a foreground elliptical galaxy producing multiple images of a background quasar. In an effort to assess the present capabilities of the community to accurately measure the time delays in strong gravitational lens systems, and to provide input to dedicated monitoring campaigns and future LSST cosmology feasibility studies, we invite the community to take part in a "Time Delay Challenge" (TDC). The challenge is organized as a set of "ladders", each containing a group of simulated datasets to be analyzed blindly by participating independent analysis teams. Each rung on a ladder consists of a set of realistic mock observed lensed quasar light curves, with the rungs’ datasets increasing in complexity and realism to incorporate a variety of anticipated physical and experimental effects. The initial challenge described here has two ladders, TDC0 and TDC1. TDC0 has a small number of datasets, and is designed to be used as a practice set by the participating teams as they set up their analysis pipelines. The (non-mandatory) deadline for completion of TDC0 will be the TDC1 launch date, December 1, 2013. TDC1 will consist of some 1000 light curves, a sample designed to provide the statistical power to make meaningful statements about the sub-percent accuracy that will be required to provide competitive Dark Energy constraints in the LSST era.
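A minimal sketch of the measurement the challenge targets: grid-searching the shift that best aligns two lensed-image light curves. Real TDC entries must cope with noise, seasonal gaps, and microlensing; this noise-free toy only shows the principle, and all the numbers below are made up:

```python
import math

def estimate_delay(t, flux_a, flux_b, trial_delays):
    """Grid-search the delay that best maps curve B onto curve A,
    comparing linearly interpolated fluxes on the overlapping epochs."""
    def interp(ts, ys, x):
        # Linear interpolation; None outside the sampled range.
        for i in range(len(ts) - 1):
            if ts[i] <= x <= ts[i + 1]:
                w = (x - ts[i]) / (ts[i + 1] - ts[i])
                return ys[i] * (1 - w) + ys[i + 1] * w
        return None
    best_delay, best_cost = None, float("inf")
    for d in trial_delays:
        pairs = [(fa, interp(t, flux_b, ti + d)) for ti, fa in zip(t, flux_a)]
        pairs = [(a, b) for a, b in pairs if b is not None]
        cost = sum((a - b) ** 2 for a, b in pairs) / len(pairs)
        if cost < best_cost:
            best_delay, best_cost = d, cost
    return best_delay

# Noise-free toy: image B repeats image A's light curve 5 days later.
t = [0.5 * i for i in range(200)]  # epochs in days
def signal(x):
    return math.sin(0.3 * x) + 0.2 * math.sin(1.1 * x)
flux_a = [signal(ti) for ti in t]
flux_b = [signal(ti - 5.0) for ti in t]
delay = estimate_delay(t, flux_a, flux_b, [0.5 * k for k in range(21)])  # -> 5.0
```

The challenge is precisely that this simple cross-matching degrades badly once realistic systematics are folded in, which is what the ladder rungs are designed to probe.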

### Growth of Cosmic Structure: Probing Dark Energy Beyond Expansion [Replacement]

The quantity and quality of cosmic structure observations have greatly accelerated in recent years. Further leaps forward will be facilitated by imminent projects, which will enable us to map the evolution of dark and baryonic matter density fluctuations over cosmic history. The way that these fluctuations vary over space and time is sensitive to the nature of dark matter and dark energy. Dark energy and gravity both affect how rapidly structure grows; the greater the acceleration, the more suppressed the growth of structure, while the greater the gravity, the more enhanced the growth. While distance measurements also constrain dark energy, the comparison of growth and distance data tests whether General Relativity describes the laws of physics accurately on large scales. Modified gravity models are able to reproduce the distance measurements but at the cost of altering the growth of structure (these signatures are described in more detail in the accompanying paper on Novel Probes of Gravity and Dark Energy). Upcoming surveys will exploit these differences to determine whether the acceleration of the Universe is due to dark energy or to modified gravity. To realize this potential, both wide-field imaging and spectroscopic redshift surveys play crucial roles. Projects including DES, eBOSS, DESI, PFS, LSST, Euclid, and WFIRST are in line to map a volume of the Universe of more than 1000 cubic billion light years. These will map the cosmic structure growth rate to 1% in the redshift range 0<z<2, over the last 3/4 of the age of the Universe.
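The growth-suppression argument can be made quantitative with the widely used approximation f ~ Omega_m(a)^0.55 for the linear growth rate (an approximation for flat LCDM, not the machinery of this paper):

```python
import math

def growth_factor(omega_m, a_start=1e-3, a_end=1.0, steps=10000):
    """Growth factor D(a_end)/D(a_start) from dlnD/dlna = f(a), using the
    standard approximation f ~ Omega_m(a)^0.55 in flat LCDM."""
    omega_l = 1.0 - omega_m
    ln_d, a = 0.0, a_start
    da = (a_end - a_start) / steps
    for _ in range(steps):
        om_a = omega_m / (omega_m + omega_l * a ** 3)  # Omega_m(a)
        ln_d += om_a ** 0.55 * (da / a)                # dlnD = f dlna
        a += da
    return math.exp(ln_d)

# Acceleration suppresses growth: in a pure matter universe D grows in
# proportion to a, while in LCDM the late-time growth falls behind.
suppression = growth_factor(0.3) / growth_factor(1.0)  # < 1: growth is suppressed
```

It is this few-tens-of-percent suppression, measured as a function of redshift, that the listed surveys will pin down to the 1% level.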

### Growth of Cosmic Structure: Probing Dark Energy Beyond Expansion

The quantity and quality of cosmic structure observations have greatly accelerated in recent years. Further leaps forward will be facilitated by imminent projects, which will enable us to map the evolution of dark and baryonic matter density fluctuations over cosmic history. The way that these fluctuations vary over space and time is sensitive to the nature of dark matter and dark energy. Dark energy and gravity both affect how rapidly structure grows; the greater the acceleration, the more suppressed the growth of structure, while the greater the gravity, the more enhanced the growth. While distance measurements also constrain dark energy, the comparison of growth and distance data tests whether General Relativity describes the laws of physics accurately on large scales. Modified gravity models are able to reproduce the distance measurements but at the cost of altering the growth of structure (these signatures are described in more detail in the accompanying paper on Novel Probes of Gravity and Dark Energy). Upcoming surveys will exploit these differences to determine whether the acceleration of the Universe is due to dark energy or to modified gravity. To realize this potential, both wide-field imaging and spectroscopic redshift surveys play crucial roles. Projects including DES, eBOSS, DESI, PFS, LSST, Euclid, and WFIRST are in line to map a volume of the Universe of more than 1000 cubic billion light years. These will map the cosmic structure growth rate to 1% in the redshift range 0<z<2, over the last 3/4 of the age of the Universe.

### Prospects for Detecting Gamma Rays from Annihilating Dark Matter in Dwarf Galaxies in the Era of DES and LSST [Replacement]

Among the most stringent constraints on the dark matter annihilation cross section are those derived from observations of dwarf galaxies by the Fermi Gamma-Ray Space Telescope. As current (e.g., Dark Energy Survey, DES) and future (Large Synoptic Survey Telescope, LSST) optical imaging surveys discover more of the Milky Way’s ultra-faint satellite galaxies, they may increase Fermi’s sensitivity to dark matter annihilations. In this study, we use a semi-analytic model of the Milky Way’s satellite population to predict the characteristics of the dwarfs likely to be discovered by DES and LSST, and project how these discoveries will impact Fermi’s sensitivity to dark matter. While we find that modest improvements are likely, the dwarf galaxies discovered by DES and LSST are unlikely to increase Fermi’s sensitivity by more than a factor of ~2. However, this outlook may be conservative, given that our model underpredicts the number of ultra-faint galaxies with large potential annihilation signals actually discovered in the Sloan Digital Sky Survey. Our simulation-based approach focusing on the Milky Way satellite population demographics complements existing empirically-based estimates.

### Prospects for Detecting Gamma Rays from Annihilating Dark Matter in Dwarf Galaxies in the Era of DES and LSST

Among the most stringent constraints on the dark matter annihilation cross section are those derived from observations of dwarf galaxies by the Fermi Gamma-Ray Space Telescope. As current (e.g., Dark Energy Survey, DES) and future (Large Synoptic Survey Telescope, LSST) optical imaging surveys discover more of the Milky Way’s ultra-faint satellite galaxies, they may increase Fermi’s sensitivity to dark matter annihilations. In this study, we use a semi-analytic model of the Milky Way’s satellite population to predict the characteristics of the dwarfs likely to be discovered by DES and LSST, and project how these discoveries will impact Fermi’s sensitivity to dark matter. While we find that modest improvements are likely, the dwarf galaxies discovered by DES and LSST are unlikely to increase Fermi’s sensitivity by more than a factor of ~2.

### Measuring the Thermal Sunyaev-Zel'dovich Effect Through the Cross Correlation of Planck and WMAP Maps with ROSAT Galaxy Cluster Catalogs

We measure a significant correlation between the thermal Sunyaev-Zel’dovich effect in the Planck and WMAP maps and an X-ray cluster map based on ROSAT. We use the 100, 143 and 353 GHz Planck maps and the WMAP 94 GHz map to obtain this cluster cross spectrum. We check our measurements for contamination from dusty galaxies using the cross correlations with the 217, 545 and 857 GHz maps from Planck. Our measurement yields a direct characterization of the cluster power spectrum over a wide range of angular scales that is consistent with large cosmological simulations. The amplitude of this signal depends on cosmological parameters that determine the growth of structure (\sigma_8 and \Omega_M) and scales as \sigma_8^7.4 and \Omega_M^1.9 around multipole \ell ~ 1000. We constrain \sigma_8 and \Omega_M from the cross-power spectrum to be \sigma_8 (\Omega_M/0.30)^0.26 = 0.8 +/- 0.02. Since this cross spectrum produces a tight constraint in the \sigma_8-\Omega_M plane, the errors on a \sigma_8 constraint will be mostly limited by the uncertainties from external constraints. Future cluster catalogs, like those from eROSITA and LSST, and pointed multi-wavelength observations of clusters will improve the constraining power of this cross spectrum measurement. In principle this analysis can be extended beyond \sigma_8 and \Omega_M to constrain dark energy or the sum of the neutrino masses.
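The quoted degeneracy is easy to work with directly: fixing \Omega_M reads off the corresponding \sigma_8. A small sketch of that inversion (the function name is ours; the amplitude, pivot and slope are the abstract's central values):

```python
def sigma8_from_cross_spectrum(omega_m, amplitude=0.80, pivot=0.30, slope=0.26):
    """Read sigma_8 off the quoted degeneracy
    sigma_8 * (Omega_M / 0.30)**0.26 = 0.80 at a chosen Omega_M."""
    return amplitude / (omega_m / pivot) ** slope

s8_pivot = sigma8_from_cross_spectrum(0.30)   # -> 0.80 by construction
s8_low_om = sigma8_from_cross_spectrum(0.25)  # a lower Omega_M requires a larger sigma_8
```

The shallow slope (0.26) is why the banana in the \sigma_8-\Omega_M plane is narrow: an external \Omega_M prior translates almost directly into a \sigma_8 measurement.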

### The Kepler-SEP Mission: Harvesting the South Ecliptic Pole large-amplitude variables with Kepler

As a response to the white paper call, we propose to turn Kepler to the South Ecliptic Pole (SEP) and observe thousands of large amplitude variables for years with high cadence as part of the proposed Kepler-SEP Mission. The degraded pointing stability will still allow observing these stars with reasonable (probably better than mmag) accuracy. Long-term continuous monitoring has already proved to be extremely helpful to investigate several areas of stellar astrophysics. Space-based missions opened a new window to the dynamics of pulsation in several classes of pulsating variable stars and facilitated detailed studies of eclipsing binaries. The main aim of this mission is to better understand the fascinating dynamics behind various stellar pulsational phenomena (resonances, mode coupling, chaos, mode selection) and interior physics (turbulent convection, opacities). This will also improve the applicability of these astrophysical tools for distance measurements, population and stellar evolution studies. We investigated the pragmatic details of such a mission and found a number of advantages: minimal reprogramming of the flight software, a favorable field of view, access to both galactic and LMC objects. However, the main advantage of the SEP field comes from the large sample of well classified targets, mainly through OGLE. Synergies and significant overlap (spatial, temporal and in brightness) with both ground- (OGLE, LSST) and space-based missions (GAIA, TESS) will greatly enhance the scientific value of the Kepler-SEP mission. GAIA will allow full characterization of the distance indicators. TESS will continuously monitor this field for at least one year, and together with the proposed mission provide long time series that cannot be obtained by other means. If the Kepler-SEP program is successful, there is a possibility to place one of the so-called LSST "deep-drilling" fields in this region.

### Galactic Stellar Populations in the Era of SDSS and Other Large Surveys

Studies of stellar populations, understood to mean collections of stars with common spatial, kinematic, chemical, and/or age distributions, have been reinvigorated during the last decade by the advent of large-area sky surveys such as SDSS, 2MASS, RAVE, and others. We review recent analyses of these data that, together with theoretical and modeling advances, are revolutionizing our understanding of the nature of the Milky Way, and galaxy formation and evolution in general. The formation of galaxies like the Milky Way was long thought to be a steady process leading to a smooth distribution of stars. However, the abundance of substructure in the multi-dimensional space of various observables, such as position, kinematics, and metallicity, is by now proven beyond doubt, and demonstrates the importance of mergers in the growth of galaxies. Unlike smooth models that involve simple components, the new data reviewed here clearly show many irregular structures, such as the Sagittarius dwarf tidal stream and the Virgo and Pisces overdensities in the halo, and the Monoceros stream closer to the Galactic plane. These recent developments have made it clear that the Milky Way is a complex and dynamical structure, one that is still being shaped by the merging of neighboring smaller galaxies. We also briefly discuss the next generation of wide-field sky surveys, such as SkyMapper, Pan-STARRS, Gaia and LSST, which will improve measurement precision manyfold, and comprise billions of individual stars. The ultimate goal, development of a coherent and detailed story of the assembly and evolutionary history of the Milky Way and other large spirals like it, now appears well within reach.

### Clustering Measurements of broad-line AGNs: Review and Future

Despite substantial effort, the precise physical processes that lead to the growth of super-massive black holes in the centers of galaxies are still not well understood. These phases of black hole growth are thought to be of key importance in understanding galaxy evolution. Forthcoming missions such as eROSITA, HETDEX, eBOSS, BigBOSS, LSST, and Pan-STARRS will compile by far the largest ever Active Galactic Nuclei (AGNs) catalogs which will allow us to measure the spatial distribution of AGNs in the universe with unprecedented accuracy. For the first time, AGN clustering measurements will reach a level of precision that will not only allow for an alternative approach to answering open questions in AGN/galaxy co-evolution but will open a new frontier, allowing us to precisely determine cosmological parameters. This paper reviews the large-scale clustering measurements of broad line AGNs. We summarize how clustering is measured and which constraints can be derived from AGN clustering measurements, we discuss recent developments, and we briefly describe future projects that will deliver extremely large AGN samples which will enable AGN clustering measurements of unprecedented accuracy. In order to maximize the scientific return on the research fields of AGN/galaxy evolution and cosmology, we advise that the community develop a full understanding of the systematic uncertainties which will, in contrast to today’s measurements, be the dominant source of uncertainty.

### Measuring the matter energy density and Hubble parameter from Large Scale Structure

We investigate a method to measure both the present value of the matter energy density contrast and the Hubble parameter directly from the linear growth rate obtained from the large scale structure of the Universe. With this method, one can obtain the value of the nuisance cosmological parameter $\Omega_{m,0}$ (the present value of the matter energy density contrast) to within 3% error if the growth rate measurements reach $z > 3.5$. One can also investigate the evolution of the Hubble parameter without any prior on the value of $H_0$ (its current value). In particular, the estimated Hubble parameter is insensitive to the errors on the measurement of the normalized growth rate $f \sigma_8$. However, this method requires high-$z$ ($z > 3.5$) measurements of the growth rate in order to obtain errors below 5% on $H(z)$ at $z \leq 1.2$ with redshift bins of $\Delta z = 0.2$. Thus, it will be suitable for next-generation large scale structure galaxy surveys like WFMOS and LSST.

### The Effect of Weak Lensing on Distance Estimates from Supernovae

Using a sample of 608 Type Ia supernovae from the SDSS-II and BOSS surveys, combined with a sample of foreground galaxies from SDSS-II, we estimate the weak lensing convergence for each supernova line-of-sight. We find that the correlation between this measurement and the Hubble residuals is consistent with the prediction from lensing (at a significance of 1.7 sigma). Strong correlations are also found between the residuals and supernova nuisance parameters after a linear correction is applied. When these other correlations are taken into account, the lensing signal is detected at 1.4 sigma. We show for the first time that distance estimates from supernovae can be improved when lensing is incorporated by including a new parameter in the SALT2 methodology for determining distance moduli. The recovered value of the new parameter is consistent with the lensing prediction. Using WMAP7, HST and BAO data, we find the best-fit value of the new lensing parameter and show that the central values and uncertainties on Omega_m and w are unaffected. The lensing of supernovae, while only seen at marginal significance in this low redshift sample, will be of vital importance for the next generation of surveys, such as DES and LSST, which will be systematics dominated.

### An r-Process Kilonova Associated with the Short-Hard GRB 130603B [Replacement]

We present ground-based optical and Hubble Space Telescope optical and near-IR observations of the short-hard GRB 130603B at z=0.356, which demonstrate the presence of excess near-IR emission matching the expected brightness and color of an r-process powered transient (a "kilonova"). The early afterglow fades rapidly with alpha<-2.6 at t~8-32 hr post-burst and has a spectral index of beta=-1.5 (F_nu ~ t^alpha nu^beta), leading to an expected near-IR brightness at the time of the first HST observation of m(F160W)>29.3 AB mag. Instead, the detected source has m(F160W)=25.8+/-0.2 AB mag, corresponding to a rest-frame absolute magnitude of M(J)=-15.2 mag. The upper limit in the HST optical observations is m(F606W)>27.7 AB mag (3-sigma), indicating an unusually red color of V-H>1.9 mag. Comparing the observed near-IR luminosity to theoretical models of kilonovae produced by ejecta from the merger of an NS-NS or NS-BH binary, we infer an ejecta mass of M_ej~0.03-0.08 Msun for v_ej=0.1-0.3c. The inferred mass matches the expectations from numerical merger simulations. The presence of a kilonova provides the strongest evidence to date that short GRBs are produced by compact object mergers, and provides initial insight on the ejected mass and the primary role that compact object mergers may play in the r-process. Equally important, it demonstrates that gravitational wave sources detected by Advanced LIGO/Virgo will be accompanied by optical/near-IR counterparts with unusually red colors, detectable by existing and upcoming large wide-field facilities (e.g., Pan-STARRS, DECam, Subaru, LSST).
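The kilonova case rests on extrapolating the power-law afterglow to the HST epoch and showing the detected source sits well above that track. A sketch of the arithmetic: the reference magnitude and epochs below are illustrative placeholders, and only the decline index alpha = -2.6 comes from the abstract.

```python
import math

def extrapolate_mag(m_ref, t_ref, t, alpha):
    """Magnitude at time t for a power-law afterglow F_nu ~ t^alpha,
    anchored to a magnitude m_ref measured at time t_ref."""
    return m_ref - 2.5 * alpha * math.log10(t / t_ref)

# Illustrative anchor (not the paper's photometry): a source at 23.0 mag
# one day post-burst, fading with the abstract's alpha = -2.6.
m_afterglow_9d = extrapolate_mag(23.0, 1.0, 9.0, -2.6)
# A detection at 25.8 mag would then sit several magnitudes above
# (i.e. brighter than) the extrapolated afterglow, demanding an extra
# emission component such as a kilonova.
brightness_excess = m_afterglow_9d - 25.8
```

Steep decline indices make the conclusion robust: each factor of ten in time costs 2.5 x 2.6 = 6.5 mag, so the afterglow alone cannot account for the late-time near-IR flux.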

### Smoking Gun or Smoldering Embers? A Possible r-process Kilonova Associated with the Short-Hard GRB 130603B

We present Hubble Space Telescope optical and near-IR observations of the short-hard GRB 130603B (z=0.356) obtained 9.4 days post-burst. At the position of the burst we detect a red point source with m(F160W)=25.8+/-0.2 AB mag and m(F606W)>27.5 AB mag (3-sigma), corresponding to rest-frame absolute magnitudes of M_J ~ -15.2 mag and M_B>-13.5 mag. A comparison to the early optical afterglow emission requires a decline rate of alpha_opt<-1.6 (F_nu ~ t^alpha), consistent with the observed X-ray decline at about 1 hr to about 1 day. The observed red color of V-H>1.7 mag is also potentially consistent with the red optical colors of the afterglow at early time (F_nu ~ nu^-1.6 in gri). Thus, an afterglow interpretation is feasible. Alternatively, the red color and faint absolute magnitude are due to emission from an r-process powered transient ("kilonova") produced by ejecta from the merger of an NS-NS or NS-BH binary, the most likely progenitors of short GRBs. In this scenario, the observed brightness implies an outflow with M_ej ~ 0.01 Msun and v_ej ~ 0.1c, in good agreement with the results of numerical merger simulations for roughly equal mass binary constituents (i.e., NS-NS). If true, the kilonova interpretation provides the strongest evidence to date that short GRBs are produced by compact object mergers, and places initial constraints on the ejected mass. Equally important, it demonstrates that gravitational wave sources detected by Advanced LIGO/Virgo will be accompanied by optical/near-IR counterparts with unusually red colors, detectable by existing and upcoming large wide-field facilities (e.g., Pan-STARRS, DECam, Subaru, LSST).

### Dark energy with gravitational lens time delays

Strong lensing gravitational time delays are a powerful and cost-effective probe of dark energy. Recent studies have shown that a single lens can provide a distance measurement with 6-7% accuracy (including random and systematic uncertainties), provided sufficient data are available to determine the time delay and reconstruct the gravitational potential of the deflector. Gravitational time delays are a low-redshift (z~0-2) probe and thus allow one to break degeneracies in the interpretation of data from higher-redshift probes like the cosmic microwave background in terms of the dark energy equation of state. Current studies are limited by the size of the sample of known lensed quasars, but this situation is about to change. Even in this decade, wide field imaging surveys are likely to discover thousands of lensed quasars, enabling the targeted study of ~100 of these systems and resulting in substantial gains in the dark energy figure of merit. In the next decade, a further order of magnitude improvement will be possible with the 10000 systems expected to be detected and measured with LSST and Euclid. To fully exploit these gains, we identify three priorities. First, support for the development of software required for the analysis of the data. Second, in this decade, small robotic telescopes (1-4m in diameter) dedicated to monitoring of lensed quasars will transform the field by delivering accurate time delays for ~100 systems. Third, in the 2020s, LSST will deliver thousands of time delays; the bottleneck will instead be the acquisition and analysis of high resolution imaging follow-up. Thus, the top priority for the next decade is to support fast high resolution imaging capabilities, such as those enabled by the James Webb Space Telescope and next generation adaptive optics systems on large ground based telescopes.

### Unidentified Moving Objects in Next Generation Time Domain Surveys

Existing and future wide-field photometric surveys will produce a time-lapse movie of the sky that will revolutionize our census of variable and moving astronomical and atmospheric phenomena. As with any revolution in scientific measurement capability, this new species of data will also present us with results that are sure to surprise and confound our understanding of the cosmos. While we cannot predict the unknown yields of such endeavors, it is a beneficial exercise to explore certain parameter spaces using reasonable assumptions for rates and observability. To this end I present a simple parameterized model of the detectability of unidentified flying objects (UFOs) with the Large Synoptic Survey Telescope (LSST). I also demonstrate that the LSST is well suited to place the first systematic constraints on the rate of UFO and extraterrestrial visits to our world.
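A parameterized detectability estimate of the kind described above reduces, at its simplest, to geometry: events sprinkled over the sky are caught when they land in the camera footprint. The function and all numbers below are our own toy assumptions, not the paper's model:

```python
def expected_detections(rate_per_sky_per_night, fov_deg2, nights, efficiency=1.0):
    """Toy rate model: the expected number of events caught when events
    occur uniformly over the sky at a given all-sky nightly rate."""
    whole_sky_deg2 = 41253.0  # total sky area in square degrees
    return rate_per_sky_per_night * (fov_deg2 / whole_sky_deg2) * nights * efficiency

# LSST-like assumptions: a 9.6 deg^2 field of view and ten years of
# nightly operations; the all-sky event rate is the free parameter being
# constrained.
n_expected = expected_detections(rate_per_sky_per_night=1.0, fov_deg2=9.6, nights=3650)
```

Inverting the same expression turns a non-detection into an upper limit on the rate, which is the sense in which a survey "places systematic constraints" on rare transient phenomena.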

### Recurring flares from supermassive black hole binaries: implications for tidal disruption candidates and OJ 287

I discuss the possibility that accreting, supermassive black hole (SMBH) binaries with sub-parsec separations produce luminous, periodically recurring outbursts that interrupt periods of relative quiescence. This hypothesis is motivated by two characteristics found in simulations of binaries embedded in prograde accretion discs: (i) the formation of a central, low-density cavity, and (ii) the leakage of circumbinary gas into this cavity, occurring once per orbit, via discrete streams on nearly radial trajectories. The first feature will diminish the emergent optical/UV flux of the system relative to active galactic nuclei (AGN) powered by single SMBHs, while the second is likely to trigger periodic fluctuations in the emergent flux. I propose a simple toy model in which a leaked stream crosses its own orbit and shocks, converting its bulk kinetic energy to heat. The result is a hot, optically thick flow that is quickly accreted and produces a flare with an AGN-like spectrum that peaks in the UV and ranges from the optical to the soft X-ray. Due to the preceding quiescence, such a flare could plausibly be mistaken for the tidal disruption of a star. For typical binary periods of years to decades, the event rate in an individual system can be much higher than that predicted for stellar tidal disruptions but infrequent enough to hinder tests of periodicity. The flares proposed here can be produced by very massive (>10^8 Msol) SMBHs that would not tidally disrupt solar-type stars. They could be discovered serendipitously in the future by observatories such as LSST or eROSITA. I apply the model to the active galaxy OJ 287, whose production of periodic optical flares has long fueled speculation that it hosts a SMBH binary.

### Recurring flares from supermassive black hole binaries: implications for tidal disruption candidates and OJ 287 [Replacement]

I discuss the possibility that accreting supermassive black hole (SMBH) binaries with sub-parsec separations produce periodically recurring luminous outbursts that interrupt periods of relative quiescence. This hypothesis is motivated by two characteristics found generically in simulations of binaries embedded in prograde accretion discs: (i) the formation of a central, low-density cavity around the binary, and (ii) the leakage of gas into this cavity, occurring once per orbit via discrete streams on nearly radial trajectories. The first feature would reduce the emergent optical/UV flux of the system relative to active galactic nuclei powered by single SMBHs, while the second can trigger quasiperiodic fluctuations in luminosity. I argue that the quasiperiodic accretion signature may be much more dramatic than previously thought, because the infalling gas streams can strongly shock-heat via self-collision and tidal compression, thereby enhancing viscous accretion. Any optically thick gas that is circularized about either SMBH can accrete before the next pair of streams is deposited, fueling transient, luminous flares that recur every orbit. Due to the diminished flux in between accretion episodes, such cavity-accretion flares could plausibly be mistaken for the tidal disruptions of stars in quiescent nuclei. The flares could be distinguished from tidal disruption events if their quasiperiodic recurrence is observed, or if they are produced by very massive SMBHs that cannot disrupt solar-type stars. They may be discovered serendipitously in surveys such as LSST or eROSITA. I present a heuristic toy model as a proof of concept for the production of cavity-accretion flares, and generate mock light curves and spectra. I also apply the model to the active galaxy OJ 287, whose production of quasiperiodic pairs of optical flares has long fueled speculation that it hosts a SMBH binary.

### ELT-MOS White Paper: Science Overview & Requirements

The workhorse instruments of the 8-10m class observatories have become their multi-object spectrographs (MOS), providing comprehensive follow-up to both ground-based and space-borne imaging. With the advent of deeper imaging surveys from, e.g., the HST and VISTA, there are a plethora of spectroscopic targets which are already beyond the sensitivity limits of current facilities. This wealth of targets will grow even more rapidly in the coming years, e.g., after the completion of ALMA, the launch of the JWST and Euclid, and the advent of the LSST. Thus, one of the key requirements underlying plans for the next generation of ground-based telescopes, the Extremely Large Telescopes (ELTs), is for even greater sensitivity for optical and infrared spectroscopy. Here we revisit the scientific motivation for a MOS capability on the European ELT, combining updated elements of science cases advanced from the Phase A instrument studies with new science cases which draw on the latest results and discoveries. These science cases address key questions related to galaxy evolution over cosmic time, from studies of resolved stellar populations in nearby galaxies out to observations of the most distant galaxies, and are used to identify the top-level requirements on an ‘E-ELT/MOS’. We argue that several of the most compelling ELT science cases demand MOS observations, in highly competitive areas of modern astronomy. Recent technical studies have demonstrated that important issues related to e.g. sky subtraction and multi-object AO can be solved, making fast-track development of a MOS instrument feasible. To ensure that ESO retains world leadership in exploring the most distant objects in the Universe, galaxy evolution and stellar populations, we are convinced that a MOS should have high priority in the instrumentation plan for the E-ELT.

### Machine-assisted discovery of relationships in astronomy

High-volume feature-rich data sets are becoming the bread-and-butter of 21st century astronomy but present significant challenges to scientific discovery. In particular, identifying scientifically significant relationships between sets of parameters is non-trivial. Similar problems in biological and geosciences have led to the development of systems which can explore large parameter spaces and identify potentially interesting sets of associations. In this paper, we describe the application of automated discovery systems of relationships to astronomical data sets, focussing on an evolutionary programming technique and an information-theory technique. We demonstrate their use with classical astronomical relationships – the Hertzsprung-Russell diagram and the fundamental plane of elliptical galaxies. We also show how they work with the issue of binary classification which is relevant to the next generation of large synoptic sky surveys, such as LSST. We find that comparable results to more familiar techniques, such as decision trees, are achievable. Finally, we consider the reality of the relationships discovered and how this can be used for feature selection and extraction.
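A minimal example of the information-theoretic approach mentioned above: mutual information between two discretized parameters flags a deterministic relationship and assigns zero to an unrelated pair. This is a generic illustration of the technique, not the authors' implementation:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Mutual information in bits between two discrete sequences: how
    much knowing one parameter tells you about the other."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), count in pxy.items():
        p_joint = count / n
        # p_joint * n * n / (px * py) is p(x,y) / (p(x) p(y)).
        mi += p_joint * math.log2(p_joint * n * n / (px[x] * py[y]))
    return mi

# A deterministic relationship carries maximal information; an unrelated
# pair carries none.
x = [0, 0, 1, 1] * 25
related = mutual_information(x, x)              # 1.0 bit for a binary variable
unrelated = mutual_information(x, [0, 1] * 50)  # 0.0 bits
```

Unlike a correlation coefficient, this score is sensitive to any functional relationship, linear or not, which is what makes it useful for blind relationship discovery in feature-rich catalogs.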

### Limb-Darkening Coefficients for Eclipsing White Dwarfs

We present extensive calculations of linear and non-linear limb-darkening coefficients as well as complete intensity profiles appropriate for modeling the light-curves of eclipsing white dwarfs. We compute limb-darkening coefficients in the Johnson-Kron-Cousins UBVRI photometric system as well as the Large Synoptic Survey Telescope (LSST) ugrizy system using the most up-to-date model atmospheres available. In all, we provide the coefficients for seven different limb-darkening laws. We describe the variations of these coefficients as a function of the atmospheric parameters, including the effects of convection at low effective temperatures. Finally, we discuss the importance of having readily available limb-darkening coefficients in the context of present and future photometric surveys like the LSST, Palomar Transient Factory (PTF), and the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS). The LSST, for example, may find ~10^5 eclipsing white dwarfs. The limb-darkening calculations presented here will be an essential part of the detailed analysis of all of these systems.
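The linear and non-linear limb-darkening laws tabulated in work like this take simple analytic forms; a minimal sketch of evaluating the linear and quadratic laws (the coefficient values below are illustrative placeholders, not values from the paper):

```python
# Limb-darkening laws give the intensity relative to disk center as a function
# of mu = cos(angle between the line of sight and the local surface normal).
# The coefficients used here are illustrative, not tabulated values.

def linear_law(mu, u):
    """Linear law: I(mu)/I(1) = 1 - u*(1 - mu)."""
    return 1.0 - u * (1.0 - mu)

def quadratic_law(mu, a, b):
    """Quadratic law: I(mu)/I(1) = 1 - a*(1 - mu) - b*(1 - mu)**2."""
    return 1.0 - a * (1.0 - mu) - b * (1.0 - mu) ** 2

# At disk center (mu = 1) every law returns 1 by construction.
print(linear_law(1.0, 0.5))          # 1.0
print(quadratic_law(0.5, 0.3, 0.2))  # 0.8
```

Fitting eclipse light-curves then amounts to choosing the law and the band-specific coefficients appropriate to the white dwarf's atmospheric parameters.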

### The Interior Structure Constants as an Age Diagnostic for Low-Mass, Pre-Main Sequence Detached Eclipsing Binary Stars

We propose a novel method for determining the ages of low-mass, pre-main sequence stellar systems using the apsidal motion of low-mass detached eclipsing binaries. The apsidal motion of a binary system with an eccentric orbit provides information regarding the interior structure constants of the individual stars. These constants are related to the normalized stellar interior density distribution and can be extracted from the predictions of stellar evolution models. We demonstrate that low-mass, pre-main sequence stars undergoing radiative core contraction display rapidly changing interior structure constants (greater than 5% per 10 Myr) that, when combined with observational determinations of the interior structure constants (with 5-10% precision), allow for a robust age estimate. This age estimate, unlike those based on surface quantities, is largely insensitive to the surface layer where effects of magnetic activity are likely to be most pronounced. On the main sequence, where age sensitivity is minimal, the interior structure constants provide a valuable test of the physics used in stellar structure models of low-mass stars. There are currently no known systems where this technique is applicable. Nevertheless, the emphasis on time domain astronomy with current missions, such as Kepler, and future missions, such as LSST, has the potential to discover systems where the proposed method will be observationally feasible.

### The cosmological information of shear peaks: beyond the abundance

We study the cosmological information of weak lensing (WL) peaks, focusing on two other statistics besides their abundance: the stacked tangential-shear profiles and the peak-peak correlation function. We use a large ensemble of simulated WL maps with survey specifications relevant to future missions like Euclid and LSST, to explore the three peak probes. We find that the correlation function of peaks with high signal-to-noise (S/N) measured from fields of size 144 sq. deg. has a maximum of ~0.3 at an angular scale ~10 arcmin. For peaks with smaller S/N, the amplitude of the correlation function decreases, and its maximum occurs on smaller angular scales. We compare the peak observables measured with and without shape noise and find that for S/N~3 only ~5% of the peaks are due to large-scale structures, the rest being generated by shape noise. The covariance matrix of the probes is examined: the correlation function is only weakly covariant on scales < 30 arcmin, and slightly more on larger scales; the shear profiles are very correlated for θ > 2 arcmin, with a correlation coefficient as high as 0.7. Using the Fisher-matrix formalism, we compute the cosmological constraints for {Ω_m, σ_8, w, n_s} considering each probe separately, as well as in combination. We find that the correlation function of peaks and shear profiles yield marginalized errors which are larger by a factor of 2-4 for {Ω_m, σ_8} than the errors yielded by the peak abundance alone, while the errors for {w, n_s} are similar. By combining the three probes, the marginalized constraints are tightened by a factor of ~2 compared to the peak abundance alone, the least contributor to the error reduction being the correlation function. This work therefore recommends that future WL surveys use shear peaks beyond their abundance in order to constrain the cosmological model.

### Will Upcoming Observations Distinguish Between Slow-Roll Dark Energy and a Cosmological Constant?

Numerous upcoming observations, such as WFIRST, BOSS, BigBOSS, LSST, Euclid, and Planck, will constrain the equation of state of dark energy (DE) with great precision. They may well find that the ratio of pressure to energy density, w, is -1, meaning DE is equivalent to a cosmological constant. However, many time-varying DE models have also been proposed. A single parametrization that can test a broad class of them and that is itself motivated by a physical picture is therefore desirable. We suggest that the simplest model of DE has the same mechanism as inflation, likely a scalar field slowly rolling down its potential. If this is so, DE will have a generic equation of state, and the Universe will have a generic dependence of the Hubble parameter on redshift, independent of the potential’s starting value and shape. This equation of state and expression for the Hubble parameter offer the desired model-independent but physically motivated parametrization, because they will hold for most of the standard scalar-field models of DE such as quintessence and phantom DE. Using it, we conduct a \chi^2 analysis and find that experiments in the next seven years should be able to distinguish any of these time-varying DE models on the one hand from a cosmological constant on the other to 73% confidence if w today differs from -1 by 3.5%. In the limit of perfectly accurate measurements of \Omega_m and H_0, this confidence would rise to 96%. We also include a discussion of the current status of DE experiments, a table compiling the techniques each will use, and tables of the precisions of the experiments for which this information was available at the time of publication.
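For a single parameter such as w, the translation from a measured \chi^2 offset to a confidence level uses the error function; a minimal sketch (the input Δχ² values are just the familiar benchmarks, not numbers from the paper):

```python
import math

def confidence_from_delta_chi2(delta_chi2):
    """Two-sided confidence level for one degree of freedom:
    P = erf(sqrt(delta_chi2 / 2))."""
    return math.erf(math.sqrt(delta_chi2 / 2.0))

# Familiar benchmarks: delta_chi2 = 1 -> 68.3%, delta_chi2 = 4 -> 95.4%.
print(round(confidence_from_delta_chi2(1.0), 3))  # 0.683
print(round(confidence_from_delta_chi2(4.0), 3))  # 0.954
```

Quoted figures like "73% confidence" correspond to the Δχ² the forecast machinery assigns to a given offset of w from -1.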

### A new method to improve photometric redshift reconstruction. Applications to the Large Synoptic Survey Telescope [Replacement]

In the next decade, the LSST will become a major facility for the astronomical community. However accurately determining the redshifts of the observed galaxies without using spectroscopy is a major challenge. Reconstruction of the redshifts with high resolution and well-understood uncertainties is mandatory for many science goals, including the study of baryonic acoustic oscillations. We investigate different approaches to establish the accuracy that can be reached by the LSST six-band photometry. We construct a realistic mock galaxy catalog, based on the GOODS survey luminosity function, by simulating the expected apparent magnitude distribution for the LSST. To reconstruct the photometric redshifts (photo-z’s), we consider a template-fitting method and a neural network method. The photo-z reconstruction from both of these techniques is tested on real CFHTLS data and also on simulated catalogs. We describe a new method to improve photo-z reconstruction that efficiently removes catastrophic outliers via a likelihood ratio statistical test. This test uses the posterior probability functions of the fit parameters and the colors. We show that the photometric redshift accuracy will meet the stringent LSST requirements up to redshift $\sim2.5$ after a selection that is based on the likelihood ratio test or on the apparent magnitude for galaxies with $S/N>5$ in at least 5 bands. The former selection has the advantage of retaining roughly 35% more galaxies for a similar photo-z performance compared to the latter. Photo-z reconstruction using a neural network algorithm is also described. In addition, we utilize the CFHTLS spectro-photometric catalog to outline the possibility of combining the neural network and template-fitting methods. We conclude that the photo-z’s will be accurately estimated with the LSST if a Bayesian prior probability and a calibration sample are used.
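The outlier cut described above is a likelihood ratio test on the fit posteriors. As a hypothetical illustration of the general idea (not the paper's actual statistic), one can keep only galaxies whose best-fit template dominates the runner-up by a threshold ratio:

```python
def keep_galaxy(log_likelihoods, log_ratio_threshold=2.0):
    """Hypothetical likelihood-ratio cut: keep a galaxy only if its best-fit
    template beats the runner-up by at least exp(log_ratio_threshold).
    This illustrates the idea of flagging catastrophic photo-z outliers;
    the threshold and statistic are invented, not the paper's."""
    ranked = sorted(log_likelihoods, reverse=True)
    return (ranked[0] - ranked[1]) >= log_ratio_threshold

print(keep_galaxy([-10.0, -15.0, -20.0]))  # True: one clearly dominant fit
print(keep_galaxy([-10.0, -10.5, -20.0]))  # False: ambiguous, likely outlier
```

The paper's point is that a cut of this general kind retains roughly 35% more galaxies than a plain magnitude cut at similar photo-z performance.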

### On rates of supernovae strongly lensed by galactic haloes in Millennium Simulation

We make use of publicly available results from the N-body Millennium Simulation to create mock samples of lensed supernovae, both type Ia and core-collapse. Simulating galaxy-galaxy lensing, we derive the rates of lensed supernovae and find that at redshifts higher than 0.5 about 0.06 per cent of supernovae will be lensed by a factor of two or more. Future wide-field surveys like Gaia or LSST should be able to detect lensed supernovae in their unbiased sky monitoring. Gaia (from 2013) will detect at least 2 cases, whereas LSST (from 2018) will see more than 500 a year. The large number of future lensed supernovae will allow the results of cosmological simulations to be verified. Strong galaxy-galaxy lensing gives an opportunity to reach high-redshift supernovae type Ia and extend the Hubble diagram sample.

### Supernovae and Cosmology with Future European Facilities

Prospects for future supernova surveys are discussed, focusing on the ESA Euclid mission and the European Extremely Large Telescope (E-ELT), both expected to be in operation around the turn of the decade. Euclid is a 1.2m space survey telescope that will operate at visible and near-infrared wavelengths, and has the potential to find and obtain multi-band lightcurves for thousands of distant supernovae. The E-ELT is a planned general-purpose ground-based 40m-class optical-IR telescope with adaptive optics built in, which will be capable of obtaining spectra of Type Ia supernovae to redshifts of at least four. The contribution to supernova cosmology with these facilities is discussed in the context of other future supernova programs such as those proposed for DES, JWST, LSST and WFIRST.

### Mapping the Local Halo: Statistical Parallax Analysis of SDSS Low-Mass Subdwarfs

We present a statistical parallax study of nearly 2,000 M subdwarfs with photometry and spectroscopy from the Sloan Digital Sky Survey. Statistical parallax analysis yields the mean absolute magnitudes, mean velocities and velocity ellipsoids for homogeneous samples of stars. We selected homogeneous groups of subdwarfs based on their photometric colors and spectral appearance. We examined the color-magnitude relations of low-mass subdwarfs and quantified their dependence on the newly-refined metallicity parameter, zeta. We also developed a photometric metallicity parameter, delta(g-r), based on the g-r and r-z colors of low-mass stars and used it to select stars with similar metallicities. The kinematics of low-mass subdwarfs as a function of color and metallicity were also examined and compared to main sequence M dwarfs. We find that the SDSS subdwarfs share similar kinematics to the inner halo and thick disk. The color-magnitude relations derived in this analysis will be a powerful tool for identifying and characterizing low-mass metal-poor subdwarfs in future surveys such as Gaia and LSST, making them important and plentiful tracers of the stellar halo.

### Finding the First Cosmic Explosions I: Pair-Instability Supernovae [Replacement]

The first stars are the key to the formation of primitive galaxies, early cosmological reionization and chemical enrichment, and the origin of supermassive black holes. Unfortunately, in spite of their extreme luminosities, individual Population III stars will likely remain beyond the reach of direct observation for decades to come. However, their properties could be revealed by their supernova explosions, which may soon be detected by a new generation of NIR observatories such as JWST and WFIRST. We present light curves and spectra for Pop III pair-instability supernovae calculated with the Los Alamos radiation hydrodynamics code RAGE. Our numerical simulations account for the interaction of the blast with realistic circumstellar envelopes, the opacity of the envelope, and Lyman absorption by the neutral IGM at high redshift, all of which are crucial to computing the NIR signatures of the first cosmic explosions. We find that JWST will detect pair-instability supernovae out to z > 30, WFIRST will detect them in all-sky surveys out to z ~ 15 – 20 and LSST and Pan-STARRS will find them at z ~ 7 – 8. The discovery of these ancient explosions will probe the first stellar populations and reveal the existence of primitive galaxies that might not otherwise have been detected.

### Opening the 100-Year Window for Time Domain Astronomy

Large-scale surveys such as PTF, CRTS and Pan-STARRS-1, which have emerged within the past 5 years or so, employ digital databases and modern analysis tools to accelerate research into Time Domain Astronomy (TDA). Preparations are underway for LSST which, in another 6 years, will usher in the second decade of modern TDA. By that time the Digital Access to a Sky Century @ Harvard (DASCH) project will have made available to the community the full-sky historical TDA database and digitized images for a century (1890–1990) of coverage. We describe the current DASCH development and some initial results, and outline plans for the "production scanning" phase and data distribution which is to begin in 2012. That will open a 100-year window into temporal astrophysics, revealing rare transients and (especially) astrophysical phenomena that vary on time-scales of a decade. It will also provide context and archival comparisons for the deeper modern surveys.

### Effect of Our Galaxy's Motion on Weak Lensing Measurements of Shear and Convergence [Replacement]

In this work we investigate the effect on weak-lensing shear and convergence measurements due to distortions from the Lorentz boost induced by our Galaxy’s motion. While no ellipticity is induced in an image from the Lorentz boost to first order in beta = v/c, the image is magnified. This affects the inferred convergence at the 10 per cent level, and is most notable at low multipoles in the convergence power spectrum $C^{\kappa\kappa}$ and for surveys with large sky coverage like LSST and DES. Experiments which image only small fractions of the sky, and convergence power spectrum determinations at l > 5, can safely neglect the boost effect to first order in beta.

### Millions of Multiples: Detecting and Characterizing Close-Separation Binary Systems in Synoptic Sky Surveys

The direct detection of binary systems in wide-field surveys is limited by the size of the stars’ point-spread functions (PSFs). A search for elongated objects can find closer companions, but is limited by the precision to which the PSF shape can be calibrated for individual stars. We have developed the BinaryFinder algorithm to search for close binaries by using precision measurements of PSF ellipticity across wide-field survey images. We show that the algorithm is capable of reliably detecting binary systems down to approximately 1/5 of the seeing limit, and can directly measure the systems’ position angles, separations and contrast ratios. To verify the algorithm’s performance we evaluated 100,000 objects in Palomar Transient Factory (PTF) wide-field-survey data for signs of binarity, and then used the Robo-AO robotic laser adaptive optics system to verify the parameters of 44 high-confidence targets. We show that BinaryFinder correctly predicts the presence of close companions with a <5% false-positive rate, measures the detected binaries’ position angles within 2 degrees and separations within 25%, and weakly constrains their contrast ratios. When applied to the full PTF dataset, we estimate that BinaryFinder will discover and characterize ~450,000 physically-associated binary systems with separations <2 arcseconds and magnitudes brighter than R=18. New wide-field synoptic surveys with high sensitivity and sub-arcsecond angular resolution, such as LSST, will allow BinaryFinder to reliably detect millions of very faint binary systems with separations as small as 0.1 arcseconds.
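Image ellipticity of the kind BinaryFinder exploits is conventionally defined from flux-weighted second moments. A minimal pure-Python sketch on a synthetic elongated Gaussian (the image, its widths, and the grid size are invented for illustration; real pipelines use weighted moments and noise handling):

```python
import math

def ellipticity(image):
    """Ellipticity components from flux-weighted second moments:
    e1 = (Qxx - Qyy) / (Qxx + Qyy), e2 = 2*Qxy / (Qxx + Qyy)."""
    total = xc = yc = 0.0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            total += v
            xc += x * v
            yc += y * v
    xc /= total
    yc /= total
    qxx = qyy = qxy = 0.0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            dx, dy = x - xc, y - yc
            qxx += v * dx * dx
            qyy += v * dy * dy
            qxy += v * dx * dy
    return (qxx - qyy) / (qxx + qyy), 2.0 * qxy / (qxx + qyy)

# Synthetic PSF elongated along x: sigma_x = 3, sigma_y = 1.5, on a 21x21 grid.
image = [[math.exp(-((x - 10) ** 2 / (2 * 3.0 ** 2)
                     + (y - 10) ** 2 / (2 * 1.5 ** 2)))
          for x in range(21)] for y in range(21)]
e1, e2 = ellipticity(image)
print(round(e1, 2), round(e2, 2))  # e1 > 0 for elongation along x; e2 ~ 0
```

A close binary blended inside the PSF shifts these moments in a characteristic way, which is the signal the algorithm calibrates against the local PSF.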

### Stellar transits in active galactic nuclei

Supermassive black holes (SMBH) are typically surrounded by a dense stellar population in galactic nuclei. Stars crossing the line of sight in active galactic nuclei (AGN) produce a characteristic transit lightcurve, just like extrasolar planets do when they transit their host star. We examine the possibility of finding such AGN transits in deep optical, UV, and X-ray surveys. We calculate transit lightcurves using the Novikov–Thorne thin accretion disk model, including general relativistic effects. Based on the expected properties of stellar cusps, we find that around 10^6 solar mass SMBHs, transits of red giants are most common for stars on close orbits, with transit durations of a few weeks and orbital periods of a few years. We find that detecting AGN transits requires repeated observations of thousands of low mass AGNs to 1% photometric accuracy in optical, or ~ 10% in UV bands or soft X-ray. It may be possible to identify stellar transits in the Pan-STARRS and LSST optical and the eROSITA X-ray surveys. Such observations could be used to constrain black hole mass, spin, inclination and accretion rate. Transit rates and durations could give valuable information on the circumnuclear stellar clusters as well. Transit lightcurves could be used to image accretion disks with unprecedented resolution, allowing one to resolve the SMBH silhouette in distant AGNs.

### Degeneracies in parametrized modified gravity models

We study degeneracies between parameters in some of the widely used parametrized modified gravity models. We investigate how different observables from a future photometric weak lensing survey, such as LSST, correlate the effects of these parameters, and to what extent the degeneracies are broken. We also study the impact of other degenerate effects, namely massive neutrinos and some of the weak lensing systematics, on the correlations.

### Constraining the substructure of dark matter haloes with galaxy-galaxy lensing

With galaxy groups constructed from the Sloan Digital Sky Survey (SDSS), we analyze the expected galaxy-galaxy lensing signals around satellite galaxies residing in different host haloes and located at different halo-centric distances. We use a Markov Chain Monte Carlo (MCMC) method to explore the potential constraints on the mass and density profile of subhaloes associated with satellite galaxies from SDSS-like surveys and surveys similar to the Large Synoptic Survey Telescope (LSST). Our results show that for SDSS-like surveys, we can only set a loose constraint on the mean mass of subhaloes. With LSST-like surveys, however, both the mean mass and the density profile of subhaloes can be well constrained.
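The MCMC exploration mentioned above can be illustrated with a minimal Metropolis sampler for a toy one-dimensional posterior (a unit Gaussian standing in for a subhalo mass parameter; everything here is illustrative, not the paper's actual likelihood):

```python
import math
import random

def log_posterior(m):
    """Toy posterior: parameter m distributed as a unit Gaussian."""
    return -0.5 * m * m

def metropolis(n_steps, step=0.8, seed=42):
    """Minimal Metropolis sampler: propose a Gaussian step, accept with
    probability min(1, posterior_new / posterior_old)."""
    rng = random.Random(seed)
    m, logp = 0.0, log_posterior(0.0)
    chain = []
    for _ in range(n_steps):
        proposal = m + rng.gauss(0.0, step)
        logp_new = log_posterior(proposal)
        if math.log(rng.random()) < logp_new - logp:  # accept/reject
            m, logp = proposal, logp_new
        chain.append(m)
    return chain

chain = metropolis(20000)
mean = sum(chain) / len(chain)
var = sum((x - mean) ** 2 for x in chain) / len(chain)
print(round(mean, 1), round(var, 1))  # should be near (0.0, 1.0)
```

In the real analysis the posterior is a lensing likelihood over subhalo mass and profile parameters, but the accept/reject machinery is the same.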

### Extended Photometry for the DEEP2 Galaxy Redshift Survey: A Testbed for Photometric Redshift Experiments [Replacement]

This paper describes a new catalog that supplements the existing DEEP2 Galaxy Redshift Survey photometric and spectroscopic catalogs with ugriz photometry from two other surveys: the Canada-France-Hawaii Telescope Legacy Survey (CFHTLS) and the Sloan Digital Sky Survey (SDSS). Each catalog is cross-matched by position on the sky in order to assign ugriz photometry to objects in the DEEP2 catalogs. We have recalibrated the CFHTLS photometry where it overlaps DEEP2 in order to provide a more uniform dataset. We have also used this improved photometry to predict DEEP2 BRI photometry in regions where only poorer measurements were available previously. In addition, we have included improved astrometry tied to SDSS rather than USNO-A2.0 for all DEEP2 objects. In total, this catalog contains ~27,000 objects with full ugriz photometry as well as robust spectroscopic redshift measurements, 64% of which have r > 23. By combining the secure and accurate redshifts of the DEEP2 Galaxy Redshift Survey with ugriz photometry, we have created a catalog that can be used as an excellent testbed for future photo-z studies, including tests of algorithms for surveys such as LSST and DES.

### Accurate Geodetic Coordinates for Observatories on Cerro Tololo and Cerro Pachon [Replacement]

As the 50th anniversary of the Cerro Tololo Inter-American Observatory (CTIO) draws near, the author was surprised to learn that the published latitude and longitude for CTIO in the Astronomical Almanac and iraf observatory database appear to differ from modern GPS-measured geodetic positions by nearly a kilometer. Surely, the position for CTIO could not be in error after five decades? The source of the discrepancy appears to be the ~30" difference between the astronomical and geodetic positions — a systematic effect due to vertical deflection first reported by Harrington, Mintz Blanco, & Blanco (1972). Since the astronomical position is not necessarily the desired quantity for some calculations, and since the number of facilities on Cerro Tololo and neighboring Cerro Pachon has grown considerably over the years, I decided to measure accurate geodetic positions for all of the observatories and some select landmarks on the two peaks using GPS and Google Earth. Both sets of measurements were inter-compared, and externally compared to a high-accuracy geodetic position for a NASA Space Geodesy Program survey monument on Tololo. I conclude that Google Earth can currently be used to determine absolute geodetic positions (i.e. compared to GPS) accurate to roughly +-0.15" (+-5 m) in latitude and longitude without correction, or approximately +-0".10 (+-3 m) with correction. I tabulate final geodetic and geocentric positions on the WGS-84 coordinate system for all astronomical observatories on Cerro Tololo and Cerro Pachon with accuracy +-0".1 (+-3 m). One surprise is that an oft-cited position for LSST is in error by 9.4 km and the quoted elevation is in error by 500 m.

### Electromagnetic transients as triggers in searches for gravitational waves from compact binary mergers [Replacement]

The detection of an electromagnetic transient which may originate from a binary neutron star merger can increase the probability that a given segment of data from the LIGO-Virgo ground-based gravitational-wave detector network contains a signal from a binary coalescence. Additional information contained in the electromagnetic signal, such as the sky location or distance to the source, can help rule out false alarms, and thus lower the necessary threshold for a detection. Here, we develop a framework for determining how much sensitivity is added to a gravitational-wave search by triggering on an electromagnetic transient. We apply this framework to a variety of relevant electromagnetic transients, from short GRBs to signatures of r-process heating to optical and radio orphan afterglows. We compute the expected rates of multi-messenger observations in the Advanced detector era, and find that searches triggered on short GRBs — with current high-energy instruments, such as Fermi — and nucleosynthetic ‘kilonovae’ — with future optical surveys, like LSST — can boost the number of multi-messenger detections by 15% and 40%, respectively, for a binary neutron star progenitor model. Short GRB triggers offer precise merger timing, but suffer from detection rates decreased by beaming and the high a priori probability that the source is outside the LIGO-Virgo sensitive volume. Isotropic kilonovae, on the other hand, could be commonly observed within the LIGO-Virgo sensitive volume with an instrument roughly an order of magnitude more sensitive than current optical surveys. We propose that the most productive strategy for making multi-messenger gravitational-wave observations is using triggers from future deep, optical all-sky surveys, with characteristics comparable to LSST, which could make as many as ten such coincident observations a year.
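A toy calculation (not the authors' actual framework; it assumes a stationary Gaussian background and a crude independent-trials count) illustrates why an electromagnetic trigger lowers the required detection threshold: restricting the search to a short on-source window around the trigger time slashes the trials factor, so a fixed overall false-alarm probability is reached at a lower signal-to-noise ratio.

```python
from statistics import NormalDist

def snr_threshold(fap_total, n_trials):
    """SNR threshold such that n_trials independent Gaussian draws jointly
    exceed it with probability ~fap_total (Bonferroni-style bound)."""
    per_trial = fap_total / n_trials
    return NormalDist().inv_cdf(1.0 - per_trial)

# Illustrative trials counts (assumptions, not the paper's numbers):
# an all-sky, all-time search over ~1 yr of 1 ms samples, versus a
# GRB-triggered search with a ~6 s on-source window.
n_untriggered = 365 * 86400 * 1000
n_triggered = 6 * 1000

rho_untrig = snr_threshold(0.01, n_untriggered)
rho_trig = snr_threshold(0.01, n_triggered)

# Detection range scales ~1/threshold, so the sensitive volume gain is the
# cube of the threshold ratio.
gain = (rho_untrig / rho_trig) ** 3
print(f"untriggered ~ {rho_untrig:.2f}, triggered ~ {rho_trig:.2f}, "
      f"volume gain ~ {gain:.1f}x")
```

Under these toy numbers the threshold drops from roughly 7.2 to 4.7, a factor of a few in sensitive volume, which is the qualitative effect the triggered-search framework quantifies for each transient class.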

### Wide-Field InfraRed Survey Telescope (WFIRST) Final Report

In December 2010, NASA created a Science Definition Team (SDT) for WFIRST, the Wide-Field Infrared Survey Telescope, recommended by the Astro 2010 Decadal Survey report New Worlds, New Horizons (NWNH) as its highest-priority large space mission. The SDT was chartered to work with the WFIRST Project Office at GSFC and the Program Office at JPL to produce a Design Reference Mission (DRM) for WFIRST. Part of the original charge was to produce an interim design reference mission by mid-2011; that document was delivered to NASA and widely circulated within the astronomical community. In late 2011 the Astrophysics Division augmented its original charge, asking for two design reference missions. The first, DRM1, was to be a finalized version of the interim DRM, reducing overall mission costs where possible. The second, DRM2, was to identify and eliminate capabilities that overlapped with those of NASA’s James Webb Space Telescope (JWST), ESA’s Euclid mission, and the NSF’s ground-based Large Synoptic Survey Telescope (LSST), and again to reduce overall mission cost, while staying faithful to NWNH. This report presents both DRM1 and DRM2.
