Posts Tagged lsst

Recent Postings from lsst

Transiting Planets with LSST II. Period Detection of Planets Orbiting 1 Solar Mass Hosts

The Large Synoptic Survey Telescope (LSST) will photometrically monitor ~1 billion stars for ten years. The resulting light curves can be used to detect transiting exoplanets. In particular, as demonstrated by Lund et al. (2015), LSST will probe stellar populations currently undersampled in most exoplanet transit surveys, including populations out to extragalactic distances. In this paper we test the efficiency of the box-fitting least-squares (BLS) algorithm for accurately recovering the periods of transiting exoplanets using simulated LSST data. We model planets with a range of radii orbiting a solar-mass star at a distance of 7 kpc, with orbital periods ranging from 0.5 to 20 d. We find that typical LSST observations will be able to reliably detect Hot Jupiters with periods shorter than ~3 d. We also find that the LSST deep drilling cadence is extremely powerful: the BLS algorithm successfully recovers at least 30% of sub-Saturn-size exoplanets with orbital periods as long as 20 d.
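
As a concrete illustration of a BLS period search (our toy setup, not the paper's simulation pipeline: the cadence, noise level, and planet parameters below are made up), astropy's BoxLeastSquares can recover an injected box-shaped transit from sparse, irregularly sampled photometry:

```python
import numpy as np
from astropy.timeseries import BoxLeastSquares

rng = np.random.default_rng(42)

# Toy LSST-like sampling: ~900 irregular visits over 10 years (illustrative)
t = np.sort(rng.uniform(0.0, 3650.0, 900))
period, t0, duration, depth = 2.5, 1.0, 0.12, 0.01   # hot-Jupiter-like values

# Inject a box transit plus 5 mmag Gaussian noise (both numbers hypothetical)
phase = (t - t0 + 0.5 * period) % period - 0.5 * period
y = 1.0 - depth * (np.abs(phase) < 0.5 * duration) + rng.normal(0, 0.005, t.size)

# BLS periodogram; the strongest peak should land on the injected period
bls = BoxLeastSquares(t, y, dy=0.005)
result = bls.autopower(duration)
best = result.period[np.argmax(result.power)]
print(f"injected period: {period} d, recovered: {best:.3f} d")
```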

The Gaia-LSST Synergy

We discuss the synergy of Gaia and the Large Synoptic Survey Telescope (LSST) in the context of Milky Way studies. LSST can be thought of as Gaia’s deep complement because the two surveys will deliver trigonometric parallax, proper-motion, and photometric measurements with similar uncertainties at Gaia’s faint end at $r=20$, and LSST will extend these measurements to a limit about five magnitudes fainter. We also point out that users of Gaia data will have developed data analysis skills required to benefit from LSST data, and provide detailed information about how international participants can join LSST.

Periodograms for Multiband Astronomical Time Series

This paper introduces the multiband periodogram, a general extension of the well-known Lomb-Scargle approach for detecting periodic signals in time-domain data. In addition to the advantages of the Lomb-Scargle method, such as the treatment of non-uniform sampling and heteroscedastic errors, the multiband periodogram significantly improves period finding for randomly sampled multiband light curves (e.g., Pan-STARRS, DES and LSST). The light curves in each band are modeled as arbitrary truncated Fourier series, with the period and phase shared across all bands. The key aspect is the use of Tikhonov regularization, which drives most of the variability into the so-called base model common to all bands, while the fits for individual bands describe residuals relative to the base model and typically require lower-order Fourier series. This decrease in effective model complexity is the main reason for the improved performance. We use simulated light curves and randomly subsampled SDSS Stripe 82 data to demonstrate the superiority of this method compared to other methods from the literature, and find that this method will be able to efficiently determine the correct period for the majority of LSST's bright RR Lyrae stars with as little as six months of LSST data. A Python implementation of this method, along with code to fully reproduce the results reported here, is available on GitHub.
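
A minimal numpy sketch of the idea (not the released implementation; for simplicity it uses only per-band constant offsets on top of the shared base model, and the regularization strength is arbitrary):

```python
import numpy as np

def multiband_power(t, y, dy, band, freqs, n_base=2, reg=1e-3):
    """Multiband periodogram sketch: a truncated Fourier 'base model' with
    period and phase shared across bands, plus a Tikhonov-regularized
    constant offset per band. Power = fractional chi^2 reduction relative
    to independent per-band weighted means."""
    bands = np.unique(band)
    w = 1.0 / dy**2
    chi2_0 = sum(np.sum(w[band == b] *
                        (y[band == b] - np.average(y[band == b],
                                                   weights=w[band == b]))**2)
                 for b in bands)
    n_shared = 1 + 2 * n_base
    power = np.empty(len(freqs))
    for i, f in enumerate(freqs):
        cols = [np.ones_like(t)]
        for k in range(1, n_base + 1):
            cols += [np.sin(2 * np.pi * k * f * t), np.cos(2 * np.pi * k * f * t)]
        cols += [(band == b).astype(float) for b in bands]
        X = np.column_stack(cols)
        # penalize only the per-band terms, pushing variability into the base
        pen = np.zeros(X.shape[1])
        pen[n_shared:] = reg
        A = (X * w[:, None]).T @ X + np.diag(pen)
        theta = np.linalg.solve(A, (X * w[:, None]).T @ y)
        r = y - X @ theta
        power[i] = 1.0 - np.sum(w * r**2) / chi2_0
    return power
```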

The Whole is Greater than the Sum of the Parts: Optimizing the Joint Science Return from LSST, Euclid and WFIRST

The focus of this report is on the opportunities enabled by the combination of LSST, Euclid and WFIRST, the optical surveys that will be an essential part of the next decade's astronomy. The sum of these surveys has the potential to be significantly greater than the contributions of the individual parts. As is detailed in this report, the combination of these surveys should give us multi-wavelength high-resolution images of galaxies and broadband data covering much of the stellar energy spectrum. These stellar and galactic data have the potential to yield new insights into topics ranging from the formation history of the Milky Way to the mass of the neutrino. However, enabling the astronomy community to fully exploit this multi-instrument data set is a challenging technical task: for much of the science, we will need to combine the photometry across multiple wavelengths with varying spectral and spatial resolution. We identify some of the key science enabled by the combined surveys and the key technical challenges in achieving the synergies.

The Whole is Greater than the Sum of the Parts: Optimizing the Joint Science Return from LSST, Euclid and WFIRST [Replacement]

The focus of this report is on the opportunities enabled by the combination of LSST, Euclid and WFIRST, the optical surveys that will be an essential part of the next decade's astronomy. The sum of these surveys has the potential to be significantly greater than the contributions of the individual parts. As is detailed in this report, the combination of these surveys should give us multi-wavelength high-resolution images of galaxies and broadband data covering much of the stellar energy spectrum. These stellar and galactic data have the potential to yield new insights into topics ranging from the formation history of the Milky Way to the mass of the neutrino. However, enabling the astronomy community to fully exploit this multi-instrument data set is a challenging technical task: for much of the science, we will need to combine the photometry across multiple wavelengths with varying spectral and spatial resolution. We identify some of the key science enabled by the combined surveys and the key technical challenges in achieving the synergies.

Improving the LSST dithering pattern and cadence for dark energy studies [Replacement]

The Large Synoptic Survey Telescope (LSST) will explore the entire southern sky over 10 years starting in 2022 with unprecedented depth and time sampling in six filters, $ugrizy$. Artificial power on the scale of the 3.5 deg LSST field-of-view will contaminate measurements of baryonic acoustic oscillations (BAO), which fall at the same angular scale at redshift $z \sim 1$. Using the HEALPix framework, we demonstrate the impact of an "un-dithered" survey, in which $17\%$ of each LSST field-of-view is overlapped by neighboring observations, generating a honeycomb pattern of strongly varying survey depth and significant artificial power on BAO angular scales. We find that adopting large dithers (i.e., telescope pointing offsets) of amplitude close to the LSST field-of-view radius reduces artificial structure in the galaxy distribution by a factor of $\sim$10. We propose an observing strategy utilizing large dithers within the main survey and minimal dithers for the LSST Deep Drilling Fields. We show that applying various magnitude cutoffs can further increase survey uniformity. We find that a magnitude cut of $r < 27.3$ removes significant spurious power from the angular power spectrum with a minimal reduction in the total number of observed galaxies over the ten-year LSST run. We also determine the effectiveness of the observing strategy for Type Ia SNe and predict that the main survey will contribute $\sim$100,000 Type Ia SNe. We propose a concentrated survey where LSST observes one-third of its main survey area each year, increasing the number of main survey Type Ia SNe by a factor of $\sim$1.5, while still enabling the successful pursuit of other science drivers.

Improving the LSST dithering pattern and cadence for dark energy studies

The Large Synoptic Survey Telescope (LSST) will explore the entire southern sky over 10 years starting in 2022 with unprecedented depth and time sampling in six filters, $ugrizy$. Artificial power on the scale of the 3.5 deg LSST field-of-view will contaminate measurements of baryonic acoustic oscillations (BAO), which fall at the same angular scale at redshift $z \sim 1$. Using the HEALPix framework, we demonstrate the impact of an "un-dithered" survey, in which $17\%$ of each LSST field-of-view is overlapped by neighboring observations, generating a honeycomb pattern of strongly varying survey depth and significant artificial power on BAO angular scales. We find that adopting large dithers (i.e., telescope pointing offsets) of amplitude close to the LSST field-of-view radius reduces artificial structure in the galaxy distribution by a factor of $\sim$10. We propose an observing strategy utilizing large dithers within the main survey and minimal dithers for the LSST Deep Drilling Fields. We show that applying various magnitude cutoffs can further increase survey uniformity. We find that a magnitude cut of $r < 27.3$ removes significant spurious power from the angular power spectrum with a minimal reduction in the total number of observed galaxies over the ten-year LSST run. We also determine the effectiveness of the observing strategy for Type Ia SNe and predict that the main survey will contribute $\sim$100,000 Type Ia SNe. We propose a concentrated survey where LSST observes one-third of its main survey area each year, increasing the number of main survey Type Ia SNe by a factor of $\sim$1.5, while still enabling the successful pursuit of other science drivers.
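
To make the role of translational dithers concrete, here is a toy HEALPix coverage experiment (our construction, not the paper's analysis; the patch geometry, grid spacing, and visit counts are invented) comparing the depth uniformity of an undithered tiling with one dithered by up to a field-of-view radius:

```python
import numpy as np
import healpy as hp

nside = 128
fov_deg = 1.75                       # LSST field-of-view radius in degrees
rng = np.random.default_rng(1)

# Hypothetical grid of field centres (lon, lat in degrees) on a small patch
lon, lat = np.meshgrid(np.arange(0.0, 30.0, 3.0), np.arange(-30.0, 0.0, 3.0))
centres = np.column_stack([lon.ravel(), lat.ravel()])

def depth_map(dither_deg, npasses=10):
    """Visits per HEALPix pixel; each pass shifts the whole tiling by a
    random translational dither of amplitude up to dither_deg (small-angle
    toy: no cos(dec) correction)."""
    nvisit = np.zeros(hp.nside2npix(nside))
    for _ in range(npasses):
        r = dither_deg * np.sqrt(rng.uniform())   # uniform within a disc
        ang = rng.uniform(0.0, 2.0 * np.pi)
        for lo, la in centres + [r * np.cos(ang), r * np.sin(ang)]:
            vec = hp.ang2vec(lo, la, lonlat=True)
            nvisit[hp.query_disc(nside, vec, np.radians(fov_deg))] += 1
    return nvisit

for name, amp in [("undithered", 0.0), ("dithered", fov_deg)]:
    m = depth_map(amp)
    sel = m > 0
    print(f"{name}: fractional depth scatter = {m[sel].std() / m[sel].mean():.3f}")
```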

Crosstalk in multi-output CCDs for LSST [Replacement]

LSST’s compact, low power focal plane will be subject to electronic crosstalk with some unique signatures due to its readout geometry. This note describes the crosstalk mechanisms, ongoing characterization of prototypes, and implications for the observing cadence.

Synergy between the Large Synoptic Survey Telescope and the Square Kilometre Array

We provide an overview of the science benefits of combining information from the Square Kilometre Array (SKA) and the Large Synoptic Survey Telescope (LSST). We first summarise the capabilities and timeline of the LSST and overview its science goals. We then discuss the science questions in common between the two projects, and how they can be best addressed by combining the data from both telescopes. We describe how weak gravitational lensing and galaxy clustering studies with LSST and SKA can provide improved constraints on the causes of the cosmological acceleration. We summarise the benefits to galaxy evolution studies of combining deep optical multi-band imaging with radio observations. Finally, we discuss the excellent match between one of the most unique features of the LSST, its temporal cadence in the optical waveband, and the time resolution of the SKA.

Probing Neutrino Hierarchy and Chirality via Wakes

The relic neutrinos are expected to acquire a bulk relative velocity with respect to the dark matter at low redshifts, and neutrino wakes are expected to develop downstream of dark matter halos. We propose a method of measuring the neutrino mass based on this mechanism. The neutrino wake will cause a dipole distortion of the galaxy-galaxy lensing pattern. This effect could be detected by combining upcoming lensing surveys, e.g. the LSST and Euclid surveys, with a low redshift galaxy survey or a 21cm intensity mapping survey, which can map the neutrino flow field. The data obtained with LSST and Euclid should enable a positive detection if the three neutrino masses are quasi-degenerate, while a future high precision 21cm lensing survey would allow the normal and inverted hierarchy cases to be distinguished, and even right-handed Dirac neutrinos may be detectable.

Probing Neutrino Hierarchy and Chirality via Wakes [Cross-Listing]

The relic neutrinos are expected to acquire a bulk relative velocity with respect to the dark matter at low redshifts, and neutrino wakes are expected to develop downstream of dark matter halos. We propose a method of measuring the neutrino mass based on this mechanism. The neutrino wake will cause a dipole distortion of the galaxy-galaxy lensing pattern. This effect could be detected by combining upcoming lensing surveys, e.g. the LSST and Euclid surveys, with a low redshift galaxy survey or a 21cm intensity mapping survey, which can map the neutrino flow field. The data obtained with LSST and Euclid should enable a positive detection if the three neutrino masses are quasi-degenerate, while a future high precision 21cm lensing survey would allow the normal and inverted hierarchy cases to be distinguished, and even right-handed Dirac neutrinos may be detectable.

LSST optical beam simulator

We describe a camera beam simulator for LSST which is capable of illuminating a 60mm field at f/1.2 with realistic astronomical scenes, enabling studies of CCD astrometric and photometric performance. The goal is to fully simulate LSST observing, in order to characterize charge transport and other features in the thick fully depleted CCDs and to probe low-level systematics under realistic conditions. The automated system simulates the centrally obscured LSST beam and sky scenes, including the spectral shape of the night sky. The doubly telecentric, nearly unit-magnification design consists of a spherical mirror, three BK7 lenses, and one beam-splitter window; to achieve the relatively large field, the beam-splitter window is used twice. The motivation for this LSST beam test facility was the need to fully characterize a new generation of thick fully depleted CCDs and assess their suitability for the broad range of science planned for LSST. Due to the fast beam illumination and the thick silicon design (each pixel is 10 microns wide and over 100 microns deep), photon-transport and charge-transport effects can arise in the high-purity silicon at long wavelengths. The focal surface covers a field more than sufficient for a 40×40 mm LSST CCD. Delivered optical quality meets design goals, with 50% of the energy within a 5 micron circle. The tests of CCD performance are briefly described.

Data Mining for Gravitationally Lensed Quasars

Gravitationally lensed (GL) quasars are brighter than their unlensed counterparts and produce images with distinctive morphological signatures. Past searches and target selection algorithms, in particular the Sloan Quasar Lens Search (SQLS), have relied on basic morphological criteria applied to samples of bright, spectroscopically confirmed quasars. The SQLS techniques are not sufficient for searches in new surveys (e.g. DES, PS1, LSST), because spectroscopic information is not readily available and the large data volume requires higher purity in target/candidate selection. We carry out a systematic exploration of machine learning techniques and demonstrate that a two-step strategy can be highly effective. In the first step we use catalog-level information ($griz$+WISE magnitudes, second moments) to preselect targets, using artificial neural networks. The accepted targets are then inspected with pixel-by-pixel pattern recognition algorithms (Gradient-Boosted Trees) to form a final set of candidates. The results from this procedure can be used to further refine the simpler SQLS algorithms, with a twofold (or threefold) gain in purity and the same (or $80\%$) completeness at the target-selection stage, or a purity of $70\%$ and a completeness of $60\%$ after the candidate-selection step. Simpler photometric searches in $griz$+WISE based on colour cuts would provide samples with $7\%$ purity or less. Our technique is extremely fast: a list of candidates can be obtained from a Stage III experiment (e.g. the DES catalog/database) in a few CPU hours. The techniques are easily extendable to Stage IV experiments like LSST with the addition of time-domain information.
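
A hedged sklearn sketch of the two-step strategy (the feature arrays, network size, and probability thresholds below are all made-up stand-ins, not the paper's configuration):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical training arrays, standing in for the real data:
#   cat_X: catalog-level features (griz + WISE magnitudes, second moments)
#   pix_X: flattened pixel cutouts for each target
#   y:     1 for known/simulated lensed quasars, 0 for contaminants
rng = np.random.default_rng(0)
n = 2000
cat_X = rng.normal(size=(n, 8))
pix_X = rng.normal(size=(n, 21 * 21))
y = rng.integers(0, 2, n)

# Step 1: fast catalog-level preselection with an ANN, thresholded
# loosely so that completeness stays high.
ann = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500).fit(cat_X, y)
keep = ann.predict_proba(cat_X)[:, 1] > 0.2

# Step 2: pixel-by-pixel pattern recognition with gradient-boosted trees,
# applied only to the preselected targets, to form the candidate list.
gbt = GradientBoostingClassifier().fit(pix_X[keep], y[keep])
cand = np.flatnonzero(keep)[gbt.predict_proba(pix_X[keep])[:, 1] > 0.5]
print(f"{keep.sum()} targets preselected, {cand.size} final candidates")
```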

Spectroscopic Needs for Calibration of LSST Photometric Redshifts

This white paper summarizes the conclusions of the Snowmass White Paper "Spectroscopic Needs for Imaging Dark Energy Experiments" (arXiv:1309.5384) which are relevant to the calibration of LSST photometric redshifts; i.e., the accurate characterization of biases and uncertainties in photo-z’s. Any significant miscalibration will lead to systematic errors in photo-z’s, impacting nearly all extragalactic science with LSST. As existing deep redshift samples have failed to yield highly-secure redshifts for a systematic 20%-60% of their targets, it is a strong possibility that future deep spectroscopic samples will not solve the calibration problem on their own. The best options in this scenario are provided by cross-correlation methods that utilize clustering with objects from spectroscopic surveys (which need not be fully representative) to trace the redshift distribution of the full sample. For spectroscopy, the eBOSS survey would enable a basic calibration of LSST photometric redshifts, while the expected LSST-DESI overlap would be more than sufficient for an accurate calibration at z>0.2. A DESI survey of nearby galaxies conducted in bright time would enable accurate calibration down to z~0. The expanded areal coverage provided by the transfer of the DESI instrument (or duplication of it at the Blanco Telescope) would enable the best possible calibration from cross-correlations, in addition to other science gains.
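
The cross-correlation (clustering-redshift) idea can be sketched in a few lines of numpy. This toy flat-sky estimator is our illustration of the general technique, not code from the white paper, and it ignores galaxy bias evolution:

```python
import numpy as np

def wbar(a_xy, b_xy, rand_xy, rmax):
    """Mean angular cross-correlation of samples a and b within rmax
    (flat-sky coordinates), using the simple DD/DR - 1 pair estimator."""
    def npairs(p, q):
        d2 = ((p[:, None, :] - q[None, :, :]) ** 2).sum(-1)
        return np.count_nonzero(d2 < rmax ** 2)
    dd = npairs(a_xy, b_xy)
    dr = npairs(a_xy, rand_xy) * len(b_xy) / len(rand_xy)
    return dd / dr - 1.0

def dndz_estimate(phot_xy, spec_slices, rand_xy, rmax=0.05):
    """Redshift distribution of a photometric sample, up to normalization
    and bias evolution: cross-correlate it against spectroscopic slices
    (a list of position arrays, one per z-bin) and take the amplitudes."""
    amps = np.array([wbar(s, phot_xy, rand_xy, rmax) for s in spec_slices])
    amps = np.clip(amps, 0.0, None)     # clip unphysical negative estimates
    return amps / amps.sum() if amps.sum() > 0 else amps
```

Note that the spectroscopic slices need not be representative of the photometric sample; only their positions and redshifts enter.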

Spectroscopic Needs for Training of LSST Photometric Redshifts

This white paper summarizes those conclusions of the Snowmass White Paper "Spectroscopic Needs for Imaging Dark Energy Experiments" (arXiv:1309.5384) which are relevant to the training of LSST photometric redshifts; i.e., the use of spectroscopic redshifts to improve algorithms and reduce photo-z errors. The larger and more complete the available training set, the smaller the RMS error in photo-z estimates should be, increasing LSST's constraining power. Among the better US-based options for this work are the proposed MANIFEST fiber feed for the Giant Magellan Telescope and, with lower survey speed, the WFOS spectrograph on the Thirty Meter Telescope (TMT). Due to its larger field of view and higher multiplexing, the PFS spectrograph on Subaru would be able to obtain a baseline training sample faster than TMT; comparable performance could be achieved with a highly multiplexed spectrograph on Gemini with at least a 20 arcmin diameter field of view.

Maximizing LSST's Scientific Return: Ensuring Participation from Smaller Institutions

The remarkable scientific return and legacy of LSST, in the era that it will define, will not be realized solely in the breakthrough science achieved with catalog data. This Big Data survey will shape how the entire astronomical community embraces — or fails to embrace — new ways of approaching astronomical research and data. In this white paper, we address NRC template questions 4, 5, 6, 8 and 9, with a focus on the unique challenges faced by smaller, and often under-resourced, institutions, including institutions dedicated to underserved minority populations, in the efficient and effective use of LSST data products to maximize LSST's scientific return.

The Variable Sky of Deep Synoptic Surveys [Replacement]

The discovery of variable and transient sources is an essential product of synoptic surveys. The alert stream will require filtering for personalized criteria — a process managed by a functionality commonly described as a Broker. In order to understand quantitatively the magnitude of the alert generation and Broker tasks, we have undertaken an analysis of the most numerous types of variable targets in the sky — Galactic stars, QSOs, AGNs and asteroids. It is found that LSST will be capable of discovering ~10^5 high latitude (|b| > 20 deg) variable stars per night at the beginning of the survey. (The corresponding number for |b| < 20 deg is orders of magnitude larger, but subject to caveats concerning extinction and crowding.) However, the number of new discoveries may well drop below 100/night within 2 years. The same analysis applied to Gaia clarifies the complementarity of the Gaia and LSST surveys. Discoveries of variable galactic nuclei (AGNs) and Quasi-Stellar Objects (QSOs) are each predicted to begin at ~3000 per night, and decrease by 50X over 4 years. SNe are expected at ~1100/night, and after several survey years will dominate the new variable discovery rate. LSST asteroid discoveries will start at > 10^5 per night, and if orbital determination has a 50% success rate per epoch, will drop below 1000/night within 2 years.

The Variable Sky of Deep Synoptic Surveys

The discovery of variable and transient sources is an essential product of synoptic surveys. The alert stream will require filtering for personalized criteria — a process managed by a functionality commonly described as a Broker. In order to understand quantitatively the magnitude of the alert generation and Broker tasks, we have undertaken an analysis of the most numerous types of variable targets in the sky — Galactic stars, QSOs, AGNs and asteroids. It is found that LSST will be capable of discovering ~10^4 high latitude (|b| > 20 deg) variable stars per night at the beginning of the survey. (The corresponding number for |b| < 20 deg is 2 orders of magnitude larger, but subject to caveats concerning extinction and crowding.) However, the number of new discoveries may well drop below 100/night within less than 1 year. The same analysis applied to Gaia clarifies the complementarity of the Gaia and LSST surveys. Discoveries of variable galactic nuclei (AGNs) and Quasi-Stellar Objects (QSOs) are each predicted to begin at ~3000 per night, and decrease by 50X over 4 years. SNe are expected at ~1100/night, and after several survey years will dominate the new variable discovery rate. LSST asteroid discoveries will start at > 10^5 per night, and if orbital determination has a 50% success rate per epoch, will drop below 1000/night within 2 years.

The Variable Sky of Deep Synoptic Surveys [Replacement]

The discovery of variable and transient sources is an essential product of synoptic surveys. The alert stream will require filtering for personalized criteria — a process managed by a functionality commonly described as a Broker. In order to understand quantitatively the magnitude of the alert generation and Broker tasks, we have undertaken an analysis of the most numerous types of variable targets in the sky — Galactic stars, QSOs, AGNs and asteroids. It is found that LSST will be capable of discovering ~10^5 high latitude (|b| > 20 deg) variable stars per night at the beginning of the survey. (The corresponding number for |b| < 20 deg is orders of magnitude larger, but subject to caveats concerning extinction and crowding.) However, the number of new discoveries may well drop below 100/night within 1 year. The same analysis applied to Gaia clarifies the complementarity of the Gaia and LSST surveys. Discoveries of variable galactic nuclei (AGNs) and Quasi-Stellar Objects (QSOs) are each predicted to begin at ~3000 per night, and decrease by 50X over 4 years. SNe are expected at ~1100/night, and after several survey years will dominate the new variable discovery rate. LSST asteroid discoveries will start at > 10^5 per night, and if orbital determination has a 50% success rate per epoch, will drop below 1000/night within 2 years.

Strong Lens Time Delay Challenge: II. Results of TDC1

We present the results of the first strong lens time delay challenge. The motivation, experimental design, and entry-level challenge are described in a companion paper. This paper presents the main challenge, TDC1, which consisted of analyzing thousands of simulated light curves blindly. The observational properties of the light curves cover the range in quality obtained for current targeted efforts (e.g. COSMOGRAIL) and expected from future synoptic surveys (e.g. LSST), and include "evilness" in the form of simulated systematic errors. Seven teams participated in TDC1, submitting results from 78 different method variants. After describing each method, we compute and analyze basic statistics measuring accuracy (or bias) $A$, goodness of fit $\chi^2$, precision $P$, and success rate $f$. For some methods we identify outliers as an important issue. Other methods show that outliers can be controlled via visual inspection or conservative quality control. Several methods are competitive, i.e. give $|A|<0.03$, $P<0.03$, and $\chi^2<1.5$, with some of the methods already reaching sub-percent accuracy. The fraction of light curves yielding a time delay measurement is typically in the range $f = 20$–$40\%$. It depends strongly on the quality of the data: COSMOGRAIL-quality cadence and light curve lengths yield significantly higher $f$ than does sparser sampling. We estimate that LSST should provide around 400 robust time-delay measurements, each with $P<0.03$ and $|A|<0.01$, comparable to current lens modeling uncertainties. In terms of observing strategies, we find that $A$ and $f$ depend mostly on season length, while $P$ depends mostly on cadence and campaign duration.
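
A small numpy sketch of how these four summary statistics can be computed from a set of submissions (our reading of the challenge's definitions; treat it as illustrative rather than the official scoring code):

```python
import numpy as np

def tdc_metrics(dt_true, dt_est, sigma):
    """Per-method challenge statistics: dt_est and sigma hold a point
    estimate and 1-sigma uncertainty per light curve, with NaN marking
    curves the method declined to fit."""
    ok = np.isfinite(dt_est)
    f = ok.mean()                                    # success rate
    res = dt_est[ok] - dt_true[ok]
    chi2 = np.mean((res / sigma[ok]) ** 2)           # goodness of fit
    P = np.mean(sigma[ok] / np.abs(dt_true[ok]))     # precision
    A = np.mean(res / dt_true[ok])                   # accuracy (bias)
    return f, chi2, P, A
```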

Strong Lens Time Delay Challenge: II. Results of TDC1 [Replacement]

We present the results of the first strong lens time delay challenge. The motivation, experimental design, and entry-level challenge are described in a companion paper. This paper presents the main challenge, TDC1, which consisted of analyzing thousands of simulated light curves blindly. The observational properties of the light curves cover the range in quality obtained for current targeted efforts (e.g., COSMOGRAIL) and expected from future synoptic surveys (e.g., LSST), and include simulated systematic errors. Seven teams participated in TDC1, submitting results from 78 different method variants. After describing each method, we compute and analyze basic statistics measuring accuracy (or bias) $A$, goodness of fit $\chi^2$, precision $P$, and success rate $f$. For some methods we identify outliers as an important issue. Other methods show that outliers can be controlled via visual inspection or conservative quality control. Several methods are competitive, i.e., give $|A|<0.03$, $P<0.03$, and $\chi^2<1.5$, with some of the methods already reaching sub-percent accuracy. The fraction of light curves yielding a time delay measurement is typically in the range $f = 20$–$40\%$. It depends strongly on the quality of the data: COSMOGRAIL-quality cadence and light curve lengths yield significantly higher $f$ than does sparser sampling. Taking the results of TDC1 at face value, we estimate that LSST should provide around 400 robust time-delay measurements, each with $P<0.03$ and $|A|<0.01$, comparable to current lens modeling uncertainties. In terms of observing strategies, we find that $A$ and $f$ depend mostly on season length, while $P$ depends mostly on cadence and campaign duration.

Improving LSST Photometric Calibration with Gaia Data

We consider the possibility that the Gaia mission can supply data which will improve the photometric calibration of LSST. After outlining the LSST calibration process and the information that will be available from Gaia, we explore two options for using Gaia data. The first is to use Gaia G-band photometry of selected stars, in conjunction with knowledge of the stellar parameters $T_{\rm eff}$, $\log g$, and $A_V$, and in some cases $Z$, to create photometric standards in the LSST $u$, $g$, $r$, $i$, $z$, and $y$ bands. The accuracies of the resulting standard magnitudes are found to be insufficient to satisfy LSST requirements when generated from main sequence (MS) stars, but generally adequate when generated from DA white dwarfs (WD). The second option is to combine the LSST bandpasses into a synthetic Gaia G band, which is a close approximation to the real Gaia G band. This allows synthetic Gaia G photometry to be directly compared with actual Gaia G photometry at a level of accuracy which is useful for both verifying and improving LSST photometric calibration.
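
A toy least-squares version of the second option (entirely synthetic inputs: the star fluxes and blend weights below are invented to show the fitting step; real work would integrate spectra through the actual bandpasses):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical per-star fluxes in the six LSST bands (rows: stars)
lsst_flux = rng.uniform(0.5, 2.0, size=(500, 6))
true_w = np.array([0.0, 0.2, 0.35, 0.3, 0.15, 0.0])   # made-up blend weights
gaia_flux = lsst_flux @ true_w * (1 + rng.normal(0, 0.002, 500))

# Fit the band weights that best reproduce G, then compare the synthetic
# G photometry against the "observed" Gaia G photometry.
w, *_ = np.linalg.lstsq(lsst_flux, gaia_flux, rcond=None)
resid = lsst_flux @ w - gaia_flux
print("weights:", np.round(w, 3), " rms residual:", resid.std())
```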

Transiting Planets with LSST I: Potential for LSST Exoplanet Detection [Replacement]

The Large Synoptic Survey Telescope (LSST) has been designed to satisfy several different scientific objectives that can be addressed by a ten-year synoptic sky survey. However, LSST will also provide a large amount of data that can be exploited for additional science beyond its primary goals. We demonstrate the potential of using LSST data to search for transiting exoplanets, and in particular to find planets orbiting host stars that are members of stellar populations less thoroughly probed by current exoplanet surveys. We find that existing algorithms can detect, in simulated LSST light curves, the transits of Hot Jupiters around solar-type stars, Hot Neptunes around K dwarfs, and planets orbiting stars in the Large Magellanic Cloud. We also show that LSST would have the sensitivity to detect Super-Earths orbiting red dwarfs, including those in habitable-zone orbits, if they are present in some fields that LSST will observe. From these results, we make the case that LSST can provide a valuable contribution to exoplanet science.

Transiting Planets with LSST I: Potential for LSST Exoplanet Detection

The Large Synoptic Survey Telescope (LSST) has been designed to satisfy several different scientific objectives that can be addressed by a ten-year synoptic sky survey. However, LSST will also provide a large amount of data that can be exploited for additional science beyond its primary goals. We demonstrate the potential of using LSST data to search for transiting exoplanets, and in particular to find planets orbiting host stars that are members of stellar populations less thoroughly probed by current exoplanet surveys. We find that existing algorithms can detect, in simulated LSST light curves, the transits of Hot Jupiters around solar-type stars, Hot Neptunes around K dwarfs, and planets orbiting stars in the Large Magellanic Cloud. We also show that LSST would have the sensitivity to detect Super-Earths orbiting red dwarfs, including those in habitable-zone orbits, if they are present in some fields that LSST will observe. From these results, we make the case that LSST can provide a valuable contribution to exoplanet science.

Too Many, Too Few, or Just Right? The Predicted Number and Distribution of Milky Way Dwarf Galaxies

We predict the spatial distribution and number of Milky Way dwarf galaxies to be discovered in the DES and LSST surveys, by completeness-correcting the observed SDSS dwarf population. We apply "most massive in the past", "earliest forming", and "earliest infall" toy models to a set of dark matter-only simulated Milky Way/M31 halo pairs from Exploring the Local Volume In Simulations (ELVIS). The observed spatial distribution of Milky Way dwarfs in the LSST era will discriminate between the earliest infall and other simplified models for how dwarf galaxies populate dark matter subhalos. Inclusive of all toy models and simulations, at 90% confidence we predict a total of 37-114 $L \gtrsim 10^3 L_{\odot}$ dwarfs and 131-782 $L \lesssim 10^3 L_{\odot}$ dwarfs within 300 kpc. These numbers of $L \gtrsim 10^3 L_{\odot}$ dwarfs are dramatically lower than previous predictions, owing primarily to our use of updated detection limits and the decreasing number of SDSS dwarfs discovered per sky area. For an effective $r_{\rm limit}$ of 25.8 mag, we predict: 3-13 $L \gtrsim 10^3 L_{\odot}$ and 9-99 $L \lesssim 10^3 L_{\odot}$ dwarfs for DES, and 18-53 $L \gtrsim 10^3 L_{\odot}$ and 53-307 $L \lesssim 10^3 L_{\odot}$ dwarfs for LSST. These enormous predicted ranges ensure a coming decade of near-field excitement with these next-generation surveys.

On the Performance of Quasar Reverberation Mapping in the Era of Time-Domain Photometric Surveys

We quantitatively assess, by means of comprehensive numerical simulations, the ability of broad-band photometric surveys to recover the broad emission line region (BLR) size in quasars under various observing conditions and for a wide range of object properties. Focusing on the general characteristics of the Large Synoptic Survey Telescope (LSST), we find that the slope of the BLR size-luminosity relation in quasars can be determined with unprecedented accuracy, of order a few percent, over a broad luminosity range and out to $z\sim 3$. In particular, major emission lines for which the BLR size can be reliably measured with LSST include H$\alpha$, MgII $\lambda 2799$, CIII] $\lambda 1909$, CIV $\lambda 1549$, and Ly$\alpha$, amounting to a total of $\gtrsim 10^5$ time-delay measurements across all transitions. Combined with an estimate for the emission line velocity dispersion, upcoming photometric surveys will facilitate the estimation of black hole masses in AGN over a broad range of luminosities and redshifts, allow for refined calibrations of BLR size-luminosity-redshift relations in different transitions, and lead to more reliable cross-calibration with other black hole mass estimation techniques.
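
The underlying measurement is a time-delay estimate between a continuum-dominated band and a line-contaminated band. A toy interpolated cross-correlation, with a fabricated light curve and a 30-day echo, shows the recovery step (our illustration, not the paper's simulation machinery):

```python
import numpy as np

def iccf_lag(t, cont, line, lags):
    """Toy interpolated cross-correlation: for each trial lag, shift the
    continuum light curve and correlate it with the line-band light curve
    on the observed epochs (linear interpolation across gaps)."""
    r = []
    for lag in lags:
        shifted = np.interp(t, t + lag, cont)   # cont evaluated at t - lag
        r.append(np.corrcoef(shifted, line)[0, 1])
    return lags[np.argmax(r)]

# Hypothetical demo: a smoothed random-walk continuum and a line band
# responding 30 days later, sampled at irregular epochs
rng = np.random.default_rng(4)
grid = np.arange(0.0, 2000.0)
cont_grid = np.convolve(rng.normal(size=grid.size), np.ones(50) / 50, "same")
t = np.sort(rng.choice(grid[100:-100], 150, replace=False))
cont = np.interp(t, grid, cont_grid)
line = np.interp(t - 30.0, grid, cont_grid)     # BLR echo with a 30 d lag
print("recovered lag (d):", iccf_lag(t, cont, line, np.arange(0.0, 80.0)))
```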

Radio Astronomy in LSST Era

A community meeting on the topic of "Radio Astronomy in the LSST Era" was hosted by the National Radio Astronomy Observatory in Charlottesville, VA (2013 May 6–8). The focus of the workshop was on time domain radio astronomy and sky surveys. For the time domain, the extent to which radio and visible wavelength observations are required to understand several classes of transients was stressed, but there are also classes of radio transients for which no visible wavelength counterpart is yet known, providing an opportunity for discovery. From the LSST perspective, the LSST is expected to generate as many as 1 million alerts nightly, which will require even more selective specification and identification of the classes and characteristics of transients that warrant follow-up, at radio or any other wavelength. The LSST will also conduct a deep survey of the sky, producing a catalog expected to contain over 38 billion objects. Deep radio wavelength sky surveys will also be conducted on a comparable time scale, and radio and visible wavelength observations are part of the multi-wavelength approach needed to classify and understand these objects. Radio wavelengths are valuable because they are unaffected by dust obscuration and, for galaxies, contain contributions both from star formation and from active galactic nuclei. The workshop touched on several other topics on which there was consensus, including the placement of other LSST "Deep Drilling Fields," the inter-operability of software tools, and the challenge of filtering and exploiting the LSST data stream. There were also topics for which there was insufficient time for full discussion or for which no consensus was reached, including the procedures for following up on LSST observations and the nature of future support for researchers desiring to use LSST data products.

Optical selection of quasars: SDSS and LSST

Over the last decade, quasar sample sizes have increased from several thousand to several hundred thousand, thanks mostly to SDSS imaging and spectroscopic surveys. LSST, the next-generation optical imaging survey, will provide hundreds of detections per object for a sample of more than ten million quasars with redshifts of up to about seven. We briefly review optical quasar selection techniques, with emphasis on methods based on colors, variability properties and astrometric behavior.

Cosmic shear without shape noise

We describe a new method for reducing the shape noise in weak lensing measurements by an order of magnitude. Our method relies on spectroscopic measurements of disk galaxy rotation and makes use of the Tully-Fisher (TF) relation to control for the intrinsic orientations of galaxy disks. For this proposed experiment, shape noise ceases to be an important source of statistical error. Using CosmoLike, a new cosmological analysis software package, we simulate likelihood analyses for two spectroscopic weak lensing survey concepts (roughly similar in scale to Dark Energy Task Force Stage III and Stage IV missions) and compare their constraining power to a cosmic shear survey from the Large Synoptic Survey Telescope (LSST). Our forecasts in a seven-dimensional cosmological parameter space include statistical uncertainties resulting from shape noise, cosmic variance, halo sample variance, and higher-order moments of the density field. We marginalize over systematic uncertainties arising from photometric redshift errors and shear calibration biases, considering both optimistic and conservative assumptions about LSST systematic errors. We find that even the TF-Stage III concept is highly competitive with the optimistic LSST scenario, while evading the most important sources of theoretical and observational systematic error inherent in traditional weak lensing techniques. Furthermore, the TF technique enables a narrow-bin cosmic shear tomography approach to tightly constrain time-dependent signatures in the dark energy phenomenon.

Strong Lens Time Delay Challenge: I. Experimental Design

The time delays between point-like images in gravitational lens systems can be used to measure cosmological parameters as well as probe the dark matter (sub-)structure within the lens galaxy. The number of lenses with measured time delays is growing rapidly as a result of some dedicated efforts; the upcoming Large Synoptic Survey Telescope (LSST) will monitor ~1000 lens systems consisting of a foreground elliptical galaxy producing multiple images of a background quasar. In an effort to assess the present capabilities of the community to accurately measure the time delays in strong gravitational lens systems, and to provide input to dedicated monitoring campaigns and future LSST cosmology feasibility studies, we invite the community to take part in a "Time Delay Challenge" (TDC). The challenge is organized as a set of "ladders", each containing a group of simulated datasets to be analyzed blindly by participating independent analysis teams. Each rung on a ladder consists of a set of realistic mock observed lensed quasar light curves, with the rungs’ datasets increasing in complexity and realism to incorporate a variety of anticipated physical and experimental effects. The initial challenge described here has two ladders, TDC0 and TDC1. TDC0 has a small number of datasets, and is designed to be used as a practice set by the participating teams as they set up their analysis pipelines. The (non-mandatory) deadline for completion of TDC0 will be the TDC1 launch date, December 1, 2013. TDC1 will consist of some 1000 light curves, a sample designed to provide the statistical power to make meaningful statements about the sub-percent accuracy that will be required to provide competitive Dark Energy constraints in the LSST era.

Growth of Cosmic Structure: Probing Dark Energy Beyond Expansion

The quantity and quality of cosmic structure observations have greatly accelerated in recent years. Further leaps forward will be facilitated by imminent projects, which will enable us to map the evolution of dark and baryonic matter density fluctuations over cosmic history. The way that these fluctuations vary over space and time is sensitive to the nature of dark matter and dark energy. Dark energy and gravity both affect how rapidly structure grows: the greater the acceleration, the more suppressed the growth of structure, while the greater the gravity, the more enhanced the growth. While distance measurements also constrain dark energy, the comparison of growth and distance data tests whether General Relativity describes the laws of physics accurately on large scales. Modified gravity models are able to reproduce the distance measurements but at the cost of altering the growth of structure (these signatures are described in more detail in the accompanying paper on Novel Probes of Gravity and Dark Energy). Upcoming surveys will exploit these differences to determine whether the acceleration of the Universe is due to dark energy or to modified gravity. To realize this potential, both wide-field imaging and spectroscopic redshift surveys play crucial roles. Projects including DES, eBOSS, DESI, PFS, LSST, Euclid, and WFIRST are in line to map a volume of the Universe exceeding 1000 cubic billion light years. These will map the cosmic structure growth rate to 1% over the redshift range 0<z<2, covering the last 3/4 of the age of the Universe.

Growth of Cosmic Structure: Probing Dark Energy Beyond Expansion [Replacement]

The quantity and quality of cosmic structure observations have greatly accelerated in recent years. Further leaps forward will be facilitated by imminent projects, which will enable us to map the evolution of dark and baryonic matter density fluctuations over cosmic history. The way that these fluctuations vary over space and time is sensitive to the nature of dark matter and dark energy. Dark energy and gravity both affect how rapidly structure grows: the greater the acceleration, the more suppressed the growth of structure, while the greater the gravity, the more enhanced the growth. While distance measurements also constrain dark energy, the comparison of growth and distance data tests whether General Relativity describes the laws of physics accurately on large scales. Modified gravity models are able to reproduce the distance measurements but at the cost of altering the growth of structure (these signatures are described in more detail in the accompanying paper on Novel Probes of Gravity and Dark Energy). Upcoming surveys will exploit these differences to determine whether the acceleration of the Universe is due to dark energy or to modified gravity. To realize this potential, both wide-field imaging and spectroscopic redshift surveys play crucial roles. Projects including DES, eBOSS, DESI, PFS, LSST, Euclid, and WFIRST are in line to map a volume of the Universe exceeding 1000 cubic billion light years. These will map the cosmic structure growth rate to 1% over the redshift range 0<z<2, covering the last 3/4 of the age of the Universe.

Prospects for Detecting Gamma Rays from Annihilating Dark Matter in Dwarf Galaxies in the Era of DES and LSST [Replacement]

Among the most stringent constraints on the dark matter annihilation cross section are those derived from observations of dwarf galaxies by the Fermi Gamma-Ray Space Telescope. As current (e.g., Dark Energy Survey, DES) and future (Large Synoptic Survey Telescope, LSST) optical imaging surveys discover more of the Milky Way's ultra-faint satellite galaxies, they may increase Fermi's sensitivity to dark matter annihilations. In this study, we use a semi-analytic model of the Milky Way's satellite population to predict the characteristics of the dwarfs likely to be discovered by DES and LSST, and project how these discoveries will impact Fermi's sensitivity to dark matter. While we find that modest improvements are likely, the dwarf galaxies discovered by DES and LSST are unlikely to increase Fermi's sensitivity by more than a factor of ~2. However, this outlook may be conservative, given that our model underpredicts the number of ultra-faint galaxies with large potential annihilation signals actually discovered in the Sloan Digital Sky Survey. Our simulation-based approach focusing on the Milky Way satellite population demographics complements existing empirically-based estimates.

Prospects for Detecting Gamma Rays from Annihilating Dark Matter in Dwarf Galaxies in the Era of DES and LSST [Replacement]

Among the most stringent constraints on the dark matter annihilation cross section are those derived from observations of dwarf galaxies by the Fermi Gamma-Ray Space Telescope. As current (e.g., Dark Energy Survey, DES) and future (Large Synoptic Survey Telescope, LSST) optical imaging surveys discover more of the Milky Way’s ultra-faint satellite galaxies, they may increase Fermi’s sensitivity to dark matter annihilations. In this study, we use a semi-analytic model of the Milky Way’s satellite population to predict the characteristics of the dwarfs likely to be discovered by DES and LSST, and project how these discoveries will impact Fermi’s sensitivity to dark matter. While we find that modest improvements are likely, the dwarf galaxies discovered by DES and LSST are unlikely to increase Fermi’s sensitivity by more than a factor of ~2-4. However, this outlook may be conservative, given that our model underpredicts the number of ultra-faint galaxies with large potential annihilation signals actually discovered in the Sloan Digital Sky Survey. Our simulation-based approach focusing on the Milky Way satellite population demographics complements existing empirically-based estimates.

Prospects for Detecting Gamma Rays from Annihilating Dark Matter in Dwarf Galaxies in the Era of DES and LSST

Among the most stringent constraints on the dark matter annihilation cross section are those derived from observations of dwarf galaxies by the Fermi Gamma-Ray Space Telescope. As current (e.g., Dark Energy Survey, DES) and future (Large Synoptic Survey Telescope, LSST) optical imaging surveys discover more of the Milky Way's ultra-faint satellite galaxies, they may increase Fermi's sensitivity to dark matter annihilations. In this study, we use a semi-analytic model of the Milky Way's satellite population to predict the characteristics of the dwarfs likely to be discovered by DES and LSST, and project how these discoveries will impact Fermi's sensitivity to dark matter. While we find that modest improvements are likely, the dwarf galaxies discovered by DES and LSST are unlikely to increase Fermi's sensitivity by more than a factor of ~2.

Measuring the Thermal Sunyaev-Zel'dovich Effect Through the Cross Correlation of Planck and WMAP Maps with ROSAT Galaxy Cluster Catalogs

We measure a significant correlation between the thermal Sunyaev-Zel'dovich effect in the Planck and WMAP maps and an X-ray cluster map based on ROSAT. We use the 100, 143 and 353 GHz Planck maps and the WMAP 94 GHz map to obtain this cluster cross spectrum, and check our measurements for contamination from dusty galaxies using cross-correlations with the 217, 545 and 857 GHz maps from Planck. Our measurement yields a direct characterization of the cluster power spectrum over a wide range of angular scales that is consistent with large cosmological simulations. The amplitude of this signal depends on the cosmological parameters that determine the growth of structure ($\sigma_8$ and $\Omega_M$) and scales as $\sigma_8^{7.4}$ and $\Omega_M^{1.9}$ around multipole $\ell \sim 1000$. From the cross-power spectrum we constrain $\sigma_8 (\Omega_M/0.30)^{0.26} = 0.8 \pm 0.02$. Since this cross spectrum produces a tight constraint in the $\sigma_8$–$\Omega_M$ plane, the errors on a $\sigma_8$ constraint will be limited mostly by the uncertainties from external constraints. Future cluster catalogs, like those from eROSITA and LSST, and pointed multi-wavelength observations of clusters will improve the constraining power of this cross-spectrum measurement. In principle this analysis can be extended beyond $\sigma_8$ and $\Omega_M$ to constrain dark energy or the sum of the neutrino masses.
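
As a worked example of the quoted constraint (our arithmetic, not a result from the paper): taking $\Omega_M = 0.27$ gives $\sigma_8 = 0.8\,(0.27/0.30)^{-0.26} \approx 0.82$.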

The Kepler-SEP Mission: Harvesting the South Ecliptic Pole large-amplitude variables with Kepler

As a response to the white paper call, we propose to turn Kepler to the South Ecliptic Pole (SEP) and observe thousands of large-amplitude variables for years with high cadence in the framework of the Kepler-SEP Mission. The degraded pointing stability will still allow these stars to be observed with reasonable (probably better than mmag) accuracy. Long-term continuous monitoring has already proved extremely helpful for investigating several areas of stellar astrophysics. Space-based missions have opened a new window onto the dynamics of pulsation in several classes of pulsating variable stars and facilitated detailed studies of eclipsing binaries. The main aim of this mission is to better understand the fascinating dynamics behind various stellar pulsation phenomena (resonances, mode coupling, chaos, mode selection) and interior physics (turbulent convection, opacities). This will also improve the applicability of these astrophysical tools for distance measurements, population studies, and stellar evolution studies. We investigated the pragmatic details of such a mission and found a number of advantages: minimal reprogramming of the flight software, a favorable field of view, and access to both Galactic and LMC objects. However, the main advantage of the SEP field comes from the large sample of well-classified targets, mainly through OGLE. Synergies and significant overlap (spatial, temporal, and in brightness) with both ground-based (OGLE, LSST) and space-based missions (Gaia, TESS) will greatly enhance the scientific value of the Kepler-SEP mission. Gaia will allow full characterization of the distance indicators. TESS will continuously monitor this field for at least one year, and together with the proposed mission will provide long time series that cannot be obtained by other means. If the Kepler-SEP program is successful, there is a possibility to place one of the so-called LSST "deep-drilling" fields in this region.

Galactic Stellar Populations in the Era of SDSS and Other Large Surveys

Studies of stellar populations, understood to mean collections of stars with common spatial, kinematic, chemical, and/or age distributions, have been reinvigorated during the last decade by the advent of large-area sky surveys such as SDSS, 2MASS, RAVE, and others. We review recent analyses of these data that, together with theoretical and modeling advances, are revolutionizing our understanding of the nature of the Milky Way, and galaxy formation and evolution in general. The formation of galaxies like the Milky Way was long thought to be a steady process leading to a smooth distribution of stars. However, the abundance of substructure in the multi-dimensional space of various observables, such as position, kinematics, and metallicity, is by now proven beyond doubt, and demonstrates the importance of mergers in the growth of galaxies. Unlike smooth models that involve simple components, the new data reviewed here clearly show many irregular structures, such as the Sagittarius dwarf tidal stream and the Virgo and Pisces overdensities in the halo, and the Monoceros stream closer to the Galactic plane. These recent developments have made it clear that the Milky Way is a complex and dynamical structure, one that is still being shaped by the merging of neighboring smaller galaxies. We also briefly discuss the next generation of wide-field sky surveys, such as SkyMapper, Pan-STARRS, Gaia and LSST, which will improve measurement precision manyfold, and comprise billions of individual stars. The ultimate goal, development of a coherent and detailed story of the assembly and evolutionary history of the Milky Way and other large spirals like it, now appears well within reach.

Clustering Measurements of broad-line AGNs: Review and Future

Despite substantial effort, the precise physical processes that lead to the growth of super-massive black holes in the centers of galaxies are still not well understood. These phases of black hole growth are thought to be of key importance in understanding galaxy evolution. Forthcoming missions such as eROSITA, HETDEX, eBOSS, BigBOSS, LSST, and Pan-STARRS will compile by far the largest Active Galactic Nucleus (AGN) catalogs to date, which will allow us to measure the spatial distribution of AGNs in the universe with unprecedented accuracy. For the first time, AGN clustering measurements will reach a level of precision that will not only allow for an alternative approach to answering open questions in AGN/galaxy co-evolution but will open a new frontier, allowing us to precisely determine cosmological parameters. This paper reviews the large-scale clustering measurements of broad-line AGNs. We summarize how clustering is measured and which constraints can be derived from AGN clustering measurements, discuss recent developments, and briefly describe future projects that will deliver extremely large AGN samples, enabling AGN clustering measurements of unprecedented accuracy. In order to maximize the scientific return in the research fields of AGN/galaxy evolution and cosmology, we advise that the community develop a full understanding of the systematic uncertainties, which will, in contrast to today's measurements, be the dominant source of uncertainty.

Measuring the matter energy density and Hubble parameter from Large Scale Structure

We investigate a method to measure both the present value of the matter energy density parameter and the Hubble parameter directly from measurements of the linear growth rate obtained from the large scale structure of the Universe. With this method, one can obtain the value of the nuisance cosmological parameter $\Omega_{m,0}$ (the present value of the matter energy density parameter) to within 3% error if the growth rate measurements reach $z > 3.5$. One can also trace the evolution of the Hubble parameter without any prior on $H_0$ (its current value). In particular, the estimated Hubble parameter is insensitive to errors on the measured normalized growth rate $f \sigma_8$. However, the method requires growth rate measurements at high redshift ($z > 3.5$) in order to achieve errors below 5% on $H(z)$ at $z \leq 1.2$ with redshift bins of $\Delta z = 0.2$. It is thus well suited to next-generation large scale structure galaxy surveys like WFMOS and LSST.
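
For concreteness, here is a standard flat-LCDM computation of the observable in question, $f\sigma_8(z)$, from the linear growth ODE (textbook material shown only to make the quantities concrete; it is not the paper's code, and the parameter values are illustrative):

```python
import numpy as np
from scipy.integrate import solve_ivp

def fsigma8(z, om0=0.3, sigma8=0.8):
    """f*sigma8(z) in flat LCDM, for z > 0, from the growth ODE in ln(a):
    D'' + (2 - 1.5*Om(a))*D' - 1.5*Om(a)*D = 0, with f = dlnD/dlna."""
    om = lambda x: om0 / (om0 + (1 - om0) * np.exp(3 * x))   # Omega_m(a)
    rhs = lambda x, y: [y[1], -(2 - 1.5 * om(x)) * y[1] + 1.5 * om(x) * y[0]]
    x0 = np.log(1e-3)                  # deep matter domination: D ~ a there
    sol = solve_ivp(rhs, [x0, 0.0], [np.exp(x0), np.exp(x0)],
                    t_eval=[-np.log(1 + z), 0.0], rtol=1e-8)
    (Dz, D0), (Dpz, _) = sol.y
    return (Dpz / Dz) * sigma8 * Dz / D0

for z in (0.5, 1.0, 2.0, 3.5):
    print(f"z = {z}: f*sigma8 = {fsigma8(z):.3f}")
```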

The Effect of Weak Lensing on Distance Estimates from Supernovae

Using a sample of 608 Type Ia supernovae from the SDSS-II and BOSS surveys, combined with a sample of foreground galaxies from SDSS-II, we estimate the weak lensing convergence for each supernova line of sight. We find that the correlation between this measurement and the Hubble residuals is consistent with the prediction from lensing (at a significance of 1.7$\sigma$). Strong correlations are also found between the residuals and supernova nuisance parameters after a linear correction is applied. When these other correlations are taken into account, the lensing signal is detected at 1.4$\sigma$. We show for the first time that distance estimates from supernovae can be improved when lensing is incorporated, by including a new parameter in the SALT2 methodology for determining distance moduli. The recovered value of the new parameter is consistent with the lensing prediction. Using WMAP7, HST and BAO data, we find the best-fit value of the new lensing parameter and show that the central values and uncertainties on $\Omega_m$ and $w$ are unaffected. The lensing of supernovae, while only seen at marginal significance in this low-redshift sample, will be of vital importance for the next generation of surveys, such as DES and LSST, which will be systematics-dominated.
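
A toy version of the correction step (synthetic numbers throughout; 'dkappa' is a hypothetical stand-in for the extra parameter, and the convergence scatter and intrinsic scatter are invented):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 608                                    # sample size borrowed from the abstract
kappa = rng.normal(0.0, 0.01, n)           # estimated line-of-sight convergence
# magnification mu ~ 1 + 2*kappa brightens a SN by ~ -(5/ln 10)*kappa mag
signal = -(5.0 / np.log(10.0)) * kappa
resid = signal + rng.normal(0.0, 0.15, n)  # Hubble residuals + intrinsic scatter

# fit the convergence coefficient and subtract it from the residuals
dkappa, zp = np.polyfit(kappa, resid, 1)
corrected = resid - (dkappa * kappa + zp)
print(f"fitted slope: {dkappa:.2f} (lensing prediction {-5 / np.log(10):.2f})")
print(f"residual rms: {resid.std():.4f} -> {corrected.std():.4f}")
```

At this signal-to-noise the fitted slope is noisy, mirroring the marginal significance reported in the abstract.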

An r-Process Kilonova Associated with the Short-Hard GRB 130603B [Replacement]

We present ground-based optical and Hubble Space Telescope optical and near-IR observations of the short-hard GRB 130603B at z=0.356, which demonstrate the presence of excess near-IR emission matching the expected brightness and color of an r-process powered transient (a "kilonova"). The early afterglow fades rapidly, with $\alpha < -2.6$ at $t \sim 8$–32 hr post-burst and a spectral index of $\beta = -1.5$ ($F_\nu \propto t^{\alpha}\nu^{\beta}$), leading to an expected near-IR brightness at the time of the first HST observation of m(F160W) > 29.3 AB mag. Instead, the detected source has m(F160W) = 25.8 +/- 0.2 AB mag, corresponding to a rest-frame absolute magnitude of M(J) = -15.2 mag. The upper limit in the HST optical observations is m(F606W) > 27.7 AB mag (3-sigma), indicating an unusually red color of V-H > 1.9 mag. Comparing the observed near-IR luminosity to theoretical models of kilonovae produced by ejecta from the merger of an NS-NS or NS-BH binary, we infer an ejecta mass of M_ej ~ 0.03-0.08 Msun for v_ej = 0.1-0.3c. The inferred mass matches the expectations from numerical merger simulations. The presence of a kilonova provides the strongest evidence to date that short GRBs are produced by compact object mergers, and provides initial insight on the ejected mass and the primary role that compact object mergers may play in the r-process. Equally important, it demonstrates that gravitational wave sources detected by Advanced LIGO/Virgo will be accompanied by optical/near-IR counterparts with unusually red colors, detectable by existing and upcoming large wide-field facilities (e.g., Pan-STARRS, DECam, Subaru, LSST).

Smoking Gun or Smoldering Embers? A Possible r-process Kilonova Associated with the Short-Hard GRB 130603B

We present Hubble Space Telescope optical and near-IR observations of the short-hard GRB 130603B (z=0.356) obtained 9.4 days post-burst. At the position of the burst we detect a red point source with m(F160W)=25.8+/-0.2 AB mag and m(F606W)>27.5 AB mag (3-sigma), corresponding to rest-frame absolute magnitudes of M_J~-15.2 mag and M_B>-13.5 mag. A comparison to the early optical afterglow emission requires a decline rate of alpha_opt<-1.6 (F_nu propto t^alpha), consistent with the observed X-ray decline at about 1 hr to about 1 day. The observed red color of V-H>1.7 mag is also potentially consistent with the red optical colors of the afterglow at early time (F_nu propto nu^-1.6 in gri). Thus, an afterglow interpretation is feasible. Alternatively, the red color and faint absolute magnitude are due to emission from an r-process powered transient ("kilonova") produced by ejecta from the merger of an NS-NS or NS-BH binary, the most likely progenitors of short GRBs. In this scenario, the observed brightness implies an outflow with M_ej~0.01 Msun and v_ej~0.1c, in good agreement with the results of numerical merger simulations for roughly equal mass binary constituents (i.e., NS-NS). If true, the kilonova interpretation provides the strongest evidence to date that short GRBs are produced by compact object mergers, and places initial constraints on the ejected mass. Equally important, it demonstrates that gravitational wave sources detected by Advanced LIGO/Virgo will be accompanied by optical/near-IR counterparts with unusually red colors, detectable by existing and upcoming large wide-field facilities (e.g., Pan-STARRS, DECam, Subaru, LSST).
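
The decline-rate argument reduces to a two-point power-law slope between the early afterglow and the late HST epoch. A sketch of that arithmetic; the magnitudes and epochs below are purely illustrative stand-ins, not the measured photometry:

```python
import numpy as np

def decline_index(m1, t1_day, m2, t2_day):
    """Two-point temporal power-law index alpha, where F_nu propto t^alpha.

    Fluxes follow F2/F1 = 10**(-0.4*(m2 - m1)), so
    alpha = -0.4*(m2 - m1) / log10(t2/t1).
    """
    return -0.4 * (m2 - m1) / np.log10(t2_day / t1_day)

# Illustrative numbers only: a source fading from 22.0 mag at 0.3 d to
# 25.8 mag at 9.4 d has a two-point slope alpha ~ -1.0, shallower than an
# afterglow declining with alpha_opt < -1.6, which would make the late
# point an excess over the extrapolated afterglow.
print(decline_index(22.0, 0.3, 25.8, 9.4))
```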

Dark energy with gravitational lens time delays

Strong lensing gravitational time delays are a powerful and cost-effective probe of dark energy. Recent studies have shown that a single lens can provide a distance measurement with 6-7% accuracy (including random and systematic uncertainties), provided sufficient data are available to determine the time delay and reconstruct the gravitational potential of the deflector. Gravitational time delays are a low-redshift (z~0-2) probe and thus allow one to break degeneracies in the interpretation of data from higher-redshift probes, like the cosmic microwave background, in terms of the dark energy equation of state. Current studies are limited by the size of the sample of known lensed quasars, but this situation is about to change. Even in this decade, wide-field imaging surveys are likely to discover thousands of lensed quasars, enabling the targeted study of ~100 of these systems and resulting in substantial gains in the dark energy figure of merit. In the next decade, a further order of magnitude improvement will be possible with the 10000 systems expected to be detected and measured with LSST and Euclid. To fully exploit these gains, we identify three priorities. First, support for the development of software required for the analysis of the data. Second, in this decade, small robotic telescopes (1-4m in diameter) dedicated to monitoring lensed quasars will transform the field by delivering accurate time delays for ~100 systems. Third, in the 2020s, LSST will deliver thousands of time delays; the bottleneck will instead be the acquisition and analysis of high-resolution imaging follow-up. Thus, the top priority for the next decade is to support fast high-resolution imaging capabilities, such as those enabled by the James Webb Space Telescope and next generation adaptive optics systems on large ground-based telescopes.
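
The key observable here is the time-delay distance, a combination of angular diameter distances that scales as 1/H_0 and mildly with the dark energy equation of state. A minimal sketch with astropy; the lens and source redshifts in the example are assumed, typical-looking values:

```python
from astropy.cosmology import FlatLambdaCDM

def time_delay_distance(z_lens, z_src, H0=70.0, Om0=0.3):
    """Time-delay distance D_dt = (1 + z_d) * D_d * D_s / D_ds (in Mpc).

    Measured time delays plus a model of the lens potential constrain
    D_dt, which is inversely proportional to H0.
    """
    cosmo = FlatLambdaCDM(H0=H0, Om0=Om0)
    D_d = cosmo.angular_diameter_distance(z_lens)
    D_s = cosmo.angular_diameter_distance(z_src)
    D_ds = cosmo.angular_diameter_distance_z1z2(z_lens, z_src)
    return (1.0 + z_lens) * D_d * D_s / D_ds

# Lens/source redshifts typical of lensed quasar systems
print(time_delay_distance(0.5, 2.0))
```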

The Effective Number Density of Galaxies for Weak Lensing Measurements in the LSST Project [Replacement]

Future weak lensing surveys potentially hold the highest statistical power for constraining cosmological parameters compared to other cosmological probes. The statistical power of a weak lensing survey is determined by the sky coverage, the inverse of the noise in shear measurements, and the galaxy number density. The combination of the latter two factors is often expressed in terms of $n_{\rm eff}$ — the "effective number density of galaxies used for weak lensing measurements". In this work, we estimate $n_{\rm eff}$ for the Large Synoptic Survey Telescope (LSST) project, the most powerful ground-based lensing survey planned for the next two decades. We investigate how the following factors affect the resulting $n_{\rm eff}$ of the survey with detailed simulations: (1) survey time, (2) shear measurement algorithm, (3) algorithm for combining multiple exposures, (4) inclusion of data from multiple filter bands, (5) redshift distribution of the galaxies, and (6) masking and blending. For the first time, we quantify in a general weak lensing analysis pipeline the sensitivity of $n_{\rm eff}$ to the above factors. We find that with current weak lensing algorithms, expected distributions of observing parameters, and all lensing data ($r$- and $i$-band, covering 18,000 degree$^{2}$ of sky) for LSST, $n_{\rm eff} \approx37$ arcmin$^{-2}$ before considering blending and masking, $n_{\rm eff} \approx31$ arcmin$^{-2}$ when rejecting seriously blended galaxies and $n_{\rm eff} \approx26$ arcmin$^{-2}$ when considering an additional 15% loss of galaxies due to masking. With future improvements in weak lensing algorithms, these values could be expected to increase by up to 20%. Throughout the paper, we also stress the ways in which $n_{\rm eff}$ depends on our ability to understand and control systematic effects in the measurements.
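
Since $n_{\rm eff}$ down-weights galaxies whose measurement noise is large compared to the intrinsic shape noise, it can be sketched as below. This uses one common weighting convention for survey forecasts; the paper's pipeline-specific definition may differ in detail:

```python
import numpy as np

def n_eff(sigma_m, area_arcmin2, sigma_sn=0.26):
    """Effective number density of galaxies per square arcminute.

    Each galaxy contributes a weight between 0 and 1 according to how its
    per-galaxy measurement noise sigma_m compares to the intrinsic shape
    noise sigma_sn: a noiseless galaxy counts fully, a very noisy one ~0.
    The value sigma_sn = 0.26 per component is a typical assumption.
    """
    sigma_m = np.asarray(sigma_m)
    weights = sigma_sn**2 / (sigma_sn**2 + sigma_m**2)
    return weights.sum() / area_arcmin2

# Toy catalogue: 50 galaxies in 1 arcmin^2 with a spread of measurement noise
rng = np.random.default_rng(0)
print(n_eff(rng.uniform(0.05, 0.5, size=50), area_arcmin2=1.0))
```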

Unidentified Moving Objects in Next Generation Time Domain Surveys

Existing and future wide-field photometric surveys will produce a time-lapse movie of the sky that will revolutionize our census of variable and moving astronomical and atmospheric phenomena. As with any revolution in scientific measurement capability, this new species of data will also present us with results that are sure to surprise and confound our understanding of the cosmos. While we cannot predict the unknown yields of such endeavors, it is a beneficial exercise to explore certain parameter spaces using reasonable assumptions for rates and observability. To this end I present a simple parameterized model of the detectability of unidentified flying objects (UFOs) with the Large Synoptic Survey Telescope (LSST). I also demonstrate that the LSST is well suited to place the first systematic constraints on the rate of UFO and extraterrestrial visits to our world.
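
As a back-of-the-envelope companion to such a parameterized model, a toy exposure-budget estimate might look like the following. The event rate and visits-per-night figure are illustrative assumptions; only the ~9.6 deg^2 field of view and ~30 s visit time reflect nominal LSST values:

```python
def expected_detections(rate_per_deg2_hr, fov_deg2=9.6, exposure_s=30.0,
                        visits_per_night=1000, nights=3650):
    """Toy detectability model: expected number of transient detections.

    Assumes events occur at a uniform sky rate and are detectable only
    while inside an open shutter, so the budget is the total deg^2-hours
    of exposure accumulated over the survey.
    """
    hours_per_visit = exposure_s / 3600.0
    exposed_deg2_hours = fov_deg2 * hours_per_visit * visits_per_night * nights
    return rate_per_deg2_hr * exposed_deg2_hours

# One event per 10^6 deg^2-hours yields ~0.3 detections over ten years
print(expected_detections(1e-6))
```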

Recurring flares from supermassive black hole binaries: implications for tidal disruption candidates and OJ 287

I discuss the possibility that accreting, supermassive black hole (SMBH) binaries with sub-parsec separations produce luminous, periodically recurring outbursts that interrupt periods of relative quiescence. This hypothesis is motivated by two characteristics found in simulations of binaries embedded in prograde accretion discs: (i) the formation of a central, low-density cavity, and (ii) the leakage of circumbinary gas into this cavity, occurring once per orbit, via discrete streams on nearly radial trajectories. The first feature will diminish the emergent optical/UV flux of the system relative to active galactic nuclei (AGN) powered by single SMBHs, while the second is likely to trigger periodic fluctuations in the emergent flux. I propose a simple toy model in which a leaked stream crosses its own orbit and shocks, converting its bulk kinetic energy to heat. The result is a hot, optically thick flow that is quickly accreted and produces a flare with an AGN-like spectrum that peaks in the UV and ranges from the optical to the soft X-ray. Due to the preceding quiescence, such a flare could plausibly be mistaken for the tidal disruption of a star. For typical binary periods of years to decades, the event rate in an individual system can be much higher than that predicted for stellar tidal disruptions but infrequent enough to hinder tests of periodicity. The flares proposed here can be produced by very massive (>10^8 Msol) SMBHs that would not tidally disrupt solar-type stars. They could be discovered serendipitously in the future by observatories such as LSST or eROSITA. I apply the model to the active galaxy OJ 287, whose production of periodic optical flares has long fueled speculation that it hosts a SMBH binary.

Recurring flares from supermassive black hole binaries: implications for tidal disruption candidates and OJ 287 [Replacement]

I discuss the possibility that accreting supermassive black hole (SMBH) binaries with sub-parsec separations produce periodically recurring luminous outbursts that interrupt periods of relative quiescence. This hypothesis is motivated by two characteristics found generically in simulations of binaries embedded in prograde accretion discs: (i) the formation of a central, low-density cavity around the binary, and (ii) the leakage of gas into this cavity, occurring once per orbit via discrete streams on nearly radial trajectories. The first feature would reduce the emergent optical/UV flux of the system relative to active galactic nuclei powered by single SMBHs, while the second can trigger quasiperiodic fluctuations in luminosity. I argue that the quasiperiodic accretion signature may be much more dramatic than previously thought, because the infalling gas streams can strongly shock-heat via self-collision and tidal compression, thereby enhancing viscous accretion. Any optically thick gas that is circularized about either SMBH can accrete before the next pair of streams is deposited, fueling transient, luminous flares that recur every orbit. Due to the diminished flux in between accretion episodes, such cavity-accretion flares could plausibly be mistaken for the tidal disruptions of stars in quiescent nuclei. The flares could be distinguished from tidal disruption events if their quasiperiodic recurrence is observed, or if they are produced by very massive SMBHs that cannot disrupt solar-type stars. They may be discovered serendipitously in surveys such as LSST or eROSITA. I present a heuristic toy model as a proof of concept for the production of cavity-accretion flares, and generate mock light curves and spectra. I also apply the model to the active galaxy OJ 287, whose production of quasiperiodic pairs of optical flares has long fueled speculation that it hosts a SMBH binary.
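
Because the gas streams are deposited once per binary orbit, the flare recurrence time is simply the Keplerian period. A quick sketch; the mass and separation in the example are illustrative:

```python
import numpy as np

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
MSUN = 1.989e30   # solar mass, kg
PC = 3.086e16     # parsec, m
YR = 3.156e7      # year, s

def orbital_period_yr(m_total_msun, a_pc):
    """Keplerian orbital period of an SMBH binary, in years.

    This sets the recurrence time of the proposed cavity-accretion flares.
    """
    a = a_pc * PC
    return 2.0 * np.pi * np.sqrt(a**3 / (G * m_total_msun * MSUN)) / YR

# A 10^8 Msun binary at 0.01 pc separation recurs roughly once per decade,
# consistent with the years-to-decades periods discussed above.
print(orbital_period_yr(1e8, 0.01))  # ~9 yr
```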

ELT-MOS White Paper: Science Overview & Requirements

The workhorse instruments of the 8-10m class observatories have become their multi-object spectrographs (MOS), providing comprehensive follow-up to both ground-based and space-borne imaging. With the advent of deeper imaging surveys from, e.g., the HST and VISTA, there is a plethora of spectroscopic targets that are already beyond the sensitivity limits of current facilities. This wealth of targets will grow even more rapidly in the coming years, e.g., after the completion of ALMA, the launch of the JWST and Euclid, and the advent of the LSST. Thus, one of the key requirements underlying plans for the next generation of ground-based telescopes, the Extremely Large Telescopes (ELTs), is for even greater sensitivity for optical and infrared spectroscopy. Here we revisit the scientific motivation for a MOS capability on the European ELT, combining updated elements of science cases advanced from the Phase A instrument studies with new science cases which draw on the latest results and discoveries. These science cases address key questions related to galaxy evolution over cosmic time, from studies of resolved stellar populations in nearby galaxies out to observations of the most distant galaxies, and are used to identify the top-level requirements on an ‘E-ELT/MOS’. We argue that several of the most compelling ELT science cases demand MOS observations, in highly competitive areas of modern astronomy. Recent technical studies have demonstrated that important issues related to, e.g., sky subtraction and multi-object AO can be solved, making fast-track development of a MOS instrument feasible. To ensure that ESO retains world leadership in exploring the most distant objects in the Universe, galaxy evolution and stellar populations, we are convinced that a MOS should have high priority in the instrumentation plan for the E-ELT.
