Posts Tagged lsst

Recent Postings from lsst

Metrics for Optimization of Large Synoptic Survey Telescope Observations of Stellar Variables and Transients

The Large Synoptic Survey Telescope (LSST) will be the largest time-domain photometric survey ever. In order to maximize the LSST science yield for a broad array of transient stellar phenomena, it is necessary to optimize the survey cadence, coverage, and depth via quantitative metrics that are specifically designed to characterize the time-domain behavior of various types of stellar transients. In this paper we present three such metrics built on the LSST Metric Analysis Framework (MAF) model (Jones et al. 2014). Two of the metrics quantify the ability of LSST to detect non-periodic and/or non-recurring transient events, and the ability of LSST to reliably measure periodic signals of various timescales. The third metric provides a way to quantify the range of stellar parameters in the stellar populations that LSST will probe. We provide example uses of these metrics and discuss some implications based on these metrics for optimization of the LSST survey for observations of stellar variables and transients.
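The metric idea lends itself to a compact sketch. The toy function below is not the authors' MAF code (all names, cadences, and thresholds are illustrative): it simply asks what fraction of transients of a given duration would be caught at least a few times by a given visit schedule.

```python
import numpy as np

def transient_detection_fraction(visit_times, duration, n_detections=3,
                                 n_trials=10000, seed=0):
    """Fraction of transients of a given duration (days) whose active window
    contains at least `n_detections` visits.  A toy stand-in for a MAF-style
    transient metric, not the authors' implementation."""
    rng = np.random.default_rng(seed)
    visit_times = np.sort(np.asarray(visit_times, dtype=float))
    # Random transient start times within the survey window.
    starts = rng.uniform(visit_times[0], visit_times[-1] - duration, n_trials)
    # Count visits falling inside each transient's active window.
    lo = np.searchsorted(visit_times, starts)
    hi = np.searchsorted(visit_times, starts + duration)
    return np.mean((hi - lo) >= n_detections)

# Toy cadence: one visit every ~3 days over a year.
times = np.arange(0.0, 365.0, 3.0)
frac_long = transient_detection_fraction(times, duration=30.0)   # month-long events
frac_short = transient_detection_fraction(times, duration=2.0)   # 2-day events
```

A real MAF metric evaluates this kind of quantity per sky position over a simulated ten-year pointing history, so it also folds in coverage and depth, not just cadence.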


Mapping charge transport effects in thick CCDs with a dithered array of 40,000 stars

We characterize the astrometric distortion at the edges of thick, fully-depleted CCDs in the lab using a bench-top simulation of LSST observing. By illuminating an array of forty thousand pinholes (30 µm diameter) at the object plane of an f/1.2 optical reimager, thousands of PSFs can be imaged over a 4K×4K pixel CCD. Each high-purity silicon pixel, 10 µm square by 100 µm deep, can then be individually characterized through a series of sub-pixel dithers in the X/Y plane. The unique character [response, position, shape] of each pixel as a function of flux, wavelength, back-side bias, etc. can be investigated. We measure the magnitude and onset of astrometric error at the edges of the detector as a test of the experimental setup, using an LSST prototype CCD. We show that this astrometric error at the edge is sourced from non-uniformities in the electric field lines that define pixel boundaries. This edge distortion must be corrected in order to optimize the science output of weak gravitational lensing and large-scale structure measurements for the LSST.

The impact of intrinsic alignment on current and future cosmic shear surveys

Intrinsic alignment (IA) of source galaxies is one of the major astrophysical systematics for ongoing and future weak lensing surveys. This paper presents the first forecasts of the impact of IA on cosmic shear measurements for current and future surveys (DES, Euclid, LSST, WFIRST) using simulated likelihood analyses and realistic covariances that include higher-order moments of the density field in the computation. We consider a range of possible IA scenarios and test mitigation schemes, which parameterize IA by the fraction of red galaxies, and by the normalization, luminosity dependence, and redshift dependence of the IA signal (for a subset we consider joint IA and photo-z uncertainties). Compared to previous studies we find smaller biases in time-dependent dark energy models if IA is ignored in the analysis; the amplitude and significance of these biases vary as a function of survey properties (depth, statistical uncertainties), luminosity function, and IA scenario. Due to its small statistical errors and relatively shallow observing strategy, Euclid is significantly impacted by IA. LSST and WFIRST benefit from their increased survey depth, while the larger statistical errors for DES decrease IA’s relative impact on cosmological parameters. The proposed IA mitigation scheme removes parameter biases due to IA for DES, LSST, and WFIRST even if the shape of the IA power spectrum is only poorly known; successful IA mitigation for Euclid requires more prior information. We explore several alternative IA mitigation strategies for Euclid; in the absence of alignment of blue galaxies we recommend the exclusion of red (IA-contaminated) galaxies in cosmic shear analyses. We find that even a reduction of 20% in the number density of galaxies leads to only a 4-10% loss in cosmological constraining power.

Curvature Wavefront Sensing for the Large Synoptic Survey Telescope

The Large Synoptic Survey Telescope (LSST) will use an active optics system (AOS) to maintain alignment and surface figure on its three large mirrors. Corrective actions fed to the LSST AOS are determined from information derived from four curvature wavefront sensors located at the corners of the focal plane. Each wavefront sensor is a split detector such that the halves are 1 mm on either side of focus. In this paper we describe the extensions to published curvature wavefront sensing algorithms needed to address challenges presented by the LSST, namely the large central obscuration, the fast f/1.23 beam, off-axis pupil distortions, and vignetting at the sensor locations. We also describe corrections needed for the split sensors and the effects from the angular separation of different stars providing the intra- and extra-focal images. Lastly, we present simulations that demonstrate convergence, linearity, and negligible noise when compared to atmospheric effects when the algorithm extensions are applied to the LSST optical system. The algorithm extensions reported here are generic and can easily be adapted to other wide-field optical systems, including similar telescopes with large central obscuration and off-axis curvature sensing.
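For context, the classical curvature-sensing relation (Roddier 1988) that such algorithms extend connects the normalized difference of the intra- and extra-focal images $I_1$, $I_2$ to the local wavefront curvature; schematically (constants and pupil-geometry factors omitted),

$$\frac{I_1 - I_2}{I_1 + I_2} \;\propto\; \frac{\partial W}{\partial n}\,\delta_c \;-\; \nabla^2 W,$$

where $W$ is the wavefront, $\nabla^2 W$ its transverse Laplacian inside the pupil, and $\partial W/\partial n\,\delta_c$ the radial wavefront slope on the pupil boundary. The LSST extensions described above modify the mapping between sensor and pupil coordinates so that this relation can be used with the fast beam, vignetting, and off-axis distortions.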

Chromatic CCD effects on weak lensing measurements for LSST [Replacement]

Wavelength-dependent point spread functions (PSFs) violate an implicit assumption in current galaxy shape measurement algorithms that deconvolve the PSF measured from stars (which have stellar spectral energy distributions (SEDs)) from images of galaxies (which have galactic SEDs). Since the absorption length of silicon depends on wavelength, CCDs are a potential source of PSF chromaticity. Here we develop two toy models to estimate the sensitivity of the cosmic shear survey from the Large Synoptic Survey Telescope to chromatic effects in CCDs. We then compare these toy models to simulated estimates of PSF chromaticity derived from the LSST photon simulator PhoSim. We find that even though sensor contributions to PSF chromaticity are subdominant to atmospheric contributions, they can still significantly bias cosmic shear results if left uncorrected, particularly in the redder filter bands and for objects that are off-axis in the field of view.

Transiting Planets with LSST II. Period Detection of Planets Orbiting 1 Solar Mass Hosts [Replacement]

The Large Synoptic Survey Telescope (LSST) will photometrically monitor approximately 1 billion stars for ten years. The resulting light curves can be used to detect transiting exoplanets. In particular, as demonstrated by Lund et al. (2015), LSST will probe stellar populations currently undersampled in most exoplanet transit surveys, including out to extragalactic distances. In this paper we test the efficiency of the box-fitting least-squares (BLS) algorithm for accurately recovering the periods of transiting exoplanets using simulated LSST data. We model planets with a range of radii orbiting a solar-mass star at a distance of 7 kpc, with orbital periods ranging from 0.5 to 20 days. We find that standard-cadence LSST observations will be able to reliably recover the periods of Hot Jupiters with periods shorter than approximately 3 days; however, it will remain a challenge to confidently distinguish these transiting planets from false positives. At the same time, we find that the LSST deep drilling cadence is extremely powerful: the BLS algorithm successfully recovers at least 30% of sub-Saturn-size exoplanets with orbital periods as long as 20 days, and a simple BLS power criterion robustly distinguishes approximately 98% of these from photometric (i.e. statistical) false positives.

Transiting Planets with LSST II. Period Detection of Planets Orbiting 1 Solar Mass Hosts

The Large Synoptic Survey Telescope (LSST) will photometrically monitor ~1 billion stars for ten years. The resulting light curves can be used to detect transiting exoplanets. In particular, as demonstrated by Lund et al. (2015), LSST will probe stellar populations currently undersampled in most exoplanet transit surveys, including out to extragalactic distances. In this paper we test the efficiency of the box-fitting least-squares (BLS) algorithm for accurately recovering the periods of transiting exoplanets using simulated LSST data. We model planets with a range of radii orbiting a solar-mass star at a distance of 7 kpc, with orbital periods ranging from 0.5 to 20 d. We find that typical LSST observations will be able to reliably detect Hot Jupiters with periods shorter than ~3 d. At the same time, we find that the LSST deep drilling cadence is extremely powerful: the BLS algorithm successfully recovers at least 30% of sub-Saturn-size exoplanets with orbital periods as long as 20 d.
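As a rough illustration of the box-fitting least-squares idea (Kovács et al. 2002), one can phase-fold a light curve at trial periods and score the deepest box-shaped dip. The sketch below is a minimal NumPy toy, not the authors' pipeline; the cadence, depth, and grid choices are all illustrative.

```python
import numpy as np

def bls_power(t, y, periods, duration_frac=0.05, n_phase=200):
    """Brute-force BLS sketch: for each trial period, phase-fold the light
    curve, bin it in phase, and score the deepest sliding-box average."""
    y = y - np.mean(y)
    power = np.empty(len(periods))
    for i, p in enumerate(periods):
        phase = (t % p) / p
        bins = np.minimum((phase * n_phase).astype(int), n_phase - 1)
        # Mean flux in each phase bin (empty bins contribute 0).
        sums = np.bincount(bins, weights=y, minlength=n_phase)
        counts = np.maximum(np.bincount(bins, minlength=n_phase), 1)
        means = sums / counts
        # Deepest average over a circular sliding box of width duration_frac.
        w = max(1, int(duration_frac * n_phase))
        kernel = np.ones(w) / w
        wrapped = np.concatenate([means, means[:w]])
        box_means = np.convolve(wrapped, kernel, mode="valid")[:n_phase]
        power[i] = -box_means.min()
    return power

# Simulated transit: period 2.5 d, 1% deep, irregular sampling over 60 d.
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 60, 800))
y = 1.0 + 0.001 * rng.standard_normal(len(t))
y[((t % 2.5) / 2.5) < 0.04] -= 0.01

periods = np.linspace(1.0, 5.0, 2000)
best_period = periods[np.argmax(bls_power(t, y, periods))]
```

Real implementations also search over transit duration and epoch and use a proper least-squares statistic; the sparse, seasonal sampling of an actual LSST cadence is what makes the recovery rates studied in the paper non-trivial.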

The Gaia-LSST Synergy

We discuss the synergy of Gaia and the Large Synoptic Survey Telescope (LSST) in the context of Milky Way studies. LSST can be thought of as Gaia’s deep complement because the two surveys will deliver trigonometric parallax, proper-motion, and photometric measurements with similar uncertainties at Gaia’s faint end at $r=20$, and LSST will extend these measurements to a limit about five magnitudes fainter. We also point out that users of Gaia data will have developed data analysis skills required to benefit from LSST data, and provide detailed information about how international participants can join LSST.

Periodograms for Multiband Astronomical Time Series

This paper introduces the multiband periodogram, a general extension of the well-known Lomb-Scargle approach for detecting periodic signals in time-domain data. In addition to advantages of the Lomb-Scargle method such as treatment of non-uniform sampling and heteroscedastic errors, the multiband periodogram significantly improves period finding for randomly sampled multiband light curves (e.g., Pan-STARRS, DES and LSST). The light curves in each band are modeled as arbitrary truncated Fourier series, with the period and phase shared across all bands. The key aspect is the use of Tikhonov regularization which drives most of the variability into the so-called base model common to all bands, while fits for individual bands describe residuals relative to the base model and typically require lower-order Fourier series. This decrease in the effective model complexity is the main reason for improved performance. We use simulated light curves and randomly subsampled SDSS Stripe 82 data to demonstrate the superiority of this method compared to other methods from the literature, and find that this method will be able to efficiently determine the correct period in the majority of LSST’s bright RR Lyrae stars with as little as six months of LSST data. A Python implementation of this method, along with code to fully reproduce the results reported here, is available on GitHub.
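The core construction can be sketched in a few lines: at each trial frequency, fit a shared low-order Fourier "base model" plus ridge-penalized per-band terms, and score the reduction in chi-squared. This is a toy version under simplifying assumptions (first-order base model, constant per-band residual terms, homoscedastic errors), not the paper's implementation.

```python
import numpy as np

def multiband_power(t, y, bands, freqs, lam=1.0):
    """Toy multiband periodogram: shared first-order Fourier base model plus
    per-band constant offsets, with a Tikhonov (ridge) penalty on the
    per-band terms.  Power = fractional chi-squared reduction."""
    band_list = np.unique(bands)
    # Per-band indicator columns: these carry the regularized residual terms.
    B = np.stack([(bands == b).astype(float) for b in band_list], axis=1)
    chi2_0 = np.sum((y - y.mean()) ** 2)
    power = np.empty(len(freqs))
    for i, f in enumerate(freqs):
        X = np.column_stack([np.ones_like(t),
                             np.sin(2 * np.pi * f * t),
                             np.cos(2 * np.pi * f * t),
                             B])
        pen = np.zeros(X.shape[1])
        pen[3:] = lam                      # penalize only per-band columns
        theta = np.linalg.solve(X.T @ X + np.diag(pen), X.T @ y)
        resid = y - X @ theta
        power[i] = 1.0 - np.sum(resid ** 2) / chi2_0
    return power

# Two bands observing the same sinusoid (period 0.75 d) with a band offset.
rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0, 30, 300))
bands = rng.integers(0, 2, len(t))
y = (0.3 * np.sin(2 * np.pi * t / 0.75) + 0.1 * bands
     + 0.02 * rng.standard_normal(len(t)))

freqs = np.linspace(0.5, 3.0, 3000)
best_period = 1.0 / freqs[np.argmax(multiband_power(t, y, bands, freqs))]
```

The ridge penalty is what makes the shared base model absorb the common variability: without it, the per-band columns are degenerate with the intercept and each band could be fit independently, losing the cross-band pooling that the paper shows improves period recovery.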

The Whole is Greater than the Sum of the Parts: Optimizing the Joint Science Return from LSST, Euclid and WFIRST

The focus of this report is on the opportunities enabled by the combination of LSST, Euclid and WFIRST, the optical surveys that will be an essential part of the next decade’s astronomy. The sum of these surveys has the potential to be significantly greater than the contributions of the individual parts. As is detailed in this report, the combination of these surveys should give us multi-wavelength high-resolution images of galaxies and broadband data covering much of the stellar energy spectrum. These stellar and galactic data have the potential of yielding new insights into topics ranging from the formation history of the Milky Way to the mass of the neutrino. However, enabling the astronomy community to fully exploit this multi-instrument data set is a challenging technical task: for much of the science, we will need to combine the photometry across multiple wavelengths with varying spectral and spatial resolution. We identify some of the key science enabled by the combined surveys and the key technical challenges in achieving the synergies.

Improving the LSST dithering pattern and cadence for dark energy studies

The Large Synoptic Survey Telescope (LSST) will explore the entire southern sky over 10 years starting in 2022 with unprecedented depth and time sampling in six filters, $ugrizy$. Artificial power on the scale of the 3.5 deg LSST field-of-view will contaminate measurements of baryonic acoustic oscillations (BAO), which fall at the same angular scale at redshift $z \sim 1$. Using the HEALPix framework, we demonstrate the impact of an "un-dithered" survey, in which $17\%$ of each LSST field-of-view is overlapped by neighboring observations, generating a honeycomb pattern of strongly varying survey depth and significant artificial power on BAO angular scales. We find that adopting large dithers (i.e., telescope pointing offsets) of amplitude close to the LSST field-of-view radius reduces artificial structure in the galaxy distribution by a factor of $\sim$10. We propose an observing strategy utilizing large dithers within the main survey and minimal dithers for the LSST Deep Drilling Fields. We show that applying various magnitude cutoffs can further increase survey uniformity. We find that a magnitude cut of $r < 27.3$ removes significant spurious power from the angular power spectrum with a minimal reduction in the total number of observed galaxies over the ten-year LSST run. We also determine the effectiveness of the observing strategy for Type Ia SNe and predict that the main survey will contribute $\sim$100,000 Type Ia SNe. We propose a concentrated survey where LSST observes one-third of its main survey area each year, increasing the number of main survey Type Ia SNe by a factor of $\sim$1.5, while still enabling the successful pursuit of other science drivers.
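A flat-sky toy model illustrates why dithering suppresses the honeycomb pattern; all numbers below (field radius, tiling spacing, visit count) are illustrative rather than taken from the paper, which works on the sphere with HEALPix.

```python
import numpy as np

def depth_map(pointings, fov_radius=1.75, grid=200, extent=12.0):
    """Count how many circular field-of-view footprints cover each point of
    a flat 2D sky patch (a toy stand-in for a HEALPix depth map)."""
    xs = np.linspace(0.0, extent, grid)
    X, Y = np.meshgrid(xs, xs)
    n = np.zeros_like(X)
    for px, py in pointings:
        n += (X - px) ** 2 + (Y - py) ** 2 < fov_radius ** 2
    return n

# Hexagonal tiling of field centres: the 'honeycomb' of an un-dithered survey.
centres = [(3.0 * i + 1.5 * (j % 2), 2.6 * j) for i in range(6) for j in range(6)]
rng = np.random.default_rng(3)

n_visits = 50
undithered = n_visits * depth_map(centres)     # identical pointings every visit
dithered = np.zeros((200, 200))
for _ in range(n_visits):                      # large random dither per visit
    dx, dy = rng.uniform(-1.75, 1.75, size=2)
    dithered += depth_map([(x + dx, y + dy) for x, y in centres])

# Depth uniformity (coefficient of variation) in the well-covered interior.
inner = (slice(50, 150),) * 2
cv_undithered = undithered[inner].std() / undithered[inner].mean()
cv_dithered = dithered[inner].std() / dithered[inner].mean()
```

The un-dithered map retains the full overlap pattern no matter how many visits accumulate, whereas random per-visit offsets of order the field radius average it away; this is the flat-sky analogue of the factor-of-~10 reduction in artificial angular power reported above.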

Crosstalk in multi-output CCDs for LSST [Replacement]

LSST’s compact, low power focal plane will be subject to electronic crosstalk with some unique signatures due to its readout geometry. This note describes the crosstalk mechanisms, ongoing characterization of prototypes, and implications for the observing cadence.

Synergy between the Large Synoptic Survey Telescope and the Square Kilometre Array

We provide an overview of the science benefits of combining information from the Square Kilometre Array (SKA) and the Large Synoptic Survey Telescope (LSST). We first summarise the capabilities and timeline of the LSST and overview its science goals. We then discuss the science questions in common between the two projects, and how they can be best addressed by combining the data from both telescopes. We describe how weak gravitational lensing and galaxy clustering studies with LSST and SKA can provide improved constraints on the causes of the cosmological acceleration. We summarise the benefits to galaxy evolution studies of combining deep optical multi-band imaging with radio observations. Finally, we discuss the excellent match between one of the most unique features of the LSST, its temporal cadence in the optical waveband, and the time resolution of the SKA.

Probing Neutrino Hierarchy and Chirality via Wakes

The relic neutrinos are expected to acquire a bulk relative velocity with respect to the dark matter at low redshifts, and neutrino wakes are expected to develop downstream of dark matter halos. We propose a method of measuring the neutrino mass based on this mechanism. The neutrino wake will cause a dipole distortion of the galaxy-galaxy lensing pattern. This effect could be detected by combining upcoming lensing surveys, e.g. the LSST and Euclid surveys, with a low-redshift galaxy survey or a 21cm intensity mapping survey that can map the neutrino flow field. The data obtained with LSST and Euclid should enable a positive detection if the three neutrino masses are quasi-degenerate, a future high-precision 21cm lensing survey would allow the normal and inverted hierarchy cases to be distinguished, and even right-handed Dirac neutrinos may be detectable.

LSST optical beam simulator

We describe a camera beam simulator for the LSST which is capable of illuminating a 60 mm field at f/1.2 with realistic astronomical scenes, enabling studies of CCD astrometric and photometric performance. The goal is to fully simulate LSST observing, in order to characterize charge transport and other features in the thick fully-depleted CCDs and to probe low-level systematics under realistic conditions. The automated system simulates the centrally obscured LSST beam and sky scenes, including the spectral shape of the night sky. The doubly telecentric, nearly unit-magnification design consists of a spherical mirror, three BK7 lenses, and one beam-splitter window. To achieve the relatively large field, the beam-splitter window is used twice. The motivation for this LSST beam test facility was the need to fully characterize a new generation of thick fully-depleted CCDs and assess their suitability for the broad range of science planned for LSST. Because of the fast-beam illumination and the thick silicon design (each pixel is 10 microns wide and over 100 microns deep), photon-transport and charge-transport effects can arise in the high-purity silicon at long wavelengths. The focal surface covers a field more than sufficient for a 40×40 mm LSST CCD. Delivered optical quality meets design goals, with 50% of the energy within a 5 micron circle. The tests of CCD performance are briefly described.

Data Mining for Gravitationally Lensed Quasars

Gravitationally lensed (GL) quasars are brighter than their unlensed counterparts and produce images with distinctive morphological signatures. Past searches and target selection algorithms, in particular the Sloan Quasar Lens Search (SQLS), have relied on basic morphological criteria, applied to samples of bright, spectroscopically confirmed quasars. The SQLS techniques are not sufficient for searches in new surveys (e.g. DES, PS1, LSST), because spectroscopic information is not readily available and the large data volume requires higher purity in target/candidate selection. We carry out a systematic exploration of machine learning techniques and demonstrate that a two-step strategy can be highly effective. In the first step we use catalog-level information ($griz$+WISE magnitudes, second moments) to preselect targets, using artificial neural networks. The accepted targets are then inspected with pixel-by-pixel pattern recognition algorithms (Gradient-Boosted Trees), to form a final set of candidates. The results from this procedure can be used to further refine the simpler SQLS algorithms, with a twofold (or threefold) gain in purity and the same (or $80\%$) completeness at the target-selection stage, or a purity of $70\%$ and a completeness of $60\%$ after the candidate-selection step. Simpler photometric searches in $griz$+WISE based on colour cuts would provide samples with $7\%$ purity or less. Our technique is extremely fast, as a list of candidates can be obtained from a Stage III experiment (e.g. the DES catalog/database) in a few CPU hours. The techniques are easily extendable to Stage IV experiments like LSST with the addition of time-domain information.
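Purity and completeness, the figures of merit quoted above, are straightforward to compute from any candidate list. A minimal sketch, with a purely hypothetical classifier score standing in for the neural-network/tree pipeline:

```python
import numpy as np

def purity_completeness(is_lens, selected):
    """Purity = fraction of selected objects that are true lenses;
    completeness = fraction of true lenses that are selected."""
    is_lens = np.asarray(is_lens, bool)
    selected = np.asarray(selected, bool)
    tp = np.sum(is_lens & selected)
    purity = tp / max(selected.sum(), 1)
    completeness = tp / max(is_lens.sum(), 1)
    return purity, completeness

# Toy catalogue: 100 true lenses hidden among 10,000 objects, scored by a
# hypothetical classifier; select everything above a threshold.
rng = np.random.default_rng(4)
is_lens = np.arange(10000) < 100
score = rng.normal(0.0, 1.0, 10000) + 2.5 * is_lens   # lenses score higher
purity, completeness = purity_completeness(is_lens, score > 2.0)
```

Sweeping the threshold traces out the purity-completeness trade-off; the paper's two-step design is essentially a way to push that curve outward by spending expensive pixel-level classification only on catalog-level preselected targets.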

Spectroscopic Needs for Calibration of LSST Photometric Redshifts

This white paper summarizes the conclusions of the Snowmass White Paper "Spectroscopic Needs for Imaging Dark Energy Experiments" (arXiv:1309.5384) which are relevant to the calibration of LSST photometric redshifts; i.e., the accurate characterization of biases and uncertainties in photo-z’s. Any significant miscalibration will lead to systematic errors in photo-z’s, impacting nearly all extragalactic science with LSST. As existing deep redshift samples have failed to yield highly-secure redshifts for a systematic 20%-60% of their targets, it is a strong possibility that future deep spectroscopic samples will not solve the calibration problem on their own. The best options in this scenario are provided by cross-correlation methods that utilize clustering with objects from spectroscopic surveys (which need not be fully representative) to trace the redshift distribution of the full sample. For spectroscopy, the eBOSS survey would enable a basic calibration of LSST photometric redshifts, while the expected LSST-DESI overlap would be more than sufficient for an accurate calibration at z>0.2. A DESI survey of nearby galaxies conducted in bright time would enable accurate calibration down to z~0. The expanded areal coverage provided by the transfer of the DESI instrument (or duplication of it at the Blanco Telescope) would enable the best possible calibration from cross-correlations, in addition to other science gains.

Spectroscopic Needs for Training of LSST Photometric Redshifts

This white paper summarizes those conclusions of the Snowmass White Paper "Spectroscopic Needs for Imaging Dark Energy Experiments" (arXiv:1309.5384) which are relevant to the training of LSST photometric redshifts; i.e., the use of spectroscopic redshifts to improve algorithms and reduce photo-z errors. The larger and more complete the available training set is, the smaller the RMS error in photo-z estimates should be, increasing LSST’s constraining power. Among the better US-based options for this work are the proposed MANIFEST fiber feed for the Giant Magellan Telescope or (with lower survey speed) the WFOS spectrograph on the Thirty Meter Telescope (TMT). Due to its larger field of view and higher multiplexing, the PFS spectrograph on Subaru would be able to obtain a baseline training sample faster than TMT; comparable performance could be achieved with a highly-multiplexed spectrograph on Gemini with at least a 20 arcmin diameter field of view.

Maximizing LSST's Scientific Return: Ensuring Participation from Smaller Institutions

The remarkable scientific return and legacy of LSST, in the era that it will define, will not only be realized in the breakthrough science that will be achieved with catalog data. This Big Data survey will shape the way the entire astronomical community advances, or fails to embrace, new ways of approaching astronomical research and data. In this white paper, we address NRC template questions 4, 5, 6, 8, and 9, with a focus on the unique challenges for smaller, and often under-resourced, institutions, including institutions dedicated to underserved minority populations, in the efficient and effective use of LSST data products to maximize LSST’s scientific return.

The Variable Sky of Deep Synoptic Surveys [Replacement]

The discovery of variable and transient sources is an essential product of synoptic surveys. The alert stream will require filtering for personalized criteria — a process managed by a functionality commonly described as a Broker. In order to understand quantitatively the magnitude of the alert generation and Broker tasks, we have undertaken an analysis of the most numerous types of variable targets in the sky — Galactic stars, QSOs, AGNs and asteroids. It is found that LSST will be capable of discovering ~10^5 high-latitude (|b| > 20 deg) variable stars per night at the beginning of the survey. (The corresponding number for |b| < 20 deg is orders of magnitude larger, but subject to caveats concerning extinction and crowding.) However, the number of new discoveries may well drop below 100/night within 1 year. The same analysis applied to GAIA clarifies the complementarity of the GAIA and LSST surveys. Discovery of variable galactic nuclei (AGNs) and Quasi Stellar Objects (QSOs) are each predicted to begin at ~3000 per night, and decrease by 50X over 4 years. SNe are expected at ~1100/night, and after several survey years will dominate the new variable discovery rate. LSST asteroid discoveries will start at > 10^5 per night, and if orbital determination has a 50% success rate per epoch, will drop below 1000/night within 2 years.

The Variable Sky of Deep Synoptic Surveys

The discovery of variable and transient sources is an essential product of synoptic surveys. The alert stream will require filtering for personalized criteria — a process managed by a functionality commonly described as a Broker. In order to understand quantitatively the magnitude of the alert generation and Broker tasks, we have undertaken an analysis of the most numerous types of variable targets in the sky — Galactic stars, QSOs, AGNs and asteroids. It is found that LSST will be capable of discovering ~10^4 high-latitude (|b| > 20 deg) variable stars per night at the beginning of the survey. (The corresponding number for |b| < 20 deg is 2 orders of magnitude larger, but subject to caveats concerning extinction and crowding.) However, the number of new discoveries may well drop below 100/night within less than 1 year. The same analysis applied to GAIA clarifies the complementarity of the GAIA and LSST surveys. Discovery of variable galactic nuclei (AGNs) and Quasi Stellar Objects (QSOs) are each predicted to begin at ~3000 per night, and decrease by 50X over 4 years. SNe are expected at ~1100/night, and after several survey years will dominate the new variable discovery rate. LSST asteroid discoveries will start at > 10^5 per night, and if orbital determination has a 50% success rate per epoch, will drop below 1000/night within 2 years.

Strong Lens Time Delay Challenge: II. Results of TDC1

We present the results of the first strong lens time delay challenge. The motivation, experimental design, and entry level challenge are described in a companion paper. This paper presents the main challenge, TDC1, which consisted of analyzing thousands of simulated light curves blindly. The observational properties of the light curves cover the range in quality obtained for current targeted efforts (e.g. COSMOGRAIL) and expected from future synoptic surveys (e.g. LSST), and include "evilness" in the form of simulated systematic errors. 7 teams participated in TDC1, submitting results from 78 different method variants. After describing each method, we compute and analyze basic statistics measuring accuracy (or bias) $A$, goodness of fit $\chi^2$, precision $P$, and success rate $f$. For some methods we identify outliers as an important issue. Other methods show that outliers can be controlled via visual inspection or conservative quality control. Several methods are competitive, i.e. give $|A|<0.03$, $P<0.03$, and $\chi^2<1.5$, with some of the methods already reaching sub-percent accuracy. The fraction of light curves yielding a time delay measurement is typically in the range $f = $20–40%. It depends strongly on the quality of the data: COSMOGRAIL-quality cadence and light curve lengths yield significantly higher $f$ than does sparser sampling. We estimate that LSST should provide around 400 robust time-delay measurements, each with $P<0.03$ and $|A|<0.01$, comparable to current lens modeling uncertainties. In terms of observing strategies, we find that $A$ and $f$ depend mostly on season length, while $P$ depends mostly on cadence and campaign duration.
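The four summary statistics can be made concrete with a short script. This is a hedged sketch only: the exact TDC1 definitions involve per-rung averaging and outlier handling, and the function `tdc_metrics` below, along with its toy data, is illustrative rather than the challenge's official scoring code.

```python
import numpy as np

def tdc_metrics(dt_true, dt_est, sigma):
    """Summary statistics in the spirit of TDC1 (illustrative definitions).

    dt_true : true time delays (days)
    dt_est  : a team's estimated delays; NaN marks "no submission"
    sigma   : the team's reported 1-sigma uncertainties
    """
    submitted = ~np.isnan(dt_est)
    f = submitted.mean()                        # success rate
    t = dt_true[submitted]
    e = dt_est[submitted]
    s = sigma[submitted]
    A = np.mean((e - t) / t)                    # accuracy (fractional bias)
    P = np.mean(s / np.abs(t))                  # claimed relative precision
    chi2 = np.mean(((e - t) / s) ** 2)          # goodness of fit
    return A, P, chi2, f

# Toy example: five light curves, one left unmeasured.
dt_true = np.array([40.0, 55.0, 90.0, 120.0, 70.0])
dt_est = np.array([41.0, 54.0, 91.5, 118.0, np.nan])
sigma = np.array([2.0, 2.0, 2.5, 3.0, np.nan])
A, P, chi2, f = tdc_metrics(dt_true, dt_est, sigma)
```

With these toy numbers the mock "team" would count as competitive under the thresholds quoted above ($|A|<0.03$, $P<0.03$, $\chi^2<1.5$), illustrating how the cuts are applied per method.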

Strong Lens Time Delay Challenge: II. Results of TDC1 [Replacement]

We present the results of the first strong lens time delay challenge. The motivation, experimental design, and entry level challenge are described in a companion paper. This paper presents the main challenge, TDC1, which consisted of analyzing thousands of simulated light curves blindly. The observational properties of the light curves cover the range in quality obtained for current targeted efforts (e.g., COSMOGRAIL) and expected from future synoptic surveys (e.g., LSST), and include simulated systematic errors. 7 teams participated in TDC1, submitting results from 78 different method variants. After describing each method, we compute and analyze basic statistics measuring accuracy (or bias) $A$, goodness of fit $\chi^2$, precision $P$, and success rate $f$. For some methods we identify outliers as an important issue. Other methods show that outliers can be controlled via visual inspection or conservative quality control. Several methods are competitive, i.e., give $|A|<0.03$, $P<0.03$, and $\chi^2<1.5$, with some of the methods already reaching sub-percent accuracy. The fraction of light curves yielding a time delay measurement is typically in the range $f = $20–40%. It depends strongly on the quality of the data: COSMOGRAIL-quality cadence and light curve lengths yield significantly higher $f$ than does sparser sampling. Taking the results of TDC1 at face value, we estimate that LSST should provide around 400 robust time-delay measurements, each with $P<0.03$ and $|A|<0.01$, comparable to current lens modeling uncertainties. In terms of observing strategies, we find that $A$ and $f$ depend mostly on season length, while $P$ depends mostly on cadence and campaign duration.

Improving LSST Photometric Calibration with Gaia Data

We consider the possibility that the Gaia mission can supply data which will improve the photometric calibration of LSST. After outlining the LSST calibration process and the information that will be available from Gaia, we explore two options for using Gaia data. The first is to use Gaia G-band photometry of selected stars, in conjunction with knowledge of the stellar parameters T_eff, log g, and A_V, and in some cases Z, to create photometric standards in the LSST u, g, r, i, z, and y bands. The accuracies of the resulting standard magnitudes are found to be insufficient to satisfy LSST requirements when generated from main sequence (MS) stars, but generally adequate from DA white dwarfs (WD). The second option is to combine the LSST bandpasses into a synthetic Gaia G band, which is a close approximation to the real Gaia G band. This allows synthetic Gaia G photometry to be directly compared with actual Gaia G photometry at a level of accuracy which is useful for both verifying and improving LSST photometric calibration.
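The synthetic-band idea amounts to combining per-band fluxes with weights chosen so the blend mimics the Gaia G throughput. The sketch below uses placeholder weights (chosen only to sum to 1); real weights would come from fitting the Gaia G passband with the actual LSST ugrizy throughput curves, which are not reproduced here.

```python
import numpy as np

# Placeholder weights, NOT fitted values. They sum to 1, so a source with
# identical magnitudes in every band keeps that magnitude in the blend.
WEIGHTS = {"g": 0.25, "r": 0.35, "i": 0.25, "z": 0.15}

def synthetic_G(mags):
    """Combine per-band AB magnitudes into one synthetic broad-band magnitude."""
    flux = sum(w * 10.0 ** (-0.4 * mags[band]) for band, w in WEIGHTS.items())
    return -2.5 * np.log10(flux)

# A moderately red star: the synthetic magnitude lands between the bands.
G_syn = synthetic_G({"g": 17.2, "r": 16.8, "i": 16.6, "z": 16.5})
```

Comparing `G_syn` against the observed Gaia G magnitude of the same star is the verification step the abstract describes; in practice the comparison is done in flux space with full passband integrals rather than a fixed weight table.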

Transiting Planets with LSST I: Potential for LSST Exoplanet Detection [Replacement]

The Large Synoptic Survey Telescope (LSST) has been designed in order to satisfy several different scientific objectives that can be addressed by a ten-year synoptic sky survey. However, LSST will also provide a large amount of data that can then be exploited for additional science beyond its primary goals. We demonstrate the potential of using LSST data to search for transiting exoplanets, and in particular to find planets orbiting host stars that are members of stellar populations that have been less thoroughly probed by current exoplanet surveys. We find that existing algorithms can detect in simulated LSST light curves the transits of Hot Jupiters around solar-type stars, Hot Neptunes around K dwarfs, and planets orbiting stars in the Large Magellanic Cloud. We also show that LSST would have the sensitivity to potentially detect Super-Earths orbiting red dwarfs, including those in habitable zone orbits, if they are present in some fields that LSST will observe. From these results, we make the case that LSST has the ability to provide a valuable contribution to exoplanet science.

Transiting Planets with LSST I: Potential for LSST Exoplanet Detection

The Large Synoptic Survey Telescope (LSST) has been designed in order to satisfy several different scientific objectives that can be addressed by a ten-year synoptic sky survey. However, LSST will also provide a large amount of data that can then be exploited for additional science beyond its primary goals. We demonstrate the potential of using LSST data to search for transiting exoplanets, and in particular to find planets orbiting host stars that are members of stellar populations that have been less thoroughly probed by current exoplanet surveys. We find that existing algorithms can detect in simulated LSST light curves the transits of Hot Jupiters around solar-type stars, Hot Neptunes around K dwarfs, and planets orbiting stars in the Large Magellanic Cloud. We also show that LSST would have the sensitivity to potentially detect Super-Earths orbiting red dwarfs, including those in habitable zone orbits, if they are present in some fields that LSST will observe. From these results, we make the case that LSST has the ability to provide a valuable contribution to exoplanet science.

Too Many, Too Few, or Just Right? The Predicted Number and Distribution of Milky Way Dwarf Galaxies

We predict the spatial distribution and number of Milky Way dwarf galaxies to be discovered in the DES and LSST surveys, by completeness correcting the observed SDSS dwarf population. We apply "most massive in the past", "earliest forming", and "earliest infall" toy models to a set of dark matter-only simulated Milky Way/M31 halo pairs from Exploring the Local Volume In Simulations (ELVIS). The observed spatial distribution of Milky Way dwarfs in the LSST era will discriminate between the earliest infall and other simplified models for how dwarf galaxies populate dark matter subhalos. Inclusive of all toy models and simulations, at 90% confidence we predict a total of 37-114 L $\gtrsim 10^3$L$_{\odot}$ dwarfs and 131-782 L $\lesssim 10^3$L$_{\odot}$ dwarfs within 300 kpc. These numbers of L $\gtrsim 10^3$L$_{\odot}$ dwarfs are dramatically lower than previous predictions, owing primarily to our use of updated detection limits and the decreasing number of SDSS dwarfs discovered per sky area. For an effective $r_{\rm limit}$ of 25.8 mag, we predict: 3-13 L $\gtrsim 10^3$L$_{\odot}$ and 9-99 L $\lesssim 10^3$L$_{\odot}$ dwarfs for DES, and 18-53 L $\gtrsim 10^3$L$_{\odot}$ and 53-307 L $\lesssim 10^3$L$_{\odot}$ dwarfs for LSST. These enormous predicted ranges ensure a coming decade of near-field excitement with these next generation surveys.

On the Performance of Quasar Reverberation Mapping in the Era of Time-Domain Photometric Surveys

We quantitatively assess, by means of comprehensive numerical simulations, the ability of broad-band photometric surveys to recover the broad emission line region (BLR) size in quasars under various observing conditions and for a wide range of object properties. Focusing on the general characteristics of the Large Synoptic Survey Telescope (LSST), we find that the slope of the size-luminosity relation for the BLR in quasars can be determined with unprecedented accuracy, of order a few percent, over a broad luminosity range and out to $z\sim 3$. In particular, major emission lines for which the BLR size can be reliably measured with LSST include H$\alpha$, MgII $\lambda 2799$, CIII] $\lambda 1909$, CIV $\lambda 1549$, and Ly$\alpha$, amounting to a total of $\gtrsim 10^5$ time-delay measurements for all transitions. Combined with an estimate for the emission line velocity dispersion, upcoming photometric surveys will facilitate the estimation of black hole masses in AGN over a broad range of luminosities and redshifts, allow for refined calibrations of BLR size-luminosity-redshift relations in different transitions, as well as lead to more reliable cross-calibration with other black hole mass estimation techniques.
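The core operation of photometric reverberation mapping, recovering the lag between continuum and emission-line light curves, can be sketched with a toy cross-correlation. This is a crude stand-in for the interpolated cross-correlation function (ICCF) used in practice; the paper's simulations additionally model realistic cadences, noise, and quasar variability.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy driving continuum: a smoothed random series sampled daily.
t = np.arange(1000)
cont = np.convolve(rng.normal(size=t.size), np.ones(30) / 30, mode="same")

# Line-band light curve: the continuum delayed by the BLR light-travel time.
true_lag = 50
line = np.roll(cont, true_lag)

def iccf_lag(c, l, max_lag=200):
    """Return the lag (in samples) that maximizes the cross-correlation."""
    lags = np.arange(max_lag)
    cc = [np.corrcoef(c[: -lag or None], l[lag:])[0, 1] for lag in lags]
    return int(lags[np.argmax(cc)])

recovered = iccf_lag(cont, line)
```

In this noiseless toy the peak of the correlation sits exactly at the injected delay; the paper's point is how well this recovery survives broad-band dilution, survey cadence, and photometric noise.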

Radio Astronomy in LSST Era

A community meeting on the topic of "Radio Astronomy in the LSST Era" was hosted by the National Radio Astronomy Observatory in Charlottesville, VA (2013 May 6–8). The focus of the workshop was on time domain radio astronomy and sky surveys. For the time domain, the extent to which radio and visible wavelength observations are required to understand several classes of transients was stressed, but there are also classes of radio transients for which no visible wavelength counterpart is yet known, providing an opportunity for discovery. From the LSST perspective, the LSST is expected to generate as many as 1 million alerts nightly, which will require even more selective specification and identification of the classes and characteristics of transients that can warrant follow up, at radio or any wavelength. The LSST will also conduct a deep survey of the sky, producing a catalog expected to contain over 38 billion objects. Deep radio wavelength sky surveys will also be conducted on a comparable time scale, and radio and visible wavelength observations are part of the multi-wavelength approach needed to classify and understand these objects. Radio wavelengths are valuable because they are unaffected by dust obscuration and, for galaxies, contain contributions both from star formation and from active galactic nuclei. The workshop touched on several other topics on which there was consensus, including the placement of other LSST "Deep Drilling Fields," the interoperability of software tools, and the challenge of filtering and exploiting the LSST data stream. There were also topics for which there was insufficient time for full discussion or for which no consensus was reached; these included the procedures for following up on LSST observations and the nature of future support for researchers desiring to use LSST data products.

Optical selection of quasars: SDSS and LSST

Over the last decade, quasar sample sizes have increased from several thousand to several hundred thousand, thanks mostly to SDSS imaging and spectroscopic surveys. LSST, the next-generation optical imaging survey, will provide hundreds of detections per object for a sample of more than ten million quasars with redshifts of up to about seven. We briefly review optical quasar selection techniques, with emphasis on methods based on colors, variability properties and astrometric behavior.
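The color-based selection mentioned here can be illustrated with a toy cut. The thresholds below are placeholders for illustration, not the actual SDSS or LSST quasar selection criteria, which combine many colors, variability statistics, and astrometry.

```python
def uvx_candidate(u, g, r):
    """Flag a UV-excess (low-redshift quasar) candidate from two colors.

    The cut values are illustrative placeholders only: real selections use
    multi-dimensional color loci, variability, and proper-motion cuts.
    """
    return (u - g) < 0.6 and (g - r) < 0.5

blue_source = uvx_candidate(18.9, 18.6, 18.5)   # blue point source passes
red_star = uvx_candidate(20.5, 19.0, 18.3)      # typical red star does not
```

Variability- and astrometry-based selection works the same way in spirit: each object is reduced to a few summary statistics (structure-function amplitude, proper motion) and candidates are selected by cuts or classifiers in that space.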

Cosmic shear without shape noise

We describe a new method for reducing the shape noise in weak lensing measurements by an order of magnitude. Our method relies on spectroscopic measurements of disk galaxy rotation and makes use of the Tully-Fisher (TF) relation in order to control for the intrinsic orientations of galaxy disks. For this new proposed experiment, the shape noise ceases to be an important source of statistical error. Using CosmoLike, a new cosmological analysis software package, we simulate likelihood analyses for two spectroscopic weak lensing survey concepts (roughly similar in scale to Dark Energy Survey Task Force Stage III and Stage IV missions) and compare their constraining power to a cosmic shear survey from the Large Synoptic Survey Telescope (LSST). Our forecasts in seven-dimensional cosmological parameter space include statistical uncertainties resulting from shape noise, cosmic variance, halo sample variance, and higher-order moments of the density field. We marginalize over systematic uncertainties arising from photometric redshift errors and shear calibration biases considering both optimistic and conservative assumptions about LSST systematic errors. We find that even the TF-Stage III is highly competitive with the optimistic LSST scenario, while evading the most important sources of theoretical and observational systematic error inherent in traditional weak lensing techniques. Furthermore, the TF technique enables a narrow-bin cosmic shear tomography approach to tightly constrain time-dependent signatures in the dark energy phenomenon.
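Schematically, the Tully-Fisher trick replaces the unknown intrinsic disk ellipticity with a prediction from kinematics: the TF relation gives the disk's rotation speed from its luminosity, the observed line-of-sight velocity amplitude then fixes the inclination, and the residual between observed and predicted shapes carries the shear. The sketch below assumes a thin circular disk and uses hypothetical helper names; the real estimator also handles position angles, disk thickness, and measurement noise.

```python
import numpy as np

def intrinsic_ellipticity(v_los, v_tf):
    """Intrinsic ellipticity of a thin circular disk from its kinematics.

    sin(i) follows from the observed line-of-sight rotation amplitude over
    the Tully-Fisher velocity; the projected axis ratio is then cos(i), and
    the ellipticity is (1 - cos i) / (1 + cos i).
    """
    sin_i = np.clip(v_los / v_tf, 0.0, 1.0)
    cos_i = np.sqrt(1.0 - sin_i ** 2)
    return (1.0 - cos_i) / (1.0 + cos_i)

def shear_estimate(e_obs, v_los, v_tf):
    """Residual between observed and TF-predicted ellipticity (schematic)."""
    return e_obs - intrinsic_ellipticity(v_los, v_tf)
```

The key point is that the per-galaxy intrinsic shape is no longer a random nuisance drawn from a wide distribution; it is predicted, so the residual scatter, and hence the effective shape noise, drops by roughly an order of magnitude as the abstract states.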

Strong Lens Time Delay Challenge: I. Experimental Design

The time delays between point-like images in gravitational lens systems can be used to measure cosmological parameters as well as probe the dark matter (sub-)structure within the lens galaxy. The number of lenses with measured time delays is growing rapidly as a result of some dedicated efforts; the upcoming Large Synoptic Survey Telescope (LSST) will monitor ~1000 lens systems consisting of a foreground elliptical galaxy producing multiple images of a background quasar. In an effort to assess the present capabilities of the community to accurately measure the time delays in strong gravitational lens systems, and to provide input to dedicated monitoring campaigns and future LSST cosmology feasibility studies, we invite the community to take part in a "Time Delay Challenge" (TDC). The challenge is organized as a set of "ladders", each containing a group of simulated datasets to be analyzed blindly by participating independent analysis teams. Each rung on a ladder consists of a set of realistic mock observed lensed quasar light curves, with the rungs’ datasets increasing in complexity and realism to incorporate a variety of anticipated physical and experimental effects. The initial challenge described here has two ladders, TDC0 and TDC1. TDC0 has a small number of datasets, and is designed to be used as a practice set by the participating teams as they set up their analysis pipelines. The (non-mandatory) deadline for completion of TDC0 will be the TDC1 launch date, December 1, 2013. TDC1 will consist of some 1000 light curves, a sample designed to provide the statistical power to make meaningful statements about the sub-percent accuracy that will be required to provide competitive Dark Energy constraints in the LSST era.

Growth of Cosmic Structure: Probing Dark Energy Beyond Expansion [Replacement]

The quantity and quality of cosmic structure observations have greatly accelerated in recent years. Further leaps forward will be facilitated by imminent projects, which will enable us to map the evolution of dark and baryonic matter density fluctuations over cosmic history. The way that these fluctuations vary over space and time is sensitive to the nature of dark matter and dark energy. Dark energy and gravity both affect how rapidly structure grows; the greater the acceleration, the more suppressed the growth of structure, while the greater the gravity, the more enhanced the growth. While distance measurements also constrain dark energy, the comparison of growth and distance data tests whether General Relativity describes the laws of physics accurately on large scales. Modified gravity models are able to reproduce the distance measurements but at the cost of altering the growth of structure (these signatures are described in more detail in the accompanying paper on Novel Probes of Gravity and Dark Energy). Upcoming surveys will exploit these differences to determine whether the acceleration of the Universe is due to dark energy or to modified gravity. To realize this potential, both wide field imaging and spectroscopic redshift surveys play crucial roles. Projects including DES, eBOSS, DESI, PFS, LSST, Euclid, and WFIRST are in line to map a volume of the Universe exceeding 1000 cubic billion light-years. These will map the cosmic structure growth rate to 1% in the redshift range 0<z<2, over the last 3/4 of the age of the Universe.

Growth of Cosmic Structure: Probing Dark Energy Beyond Expansion

The quantity and quality of cosmic structure observations have greatly accelerated in recent years. Further leaps forward will be facilitated by imminent projects, which will enable us to map the evolution of dark and baryonic matter density fluctuations over cosmic history. The way that these fluctuations vary over space and time is sensitive to the nature of dark matter and dark energy. Dark energy and gravity both affect how rapidly structure grows; the greater the acceleration, the more suppressed the growth of structure, while the greater the gravity, the more enhanced the growth. While distance measurements also constrain dark energy, the comparison of growth and distance data tests whether General Relativity describes the laws of physics accurately on large scales. Modified gravity models are able to reproduce the distance measurements but at the cost of altering the growth of structure (these signatures are described in more detail in the accompanying paper on Novel Probes of Gravity and Dark Energy). Upcoming surveys will exploit these differences to determine whether the acceleration of the Universe is due to dark energy or to modified gravity. To realize this potential, both wide field imaging and spectroscopic redshift surveys play crucial roles. Projects including DES, eBOSS, DESI, PFS, LSST, Euclid, and WFIRST are in line to map a volume of the Universe exceeding 1000 cubic billion light-years. These will map the cosmic structure growth rate to 1% in the redshift range 0<z<2, over the last 3/4 of the age of the Universe.

Prospects for Detecting Gamma Rays from Annihilating Dark Matter in Dwarf Galaxies in the Era of DES and LSST [Replacement]

Among the most stringent constraints on the dark matter annihilation cross section are those derived from observations of dwarf galaxies by the Fermi Gamma-Ray Space Telescope. As current (e.g., Dark Energy Survey, DES) and future (Large Synoptic Survey Telescope, LSST) optical imaging surveys discover more of the Milky Way’s ultra-faint satellite galaxies, they may increase Fermi’s sensitivity to dark matter annihilations. In this study, we use a semi-analytic model of the Milky Way’s satellite population to predict the characteristics of the dwarfs likely to be discovered by DES and LSST, and project how these discoveries will impact Fermi’s sensitivity to dark matter. While we find that modest improvements are likely, the dwarf galaxies discovered by DES and LSST are unlikely to increase Fermi’s sensitivity by more than a factor of ~2. However, this outlook may be conservative, given that our model underpredicts the number of ultra-faint galaxies with large potential annihilation signals actually discovered in the Sloan Digital Sky Survey. Our simulation-based approach focusing on the Milky Way satellite population demographics complements existing empirically-based estimates.

Prospects for Detecting Gamma Rays from Annihilating Dark Matter in Dwarf Galaxies in the Era of DES and LSST [Replacement]

Among the most stringent constraints on the dark matter annihilation cross section are those derived from observations of dwarf galaxies by the Fermi Gamma-Ray Space Telescope. As current (e.g., Dark Energy Survey, DES) and future (Large Synoptic Survey Telescope, LSST) optical imaging surveys discover more of the Milky Way’s ultra-faint satellite galaxies, they may increase Fermi’s sensitivity to dark matter annihilations. In this study, we use a semi-analytic model of the Milky Way’s satellite population to predict the characteristics of the dwarfs likely to be discovered by DES and LSST, and project how these discoveries will impact Fermi’s sensitivity to dark matter. While we find that modest improvements are likely, the dwarf galaxies discovered by DES and LSST are unlikely to increase Fermi’s sensitivity by more than a factor of ~2-4. However, this outlook may be conservative, given that our model underpredicts the number of ultra-faint galaxies with large potential annihilation signals actually discovered in the Sloan Digital Sky Survey. Our simulation-based approach focusing on the Milky Way satellite population demographics complements existing empirically-based estimates.

Prospects for Detecting Gamma Rays from Annihilating Dark Matter in Dwarf Galaxies in the Era of DES and LSST

Among the most stringent constraints on the dark matter annihilation cross section are those derived from observations of dwarf galaxies by the Fermi Gamma-Ray Space Telescope. As current (e.g., Dark Energy Survey, DES) and future (Large Synoptic Survey Telescope, LSST) optical imaging surveys discover more of the Milky Way’s ultra-faint satellite galaxies, they may increase Fermi’s sensitivity to dark matter annihilations. In this study, we use a semi-analytic model of the Milky Way’s satellite population to predict the characteristics of the dwarfs likely to be discovered by DES and LSST, and project how these discoveries will impact Fermi’s sensitivity to dark matter. While we find that modest improvements are likely, the dwarf galaxies discovered by DES and LSST are unlikely to increase Fermi’s sensitivity by more than a factor of ~2.

Measuring the Thermal Sunyaev-Zel'dovich Effect Through the Cross Correlation of Planck and WMAP Maps with ROSAT Galaxy Cluster Catalogs

We measure a significant correlation between the thermal Sunyaev-Zel’dovich effect in the Planck and WMAP maps and an X-ray cluster map based on ROSAT. We use the 100, 143 and 353 GHz Planck maps and the WMAP 94 GHz map to obtain this cluster cross spectrum. We check our measurements for contamination from dusty galaxies using the cross correlations with the 217, 545 and 857 GHz maps from Planck. Our measurement yields a direct characterization of the cluster power spectrum over a wide range of angular scales that is consistent with large cosmological simulations. The amplitude of this signal depends on cosmological parameters that determine the growth of structure ($\sigma_8$ and $\Omega_M$) and scales as $\sigma_8^{7.4}$ and $\Omega_M^{1.9}$ around multipole $\ell \sim 1000$. We constrain $\sigma_8$ and $\Omega_M$ from the cross-power spectrum to be $\sigma_8 (\Omega_M/0.30)^{0.26} = 0.8 \pm 0.02$. Since this cross spectrum produces a tight constraint in the $\sigma_8$–$\Omega_M$ plane, the errors on a $\sigma_8$ constraint will be mostly limited by the uncertainties from external constraints. Future cluster catalogs, like those from eROSITA and LSST, and pointed multi-wavelength observations of clusters will improve the constraining power of this cross spectrum measurement. In principle this analysis can be extended beyond $\sigma_8$ and $\Omega_M$ to constrain dark energy or the sum of the neutrino masses.
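The quoted degeneracy can be inverted to read off $\sigma_8$ for any assumed $\Omega_M$; a one-liner makes the scaling explicit (the function and parameter names are ours, and only the central value is propagated, not the $\pm 0.02$ uncertainty):

```python
def sigma8_from_constraint(omega_m, amp=0.8, pivot=0.30, slope=0.26):
    """Invert sigma_8 * (Omega_M / 0.30)^0.26 = 0.8 for sigma_8."""
    return amp * (omega_m / pivot) ** (-slope)

s8_pivot = sigma8_from_constraint(0.30)   # at the pivot, exactly 0.8
s8_low = sigma8_from_constraint(0.25)     # lower Omega_M implies higher sigma_8
```

The shallow slope (0.26) is what makes the banana in the $\sigma_8$–$\Omega_M$ plane narrow: even a sizable shift in $\Omega_M$ moves the inferred $\sigma_8$ by only a few percent.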

The Kepler-SEP Mission: Harvesting the South Ecliptic Pole large-amplitude variables with Kepler

As a response to the white paper call, we propose to turn Kepler to the South Ecliptic Pole (SEP) and observe thousands of large amplitude variables for years with high cadence in the frame of the Kepler-SEP Mission. The degraded pointing stability will still allow observing these stars with reasonable (probably better than mmag) accuracy. Long-term continuous monitoring already proved to be extremely helpful to investigate several areas of stellar astrophysics. Space-based missions opened a new window to the dynamics of pulsation in several classes of pulsating variable stars and facilitated detailed studies of eclipsing binaries. The main aim of this mission is to better understand the fascinating dynamics behind various stellar pulsational phenomena (resonances, mode coupling, chaos, mode selection) and interior physics (turbulent convection, opacities). This will also improve the applicability of these astrophysical tools for distance measurements, population and stellar evolution studies. We investigated the pragmatic details of such a mission and found a number of advantages: minimal reprogramming of the flight software, a favorable field of view, access to both galactic and LMC objects. However, the main advantage of the SEP field comes from the large sample of well classified targets, mainly through OGLE. Synergies and significant overlap (spatial, temporal and in brightness) with both ground- (OGLE, LSST) and space-based missions (GAIA, TESS) will greatly enhance the scientific value of the Kepler-SEP mission. GAIA will allow full characterization of the distance indicators. TESS will continuously monitor this field for at least one year, and together with the proposed mission provide long time series that cannot be obtained by other means. If the Kepler-SEP program is successful, there is a possibility to place one of the so-called LSST "deep-drilling" fields in this region.

Galactic Stellar Populations in the Era of SDSS and Other Large Surveys

Studies of stellar populations, understood to mean collections of stars with common spatial, kinematic, chemical, and/or age distributions, have been reinvigorated during the last decade by the advent of large-area sky surveys such as SDSS, 2MASS, RAVE, and others. We review recent analyses of these data that, together with theoretical and modeling advances, are revolutionizing our understanding of the nature of the Milky Way, and galaxy formation and evolution in general. The formation of galaxies like the Milky Way was long thought to be a steady process leading to a smooth distribution of stars. However, the abundance of substructure in the multi-dimensional space of various observables, such as position, kinematics, and metallicity, is by now proven beyond doubt, and demonstrates the importance of mergers in the growth of galaxies. Unlike smooth models that involve simple components, the new data reviewed here clearly show many irregular structures, such as the Sagittarius dwarf tidal stream and the Virgo and Pisces overdensities in the halo, and the Monoceros stream closer to the Galactic plane. These recent developments have made it clear that the Milky Way is a complex and dynamical structure, one that is still being shaped by the merging of neighboring smaller galaxies. We also briefly discuss the next generation of wide-field sky surveys, such as SkyMapper, Pan-STARRS, Gaia and LSST, which will improve measurement precision manyfold, and comprise billions of individual stars. The ultimate goal, development of a coherent and detailed story of the assembly and evolutionary history of the Milky Way and other large spirals like it, now appears well within reach.

Clustering Measurements of broad-line AGNs: Review and Future

Despite substantial effort, the precise physical processes that lead to the growth of super-massive black holes in the centers of galaxies are still not well understood. These phases of black hole growth are thought to be of key importance in understanding galaxy evolution. Forthcoming missions such as eROSITA, HETDEX, eBOSS, BigBOSS, LSST, and Pan-STARRS will compile by far the largest ever Active Galactic Nuclei (AGNs) catalogs which will allow us to measure the spatial distribution of AGNs in the universe with unprecedented accuracy. For the first time, AGN clustering measurements will reach a level of precision that will not only allow for an alternative approach to answering open questions in AGN/galaxy co-evolution but will open a new frontier, allowing us to precisely determine cosmological parameters. This paper reviews the large-scale clustering measurements of broad line AGNs. We summarize how clustering is measured and which constraints can be derived from AGN clustering measurements, we discuss recent developments, and we briefly describe future projects that will deliver extremely large AGN samples which will enable AGN clustering measurements of unprecedented accuracy. In order to maximize the scientific return on the research fields of AGN/galaxy evolution and cosmology, we advise that the community develop a full understanding of the systematic uncertainties which will, in contrast to today’s measurements, be the dominant source of uncertainty.

Measuring the matter energy density and Hubble parameter from Large Scale Structure

We investigate a method to measure both the present value of the matter energy density contrast and the Hubble parameter directly from the measurement of the linear growth rate obtained from the large scale structure of the Universe. From this method, one can obtain the value of the nuisance cosmological parameter $\Omega_{m0}$ (the present value of the matter energy density contrast) within 3% error if the growth rate measurements reach $z > 3.5$. One can also investigate the evolution of the Hubble parameter without any prior on the value of $H_0$ (the current value of the Hubble parameter). In particular, estimates of the Hubble parameter are insensitive to the errors on the measurement of the normalized growth rate $f \sigma_8$. However, this method requires high-$z$ ($z > 3.5$) measurements of the growth rate in order to obtain errors of less than 5% on the measurements of $H(z)$ at $z \leq 1.2$ with the redshift bin $\Delta z = 0.2$. Thus, this will be suitable for the next generation of large scale structure galaxy surveys like WFMOS and LSST.
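The quantities involved can be made concrete in flat $\Lambda$CDM, where $H(z) = H_0 E(z)$ and the linear growth rate is well approximated by $f(z) \approx \Omega_m(z)^{0.55}$. This is the standard General Relativity approximation, not the paper's full reconstruction machinery, and the code below is only a numerical illustration of the quantities being measured:

```python
import numpy as np

def E(z, om0):
    """Dimensionless Hubble rate H(z)/H0 in flat LambdaCDM."""
    return np.sqrt(om0 * (1 + z) ** 3 + 1 - om0)

def growth_rate(z, om0, gamma=0.55):
    """Linear growth rate f(z) ~ Omega_m(z)^gamma (GR value gamma ~ 0.55)."""
    om_z = om0 * (1 + z) ** 3 / E(z, om0) ** 2
    return om_z ** gamma

f_today = growth_rate(0.0, 0.3)   # ~0.52 for Omega_m0 = 0.3
f_high = growth_rate(4.0, 0.3)    # approaches 1 in the matter-dominated era
```

The high-$z$ requirement in the abstract is visible here: toward $z > 3.5$ the growth rate saturates at $f \to 1$ regardless of $H_0$, which is what decouples the $\Omega_{m0}$ determination from the Hubble-parameter prior.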

The Effect of Weak Lensing on Distance Estimates from Supernovae

Using a sample of 608 Type Ia supernovae from the SDSS-II and BOSS surveys, combined with a sample of foreground galaxies from SDSS-II, we estimate the weak lensing convergence for each supernova line-of-sight. We find that the correlation between this measurement and the Hubble residuals is consistent with the prediction from lensing (at a significance of 1.7$\sigma$). Strong correlations are also found between the residuals and supernova nuisance parameters after a linear correction is applied. When these other correlations are taken into account, the lensing signal is detected at 1.4$\sigma$. We show for the first time that distance estimates from supernovae can be improved when lensing is incorporated by including a new parameter in the SALT2 methodology for determining distance moduli. The recovered value of the new parameter is consistent with the lensing prediction. Using WMAP7, HST and BAO data, we find the best-fit value of the new lensing parameter and show that the central values and uncertainties on $\Omega_m$ and $w$ are unaffected. The lensing of supernovae, while only seen at marginal significance in this low redshift sample, will be of vital importance for the next generation of surveys, such as DES and LSST, which will be systematics dominated.
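The "new parameter in the SALT2 methodology" can be sketched as one extra linear term in the standardization. The parameterization below, including the name `delta_lens` and the default `alpha`/`beta` values, is our schematic reading of the approach, not the paper's exact fit:

```python
def distance_modulus(m_B, M, x1, c, kappa,
                     alpha=0.14, beta=3.1, delta_lens=1.0):
    """SALT2-style distance modulus with an extra lensing term:

        mu = m_B - M + alpha * x1 - beta * c + delta_lens * kappa

    delta_lens multiplies the estimated line-of-sight convergence kappa.
    The name and placement of the term are schematic; alpha and beta are
    typical SALT2 values, not fitted here.
    """
    return m_B - M + alpha * x1 - beta * c + delta_lens * kappa

mu_plain = distance_modulus(24.0, -19.3, 0.0, 0.0, 0.0)
mu_lensed = distance_modulus(24.0, -19.3, 0.0, 0.0, 0.01)
```

Fitting `delta_lens` jointly with the other standardization parameters is what lets the Hubble-residual scatter absorb the lensing contribution instead of leaking into $\Omega_m$ and $w$.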

 
