lsst « Vox Charta

# Posts Tagged lsst

## Recent Postings from lsst

### Data Mining for Gravitationally Lensed Quasars

Gravitationally lensed (GL) quasars are brighter than their unlensed counterparts and produce images with distinctive morphological signatures. Past searches and target selection algorithms, in particular the Sloan Quasar Lens Search (SQLS), have relied on basic morphological criteria, which were applied to samples of bright, spectroscopically confirmed quasars. The SQLS techniques are not sufficient for searches in new surveys (e.g. DES, PS1, LSST), because spectroscopic information is not readily available and the large data volume requires higher purity in target/candidate selection. We carry out a systematic exploration of machine learning techniques and demonstrate that a two-step strategy can be highly effective. In the first step we use catalog-level information ($griz$+WISE magnitudes, second moments) to preselect targets, using artificial neural networks. The accepted targets are then inspected with pixel-by-pixel pattern recognition algorithms (Gradient-Boosted Trees) to form a final set of candidates. The results from this procedure can be used to further refine the simpler SQLS algorithms, with a twofold (or threefold) gain in purity and the same (or $80\%$) completeness at the target-selection stage, or a purity of $70\%$ and a completeness of $60\%$ after the candidate-selection step. Simpler photometric searches in $griz$+WISE based on colour cuts would provide samples with $7\%$ purity or less. Our technique is extremely fast: a list of candidates can be obtained from a Stage III experiment (e.g. the DES catalog/database) in a few CPU hours. The techniques are easily extendable to Stage IV experiments like LSST with the addition of time-domain information.
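The two-step strategy described above can be sketched compactly. The following toy example (scikit-learn, with invented feature values, network size, and thresholds standing in for the paper's actual catalog-level and pixel-level inputs) shows the flow: an artificial neural network preselects targets from catalog quantities, then gradient-boosted trees score cutout pixels for the survivors.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Toy stand-ins for the two data levels (all values hypothetical).
# Catalog level: griz+WISE magnitudes and second moments (8 numbers/object).
# Pixel level: a flattened small cutout image (here 8x8 = 64 "pixels").
n = 2000
y = rng.integers(0, 2, n)                       # 1 = lensed quasar, 0 = contaminant
catalog = rng.normal(0, 1, (n, 8)) + y[:, None] * 0.8
cutouts = rng.normal(0, 1, (n, 64)) + y[:, None] * 0.5

# Step 1: artificial neural network on catalog-level quantities,
# keeping only objects above a loose probability threshold (preselection).
ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
ann.fit(catalog[:1000], y[:1000])
p_ann = ann.predict_proba(catalog[1000:])[:, 1]
keep = p_ann > 0.3

# Step 2: gradient-boosted trees on pixel-level data for the survivors only.
gbt = GradientBoostingClassifier(random_state=0)
gbt.fit(cutouts[:1000], y[:1000])
p_gbt = gbt.predict_proba(cutouts[1000:][keep])[:, 1]
candidates = p_gbt > 0.5

print(f"{keep.sum()} targets preselected, {candidates.sum()} final candidates")
```

The loose first-stage threshold trades purity for completeness, which is the point of the cascade: the cheap catalog-level cut shrinks the sample so the more expensive pixel-level classifier only runs on plausible targets.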

### Spectroscopic Needs for Calibration of LSST Photometric Redshifts

This white paper summarizes the conclusions of the Snowmass White Paper "Spectroscopic Needs for Imaging Dark Energy Experiments" (arXiv:1309.5384) which are relevant to the calibration of LSST photometric redshifts; i.e., the accurate characterization of biases and uncertainties in photo-z’s. Any significant miscalibration will lead to systematic errors in photo-z’s, impacting nearly all extragalactic science with LSST. As existing deep redshift samples have failed to yield highly-secure redshifts for a systematic 20%-60% of their targets, it is a strong possibility that future deep spectroscopic samples will not solve the calibration problem on their own. The best options in this scenario are provided by cross-correlation methods that utilize clustering with objects from spectroscopic surveys (which need not be fully representative) to trace the redshift distribution of the full sample. For spectroscopy, the eBOSS survey would enable a basic calibration of LSST photometric redshifts, while the expected LSST-DESI overlap would be more than sufficient for an accurate calibration at z>0.2. A DESI survey of nearby galaxies conducted in bright time would enable accurate calibration down to z~0. The expanded areal coverage provided by the transfer of the DESI instrument (or duplication of it at the Blanco Telescope) would enable the best possible calibration from cross-correlations, in addition to other science gains.

### Spectroscopic Needs for Training of LSST Photometric Redshifts

This white paper summarizes those conclusions of the Snowmass White Paper "Spectroscopic Needs for Imaging Dark Energy Experiments" (arXiv:1309.5384) which are relevant to the training of LSST photometric redshifts; i.e., the use of spectroscopic redshifts to improve algorithms and reduce photo-z errors. The larger and more complete the available training set is, the smaller the RMS error in photo-z estimates should be, increasing LSST’s constraining power. Among the better US-based options for this work are the proposed MANIFEST fiber feed for the Giant Magellan Telescope or (with lower survey speed) the WFOS spectrograph on the Thirty Meter Telescope (TMT). Due to its larger field of view and higher multiplexing, the PFS spectrograph on Subaru would be able to obtain a baseline training sample faster than TMT; comparable performance could be achieved with a highly-multiplexed spectrograph on Gemini with at least a 20 arcmin diameter field of view.

### Maximizing LSST's Scientific Return: Ensuring Participation from Smaller Institutions

The remarkable scientific return and legacy of LSST, in the era that it will define, will be realized not only in the breakthrough science achieved with catalog data: this Big Data survey will also shape the way the entire astronomical community advances — or fails to embrace — new ways of approaching astronomical research and data. In this white paper, we address NRC template questions 4, 5, 6, 8, and 9, with a focus on the unique challenges faced by smaller, and often under-resourced, institutions, including institutions dedicated to underserved minority populations, in the efficient and effective use of LSST data products to maximize LSST's scientific return.

### The Variable Sky of Deep Synoptic Surveys

The discovery of variable and transient sources is an essential product of synoptic surveys. The alert stream will require filtering for personalized criteria — a process managed by a functionality commonly described as a Broker. In order to understand quantitatively the magnitude of the alert-generation and Broker tasks, we have undertaken an analysis of the most numerous types of variable targets in the sky — Galactic stars, QSOs, AGNs and asteroids. It is found that LSST will be capable of discovering ~10^4 high-latitude (|b| > 20 deg) variable stars per night at the beginning of the survey. (The corresponding number for |b| < 20 deg is 2 orders of magnitude larger, but subject to caveats concerning extinction and crowding.) However, the number of new discoveries may well drop below 100/night within less than 1 year. The same analysis applied to GAIA clarifies the complementarity of the GAIA and LSST surveys. Discoveries of variable galactic nuclei (AGNs) and Quasi-Stellar Objects (QSOs) are each predicted to begin at ~3000 per night, and to decrease by 50X over 4 years. SNe are expected at ~1100/night, and after several survey years will dominate the new-variable discovery rate. LSST asteroid discoveries will start at >10^5 per night, and if orbital determination has a 50% success rate per epoch, will drop below 1000/night within 2 years.

### Strong Lens Time Delay Challenge: II. Results of TDC1

We present the results of the first strong lens time delay challenge. The motivation, experimental design, and entry-level challenge are described in a companion paper. This paper presents the main challenge, TDC1, which consisted of analyzing thousands of simulated light curves blindly. The observational properties of the light curves cover the range in quality obtained for current targeted efforts (e.g. COSMOGRAIL) and expected from future synoptic surveys (e.g. LSST), and include "evilness" in the form of simulated systematic errors. Seven teams participated in TDC1, submitting results from 78 different method variants. After describing each method, we compute and analyze basic statistics measuring accuracy (or bias) $A$, goodness of fit $\chi^2$, precision $P$, and success rate $f$. For some methods we identify outliers as an important issue. Other methods show that outliers can be controlled via visual inspection or conservative quality control. Several methods are competitive, i.e. give $|A|<0.03$, $P<0.03$, and $\chi^2<1.5$, with some of the methods already reaching sub-percent accuracy. The fraction of light curves yielding a time delay measurement is typically in the range $f = 20$–$40\%$. It depends strongly on the quality of the data: COSMOGRAIL-quality cadence and light curve lengths yield significantly higher $f$ than does sparser sampling. We estimate that LSST should provide around 400 robust time-delay measurements, each with $P<0.03$ and $|A|<0.01$, comparable to current lens modeling uncertainties. In terms of observing strategies, we find that $A$ and $f$ depend mostly on season length, while $P$ depends mostly on cadence and campaign duration.
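The four summary statistics can be computed directly from a set of blind submissions. The sketch below uses synthetic numbers (the sample size, delay range, and quoted uncertainties are all invented) and one common convention for the definitions: $f$ is the fraction of curves with a submitted delay, and $\chi^2$, $P$, and $A$ are averages over that subsample.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated "truth" and one team's submission (all numbers hypothetical).
n_total = 1000                                   # light curves in the rung
dt_true = rng.uniform(5.0, 100.0, n_total)       # true delays [days]
submitted = rng.random(n_total) < 0.3            # team measured ~30% of curves
dt_est = dt_true[submitted] + rng.normal(0, 2.0, submitted.sum())
sigma = np.full(submitted.sum(), 2.0)            # quoted 1-sigma uncertainties

# Summary statistics over the submitted subsample:
f = submitted.sum() / n_total                                    # success rate
chi2 = np.mean(((dt_est - dt_true[submitted]) / sigma) ** 2)     # goodness of fit
P = np.mean(sigma / np.abs(dt_true[submitted]))                  # precision
A = np.mean((dt_est - dt_true[submitted]) / dt_true[submitted])  # accuracy/bias

print(f"f={f:.2f}  chi2={chi2:.2f}  P={P:.3f}  A={A:+.4f}")
```

With honestly quoted uncertainties, $\chi^2$ lands near 1 by construction; the competitiveness thresholds quoted above ($|A|<0.03$, $P<0.03$, $\chi^2<1.5$) are then cuts on these same four numbers.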

### Improving LSST Photometric Calibration with Gaia Data

We consider the possibility that the Gaia mission can supply data which will improve the photometric calibration of LSST. After outlining the LSST calibration process and the information that will be available from Gaia, we explore two options for using Gaia data. The first is to use Gaia G-band photometry of selected stars, in conjunction with knowledge of the stellar parameters Teff, log g, and AV, and in some cases Z, to create photometric standards in the LSST u, g, r, i, z, and y bands. The accuracies of the resulting standard magnitudes are found to be insufficient to satisfy LSST requirements when generated from main sequence (MS) stars, but generally adequate from DA white dwarfs (WD). The second option is to combine the LSST bandpasses into a synthetic Gaia G band, which is a close approximation to the real Gaia G band. This allows synthetic Gaia G photometry to be directly compared with actual Gaia G photometry at a level of accuracy which is useful for both verifying and improving LSST photometric calibration.

### Transiting Planets with LSST I: Potential for LSST Exoplanet Detection

The Large Synoptic Survey Telescope (LSST) has been designed to satisfy several different scientific objectives that can be addressed by a ten-year synoptic sky survey. However, LSST will also provide a large amount of data that can be exploited for additional science beyond its primary goals. We demonstrate the potential of using LSST data to search for transiting exoplanets, and in particular to find planets orbiting host stars that are members of stellar populations less thoroughly probed by current exoplanet surveys. We find that existing algorithms applied to simulated LSST light curves can detect the transits of Hot Jupiters around solar-type stars, Hot Neptunes around K dwarfs, and planets orbiting stars in the Large Magellanic Cloud. We also show that LSST would have the sensitivity to potentially detect Super-Earths orbiting red dwarfs, including those in habitable-zone orbits, if they are present in some fields that LSST will observe. From these results, we make the case that LSST has the ability to provide a valuable contribution to exoplanet science.
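A transit search of the kind referred to here can be illustrated with a crude phase-folding box statistic (real searches would use the full BLS algorithm and a properly tuned period grid). The light curve below is synthetic, with an invented period, depth, and cadence, and a single-year baseline so that the coarse trial-period grid stays phase-coherent.

```python
import numpy as np

rng = np.random.default_rng(2)

# Sparse, unevenly sampled light curve (hypothetical single-season cadence).
t = np.sort(rng.uniform(0.0, 365.0, 1000))            # observation times [days]
true_period, depth, duration = 3.21, 0.01, 0.12       # invented Hot Jupiter
in_transit = ((t / true_period) % 1.0) * true_period < duration
flux = 1.0 - depth * in_transit + rng.normal(0, 0.003, t.size)

# Minimal box search: fold at trial periods and compare the mean flux inside
# the first `duration` of phase against the rest (a crude BLS-like statistic).
trial_periods = np.linspace(2.5, 4.0, 3000)
power = np.empty_like(trial_periods)
for i, p in enumerate(trial_periods):
    phase = ((t / p) % 1.0) * p
    inside = phase < duration
    if inside.sum() < 5 or inside.sum() > t.size - 5:
        power[i] = 0.0
        continue
    power[i] = flux[~inside].mean() - flux[inside].mean()  # box depth estimate

best = trial_periods[np.argmax(power)]
print(f"recovered period: {best:.3f} d (injected {true_period} d)")
```

The period-grid spacing matters: trial periods must be fine enough that the transit does not drift out of the phase window over the full baseline, which is why multi-year searches need far denser grids than this toy.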

### Too Many, Too Few, or Just Right? The Predicted Number and Distribution of Milky Way Dwarf Galaxies

We predict the spatial distribution and number of Milky Way dwarf galaxies to be discovered in the DES and LSST surveys, by completeness-correcting the observed SDSS dwarf population. We apply "most massive in the past", "earliest forming", and "earliest infall" toy models to a set of dark matter-only simulated Milky Way/M31 halo pairs from Exploring the Local Volume In Simulations (ELVIS). The observed spatial distribution of Milky Way dwarfs in the LSST era will discriminate between the earliest-infall and other simplified models for how dwarf galaxies populate dark matter subhalos. Inclusive of all toy models and simulations, at 90% confidence we predict a total of 37-114 L $\gtrsim 10^3$L$_{\odot}$ dwarfs and 131-782 L $\lesssim 10^3$L$_{\odot}$ dwarfs within 300 kpc. These numbers of L $\gtrsim 10^3$L$_{\odot}$ dwarfs are dramatically lower than previous predictions, owing primarily to our use of updated detection limits and the decreasing number of SDSS dwarfs discovered per sky area. For an effective $r_{\rm limit}$ of 25.8 mag, we predict: 3-13 L $\gtrsim 10^3$L$_{\odot}$ and 9-99 L $\lesssim 10^3$L$_{\odot}$ dwarfs for DES, and 18-53 L $\gtrsim 10^3$L$_{\odot}$ and 53-307 L $\lesssim 10^3$L$_{\odot}$ dwarfs for LSST. These enormous predicted ranges ensure a coming decade of near-field excitement with these next-generation surveys.

### On the Performance of Quasar Reverberation Mapping in the Era of Time-Domain Photometric Surveys

We quantitatively assess, by means of comprehensive numerical simulations, the ability of broad-band photometric surveys to recover the broad emission line region (BLR) size in quasars under various observing conditions and for a wide range of object properties. Focusing on the general characteristics of the Large Synoptic Survey Telescope (LSST), we find that the slope of the size-luminosity relation for the BLR in quasars can be determined with unprecedented accuracy, of order a few percent, over a broad luminosity range and out to $z\sim 3$. In particular, major emission lines for which the BLR size can be reliably measured with LSST include H$\alpha$, MgII $\lambda 2799$, CIII] $\lambda 1909$, CIV $\lambda 1549$, and Ly$\alpha$, amounting to a total of $\gtrsim 10^5$ time-delay measurements for all transitions. Combined with an estimate for the emission line velocity dispersion, upcoming photometric surveys will facilitate the estimation of black hole masses in AGN over a broad range of luminosities and redshifts, allow for refined calibrations of BLR size-luminosity-redshift relations in different transitions, as well as lead to more reliable cross-calibration with other black hole mass estimation techniques.
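The underlying lag measurement can be illustrated with a bare-bones cross-correlation toy: a smooth driving light curve, an echo delayed by a known lag, and a scan over trial lags. All values here are invented, and real photometric reverberation mapping must additionally separate the line from the continuum contribution within each broad band.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy reverberation signal: a driving continuum and a line light curve that
# is the same signal delayed by `lag` days (all numbers hypothetical).
dt, lag = 1.0, 30.0
t = np.arange(0, 2000, dt)
drive = np.convolve(rng.normal(0, 1, t.size), np.ones(50) / 50, mode="same")
echo = np.roll(drive, int(lag / dt)) + rng.normal(0, 0.02, t.size)

# Scan trial lags and take the peak of the cross-correlation function.
trial_lags = np.arange(0, 100, dt)
ccf = []
for s in trial_lags:
    k = int(s / dt)
    ccf.append(np.corrcoef(drive[: t.size - k], echo[k:])[0, 1])
best_lag = trial_lags[int(np.argmax(ccf))]
print(f"recovered lag: {best_lag:.0f} d (injected {lag:.0f} d)")
```

The BLR size is then this recovered lag times the speed of light; the survey-design question the paper addresses is how well the peak survives realistic cadence, seasonal gaps, and photometric noise.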

### Optical selection of quasars: SDSS and LSST

Over the last decade, quasar sample sizes have increased from several thousand to several hundred thousand, thanks mostly to SDSS imaging and spectroscopic surveys. LSST, the next-generation optical imaging survey, will provide hundreds of detections per object for a sample of more than ten million quasars with redshifts of up to about seven. We briefly review optical quasar selection techniques, with emphasis on methods based on colors, variability properties and astrometric behavior.

### Cosmic shear without shape noise

We describe a new method for reducing the shape noise in weak lensing measurements by an order of magnitude. Our method relies on spectroscopic measurements of disk galaxy rotation and makes use of the Tully-Fisher (TF) relation in order to control for the intrinsic orientations of galaxy disks. For this proposed experiment, the shape noise ceases to be an important source of statistical error. Using CosmoLike, a new cosmological analysis software package, we simulate likelihood analyses for two spectroscopic weak lensing survey concepts (roughly similar in scale to Dark Energy Task Force Stage III and Stage IV missions) and compare their constraining power to a cosmic shear survey from the Large Synoptic Survey Telescope (LSST). Our forecasts in a seven-dimensional cosmological parameter space include statistical uncertainties resulting from shape noise, cosmic variance, halo sample variance, and higher-order moments of the density field. We marginalize over systematic uncertainties arising from photometric redshift errors and shear calibration biases, considering both optimistic and conservative assumptions about LSST systematic errors. We find that even the TF-Stage III concept is highly competitive with the optimistic LSST scenario, while evading the most important sources of theoretical and observational systematic error inherent in traditional weak lensing techniques. Furthermore, the TF technique enables a narrow-bin cosmic shear tomography approach to tightly constrain time-dependent signatures in the dark energy phenomenon.

### Strong Lens Time Delay Challenge: I. Experimental Design

The time delays between point-like images in gravitational lens systems can be used to measure cosmological parameters as well as probe the dark matter (sub-)structure within the lens galaxy. The number of lenses with measured time delays is growing rapidly as a result of some dedicated efforts; the upcoming Large Synoptic Survey Telescope (LSST) will monitor ~1000 lens systems consisting of a foreground elliptical galaxy producing multiple images of a background quasar. In an effort to assess the present capabilities of the community to accurately measure the time delays in strong gravitational lens systems, and to provide input to dedicated monitoring campaigns and future LSST cosmology feasibility studies, we invite the community to take part in a "Time Delay Challenge" (TDC). The challenge is organized as a set of "ladders", each containing a group of simulated datasets to be analyzed blindly by participating independent analysis teams. Each rung on a ladder consists of a set of realistic mock observed lensed quasar light curves, with the rungs’ datasets increasing in complexity and realism to incorporate a variety of anticipated physical and experimental effects. The initial challenge described here has two ladders, TDC0 and TDC1. TDC0 has a small number of datasets, and is designed to be used as a practice set by the participating teams as they set up their analysis pipelines. The (non-mandatory) deadline for completion of TDC0 will be the TDC1 launch date, December 1, 2013. TDC1 will consist of some 1000 light curves, a sample designed to provide the statistical power to make meaningful statements about the sub-percent accuracy that will be required to provide competitive Dark Energy constraints in the LSST era.

### Growth of Cosmic Structure: Probing Dark Energy Beyond Expansion

The quantity and quality of cosmic structure observations have greatly accelerated in recent years. Further leaps forward will be facilitated by imminent projects, which will enable us to map the evolution of dark and baryonic matter density fluctuations over cosmic history. The way that these fluctuations vary over space and time is sensitive to the nature of dark matter and dark energy. Dark energy and gravity both affect how rapidly structure grows; the greater the acceleration, the more suppressed the growth of structure, while the greater the gravity, the more enhanced the growth. While distance measurements also constrain dark energy, the comparison of growth and distance data tests whether General Relativity describes the laws of physics accurately on large scales. Modified gravity models are able to reproduce the distance measurements but at the cost of altering the growth of structure (these signatures are described in more detail in the accompanying paper on Novel Probes of Gravity and Dark Energy). Upcoming surveys will exploit these differences to determine whether the acceleration of the Universe is due to dark energy or to modified gravity. To realize this potential, both wide field imaging and spectroscopic redshift surveys play crucial roles. Projects including DES, eBOSS, DESI, PFS, LSST, Euclid, and WFIRST are in line to map a volume of the Universe exceeding 1000 cubic billion light years. These will map the cosmic structure growth rate to 1% in the redshift range 0<z<2, over the last 3/4 of the age of the Universe.

### Prospects for Detecting Gamma Rays from Annihilating Dark Matter in Dwarf Galaxies in the Era of DES and LSST

Among the most stringent constraints on the dark matter annihilation cross section are those derived from observations of dwarf galaxies by the Fermi Gamma-Ray Space Telescope. As current (e.g., the Dark Energy Survey, DES) and future (the Large Synoptic Survey Telescope, LSST) optical imaging surveys discover more of the Milky Way’s ultra-faint satellite galaxies, they may increase Fermi’s sensitivity to dark matter annihilations. In this study, we use a semi-analytic model of the Milky Way’s satellite population to predict the characteristics of the dwarfs likely to be discovered by DES and LSST, and project how these discoveries will impact Fermi’s sensitivity to dark matter. While we find that modest improvements are likely, the dwarf galaxies discovered by DES and LSST are unlikely to increase Fermi’s sensitivity by more than a factor of ~2.

### Prospects for Detecting Gamma Rays from Annihilating Dark Matter in Dwarf Galaxies in the Era of DES and LSST [Replacement]

Among the most stringent constraints on the dark matter annihilation cross section are those derived from observations of dwarf galaxies by the Fermi Gamma-Ray Space Telescope. As current (e.g., the Dark Energy Survey, DES) and future (the Large Synoptic Survey Telescope, LSST) optical imaging surveys discover more of the Milky Way’s ultra-faint satellite galaxies, they may increase Fermi’s sensitivity to dark matter annihilations. In this study, we use a semi-analytic model of the Milky Way’s satellite population to predict the characteristics of the dwarfs likely to be discovered by DES and LSST, and project how these discoveries will impact Fermi’s sensitivity to dark matter. While we find that modest improvements are likely, the dwarf galaxies discovered by DES and LSST are unlikely to increase Fermi’s sensitivity by more than a factor of ~2. However, this outlook may be conservative, given that our model underpredicts the number of ultra-faint galaxies with large potential annihilation signals actually discovered in the Sloan Digital Sky Survey. Our simulation-based approach, focusing on the demographics of the Milky Way satellite population, complements existing empirically-based estimates.

### Measuring the Thermal Sunyaev-Zel'dovich Effect Through the Cross Correlation of Planck and WMAP Maps with ROSAT Galaxy Cluster Catalogs

We measure a significant correlation between the thermal Sunyaev-Zel’dovich effect in the Planck and WMAP maps and an X-ray cluster map based on ROSAT. We use the 100, 143 and 353 GHz Planck maps and the WMAP 94 GHz map to obtain this cluster cross spectrum. We check our measurements for contamination from dusty galaxies using cross-correlations with the 217, 545 and 857 GHz maps from Planck. Our measurement yields a direct characterization of the cluster power spectrum over a wide range of angular scales that is consistent with large cosmological simulations. The amplitude of this signal depends on the cosmological parameters that determine the growth of structure ($\sigma_8$ and $\Omega_M$) and scales as $\sigma_8^{7.4}$ and $\Omega_M^{1.9}$ around multipole $\ell \sim 1000$. We constrain $\sigma_8$ and $\Omega_M$ from the cross-power spectrum to be $\sigma_8 (\Omega_M/0.30)^{0.26} = 0.80 \pm 0.02$. Since this cross spectrum produces a tight constraint in the $\sigma_8$-$\Omega_M$ plane, the errors on a $\sigma_8$ constraint will be mostly limited by the uncertainties from external constraints. Future cluster catalogs, like those from eROSITA and LSST, and pointed multi-wavelength observations of clusters will improve the constraining power of this cross-spectrum measurement. In principle this analysis can be extended beyond $\sigma_8$ and $\Omega_M$ to constrain dark energy or the sum of the neutrino masses.
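The quoted degeneracy direction can be read off for any assumed matter density; a small numerical illustration of inverting the central value of the constraint $\sigma_8 (\Omega_M/0.30)^{0.26} = 0.80$:

```python
def sigma8(omega_m, amplitude=0.80, pivot=0.30, slope=0.26):
    """Invert the cross-spectrum degeneracy for sigma_8 at a given Omega_M."""
    return amplitude * (pivot / omega_m) ** slope

# Lower assumed matter density -> higher inferred sigma_8, and vice versa.
for om in (0.25, 0.30, 0.35):
    print(f"Omega_M = {om:.2f}  ->  sigma_8 = {sigma8(om):.3f}")
```

This is why an external $\Omega_M$ prior is needed to quote $\sigma_8$ alone: along the degeneracy the combination, not either parameter, is what the cross spectrum pins down.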

### The Kepler-SEP Mission: Harvesting the South Ecliptic Pole large-amplitude variables with Kepler

As a response to the white paper call, we propose to turn Kepler to the South Ecliptic Pole (SEP) and observe thousands of large-amplitude variables for years with high cadence in the framework of the Kepler-SEP Mission. The degraded pointing stability will still allow observing these stars with reasonable (probably better than mmag) accuracy. Long-term continuous monitoring has already proved to be extremely helpful for investigating several areas of stellar astrophysics. Space-based missions have opened a new window onto the dynamics of pulsation in several classes of pulsating variable stars and have facilitated detailed studies of eclipsing binaries. The main aim of this mission is to better understand the fascinating dynamics behind various stellar pulsational phenomena (resonances, mode coupling, chaos, mode selection) and interior physics (turbulent convection, opacities). This will also improve the applicability of these astrophysical tools for distance measurements, population and stellar evolution studies. We investigated the pragmatic details of such a mission and found a number of advantages: minimal reprogramming of the flight software, a favorable field of view, and access to both Galactic and LMC objects. However, the main advantage of the SEP field comes from the large sample of well-classified targets, mainly through OGLE. Synergies and significant overlap (spatial, temporal and in brightness) with both ground-based (OGLE, LSST) and space-based missions (GAIA, TESS) will greatly enhance the scientific value of the Kepler-SEP mission. GAIA will allow full characterization of the distance indicators. TESS will continuously monitor this field for at least one year, and together with the proposed mission will provide long time series that cannot be obtained by other means. If the Kepler-SEP program is successful, there is a possibility to place one of the so-called LSST "deep-drilling" fields in this region.

### Galactic Stellar Populations in the Era of SDSS and Other Large Surveys

Studies of stellar populations, understood to mean collections of stars with common spatial, kinematic, chemical, and/or age distributions, have been reinvigorated during the last decade by the advent of large-area sky surveys such as SDSS, 2MASS, RAVE, and others. We review recent analyses of these data that, together with theoretical and modeling advances, are revolutionizing our understanding of the nature of the Milky Way, and galaxy formation and evolution in general. The formation of galaxies like the Milky Way was long thought to be a steady process leading to a smooth distribution of stars. However, the abundance of substructure in the multi-dimensional space of various observables, such as position, kinematics, and metallicity, is by now proven beyond doubt, and demonstrates the importance of mergers in the growth of galaxies. Unlike smooth models that involve simple components, the new data reviewed here clearly show many irregular structures, such as the Sagittarius dwarf tidal stream and the Virgo and Pisces overdensities in the halo, and the Monoceros stream closer to the Galactic plane. These recent developments have made it clear that the Milky Way is a complex and dynamical structure, one that is still being shaped by the merging of neighboring smaller galaxies. We also briefly discuss the next generation of wide-field sky surveys, such as SkyMapper, Pan-STARRS, Gaia and LSST, which will improve measurement precision manyfold, and comprise billions of individual stars. The ultimate goal, development of a coherent and detailed story of the assembly and evolutionary history of the Milky Way and other large spirals like it, now appears well within reach.

### Clustering Measurements of broad-line AGNs: Review and Future

Despite substantial effort, the precise physical processes that lead to the growth of super-massive black holes in the centers of galaxies are still not well understood. These phases of black hole growth are thought to be of key importance in understanding galaxy evolution. Forthcoming missions such as eROSITA, HETDEX, eBOSS, BigBOSS, LSST, and Pan-STARRS will compile by far the largest ever Active Galactic Nuclei (AGNs) catalogs, which will allow us to measure the spatial distribution of AGNs in the universe with unprecedented accuracy. For the first time, AGN clustering measurements will reach a level of precision that will not only allow for an alternative approach to answering open questions in AGN/galaxy co-evolution but will open a new frontier, allowing us to precisely determine cosmological parameters. This paper reviews the large-scale clustering measurements of broad-line AGNs. We summarize how clustering is measured and which constraints can be derived from AGN clustering measurements, we discuss recent developments, and we briefly describe future projects that will deliver extremely large AGN samples which will enable AGN clustering measurements of unprecedented accuracy. In order to maximize the scientific return on the research fields of AGN/galaxy evolution and cosmology, we advise that the community develop a full understanding of the systematic uncertainties which will, in contrast to today’s measurements, be the dominant source of uncertainty.

### Measuring the matter energy density and Hubble parameter from Large Scale Structure

We investigate a method to measure both the present value of the matter energy density contrast and the Hubble parameter directly from measurements of the linear growth rate obtained from the large scale structure of the Universe. From this method, one can obtain the value of the nuisance cosmological parameter $\Omega_{m0}$ (the present value of the matter energy density contrast) to within 3% error if the growth rate measurements reach $z > 3.5$. One can also investigate the evolution of the Hubble parameter without any prior on the value of $H_0$ (the current value of the Hubble parameter). In particular, the estimates of the Hubble parameter are insensitive to the errors on the measurement of the normalized growth rate $f \sigma_8$. However, this method requires high-$z$ ($z > 3.5$) measurements of the growth rate in order to obtain less than 5% errors on the measurements of $H(z)$ at $z \leq 1.2$ with a redshift bin of $\Delta z = 0.2$. Thus, it will be suitable for the next-generation large scale structure galaxy surveys like WFMOS and LSST.

### The Effect of Weak Lensing on Distance Estimates from Supernovae

Using a sample of 608 Type Ia supernovae from the SDSS-II and BOSS surveys, combined with a sample of foreground galaxies from SDSS-II, we estimate the weak lensing convergence for each supernova line-of-sight. We find that the correlation between this measurement and the Hubble residuals is consistent with the prediction from lensing (at a significance of 1.7 sigma). Strong correlations are also found between the residuals and supernova nuisance parameters after a linear correction is applied. When these other correlations are taken into account, the lensing signal is detected at 1.4 sigma. We show for the first time that distance estimates from supernovae can be improved when lensing is incorporated by including a new parameter in the SALT2 methodology for determining distance moduli. The recovered value of the new parameter is consistent with the lensing prediction. Using WMAP7, HST and BAO data, we find the best-fit value of the new lensing parameter and show that the central values and uncertainties on Omega_m and w are unaffected. The lensing of supernovae, while only seen at marginal significance in this low redshift sample, will be of vital importance for the next generation of surveys, such as DES and LSST, which will be systematics dominated.

### Smoking Gun or Smoldering Embers? A Possible r-process Kilonova Associated with the Short-Hard GRB 130603B

We present Hubble Space Telescope optical and near-IR observations of the short-hard GRB 130603B (z=0.356) obtained 9.4 days post-burst. At the position of the burst we detect a red point source with m(F160W)=25.8+/-0.2 AB mag and m(F606W)>27.5 AB mag (3-sigma), corresponding to rest-frame absolute magnitudes of M_J = -15.2 mag and M_B > -13.5 mag. A comparison to the early optical afterglow emission requires a decline rate of alpha_opt<-1.6 (F_nu ~ t^alpha), consistent with the observed X-ray decline at about 1 hr to about 1 day. The observed red color of V-H>1.7 mag is also potentially consistent with the red optical colors of the afterglow at early time (F_nu ~ nu^-1.6 in gri). Thus, an afterglow interpretation is feasible. Alternatively, the red color and faint absolute magnitude are due to emission from an r-process powered transient ("kilonova") produced by ejecta from the merger of an NS-NS or NS-BH binary, the most likely progenitors of short GRBs. In this scenario, the observed brightness implies an outflow with M_ej ~ 0.01 Msun and v_ej ~ 0.1c, in good agreement with the results of numerical merger simulations for roughly equal mass binary constituents (i.e., NS-NS). If true, the kilonova interpretation provides the strongest evidence to date that short GRBs are produced by compact object mergers, and places initial constraints on the ejected mass. Equally important, it demonstrates that gravitational wave sources detected by Advanced LIGO/Virgo will be accompanied by optical/near-IR counterparts with unusually red colors, detectable by existing and upcoming large wide-field facilities (e.g., Pan-STARRS, DECam, Subaru, LSST).

### An r-Process Kilonova Associated with the Short-Hard GRB 130603B [Replacement]

We present ground-based optical and Hubble Space Telescope optical and near-IR observations of the short-hard GRB130603B at z=0.356, which demonstrate the presence of excess near-IR emission matching the expected brightness and color of an r-process powered transient (a "kilonova"). The early afterglow fades rapidly with alpha<-2.6 at t~8-32 hr post-burst and has a spectral index of beta=-1.5 (F_nu ~ t^alpha nu^beta), leading to an expected near-IR brightness at the time of the first HST observation of m(F160W)>29.3 AB mag. Instead, the detected source has m(F160W)=25.8+/-0.2 AB mag, corresponding to a rest-frame absolute magnitude of M(J)=-15.2 mag. The upper limit in the HST optical observations is m(F606W)>27.7 AB mag (3-sigma), indicating an unusually red color of V-H>1.9 mag. Comparing the observed near-IR luminosity to theoretical models of kilonovae produced by ejecta from the merger of an NS-NS or NS-BH binary, we infer an ejecta mass of M_ej~0.03-0.08 Msun for v_ej=0.1-0.3c. The inferred mass matches the expectations from numerical merger simulations. The presence of a kilonova provides the strongest evidence to date that short GRBs are produced by compact object mergers, and provides initial insight into the ejected mass and the primary role that compact object mergers may play in the r-process. Equally important, it demonstrates that gravitational wave sources detected by Advanced LIGO/Virgo will be accompanied by optical/near-IR counterparts with unusually red colors, detectable by existing and upcoming large wide-field facilities (e.g., Pan-STARRS, DECam, Subaru, LSST).
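
The expected faintness of any afterglow at the HST epoch follows from the power-law decline alone; the sketch below assumes (for illustration only) an anchor at the end of the monitored window, ~32 hr ≈ 1.3 d post-burst:

```python
import math

def fade_mag(alpha, t1, t2):
    """Magnitude change of a power-law afterglow F_nu ~ t^alpha
    between epochs t1 and t2 (same units). Positive = fainter."""
    return -2.5 * alpha * math.log10(t2 / t1)

# With the observed decline alpha <= -2.6, extrapolating from
# ~1.3 days to the HST epoch at 9.4 days post-burst:
print(f"fades by >= {fade_mag(-2.6, 1.3, 9.4):.1f} mag")
```

A source fading this steeply drops by more than 5 mag between ~1.3 and 9.4 days, which is why the detected m(F160W)=25.8 source cannot be afterglow light.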

### Dark energy with gravitational lens time delays

Strong lensing gravitational time delays are a powerful and cost-effective probe of dark energy. Recent studies have shown that a single lens can provide a distance measurement with 6-7% accuracy (including random and systematic uncertainties), provided sufficient data are available to determine the time delay and reconstruct the gravitational potential of the deflector. Gravitational time delays are a low redshift (z~0-2) probe and thus allow one to break degeneracies in the interpretation of data from higher-redshift probes like the cosmic microwave background in terms of the dark energy equation of state. Current studies are limited by the size of the sample of known lensed quasars, but this situation is about to change. Even in this decade, wide field imaging surveys are likely to discover thousands of lensed quasars, enabling the targeted study of ~100 of these systems and resulting in substantial gains in the dark energy figure of merit. In the next decade, a further order of magnitude improvement will be possible with the 10000 systems expected to be detected and measured with LSST and Euclid. To fully exploit these gains, we identify three priorities. First, support for the development of software required for the analysis of the data. Second, in this decade, small robotic telescopes (1-4m in diameter) dedicated to monitoring of lensed quasars will transform the field by delivering accurate time delays for ~100 systems. Third, in the 2020s, LSST will deliver thousands of time delays; the bottleneck will instead be the acquisition and analysis of high resolution imaging follow-up. Thus, the top priority for the next decade is to support fast high resolution imaging capabilities, such as those enabled by the James Webb Space Telescope and next generation adaptive optics systems on large ground based telescopes.

### Unidentified Moving Objects in Next Generation Time Domain Surveys

Existing and future wide-field photometric surveys will produce a time-lapse movie of the sky that will revolutionize our census of variable and moving astronomical and atmospheric phenomena. As with any revolution in scientific measurement capability, this new species of data will also present us with results that are sure to surprise and confound our understanding of the cosmos. While we cannot predict the unknown yields of such endeavors, it is a beneficial exercise to explore certain parameter spaces using reasonable assumptions for rates and observability. To this end I present a simple parameterized model of the detectability of unidentified flying objects (UFOs) with the Large Synoptic Survey Telescope (LSST). I also demonstrate that the LSST is well suited to place the first systematic constraints on the rate of UFO and extraterrestrial visits to our world.

### Recurring flares from supermassive black hole binaries: implications for tidal disruption candidates and OJ 287 [Replacement]

I discuss the possibility that accreting supermassive black hole (SMBH) binaries with sub-parsec separations produce periodically recurring luminous outbursts that interrupt periods of relative quiescence. This hypothesis is motivated by two characteristics found generically in simulations of binaries embedded in prograde accretion discs: (i) the formation of a central, low-density cavity around the binary, and (ii) the leakage of gas into this cavity, occurring once per orbit via discrete streams on nearly radial trajectories. The first feature would reduce the emergent optical/UV flux of the system relative to active galactic nuclei powered by single SMBHs, while the second can trigger quasiperiodic fluctuations in luminosity. I argue that the quasiperiodic accretion signature may be much more dramatic than previously thought, because the infalling gas streams can strongly shock-heat via self-collision and tidal compression, thereby enhancing viscous accretion. Any optically thick gas that is circularized about either SMBH can accrete before the next pair of streams is deposited, fueling transient, luminous flares that recur every orbit. Due to the diminished flux in between accretion episodes, such cavity-accretion flares could plausibly be mistaken for the tidal disruptions of stars in quiescent nuclei. The flares could be distinguished from tidal disruption events if their quasiperiodic recurrence is observed, or if they are produced by very massive SMBHs that cannot disrupt solar-type stars. They may be discovered serendipitously in surveys such as LSST or eROSITA. I present a heuristic toy model as a proof of concept for the production of cavity-accretion flares, and generate mock light curves and spectra. I also apply the model to the active galaxy OJ 287, whose production of quasiperiodic pairs of optical flares has long fueled speculation that it hosts a SMBH binary.

### Recurring flares from supermassive black hole binaries: implications for tidal disruption candidates and OJ 287

I discuss the possibility that accreting, supermassive black hole (SMBH) binaries with sub-parsec separations produce luminous, periodically recurring outbursts that interrupt periods of relative quiescence. This hypothesis is motivated by two characteristics found in simulations of binaries embedded in prograde accretion discs: (i) the formation of a central, low-density cavity, and (ii) the leakage of circumbinary gas into this cavity, occurring once per orbit, via discrete streams on nearly radial trajectories. The first feature will diminish the emergent optical/UV flux of the system relative to active galactic nuclei (AGN) powered by single SMBHs, while the second is likely to trigger periodic fluctuations in the emergent flux. I propose a simple toy model in which a leaked stream crosses its own orbit and shocks, converting its bulk kinetic energy to heat. The result is a hot, optically thick flow that is quickly accreted and produces a flare with an AGN-like spectrum that peaks in the UV and ranges from the optical to the soft X-ray. Due to the preceding quiescence, such a flare could plausibly be mistaken for the tidal disruption of a star. For typical binary periods of years to decades, the event rate in an individual system can be much higher than that predicted for stellar tidal disruptions but infrequent enough to hinder tests of periodicity. The flares proposed here can be produced by very massive (>10^8 Msol) SMBHs that would not tidally disrupt solar-type stars. They could be discovered serendipitously in the future by observatories such as LSST or eROSITA. I apply the model to the active galaxy OJ 287, whose production of periodic optical flares has long fueled speculation that it hosts a SMBH binary.

### ELT-MOS White Paper: Science Overview & Requirements

The workhorse instruments of the 8-10m class observatories have become their multi-object spectrographs (MOS), providing comprehensive follow-up to both ground-based and space-borne imaging. With the advent of deeper imaging surveys from, e.g., the HST and VISTA, there are a plethora of spectroscopic targets which are already beyond the sensitivity limits of current facilities. This wealth of targets will grow even more rapidly in the coming years, e.g., after the completion of ALMA, the launch of the JWST and Euclid, and the advent of the LSST. Thus, one of the key requirements underlying plans for the next generation of ground-based telescopes, the Extremely Large Telescopes (ELTs), is for even greater sensitivity for optical and infrared spectroscopy. Here we revisit the scientific motivation for a MOS capability on the European ELT, combining updated elements of science cases advanced from the Phase A instrument studies with new science cases which draw on the latest results and discoveries. These science cases address key questions related to galaxy evolution over cosmic time, from studies of resolved stellar populations in nearby galaxies out to observations of the most distant galaxies, and are used to identify the top-level requirements on an ‘E-ELT/MOS’. We argue that several of the most compelling ELT science cases demand MOS observations, in highly competitive areas of modern astronomy. Recent technical studies have demonstrated that important issues related to e.g. sky subtraction and multi-object AO can be solved, making fast-track development of a MOS instrument feasible. To ensure that ESO retains world leadership in exploring the most distant objects in the Universe, galaxy evolution and stellar populations, we are convinced that a MOS should have high priority in the instrumentation plan for the E-ELT.

### Machine-assisted discovery of relationships in astronomy

High-volume feature-rich data sets are becoming the bread-and-butter of 21st century astronomy but present significant challenges to scientific discovery. In particular, identifying scientifically significant relationships between sets of parameters is non-trivial. Similar problems in biological and geosciences have led to the development of systems which can explore large parameter spaces and identify potentially interesting sets of associations. In this paper, we describe the application of automated discovery systems of relationships to astronomical data sets, focussing on an evolutionary programming technique and an information-theory technique. We demonstrate their use with classical astronomical relationships – the Hertzsprung-Russell diagram and the fundamental plane of elliptical galaxies. We also show how they work with the issue of binary classification which is relevant to the next generation of large synoptic sky surveys, such as LSST. We find that comparable results to more familiar techniques, such as decision trees, are achievable. Finally, we consider the reality of the relationships discovered and how this can be used for feature selection and extraction.

### Limb-Darkening Coefficients for Eclipsing White Dwarfs [Replacement]

We present extensive calculations of linear and non-linear limb-darkening coefficients as well as complete intensity profiles appropriate for modeling the light-curves of eclipsing white dwarfs. We compute limb-darkening coefficients in the Johnson-Kron-Cousins UBVRI photometric system as well as the Large Synoptic Survey Telescope (LSST) ugrizy system using the most up-to-date model atmospheres available. In all, we provide the coefficients for seven different limb-darkening laws. We describe the variations of these coefficients as a function of the atmospheric parameters, including the effects of convection at low effective temperatures. Finally, we discuss the importance of having readily available limb-darkening coefficients in the context of present and future photometric surveys like the LSST, Palomar Transient Factory, and the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS). The LSST, for example, may find ~10^5 eclipsing white dwarfs. The limb-darkening calculations presented here will be an essential part of the detailed analysis of all of these systems.
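
For reference, the linear and quadratic laws (two of the seven referred to above) take the forms below; the coefficients in the example are placeholders, not tabulated values from this work:

```python
import numpy as np

def linear_law(mu, u):
    """Linear limb darkening: I(mu)/I(1) = 1 - u*(1 - mu)."""
    return 1.0 - u * (1.0 - mu)

def quadratic_law(mu, a, b):
    """Quadratic law: I(mu)/I(1) = 1 - a*(1 - mu) - b*(1 - mu)**2."""
    return 1.0 - a * (1.0 - mu) - b * (1.0 - mu) ** 2

# mu = cos(angle from disc centre); mu = 1 at centre, 0 at the limb.
mu = np.linspace(0.0, 1.0, 5)
print(linear_law(mu, 0.4))          # placeholder coefficient
print(quadratic_law(mu, 0.3, 0.2))  # placeholder coefficients
```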

### Limb-Darkening Coefficients for Eclipsing White Dwarfs

We present extensive calculations of linear and non-linear limb-darkening coefficients as well as complete intensity profiles appropriate for modeling the light-curves of eclipsing white dwarfs. We compute limb-darkening coefficients in the Johnson-Kron-Cousins UBVRI photometric system as well as the Large Synoptic Survey Telescope (LSST) ugrizy system using the most up-to-date model atmospheres available. In all, we provide the coefficients for seven different limb-darkening laws. We describe the variations of these coefficients as a function of the atmospheric parameters, including the effects of convection at low effective temperatures. Finally, we discuss the importance of having readily available limb-darkening coefficients in the context of present and future photometric surveys like the LSST, Palomar Transient Factory (PTF), and the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS). The LSST, for example, may find ~10^5 eclipsing white dwarfs. The limb-darkening calculations presented here will be an essential part of the detailed analysis of all of these systems.

### The Interior Structure Constants as an Age Diagnostic for Low-Mass, Pre-Main Sequence Detached Eclipsing Binary Stars

We propose a novel method for determining the ages of low-mass, pre-main sequence stellar systems using the apsidal motion of low-mass detached eclipsing binaries. The apsidal motion of a binary system with an eccentric orbit provides information regarding the interior structure constants of the individual stars. These constants are related to the normalized stellar interior density distribution and can be extracted from the predictions of stellar evolution models. We demonstrate that low-mass, pre-main sequence stars undergoing radiative core contraction display rapidly changing interior structure constants (greater than 5% per 10 Myr) that, when combined with observational determinations of the interior structure constants (with 5-10% precision), allow for a robust age estimate. This age estimate, unlike those based on surface quantities, is largely insensitive to the surface layer where effects of magnetic activity are likely to be most pronounced. On the main sequence, where age sensitivity is minimal, the interior structure constants provide a valuable test of the physics used in stellar structure models of low-mass stars. There are currently no known systems where this technique is applicable. Nevertheless, the emphasis on time domain astronomy with current missions, such as Kepler, and future missions, such as LSST, has the potential to discover systems where the proposed method will be observationally feasible.

### The cosmological information of shear peaks: beyond the abundance

We study the cosmological information of weak lensing (WL) peaks, focusing on two other statistics besides their abundance: the stacked tangential-shear profiles and the peak-peak correlation function. We use a large ensemble of simulated WL maps with survey specifications relevant to future missions like Euclid and LSST, to explore the three peak probes. We find that the correlation function of peaks with high signal-to-noise (S/N) measured from fields of size 144 sq. deg. has a maximum of ~0.3 at an angular scale ~10 arcmin. For peaks with smaller S/N, the amplitude of the correlation function decreases, and its maximum occurs on smaller angular scales. We compare the peak observables measured with and without shape noise and find that for S/N~3 only ~5% of the peaks are due to large-scale structures, the rest being generated by shape noise. The covariance matrix of the probes is examined: the correlation function is only weakly covariant on scales < 30 arcmin, and slightly more on larger scales; the shear profiles are very correlated for theta > 2 arcmin, with a correlation coefficient as high as 0.7. Using the Fisher-matrix formalism, we compute the cosmological constraints for {Om_m, sig_8, w, n_s} considering each probe separately, as well as in combination. We find that the correlation function of peaks and shear profiles yield marginalized errors which are larger by a factor of 2-4 for {Om_m, sig_8} than the errors yielded by the peak abundance alone, while the errors for {w, n_s} are similar. By combining the three probes, the marginalized constraints are tightened by a factor of ~2 compared to the peak abundance alone, the least contributor to the error reduction being the correlation function. This work therefore recommends that future WL surveys use shear peaks beyond their abundance in order to constrain the cosmological model.
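
The combination step in the Fisher-matrix formalism is simple: independent probes add their Fisher matrices, and the marginalized 1-sigma errors are the square roots of the diagonal of the inverse. A toy two-parameter sketch with made-up numbers (the real matrices derive from the simulated covariances above):

```python
import numpy as np

# Toy Fisher matrices for (Omega_m, sigma_8) from two hypothetical
# probes; the values are placeholders for illustration only.
f_abundance = np.array([[4000.0, -1500.0],
                        [-1500.0, 2500.0]])
f_profiles = np.array([[1500.0, 500.0],
                       [500.0, 1200.0]])

def marginalized_errors(fisher):
    """1-sigma marginalized errors: sqrt of the diagonal of F^-1."""
    return np.sqrt(np.diag(np.linalg.inv(fisher)))

# Independent probes combine by simple matrix addition:
combined = f_abundance + f_profiles
errs_single = marginalized_errors(f_abundance)
errs_comb = marginalized_errors(combined)
print("abundance alone:", errs_single)
print("combined:      ", errs_comb)
```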

### Will Upcoming Observations Distinguish Between Slow-Roll Dark Energy and a Cosmological Constant?

Numerous upcoming observations, such as WFIRST, BOSS, BigBOSS, LSST, Euclid, and Planck, will constrain dark energy (DE)’s equation of state with great precision. They may well find the ratio of pressure to energy density, w, is -1, meaning DE is equivalent to a cosmological constant. However, many time-varying DE models have also been proposed. A single parametrization to test a broad class of them and that is itself motivated by a physical picture is therefore desirable. We suggest the simplest model of DE has the same mechanism as inflation, likely a scalar field slowly rolling down its potential. If this is so, DE will have a generic equation of state and the Universe will have a generic dependence of the Hubble constant on redshift independent of the potential’s starting value and shape. This equation of state and expression for the Hubble constant offer the desired model-independent but physically motivated parametrization, because they will hold for most of the standard scalar-field models of DE such as quintessence and phantom DE. Using it, we conduct a \chi^2 analysis and find that experiments in the next seven years should be able to distinguish any of these time-varying DE models on the one hand from a cosmological constant on the other to 73% confidence if w today differs from -1 by 3.5%. In the limit of perfectly accurate measurements of \Omega_m and H_0, this confidence would rise to 96%. We also include discussion of the current status of DE experiment, a table compiling the techniques each will use, and tables of the precisions of the experiments for which this information was available at the time of publication.
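
For a single Gaussian-distributed parameter, the confidence of distinguishing $w$ from $-1$ follows from the measured offset and its uncertainty via the error function; the uncertainty used below is an illustrative value chosen to reproduce the quoted 73%, not a number taken from the experiments:

```python
import math

def confidence(delta_w, sigma_w):
    """Gaussian confidence of distinguishing w from -1, given a
    measured offset delta_w with 1-sigma uncertainty sigma_w."""
    return math.erf(abs(delta_w) / (sigma_w * math.sqrt(2.0)))

# Illustrative only: a 3.5% offset in w measured with a ~3.2%
# error corresponds to roughly 73% confidence.
print(f"{confidence(0.035, 0.032):.0%}")
```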

### A new method to improve photometric redshift reconstruction. Applications to the Large Synoptic Survey Telescope [Replacement]

In the next decade, the LSST will become a major facility for the astronomical community. However accurately determining the redshifts of the observed galaxies without using spectroscopy is a major challenge. Reconstruction of the redshifts with high resolution and well-understood uncertainties is mandatory for many science goals, including the study of baryonic acoustic oscillations. We investigate different approaches to establish the accuracy that can be reached by the LSST six-band photometry. We construct a realistic mock galaxy catalog, based on the GOODS survey luminosity function, by simulating the expected apparent magnitude distribution for the LSST. To reconstruct the photometric redshifts (photo-z’s), we consider a template-fitting method and a neural network method. The photo-z reconstruction from both of these techniques is tested on real CFHTLS data and also on simulated catalogs. We describe a new method to improve photo-z reconstruction that efficiently removes catastrophic outliers via a likelihood ratio statistical test. This test uses the posterior probability functions of the fit parameters and the colors. We show that the photometric redshift accuracy will meet the stringent LSST requirements up to redshift $\sim2.5$ after a selection that is based on the likelihood ratio test or on the apparent magnitude for galaxies with $S/N>5$ in at least 5 bands. The former selection has the advantage of retaining roughly 35% more galaxies for a similar photo-z performance compared to the latter. Photo-z reconstruction using a neural network algorithm is also described. In addition, we utilize the CFHTLS spectro-photometric catalog to outline the possibility of combining the neural network and template-fitting methods. We conclude that the photo-z’s will be accurately estimated with the LSST if a Bayesian prior probability and a calibration sample are used.

### On rates of supernovae strongly lensed by galactic haloes in Millennium Simulation

We make use of publicly available results from the N-body Millennium Simulation to create mock samples of lensed Type Ia and core-collapse supernovae. Simulating galaxy-galaxy lensing, we derive the rates of lensed supernovae and find that at redshifts higher than 0.5 about 0.06 per cent of supernovae will be lensed by a factor of two or more. Future wide field surveys like Gaia or LSST should be able to detect lensed supernovae in their unbiased sky monitoring. Gaia (from 2013) will detect at least 2 cases, whereas LSST (from 2018) will see more than 500 a year. The large number of future lensed supernovae will allow the results of cosmological simulations to be verified. Strong galaxy-galaxy lensing also gives an opportunity to reach high-redshift Type Ia supernovae and extend the Hubble diagram sample.

### Supernovae and Cosmology with Future European Facilities

Prospects for future supernova surveys are discussed, focusing on the ESA Euclid mission and the European Extremely Large Telescope (E-ELT), both expected to be in operation around the turn of the decade. Euclid is a 1.2m space survey telescope that will operate at visible and near-infrared wavelengths, and has the potential to find and obtain multi-band lightcurves for thousands of distant supernovae. The E-ELT is a planned general-purpose ground-based 40m-class optical-IR telescope with adaptive optics built in, which will be capable of obtaining spectra of Type Ia supernovae to redshifts of at least four. The contribution to supernova cosmology with these facilities will be discussed in the context of other future supernova programs such as those proposed for DES, JWST, LSST and WFIRST.

### Mapping the Local Halo: Statistical Parallax Analysis of SDSS Low-Mass Subdwarfs

We present a statistical parallax study of nearly 2,000 M subdwarfs with photometry and spectroscopy from the Sloan Digital Sky Survey. Statistical parallax analysis yields the mean absolute magnitudes, mean velocities and velocity ellipsoids for homogeneous samples of stars. We selected homogeneous groups of subdwarfs based on their photometric colors and spectral appearance. We examined the color-magnitude relations of low-mass subdwarfs and quantified their dependence on the newly-refined metallicity parameter, zeta. We also developed a photometric metallicity parameter, delta(g-r), based on the g-r and r-z colors of low-mass stars and used it to select stars with similar metallicities. The kinematics of low-mass subdwarfs as a function of color and metallicity were also examined and compared to main sequence M dwarfs. We find that the SDSS subdwarfs share similar kinematics to the inner halo and thick disk. The color-magnitude relations derived in this analysis will be a powerful tool for identifying and characterizing low-mass metal-poor subdwarfs in future surveys such as GAIA and LSST, making them important and plentiful tracers of the stellar halo.

### Finding the First Cosmic Explosions I: Pair-Instability Supernovae [Replacement]

The first stars are the key to the formation of primitive galaxies, early cosmological reionization and chemical enrichment, and the origin of supermassive black holes. Unfortunately, in spite of their extreme luminosities, individual Population III stars will likely remain beyond the reach of direct observation for decades to come. However, their properties could be revealed by their supernova explosions, which may soon be detected by a new generation of NIR observatories such as JWST and WFIRST. We present light curves and spectra for Pop III pair-instability supernovae calculated with the Los Alamos radiation hydrodynamics code RAGE. Our numerical simulations account for the interaction of the blast with realistic circumstellar envelopes, the opacity of the envelope, and Lyman absorption by the neutral IGM at high redshift, all of which are crucial to computing the NIR signatures of the first cosmic explosions. We find that JWST will detect pair-instability supernovae out to z > 30, WFIRST will detect them in all-sky surveys out to z ~ 15 – 20 and LSST and Pan-STARRS will find them at z ~ 7 – 8. The discovery of these ancient explosions will probe the first stellar populations and reveal the existence of primitive galaxies that might not otherwise have been detected.

### Finding the First Cosmic Explosions I: Pair-Instability Supernovae

The first stars are the key to the formation of primitive galaxies, early cosmological reionization and chemical enrichment, and the origin of supermassive black holes. Unfortunately, in spite of their extreme luminosities, individual Population III stars will likely remain beyond the reach of direct observation for decades to come. However, their properties could be revealed by their supernova explosions, which may soon be detected by a new generation of NIR observatories such as JWST and WFIRST. We present light curves and spectra for Pop III pair-instability supernovae calculated with the Los Alamos radiation hydrodynamics code RAGE. Our numerical simulations account for the interaction of the blast with realistic circumstellar envelopes, the opacity of the envelope, and Lyman absorption by the neutral IGM at high redshift, all of which are crucial to computing the NIR signatures of the first cosmic explosions. We find that JWST will detect pair-instability supernovae out to z > 30, WFIRST will detect them in all-sky surveys out to z ~ 15 – 20 and LSST and Pan-STARRS will find them at z ~ 7 – 8. The discovery of these ancient explosions will probe the first stellar populations and reveal the existence of primitive galaxies that might not otherwise have been detected.

### Opening the 100-Year Window for Time Domain Astronomy

Large-scale surveys such as PTF, CRTS and Pan-STARRS-1 that have emerged within the past 5 years or so employ digital databases and modern analysis tools to accentuate research into Time Domain Astronomy (TDA). Preparations are underway for LSST which, in another 6 years, will usher in the second decade of modern TDA. By that time the Digital Access to a Sky Century @ Harvard (DASCH) project will have made available to the community the full-sky historical TDA database and digitized images for a century (1890–1990) of coverage. We describe the current DASCH development and some initial results, and outline plans for the "production scanning" phase and data distribution, which is to begin in 2012. That will open a 100-year window into temporal astrophysics, revealing rare transients and (especially) astrophysical phenomena that vary on time-scales of a decade. It will also provide context and archival comparisons for the deeper modern surveys.

### Effect of Our Galaxy's Motion on Weak Lensing Measurements of Shear and Convergence [Replacement]

In this work we investigate the effect on weak-lensing shear and convergence measurements of distortions from the Lorentz boost induced by our Galaxy's motion. While the boost induces no ellipticity in an image to first order in $\beta = v/c$, the image is magnified. This affects the inferred convergence at the 10 per cent level, and is most notable at low multipoles in the convergence power spectrum $C_{\ell}^{\kappa\kappa}$ and for surveys with large sky coverage such as LSST and DES. Experiments that image only small fractions of the sky, and convergence power spectrum determinations at $\ell > 5$, can safely neglect the boost effect to first order in $\beta$.
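The scale of this effect can be checked with a back-of-the-envelope estimate. The sketch below is our own construction, not the paper's calculation: it assumes the first-order aberration magnification $\mu \approx 1 + 2\beta\cos\theta$, so that (with $\mu \approx 1 + 2\kappa$ in the weak-lensing limit) the boost mimics an effective convergence shift $\delta\kappa \approx \beta\cos\theta$. Comparing that to an assumed typical convergence amplitude of ~0.01 reproduces a roughly 10 per cent bias:

```python
# Order-of-magnitude sketch (our assumptions: mu ~ 1 + 2*beta*cos(theta) from
# first-order relativistic aberration, hence delta_kappa ~ beta*cos(theta)).
C_KM_S = 299_792.458   # speed of light [km/s]
V_KM_S = 369.0         # Solar-system barycentre speed w.r.t. the CMB [km/s]
beta = V_KM_S / C_KM_S  # ~1.2e-3

kappa_rms = 0.01        # typical weak-lensing convergence amplitude (assumed)
delta_kappa = beta      # maximal shift, at the apex of motion (cos(theta) = 1)

print(f"beta            = {beta:.2e}")
print(f"delta_kappa     = {delta_kappa:.2e}")
print(f"fractional bias = {delta_kappa / kappa_rms:.0%}")  # ~10 per cent level
```

The direction-dependent $\cos\theta$ factor is also why the bias concentrates at low multipoles: a dipole-patterned magnification only matters for surveys covering a large fraction of the sky.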

### Millions of Multiples: Detecting and Characterizing Close-Separation Binary Systems in Synoptic Sky Surveys

The direct detection of binary systems in wide-field surveys is limited by the size of the stars' point-spread functions (PSFs). A search for elongated objects can find closer companions, but is limited by the precision to which the PSF shape can be calibrated for individual stars. We have developed the BinaryFinder algorithm to search for close binaries by using precision measurements of PSF ellipticity across wide-field survey images. We show that the algorithm is capable of reliably detecting binary systems down to approximately 1/5 of the seeing limit, and can directly measure the systems' position angles, separations and contrast ratios. To verify the algorithm's performance we evaluated 100,000 objects in Palomar Transient Factory (PTF) wide-field survey data for signs of binarity, and then used the Robo-AO robotic laser adaptive optics system to verify the parameters of 44 high-confidence targets. We show that BinaryFinder correctly predicts the presence of close companions with a <5% false-positive rate, measures the detected binaries' position angles within 2 degrees and separations within 25%, and weakly constrains their contrast ratios. When applied to the full PTF dataset, we estimate that BinaryFinder will discover and characterize ~450,000 physically associated binary systems with separations <2 arcseconds and magnitudes brighter than R=18. New wide-field synoptic surveys with high sensitivity and sub-arcsecond angular resolution, such as LSST, will allow BinaryFinder to reliably detect millions of very faint binary systems with separations as small as 0.1 arcseconds.
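The core measurement behind such an ellipticity search can be sketched with flux-weighted second moments. The snippet below is a minimal toy illustration of moment-based ellipticity, not the BinaryFinder implementation: an unresolved binary elongates the combined PSF, which shows up as a non-zero (e1, e2) signal relative to single stars.

```python
import numpy as np

def ellipticity(img):
    """Flux-weighted second-moment ellipticity of a small stellar cutout.

    Returns (e1, e2), the standard moment-based ellipticity components;
    a marginally resolved binary shows excess |e| relative to single-star PSFs.
    """
    img = np.asarray(img, dtype=float)
    ys, xs = np.indices(img.shape)
    flux = img.sum()
    xc, yc = (img * xs).sum() / flux, (img * ys).sum() / flux
    dx, dy = xs - xc, ys - yc
    qxx = (img * dx * dx).sum() / flux   # second moments about the centroid
    qyy = (img * dy * dy).sum() / flux
    qxy = (img * dx * dy).sum() / flux
    denom = qxx + qyy
    return (qxx - qyy) / denom, 2.0 * qxy / denom

# Toy example: two overlapping Gaussian "stars" separated along x
y, x = np.indices((31, 31))
psf = lambda x0: np.exp(-((x - x0) ** 2 + (y - 15) ** 2) / (2 * 2.0 ** 2))
e1, e2 = ellipticity(psf(13) + 0.5 * psf(18))
print(f"e1={e1:+.3f}, e2={e2:+.3f}")  # elongated along x: e1 > 0, e2 ~ 0
```

In practice the hard part, as the abstract notes, is calibrating the single-star PSF shape precisely enough that a small excess ellipticity is attributable to a companion rather than to the optics or atmosphere.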

### Stellar transits in active galactic nuclei

Supermassive black holes (SMBHs) are typically surrounded by a dense stellar population in galactic nuclei. Stars crossing the line of sight in active galactic nuclei (AGN) produce a characteristic transit lightcurve, just as extrasolar planets do when they transit their host star. We examine the possibility of finding such AGN transits in deep optical, UV, and X-ray surveys. We calculate transit lightcurves using the Novikov–Thorne thin accretion disk model, including general relativistic effects. Based on the expected properties of stellar cusps, we find that around 10^6 solar mass SMBHs, transits of red giants are most common for stars on close orbits, with transit durations of a few weeks and orbital periods of a few years. We find that detecting AGN transits requires repeated observations of thousands of low-mass AGNs to 1% photometric accuracy in the optical, or ~10% in UV bands or soft X-rays. It may be possible to identify stellar transits in the Pan-STARRS and LSST optical surveys and the eROSITA X-ray survey. Such observations could be used to constrain black hole mass, spin, inclination and accretion rate. Transit rates and durations could give valuable information on the circumnuclear stellar clusters as well. Transit lightcurves could be used to image accretion disks with unprecedented resolution, allowing one to resolve the SMBH silhouette in distant AGNs.
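The quoted timescales follow from simple Keplerian estimates. The sketch below uses our own back-of-the-envelope numbers, not the paper's model; in particular the assumed optical emitting-region size of ~10^3 Schwarzschild radii is an illustrative choice. With it, an orbital period of a few years around a 10^6 solar mass SMBH indeed gives transit durations of order weeks:

```python
# Back-of-the-envelope sketch (our assumptions, not the paper's calculation):
# a star on a circular Keplerian orbit of period P around a 1e6 Msun SMBH,
# transiting an optical emitting region of assumed radius ~1e3 r_s.
import math

G = 6.674e-11    # gravitational constant [m^3 kg^-1 s^-2]
C = 2.998e8      # speed of light [m/s]
MSUN = 1.989e30  # solar mass [kg]
YEAR = 3.156e7   # year [s]
AU = 1.496e11    # astronomical unit [m]

M = 1e6 * MSUN
P = 3 * YEAR                                     # "a few years"
a = (G * M * P**2 / (4 * math.pi**2)) ** (1 / 3)  # Kepler's third law
v = math.sqrt(G * M / a)                          # circular orbital speed

r_s = 2 * G * M / C**2                            # Schwarzschild radius
R_disk = 1e3 * r_s                                # assumed optical disk size
t_transit = 2 * R_disk / v                        # chord ~ disk diameter

print(f"a        ~ {a / AU:.0f} AU")
print(f"v        ~ {v / 1e3:.0f} km/s")
print(f"transit  ~ {t_transit / 86400:.0f} days")  # of order weeks
```

The same scalings explain why red giants dominate the detectable events: only a large stellar radius blocks enough of the inner, bright part of the disk to produce a dip at the ~1% level quoted above.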

### Degeneracies in parametrized modified gravity models

We study degeneracies between parameters in some of the widely used parametrized modified gravity models. We investigate how different observables from a future photometric weak-lensing survey such as LSST correlate the effects of these parameters, and to what extent the degeneracies are broken. We also study the impact of other degenerate effects, namely massive neutrinos and some of the weak-lensing systematics, on these correlations.
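Degeneracy between a pair of parameters is commonly quantified by the correlation coefficient of their forecast covariance. The toy example below uses invented numbers, not a forecast from the paper; it only illustrates the standard Fisher-matrix bookkeeping behind such statements:

```python
import numpy as np

# Toy 2x2 Fisher matrix for two model parameters (numbers invented):
F = np.array([[4.0, 3.0],
              [3.0, 4.0]])

cov = np.linalg.inv(F)  # forecast parameter covariance
r = cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1])
print(f"correlation r = {r:+.2f}")  # → -0.75; |r| near 1 means strongly degenerate
```

Adding an observable that constrains a different combination of the parameters corresponds to summing Fisher matrices, which is how extra probes "break" a degeneracy by driving |r| down.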
