Performant feature extraction for photometric time series

Authors: Anastasia Lavrukhina, Konstantin Malanchev

arXiv: 2302.10837v1 (astro-ph.IM)
4 pages, 3 figures. EAS 2022 proceeding, to be published in Memorie della SAIt

Abstract: Astronomy is entering the era of large surveys of the variable sky, such as the Zwicky Transient Facility (ZTF) and the forthcoming Legacy Survey of Space and Time (LSST), which are intended to produce up to a million alerts per night. Such an amount of photometric data requires efficient light-curve pre-processing algorithms for the purposes of subsequent data quality cuts, classification, and characterization analysis. In this work, we present the new library "light-curve" for Python and Rust, which is intended for feature extraction from light curves of variable astronomical sources. The library is suitable for machine learning classification problems: it provides a fast implementation of feature extractors, which outperforms other publicly available codes, and offers dozens of features describing the shape, magnitude distribution, and periodic properties of light curves. It includes not only features which have been shown to provide high performance in classification tasks, but also new features we developed to improve the classification quality of selected object types. The "light-curve" library is currently used by the ANTARES, AMPEL, and Fink broker systems for analyzing the ZTF alert stream, and has been selected for use with the LSST.
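To illustrate what feature extraction from a light curve means in practice, here is a minimal NumPy sketch. The feature definitions (half-amplitude and standard deviation of the magnitudes) are standard shape and magnitude-distribution statistics of the kind the abstract mentions, but the function names and the toy extractor are illustrative assumptions, not the "light-curve" library's actual API.

```python
import numpy as np

def amplitude(m):
    """Half the magnitude range: a simple shape feature."""
    return 0.5 * (np.max(m) - np.min(m))

def standard_deviation(m):
    """Sample standard deviation: a magnitude-distribution feature."""
    return np.std(m, ddof=1)

def extract_features(t, m):
    """Toy extractor: map one light curve (times t, magnitudes m)
    to a fixed-length feature vector usable by a classifier."""
    return np.array([amplitude(m), standard_deviation(m)])

# Simulated light curve of a periodic variable (period 10 d, half-amplitude 0.5 mag)
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 100.0, size=200))  # irregular observation times, days
m = 18.0 + 0.5 * np.sin(2.0 * np.pi * t / 10.0)  # magnitudes

features = extract_features(t, m)
```

The real library follows the same pattern at a much larger scale: each light curve is reduced to a vector of dozens of such statistics, computed in compiled Rust for speed, which the alert brokers then feed to their classifiers.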

Submitted to arXiv on 21 Feb. 2023
