O'TRAIN: A robust and flexible `real or bogus' classifier for the study of the optical transient sky

DOI: 
10.1051/0004-6361/202142952
Publication date: 
08/08/2022
Main author: 
Makhlouf, K.
IAA authors: 
Kann, D. A.
Authors: 
Makhlouf, K.; Turpin, D.; Corre, D.; Karpov, S.; Kann, D. A.; Klotz, A.
Journal: 
Astronomy and Astrophysics
Publication type: 
Article
Volume: 
664
Pages: 
A81
Abstract: 
Context. Scientific interest in studying high-energy transient phenomena in the Universe has risen sharply over the last decade. At present, multiple ground-based survey projects have emerged to continuously monitor the optical (and multi-messenger) transient sky at higher image cadences, covering ever larger portions of the sky every night. These novel approaches are leading to a substantial increase in global alert rates, which need to be handled with care, especially with regard to keeping the level of false alarms as low as possible. Therefore, the standard transient detection pipelines previously designed for narrow field-of-view instruments must now integrate more sophisticated tools to deal with the growing number and diversity of alerts and false alarms.

Aims: Deep machine learning algorithms have now proven their efficiency in recognising patterns in images. In astrophysics, these methods are used to perform various classification tasks, such as distinguishing bogus detections from real transient point-like sources. We explore such methods with the aim of providing a robust and flexible algorithm that could be included in any kind of transient detection pipeline.

Methods: We built a convolutional neural network (CNN) algorithm to perform a `real or bogus' classification task on transient candidate cutouts (subtraction residuals) provided by different kinds of optical telescopes. The training involved human-supervised labelling of the cutouts, which were split into two balanced data sets of `true' and `false' point-like source candidates. We tested our CNN model on the candidates produced by two different transient detection pipelines. In addition, we made use of several diagnostic tools to evaluate the classification performance of our CNN models.

Results: We show that our CNN algorithm can be successfully trained on a large and diverse array of images with very different pixel scales. In this training process, we did not detect any strong over- or underfitting, provided the cutouts are limited in size to no larger than 50 × 50 pixels. Tested on optical images from four different telescopes and with two different transient detection pipelines, our CNN model provides a robust `real or bogus' classification performance, with accuracies from 93% up to 98% for well-classified candidates.

The codes and diagnostic tools presented in this paper are available at https://github.com/dcorre/otrain
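To make the classification task concrete, the sketch below shows the kind of forward pass such a `real or bogus' classifier performs on a 50 × 50-pixel subtraction-residual cutout: convolution, ReLU, max pooling, and a dense layer ending in a sigmoid that yields a real/bogus probability. This is a toy NumPy illustration with random weights, not the actual O'TRAIN architecture (which lives in the repository linked above); all function names and layer sizes here are illustrative assumptions.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D cross-correlation of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling, dropping any ragged border rows/columns."""
    h2, w2 = x.shape[0] // size, x.shape[1] // size
    return x[:h2 * size, :w2 * size].reshape(h2, size, w2, size).max(axis=(1, 3))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def real_bogus_score(cutout, rng):
    """Toy one-layer CNN forward pass: conv -> ReLU -> pool -> dense -> sigmoid."""
    kernel = rng.standard_normal((3, 3)) * 0.1
    feat = np.maximum(conv2d(cutout, kernel), 0.0)  # ReLU activation
    pooled = max_pool(feat)                          # 50x50 -> 48x48 -> 24x24
    flat = pooled.ravel()
    w = rng.standard_normal(flat.size) * 0.01        # random dense weights
    return sigmoid(flat @ w)                         # probability in (0, 1)

rng = np.random.default_rng(0)
cutout = rng.standard_normal((50, 50))  # stands in for a subtraction residual
score = real_bogus_score(cutout, rng)
print(f"real/bogus score: {score:.3f}")
```

In a trained classifier, a threshold on this score (e.g. 0.5) would separate `real' from `bogus' candidates; training would fit the kernel and dense weights on the labelled, balanced cutout sets described above.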
Database: 
ADS
SCOPUS
URL: 
https://ui.adsabs.harvard.edu/#abs/2022A&A...664A..81M/abstract
ADS Bibcode: 
2022A&A...664A..81M
Keywords: 
methods: numerical;techniques: image processing;Astrophysics - Instrumentation and Methods for Astrophysics