TY - JOUR
T1 - Euclid preparation: XXXIII. Characterization of convolutional neural networks for the identification of galaxy-galaxy strong-lensing events
AU - Leuzzi, L.
AU - Meneghetti, M.
AU - Angora, G.
AU - Metcalf, R. B.
AU - Moscardini, L.
AU - Rosati, P.
AU - Bergamini, P.
AU - Calura, F.
AU - Clément, B.
AU - Gavazzi, R.
AU - Gentile, F.
AU - Lochner, M.
AU - Grillo, C.
AU - Vernardos, G.
AU - Aghanim, N.
AU - Amara, A.
AU - Amendola, L.
AU - Auricchio, N.
AU - Bodendorf, C.
AU - Bonino, D.
AU - Branchini, E.
AU - Brescia, M.
AU - Brinchmann, J.
AU - Camera, S.
AU - Capobianco, V.
AU - Carbone, C.
AU - Carretero, J.
AU - Castellano, M.
AU - Cavuoti, S.
AU - Cimatti, A.
AU - Cledassou, R.
AU - Congedo, G.
AU - Conselice, C. J.
AU - Conversi, L.
AU - Copin, Y.
AU - Corcione, L.
AU - Courbin, F.
AU - Cropper, M.
AU - Da Silva, A.
AU - Degaudenzi, H.
AU - Dinis, J.
AU - Dubath, F.
AU - Dupac, X.
AU - Dusini, S.
AU - Farrens, S.
AU - Niemi, S. M.
AU - Schneider, P.
AU - Wang, Y.
AU - Gozaliasl, G.
AU - Sánchez, A. G.
AU - Euclid Collaboration
N1 - Funding Information:
The authors acknowledge the Euclid Consortium, the European Space Agency, and a number of agencies and institutes that have supported the development of Euclid, in particular the Academy of Finland, the Agenzia Spaziale Italiana, the Belgian Science Policy, the Canadian Euclid Consortium, the French Centre National d'Etudes Spatiales, the Deutsches Zentrum für Luft- und Raumfahrt, the Danish Space Research Institute, the Fundação para a Ciência e a Tecnologia, the Ministerio de Ciencia e Innovación, the National Aeronautics and Space Administration, the National Astronomical Observatory of Japan, the Nederlandse Onderzoekschool voor Astronomie, the Norwegian Space Agency, the Romanian Space Agency, the State Secretariat for Education, Research and Innovation (SERI) at the Swiss Space Office (SSO), and the United Kingdom Space Agency. A complete and detailed list is available on the Euclid web site (http://www.euclid-ec.org). We acknowledge support from the grants PRIN-MIUR 2017 WSCC32, PRIN-MIUR 2020 SKSTHZ, and ASI no. 2018-23-HH.0. M.M. was supported by INAF Grant "The Big-Data era of cluster lensing". This work has made use of CosmoHub. CosmoHub has been developed by the Port d'Informació Científica (PIC), maintained through a collaboration of the Institut de Física d'Altes Energies (IFAE) and the Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas (CIEMAT) and the Institute of Space Sciences (CSIC & IEEC), and was partially funded by the "Plan Estatal de Investigación Científica y Técnica y de Innovación" program of the Spanish government.
Publisher Copyright:
© 2024 EDP Sciences. All rights reserved.
PY - 2024/1/1
Y1 - 2024/1/1
AB - Forthcoming imaging surveys will increase the number of known galaxy-scale strong lenses by several orders of magnitude. For this to happen, images of billions of galaxies will have to be inspected to identify potential candidates. In this context, deep-learning techniques are well suited to finding patterns in large data sets, and convolutional neural networks (CNNs) can efficiently process large volumes of images. We assess and compare the performance of three network architectures in the classification of strong-lensing systems on the basis of their morphological characteristics. In particular, we implemented a classical CNN architecture, an inception network, and a residual network. We trained and tested our networks on different subsamples of a data set of 40 000 mock images whose characteristics were similar to those expected in the wide survey planned with the ESA mission Euclid, gradually including larger fractions of faint lenses. We also evaluated the importance of adding information about the color difference between the lens and source galaxies by repeating the same training on single- and multiband images. Our models find samples of clear lenses with 90% precision and completeness. Nevertheless, when lenses with fainter arcs are included in the training set, the performance of the three models deteriorates, with accuracy values of ~0.87 to ~0.75, depending on the model. Specifically, the classical CNN and the inception network perform similarly in most of our tests, while the residual network generally produces worse results. Our analysis focuses on the application of CNNs to high-resolution space-like images, such as those that the Euclid telescope will deliver. Moreover, we investigated the optimal training strategy for this specific survey to fully exploit the scientific potential of the upcoming observations. We suggest that training the networks separately on lenses with different morphologies might be needed to identify the faint arcs. We also tested the relevance of the color information for the detection of these systems and found that it does not yield a significant improvement; the accuracy ranges from ~0.89 to ~0.78 for the different models. The reason might be that the resolution of the Euclid telescope in the infrared bands is lower than that of the images in the visual band.
KW - Gravitational lensing: strong
KW - Methods: data analysis
KW - Methods: statistical
KW - Surveys
UR - http://www.scopus.com/inward/record.url?scp=85182904839&partnerID=8YFLogxK
DO - 10.1051/0004-6361/202347244
M3 - Article
AN - SCOPUS:85182904839
SN - 0004-6361
VL - 681
SP - 1
EP - 23
JO - Astronomy and Astrophysics
JF - Astronomy and Astrophysics
M1 - A68
ER -