
Article Information

  • Title: Finding strong gravitational lenses through self-attention - Study based on the Bologna Lens Challenge
  • Authors: Hareesh Thuruthipilly; Adam Zadrozny; Agnieszka Pollo
  • Journal: Astronomy & Astrophysics
  • Print ISSN: 0004-6361
  • Electronic ISSN: 1432-0746
  • Year: 2022
  • Volume: 664
  • Pages: 1-17
  • DOI: 10.1051/0004-6361/202142463
  • Language: English
  • Publisher: EDP Sciences
  • Abstract: Context. The upcoming large-scale surveys, such as the Rubin Observatory Legacy Survey of Space and Time, are expected to find approximately 10⁵ strong gravitational lenses by analysing data many orders of magnitude larger than those in contemporary astronomical surveys. In this case, non-automated techniques will be highly challenging and time-consuming, if they are possible at all. Aims. We propose a new automated architecture based on the principle of self-attention to find strong gravitational lenses. The advantages of self-attention-based encoder models over convolutional neural networks (CNNs) are investigated, and ways to optimise the outcome of encoder models are analysed. Methods. We constructed and trained 21 self-attention-based encoder models and five CNNs to identify gravitational lenses from the Bologna Lens Challenge. Each model was trained separately using 18 000 simulated images, cross-validated using 2000 images, and then applied to a test set of 100 000 images. We used four metrics for evaluation: classification accuracy, the area under the receiver operating characteristic curve (AUROC), and the TPR₀ and TPR₁₀ scores (two evaluation metrics of the Bologna challenge). The performance of the self-attention-based encoder models is compared with that of the CNNs that participated in the challenge. Results. The encoder models performed better than the CNNs: they surpassed the CNN models that participated in the Bologna Lens Challenge by a wide margin for TPR₀ and TPR₁₀. In terms of the AUROC, encoder models with 3 × 10⁶ parameters achieved scores equivalent to those of the top CNN model, which had around 23 × 10⁶ parameters. Conclusions. Self-attention-based models have clear advantages over simpler CNNs and perform competitively against the currently used residual neural networks. Self-attention-based models can identify lensing candidates with a high confidence level and will be able to filter out potential candidates from real data. Moreover, introducing encoder layers can also tackle the overfitting problem present in CNNs by acting as effective filters.
  • Keywords: gravitational lensing: strong; methods: data analysis; techniques: image processing; cosmology: observations
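The abstract's core mechanism, scaled dot-product self-attention applied to image patches, can be illustrated with a minimal NumPy sketch. This is not the authors' architecture: the patch size, token dimension, and random weights below are hypothetical choices for demonstration only; a real encoder model would stack several such layers with learned weights, positional encodings, and a classification head.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention.

    X: (n_tokens, d_model) patch embeddings.
    Each output token is a weighted mix of all input tokens,
    so every patch can attend to every other patch of the image.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d_k), axis=-1)  # (n_tokens, n_tokens)
    return weights @ V

# Toy example (hypothetical sizes): a 44x44 "image" cut into 4x4 patches,
# giving 11*11 = 121 tokens of dimension 16.
rng = np.random.default_rng(0)
image = rng.normal(size=(44, 44))
patches = image.reshape(11, 4, 11, 4).swapaxes(1, 2).reshape(121, 16)

d = 16
Wq, Wk, Wv = (rng.normal(scale=0.1, size=(d, d)) for _ in range(3))
out = self_attention(patches, Wq, Wk, Wv)
print(out.shape)  # one attended embedding per patch token
```

Because the attention weights are a full token-by-token matrix, the layer captures long-range relationships (e.g. between an arc and a central galaxy) in one step, which is one intuition for why such encoders can act as the "effective filters" the conclusions describe.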