Abstract: Owing to the exponential growth of high-quality fake photos on social media and the Internet, it is critical to develop robust forgery detection tools. A common picture- and video-editing manipulation copies one area of an image onto another, referred to as the copy-move approach. Standard image processing methods explicitly search for patterns that match the duplicated material, which restricts their use for large-scale data classification. In contrast, while deep learning (DL) models have exhibited improved performance, they raise significant generalization concerns because of their heavy reliance on training datasets and the need for careful hyperparameter selection. With this in mind, this article proposes an automated deep learning-based fusion model for detecting and localizing copy-move forgeries (DLFM-CMDFC). The DLFM-CMDFC technique combines generative adversarial network (GAN) and densely connected network (DenseNet) models, fusing their two outputs into a layer that encodes the input vectors for the initial layer of an extreme learning machine (ELM) classifier. Additionally, the ELM model's weight and bias values are optimally tuned using the artificial fish swarm algorithm (AFSA). The outputs of the two networks are supplied to the merger unit as input, and finally the forged image is used to identify the difference between the input and target regions. The proposed model's performance is validated on two benchmark datasets, and the experimental results establish its superiority over recently developed approaches.
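The sketch below illustrates the fusion idea summarized in the abstract: features from the two branches are concatenated and classified with an ELM whose random input weights and biases are selected by a search procedure. It is a minimal, illustrative sketch only, assuming the GAN-branch and DenseNet-branch features have already been extracted as fixed-length vectors (random placeholders here), and the AFSA step is abbreviated to a plain random search that merely stands in for the fish-swarm update; all names and dimensions are hypothetical.

```python
# Minimal sketch of the fusion + ELM classification step (NumPy only).
# Placeholder vectors stand in for the GAN and DenseNet features, and a
# random search stands in for the AFSA weight/bias optimisation.
import numpy as np

rng = np.random.default_rng(0)

def elm_hidden(X, W, b):
    """Hidden-layer activations of an extreme learning machine (sigmoid)."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def elm_fit_output(H, T, reg=1e-3):
    """Closed-form ELM output weights: beta = (H^T H + reg*I)^-1 H^T T."""
    return np.linalg.solve(H.T @ H + reg * np.eye(H.shape[1]), H.T @ T)

def accuracy(X, T, W, b, beta):
    pred = elm_hidden(X, W, b) @ beta
    return float(np.mean(np.argmax(pred, axis=1) == np.argmax(T, axis=1)))

# --- placeholder fused features: concatenation of the two branch outputs ---
n, d_gan, d_dense, hidden, classes = 400, 64, 64, 128, 2
gan_feat   = rng.normal(size=(n, d_gan))    # stand-in for GAN-branch features
dense_feat = rng.normal(size=(n, d_dense))  # stand-in for DenseNet-branch features
X = np.concatenate([gan_feat, dense_feat], axis=1)  # merger unit: simple concat
y = rng.integers(0, classes, size=n)                # 0 = authentic, 1 = forged
T = np.eye(classes)[y]                              # one-hot targets

# --- search over ELM input weights/biases (placeholder for the AFSA step) ---
best_acc, best_params = -1.0, None
for _ in range(30):                      # each draw plays the role of one "fish"
    W = rng.normal(size=(X.shape[1], hidden))
    b = rng.normal(size=hidden)
    beta = elm_fit_output(elm_hidden(X, W, b), T)
    acc = accuracy(X, T, W, b, beta)
    if acc > best_acc:
        best_acc, best_params = acc, (W, b, beta)

print(f"best training accuracy with searched ELM weights: {best_acc:.3f}")
```

In the full method the two branch outputs come from trained GAN and DenseNet models and the search loop is replaced by AFSA's prey, swarm, and follow behaviors; the sketch only shows how the fused feature vector reaches the ELM classifier.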