Drone-DETR: Efficient Small Object Detection for Remote Sensing Image Using Enhanced RT-DETR Model.

Yaning Kong, Xiangfeng Shang, Shijie Jia
Author Information
  1. Yaning Kong: College of Computer and Communication Engineering, Dalian Jiaotong University, Dalian 116028, China.
  2. Xiangfeng Shang: College of Computer and Communication Engineering, Dalian Jiaotong University, Dalian 116028, China.
  3. Shijie Jia: College of Computer and Communication Engineering, Dalian Jiaotong University, Dalian 116028, China.

Abstract

Performing low-latency, high-precision object detection on unmanned aerial vehicles (UAVs) equipped with vision sensors is of significant importance. However, the limited computing resources of embedded UAV devices make it difficult to balance accuracy and speed, particularly when analyzing high-resolution remote sensing images. The challenge is most pronounced in scenes with numerous small objects, intricate backgrounds, and occluded overlaps. To address these issues, we introduce the Drone-DETR model, which is based on RT-DETR. To overcome the difficulties of detecting small objects and to reduce the redundant computation caused by complex backgrounds in ultra-wide-angle images, we propose the Effective Small Object Detection Network (ESDNet), a lightweight architecture that preserves detailed information about small objects while reducing redundant computation. Furthermore, we introduce the Enhanced Dual-Path Feature Fusion Attention Module (EDF-FAM) within the neck network; this module is specifically designed to strengthen the network's handling of multi-scale objects, and a dynamic competitive learning strategy enables the model to fuse multi-scale features efficiently. Additionally, we incorporate the P2 shallow feature layer from ESDNet into the neck network to improve the fusion of small-object features and thereby raise small-object detection accuracy. Experimental results show that Drone-DETR achieves an mAP of 53.9% with only 28.7 million parameters on the VisDrone2019 dataset, an 8.1% improvement over RT-DETR-R18.

Grants

  1. LJKMZ20220826/Scientific Research Project of Liaoning Provincial Department of Education in China
