Landslides pose severe threats to infrastructure, economies, and human lives, necessitating accurate detection and predictive mapping across diverse geographic regions. With advances in deep learning and remote sensing, automated landslide detection has become increasingly effective. This study presents a comprehensive approach that integrates multi-source satellite imagery and deep learning models to enhance landslide identification and prediction. We leverage Sentinel-2 multispectral data together with ALOS PALSAR-derived slope and Digital Elevation Model (DEM) layers to capture the environmental features that influence landslide occurrence. Geospatial analysis techniques are employed to assess how terrain characteristics, vegetation cover, and rainfall affect detection accuracy. In addition, we evaluate multiple state-of-the-art segmentation models, including the classic U-Net, DeepLabV3+, and U-Net variants with ResNet and other deep encoders, to determine their effectiveness for landslide detection. The proposed framework contributes to the development of reliable early warning systems, improved disaster risk management, and sustainable land-use planning. Our findings provide valuable insights into the potential of deep learning and multi-source remote sensing for building robust, scalable, and transferable landslide prediction models.
Index Terms - Image Processing, Machine Learning, Deep Learning, Computer Vision, Remote Sensing.
The selected study areas represent diverse geographic and climatic conditions. Their locations are depicted on a global landslide susceptibility map generated using multiple explanatory variables such as slope degree, forest loss, geology, road networks, and fault lines.
To ensure high-quality landslide annotations, we employed a two-step workflow:
The Landslide4Sense benchmark dataset consists of 128×128-pixel patches, each containing 14 data layers. The first 12 bands are Sentinel-2 multispectral data, while bands 13 and 14 are the Digital Elevation Model (DEM) and slope derived from ALOS PALSAR. Each patch is labeled at the pixel level, with ground truth polygons outlined in red to indicate landslide areas.
Figure: Visualization of each layer within a 128×128 patch of the generated landslide dataset. The first 12 bands show Sentinel-2 multispectral data, while bands 13 and 14 contain the DEM and slope from ALOS PALSAR. The last column shows the ground truth: accurately labeled patches with red polygons marking the landslide class.
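To make the patch structure concrete, the sketch below loads a single patch and plots all 14 layers alongside its label. It assumes the HDF5 layout of the public Landslide4Sense release (an `img` dataset in each image file and a `mask` dataset in each label file); the file paths are hypothetical placeholders.

```python
# Minimal sketch: load one Landslide4Sense patch and display its 14 layers.
# Assumes the public HDF5 layout ("img" key holding a 128x128x14 array and a
# matching mask file with a "mask" key); adjust keys/paths to your copy.
import h5py
import matplotlib.pyplot as plt

IMG_PATH = "TrainData/img/image_1.h5"    # hypothetical path
MASK_PATH = "TrainData/mask/mask_1.h5"   # hypothetical path

with h5py.File(IMG_PATH, "r") as f:
    patch = f["img"][:]                  # (128, 128, 14): 12 Sentinel-2 bands + DEM + slope
with h5py.File(MASK_PATH, "r") as f:
    mask = f["mask"][:]                  # (128, 128): 1 = landslide, 0 = background

fig, axes = plt.subplots(3, 5, figsize=(15, 9))
for band in range(patch.shape[-1]):
    ax = axes.flat[band]
    ax.imshow(patch[..., band], cmap="gray")
    ax.set_title(f"Band {band + 1}")
    ax.axis("off")
axes.flat[14].imshow(mask, cmap="Reds")  # last panel: ground-truth label
axes.flat[14].set_title("Label")
axes.flat[14].axis("off")
plt.tight_layout()
plt.show()
```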
The 14 layers in the Landslide4Sense dataset:
- Bands 1-12: Sentinel-2 multispectral bands
- Band 13: Digital Elevation Model (DEM) derived from ALOS PALSAR
- Band 14: Slope derived from ALOS PALSAR
Dataset Statistics:
Each patch contains pixel-wise labels indicating landslide and non-landslide areas. The dataset exhibits significant variability in landslide shape, size, distribution, and frequency across study areas. Sentinel-2 bands 4 and 5 show the highest spectral differences between landslide and non-landslide areas.
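The band-level contrast noted above can be illustrated with a simple per-band statistic. The sketch below computes the absolute difference between landslide and non-landslide class means for a single patch, assuming `patch` and `mask` are loaded as in the earlier sketch; in practice this comparison would be aggregated over the whole dataset.

```python
# One simple way to quantify per-band spectral separability: the absolute
# difference of class means, computed here for a single patch. Assumes `patch`
# (128x128x14) and `mask` (128x128) from the previous sketch, and that the
# patch contains at least one landslide pixel.
import numpy as np

def band_separation(patch: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Absolute difference of class means per band (higher = more separable)."""
    landslide = patch[mask == 1]        # (n_landslide_pixels, 14)
    background = patch[mask == 0]       # (n_background_pixels, 14)
    return np.abs(landslide.mean(axis=0) - background.mean(axis=0))

separation = band_separation(patch.astype(np.float32), mask)
for band, diff in enumerate(separation[:12], start=1):
    print(f"Sentinel-2 band {band:2d}: |mean difference| = {diff:.4f}")
```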
Our results demonstrate that ResNet34, VGG-16, and EfficientNet-B0 achieved the highest F1 Scores, indicating superior performance in distinguishing landslide-prone areas from non-landslide regions. The ResNet34-based U-Net model attained the best balance between precision and recall, achieving an F1 Score of 0.7470, making it the most reliable among the tested architectures. Notably, VGG-16 and EfficientNet-B0 also performed well, with F1 Scores of 0.7357 and 0.7341, respectively.
The classic U-Net architecture, while still effective, demonstrated a lower F1 Score of 0.7012, highlighting the advantage of deeper and more advanced feature extraction architectures like ResNet and EfficientNet-B0. SeResNet-50 and SeResNeXt50_32x4d also showcased competitive performance, emphasizing the benefit of integrating Squeeze-and-Excitation modules for better feature representation.
The results indicate that hybrid models leveraging deeper feature extraction mechanisms significantly enhance landslide detection performance compared to standard U-Net. The ability to capture both local and global contextual information plays a crucial role in improving segmentation quality. This study further reinforces the need for models that efficiently integrate multi-scale feature representations for landslide susceptibility mapping in complex terrains.
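As a reference point for how such encoder-decoder variants can be assembled, the sketch below builds a U-Net with a ResNet34 encoder over all 14 input layers using the segmentation_models_pytorch library. The loss function, random initialization, and batch shape are illustrative assumptions, not the study's exact training configuration.

```python
# Minimal sketch of one model configuration: a U-Net decoder with a ResNet34
# encoder accepting all 14 input layers. Hyperparameters are illustrative.
import torch
import segmentation_models_pytorch as smp

model = smp.Unet(
    encoder_name="resnet34",   # swap for "vgg16", "efficientnet-b0", etc.
    encoder_weights=None,      # or "imagenet" to start from pretrained weights
    in_channels=14,            # 12 Sentinel-2 bands + DEM + slope
    classes=1,                 # binary landslide mask
)

# One dummy forward pass: a batch of four 14-band 128x128 patches.
x = torch.randn(4, 14, 128, 128)
logits = model(x)              # (4, 1, 128, 128)

# Dice loss is a common choice for imbalanced segmentation masks (assumption).
loss_fn = smp.losses.DiceLoss(mode="binary")
target = torch.randint(0, 2, (4, 1, 128, 128)).float()
loss = loss_fn(logits, target)
print(logits.shape, float(loss))
```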
| Models | F1 Score | Precision | Recall |
|---|---|---|---|
| ResNet34 | 0.7470 | 0.7737 | 0.7267 |
| VGG16 | 0.7357 | 0.7650 | 0.7121 |
| EfficientNet-B0 | 0.7341 | 0.7536 | 0.7221 |
| ResNeXt50_32x4d | 0.7330 | 0.7453 | 0.7247 |
| SeResNet-50 | 0.7328 | 0.7826 | 0.6950 |
| DenseNet121 | 0.7290 | 0.7241 | 0.7400 |
| SeResNeXt50_32x4d | 0.7279 | 0.7249 | 0.7350 |
| InceptionV4 | 0.7246 | 0.7631 | 0.6945 |
| InceptionResNetV2 | 0.7151 | 0.7774 | 0.6692 |
| DeepLabV3+ | 0.7141 | 0.7471 | 0.6897 |
| MobileNetV2 | 0.7119 | 0.7000 | 0.7337 |
| U-Net | 0.7012 | 0.7906 | 0.6338 |
| MiT-B1 | 0.6989 | 0.7574 | 0.6596 |
Table: Comparison of performance evaluation metrics of segmentation models tested on the Landslide4Sense dataset.
Figure: Performance metrics for landslide prediction with the baseline U-Net.
Figure: Performance metrics for landslide prediction with ResNet34, VGG16, EfficientNet-B0, ResNeXt50_32x4d, SeResNet-50, DenseNet121, SeResNeXt50_32x4d, InceptionV4, InceptionResNetV2, DeepLabV3+, MobileNetV2, and MiT-B1.
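For completeness, here is a minimal sketch of how the pixel-wise metrics in the table above can be computed from a predicted mask and its ground truth. The 0.5 threshold and per-patch computation are assumptions; the study may aggregate counts over the full test set instead.

```python
# Pixel-wise precision, recall, and F1 for the landslide (positive) class,
# computed from binary prediction and ground-truth arrays.
import numpy as np

def precision_recall_f1(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7):
    """Return (precision, recall, F1) treating landslide pixels as positives."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    f1 = 2 * precision * recall / (precision + recall + eps)
    return precision, recall, f1

# Example with stand-in data; replace with model output and the real label.
probs = np.random.rand(128, 128)            # placeholder predicted probabilities
mask = np.random.rand(128, 128) > 0.9       # placeholder ground truth
p, r, f1 = precision_recall_f1(probs > 0.5, mask)
print(f"precision={p:.4f} recall={r:.4f} f1={f1:.4f}")
```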
The comparative evaluation underscores the strength of ResNet34 and VGG16 as encoders in the U-Net framework: they offer high predictive accuracy while keeping false detections low. Their gain in recall over the baseline U-Net indicates better detection of subtle landslide regions, which is critical for real-world applications where a missed landslide event could have disastrous consequences. By leveraging advanced deep learning architectures, our study demonstrates that multi-source satellite imagery, when processed with optimized encoder-decoder models, can significantly enhance landslide detection accuracy. These findings contribute to the growing field of deep learning applications in geospatial analysis, paving the way for more reliable and scalable landslide prediction systems.
We evaluated various deep learning models for landslide detection using the Landslide4Sense dataset with Sentinel-2 imagery and ALOS PALSAR elevation data. The ResNet34-based U-Net achieved the highest F1 Score of 0.7470 with balanced precision (0.7737) and recall (0.7267). VGG16 and EfficientNet-B0 also performed well with F1 Scores of 0.7357 and 0.7341 respectively. Advanced architectures significantly outperformed the classic U-Net (F1: 0.7012), demonstrating the importance of deeper feature extraction mechanisms for complex geospatial data.
This study demonstrates that hybrid deep learning models with advanced feature extraction significantly outperform traditional U-Net for landslide detection. ResNet34-based U-Net emerged as the most reliable architecture with an F1 Score of 0.7470. Multi-source data integration combining optical imagery with elevation information proves crucial for accurate landslide identification. These findings contribute to developing more reliable disaster risk management and early warning systems for landslide-prone regions.
@article{burange2025landslide,
title={Landslide Detection and Mapping Using Deep Learning Across Multi-Source Satellite Data and Geographic Regions},
author={Burange, Rahul and Shinde, Harsh and Mutyalwar, Omkar},
journal={Available at SSRN 5225437},
year={2025},
doi={10.2139/ssrn.5225437}
}
@article{burange2025landslide,
title={Landslide Detection and Mapping Using Deep Learning Across Multi-Source Satellite Data and Geographic Regions},
author={Burange, Rahul A and Shinde, Harsh K and Mutyalwar, Omkar},
journal={arXiv preprint arXiv:2507.01123},
year={2025}
}
@article{burange2025comprehensive,
title={A Comprehensive Approach to Landslide Detection: Deep Learning and Remote Sensing Integration},
author={Burange, Rahul and Shinde, Harsh and Mutyalwar, Omkar},
year={2025},
publisher={IJARCCE}
}
@article{burange2025exhaustive,
title={An Exhaustive Review on Deep Learning for Advanced Landslide Detection and Prediction from Multi-Source Satellite Imagery},
author={Burange, Rahul and Shinde, Harsh and Mutyalwar, Omkar},
journal={Available at SSRN 5155990},
year={2025},
doi={10.2139/ssrn.5155990},
url={https://ssrn.com/abstract=5155990}
}