Technical Papers
Dec 3, 2020

Detection and Semantic Segmentation of Disaster Damage in UAV Footage

Publication: Journal of Computing in Civil Engineering
Volume 35, Issue 2

Abstract

In the aftermath of large-scale disasters, such as hurricanes, floods, or earthquakes, preliminary damage assessment (PDA) is carried out to determine the impact and magnitude of damage and meet the needs of affected individuals, businesses, and communities. Traditionally, site evaluation and consensus-based assessment techniques are used to estimate the extent of the damage. More recently, given their low cost, ease of operation, and ability to be deployed on-demand, unmanned aerial vehicles (UAVs) are increasingly used for disaster response and mitigation. However, the resulting large volume of visual data collected by and shared among first responders and volunteer groups is not used effectively because current practices of processing such data are heavily human-dependent, extremely resource-intensive, and significantly slow compared to the fast-evolving nature and progression of disaster impact. This paper contributes to the core body of knowledge by presenting a fully annotated dataset (with the object classes people, flooded area, and damaged and undamaged building roof, car, debris, vegetation, road, and boat) and a host of convolutional neural network (CNN) models for detecting and segmenting critical objects in the aerial footage of disaster sites. For best results, two CNN-based image segmentation architectures, namely, Mask R-CNN and Pyramid Scene Parsing Network (PSPNet), are adopted (through transfer learning), trained, validated, and tested on annotated videos to detect countable and bulk objects, respectively. The paper further introduces a targeted data augmentation technique to preserve data balance, as well as a data-driven approach to splitting highly mismatched classes for better model performance. Through these improvements, the best-performing Mask R-CNN model generates pixel-level segmentations of countable objects with a 51.54% mean average precision (mAP). Additionally, the best-performing PSPNet models achieve a mean intersection over union (mIoU) as high as 32.17% and an accuracy as high as 77.01% on bulk objects.
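The mIoU and accuracy figures reported above are standard semantic-segmentation metrics that can be computed from a per-class confusion matrix accumulated over all test pixels. The sketch below is a minimal NumPy illustration of these metrics as commonly defined; it is not the authors' code, and all function names are hypothetical.

```python
import numpy as np

def confusion_matrix(y_true, y_pred, num_classes):
    """Accumulate a pixel-level confusion matrix: rows are ground-truth
    classes, columns are predicted classes."""
    mask = (y_true >= 0) & (y_true < num_classes)
    return np.bincount(
        num_classes * y_true[mask].astype(int) + y_pred[mask].astype(int),
        minlength=num_classes ** 2,
    ).reshape(num_classes, num_classes)

def miou_and_accuracy(cm):
    """Mean intersection over union (averaged across classes) and overall
    pixel accuracy, derived from a confusion matrix."""
    intersection = np.diag(cm)
    union = cm.sum(axis=0) + cm.sum(axis=1) - intersection
    iou = intersection / np.maximum(union, 1)  # classes absent from both maps score 0
    accuracy = intersection.sum() / np.maximum(cm.sum(), 1)
    return iou.mean(), accuracy
```

For example, with ground truth `[0, 0, 1, 1]` and prediction `[0, 1, 1, 1]`, class 0 has IoU 1/2 and class 1 has IoU 2/3, giving an mIoU of about 58.3% and a pixel accuracy of 75%.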


Data Availability Statement

All data, models, or code that support the findings of this study are available from the corresponding author upon reasonable request.

Acknowledgments

The authors would like to thank Texas A&M University’s High Performance Research Computing (HPRC) for providing the necessary computing infrastructure for model training. Any opinions, findings, conclusions, and recommendations expressed in this paper are those of the authors and do not necessarily represent the views of the HPRC.

References

American Red Cross. 2019. “American Red Cross damage assessment.” Accessed June 15, 2020. http://www.resiliencenw.org/2012files/LongTermRecovery/DisasterAssessmentWorkshop.pdf.
Badrinarayanan, V., A. Kendall, and R. Cipolla. 2017. “SegNet: A deep convolutional encoder-decoder architecture for image segmentation.” IEEE Trans. Pattern Anal. Mach. Intell. 39 (12): 2481–2495. https://doi.org/10.1109/TPAMI.2016.2644615.
Bhatt, P., S. Sarangi, and S. Pappula. 2018. “Coarse clustering and classification of images with CNN features for participatory sensing in agriculture.” In Proc., Int. Conf. on Pattern Recognition Applications and Methods, 488–495. Funchal, Portugal: Springer. https://doi.org/10.5220/0006648504880495.
Birds Eye UAS. 2018. “Hurricane Michael, St Joe Beach FL.” YouTube. Accessed April 1, 2020. https://www.youtube.com/watch?v=9nLpGrpCP3o.
Chen, M., and J. Li. 2019. “Deep convolutional neural network application on rooftop detection for aerial image.” Preprint, submitted October 2019. http://arxiv.org/abs/1910.13509.
Conques, M. 2017. “Hurricane Harvey Houston Halftime.” YouTube. Accessed April 1, 2020. https://www.youtube.com/watch?v=kzqqVoDKY8c.
Cordts, M., M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele. 2016. “The cityscapes dataset for semantic urban scene understanding.” In Proc., IEEE Conf. on Computer Vision and Pattern Recognition, 3213–3223. New York: IEEE. https://doi.org/10.1109/cvpr.2016.350.
Cova, T. J. 1999. “GIS in emergency management.” In Geographical Information systems: Principles, techniques, applications, and management, edited by P. A. Longley, M. F. Goodchild, D. J. Maguire, and D. W. Rhind, 845–858. New York: John Wiley & Sons.
CRED and UNISDR (Centre for Research on the Epidemiology of Disasters and United Nations Office for Disaster Risk Reduction). 2018. “Economic losses, poverty and disasters: 1998–2017.” Accessed April 21, 2020. https://www.cred.be/sites/default/files/CRED_Economic_Losses_10oct.pdf.
Daily, M., S. Medasani, R. Behringer, and M. Trivedi. 2017. “Self-driving cars.” Computer 50 (12): 18–23. https://doi.org/10.1109/MC.2017.4451204.
Everingham, M., L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. 2010. “The PASCAL visual object classes (VOC) challenge.” Int. J. Comput. Vis. 88 (2): 303–338. https://doi.org/10.1007/s11263-009-0275-4.
FEMA. 2016. “Damage assessment operations manual.” Accessed April 21, 2020. https://www.fema.gov/media-library-data/1459972926996-a31eb90a2741e86699ef34ce2069663a/PDAManualFinal6.pdf.
FEMA. 2019. “National urban search & rescue (US&R) response system: Rescue field operations guide.” Accessed April 21, 2020. https://www.fema.gov/pdf/emergency/usr/usr_23_20080205_rog.pdf.
FEMA. 2020. “Preliminary damage assessment guide.” Accessed June 15, 2020. https://www.fema.gov/media-library/assets/documents/109040.
Foreman, J. 2017. “Hurricane Harvey in Friendswood.” YouTube. Accessed April 1, 2020. https://www.youtube.com/watch?v=Wai_b9pVcZE.
Gao, H., G. Barbier, and R. Goolsby. 2011. “Harnessing the crowdsourcing power of social media for disaster relief.” IEEE Intell. Syst. 26 (3): 10–14. https://doi.org/10.1109/MIS.2011.52.
Ghaffarian, S., and N. Kerle. 2019. “Towards post-disaster debris identification for precise damage and recovery assessments from UAV and satellite images.” Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. XLII-2 (W13): 297–302. https://doi.org/10.5194/isprs-archives-XLII-2-W13-297-2019.
Ghaffarian, S., N. Kerle, E. Pasolli, and J. Jokar Arsanjani. 2019. “Post-disaster building database updating using automated deep learning: An integration of pre-disaster OpenStreetMap and multi-temporal satellite data.” Remote Sens. 11 (20): 2427. https://doi.org/10.3390/rs11202427.
Girshick, R. 2015. “Fast R-CNN.” In Proc., IEEE Int. Conf. on Computer Vision, 1440–1448. New York: IEEE. https://doi.org/10.1109/iccv.2015.169.
Girshick, R., J. Donahue, T. Darrell, and J. Malik. 2014. “Rich feature hierarchies for accurate object detection and semantic segmentation.” In Proc., IEEE Conf. on Computer Vision and Pattern Recognition, 580–587. New York: IEEE. https://doi.org/10.1109/cvpr.2014.81.
Haixiang, G., L. Yijing, J. Shang, G. Mingyun, H. Yuanyue, and G. Bing. 2017. “Learning from class-imbalanced data: Review of methods and applications.” Expert Syst. Appl. 73 (May): 220–239. https://doi.org/10.1016/j.eswa.2016.12.035.
He, K., G. Gkioxari, P. Dollár, and R. Girshick. 2018. “Mask R-CNN.” In Proc., IEEE Int. Conf. on Computer Vision, 2980–2988. New York: IEEE. https://doi.org/10.1109/iccv.2017.322.
He, K., X. Zhang, S. Ren, and J. Sun. 2016. “Deep residual learning for image recognition.” In Proc., IEEE Conf. on Computer Vision and Pattern Recognition, 770–778. New York: IEEE. https://doi.org/10.1109/cvpr.2016.90.
Hoque, M. A., S. Phinn, C. Roelfsema, and I. Childs. 2017. “Tropical cyclone disaster management using remote sensing and spatial analysis: A review.” Int. J. Disaster Risk Reduct. 22 (Jun): 345–354. https://doi.org/10.1016/j.ijdrr.2017.02.008.
Kerle, N., F. Nex, M. Gerke, D. Duarte, and A. Vetrivel. 2020. “UAV-based structural damage mapping: A review.” ISPRS Int. J. Geo-Inf. 9 (1): 14. https://doi.org/10.3390/ijgi9010014.
Kirkland, C. 2017. “Bear Creek Harvey flood 2017.” YouTube. Accessed April 1, 2020. https://www.youtube.com/watch?v=VMsvNz-bzKk.
Krizhevsky, A., I. Sutskever, and G. E. Hinton. 2017. “ImageNet classification with deep convolutional neural networks.” Commun. Assoc. Comput. Mach. 60 (6): 84–90. https://doi.org/10.1145/3065386.
Kruijff, G. M., F. Pirri, M. Gianni, P. Papadakis, M. Pizzoli, A. Sinha, and S. Corrao. 2012. “Rescue robots at earthquake-hit Mirandola, Italy: A field report.” In Proc., Int. Symp. on Safety, Security, and Rescue Robotics (SSRR). New York: IEEE. https://doi.org/10.1109/ssrr.2012.6523866.
LabelBox. 2019. “Labelbox.” Accessed April 21, 2020. https://labelbox.com.
LeCun, Y., L. Bottou, Y. Bengio, and P. Haffner. 1998. “Gradient-based learning applied to document recognition.” Proc. IEEE 86 (11): 2278–2324. https://doi.org/10.1109/5.726791.
Li, X., D. Caragea, H. Zhang, and M. Imran. 2018. “Localizing and quantifying damage in social media images.” In Proc., 2018 IEEE/ACM Int. Conf. on Advances in Social Networks Analysis and Mining (ASONAM), 194–201. New York: IEEE. https://doi.org/10.1109/asonam.2018.8508298.
Lin, T.-Y., P. Goyal, R. Girshick, K. He, and P. Dollár. 2017. “Focal loss for dense object detection.” In Proc., IEEE Int. Conf. on Computer Vision, 2999–3007. New York: IEEE. https://doi.org/10.1109/iccv.2017.324.
Lin, T.-Y., M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. 2014. “Microsoft COCO: Common objects in context.” In Proc., European Conf. on Computer Vision, 740–755. Zurich, Switzerland: Springer. https://doi.org/10.1007/978-3-319-10602-1_48.
Liu, T., A. Abd-Elrahman, J. Morton, and V. L. Wilhelm. 2018. “Comparing fully convolutional networks, random forest, support vector machine, and patch-based deep convolutional neural networks for object-based wetland mapping using images from small unmanned aircraft system.” GISci. Remote Sens. 55 (2): 243–264. https://doi.org/10.1080/15481603.2018.1426091.
Liu, Y., and L. Wu. 2016. “Geological disaster recognition on optical remote sensing images using deep learning.” Procedia Comput. Sci. 91: 566–575. https://doi.org/10.1016/j.procs.2016.07.144.
Meissner, A., T. Luckenbach, T. Risse, T. Kirste, and H. Kirchner. 2002. “Design challenges for an integrated disaster management communication and information system.” In Proc., Workshop on Disaster Recovery Networks (DIREN 2002). New York: IEEE.
Neumann, B., A. T. Vafeidis, J. Zimmermann, and R. J. Nicholls. 2015. “Future coastal population growth and exposure to sea-level rise and coastal flooding—A global assessment.” PLoS ONE 10 (3): e0118571. https://doi.org/10.1371/journal.pone.0118571.
Oquab, M., L. Bottou, I. Laptev, and J. Sivic. 2014. “Learning and transferring mid-level image representations using convolutional neural networks.” In Proc., IEEE Conf. on Computer Vision and Pattern Recognition, 1717–1724. New York: IEEE. https://doi.org/10.1109/cvpr.2014.222.
Pi, Y., N. D. Nath, and A. H. Behzadan. 2020a. “Convolutional neural networks for object detection in aerial imagery for disaster response and recovery.” Adv. Eng. Inf. 43 (Jan): 101009. https://doi.org/10.1016/j.aei.2019.101009.
Pi, Y., N. D. Nath, and A. H. Behzadan. 2020b. “Disaster impact information retrieval using deep learning object detection in crowdsourced drone footage.” In Proc., Int. Workshop on Intelligent Computing in Engineering, 134–143. Plymouth, UK: European Group for Intelligent Computing in Engineering.
Rahman, M. A., and Y. Wang. 2016. “Optimizing intersection-over-union in deep neural networks for image segmentation.” In Proc., Int. Symp. on Visual Computing, 234–244. Cham, Switzerland: Springer. https://doi.org/10.1007/978-3-319-50835-1_22.
Rahnemoonfar, M., R. Murphy, M. V. Miquel, D. Dobbs, and A. Adams. 2018. “Flooded area detection from UAV images based on densely connected recurrent neural networks.” In Proc., IEEE Int. Geoscience and Remote Sensing Symp., 1788–1791. New York: IEEE. https://doi.org/10.1109/igarss.2018.8517946.
Rao, Y., W. Liu, J. Pu, J. Deng, and Q. Wang. 2018. “Roads detection of aerial image with FCN-CRF model.” In Proc., 2018 IEEE Visual Communications and Image Processing (VCIP), 1–4. New York: IEEE. https://doi.org/10.1109/vcip.2018.8698718.
Redmon, J., S. Divvala, R. Girshick, and A. Farhadi. 2016. “You only look once: Unified, real-time object detection.” In Proc., IEEE Conf. on Computer Vision and Pattern Recognition, 779–788. New York: IEEE. https://doi.org/10.1109/cvpr.2016.91.
Ren, S., K. He, R. Girshick, and J. Sun. 2017. “Faster R-CNN: Towards real-time object detection with region proposal networks.” IEEE Trans. Pattern Anal. Mach. Intell. 39 (6): 1137–1149. https://doi.org/10.1109/tpami.2016.2577031.
Ronneberger, O., P. Fischer, and T. Brox. 2015. “U-Net: Convolutional networks for biomedical image segmentation.” In Proc., Int. Conf. on Medical Image Computing and Computer-Assisted Intervention, 234–241. Cham, Switzerland: Springer. https://doi.org/10.1007/978-3-319-24574-4_28.
Rouhi, R., M. Jafari, S. Kasaei, and P. Keshavarzian. 2015. “Benign and malignant breast tumors classification based on region growing and CNN segmentation.” Expert Syst. Appl. 42 (3): 990–1002. https://doi.org/10.1016/j.eswa.2014.09.020.
Seinfeld, J. H., and S. N. Pandis. 2016. Atmospheric chemistry and physics: From air pollution to climate change. Hoboken, NJ: Wiley. https://doi.org/10.5860/choice.35-5721.
Simonyan, K., and A. Zisserman. 2014. “Very deep convolutional networks for large-scale image recognition.” Preprint, submitted September 4, 2014. http://arxiv.org/abs/1409.1556.
Socher, R., J. Pennington, E. H. Huang, A. Y. Ng, and C. D. Manning. 2011. “Semi-supervised recursive autoencoders for predicting sentiment distributions.” In Proc., Conf. on Empirical Methods in Natural Language Processing, 151–161. Edinburgh, UK: Association for Computational Linguistics.
The Scoop. 2018. “Hurricane Florence brings total loss flooding to North and South Carolina! Bodies of water overflow!” YouTube. Accessed April 1, 2020. https://www.youtube.com/watch?v=sleJJl3f1YU.
Tomaszewski, B. 2014. Geographic information systems (GIS) for disaster management. Boca Raton, FL: CRC Press.
TRSET (Truth Revealed the Secrets of the End Times). 2018. “Panama City Florida Oct. 11 2018 Hurricane Michael part 1.” YouTube. Accessed April 1, 2020. https://www.youtube.com/watch?v=ULbTH3U2qgI.
Uno, K., and K. Kashiyama. 2008. “Development of simulation system for the disaster evacuation based on multi-agent model using GIS.” Supplement, Tsinghua Sci. Technol. 13 (S1): 348–353. https://doi.org/10.1016/S1007-0214(08)70173-1.
Wang, T., Y. Chen, M. Qiao, and H. Snoussi. 2018. “A fast and robust convolutional neural network-based defect detection model in product quality control.” Int. J. Adv. Manuf. Technol. 94 (9–12): 3465–3471. https://doi.org/10.1007/s00170-017-0882-0.
Wilson, S. G., and T. R. Fischetti. 2010. Coastal population trends in the United States: 1960 to 2008. Suitland, MD: US Census Bureau.
Wisner, B. 2006. “A review of the role of education and knowledge in disaster risk reduction.” Accessed July 24, 2020. http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=518F521A612DD6CF807B6558DE23E595?doi=10.1.1.205.3258&rep=rep1&type=pdf.
Yang, K., K. Wang, L. M. Bergasa, E. Romera, W. Hu, D. Sun, J. Sun, R. Cheng, T. Chen, and E. López. 2018. “Unifying terrain awareness for the visually impaired through real-time semantic segmentation.” Sensors 18 (5): 1506. https://doi.org/10.3390/s18051506.
Yu, M., C. Yang, and Y. Li. 2018. “Big data in natural disaster management: A review.” Geosciences 8 (5): 165. https://doi.org/10.3390/geosciences8050165.
Zerger, A., and D. I. Smith. 2003. “Impediments to using GIS for real-time disaster decision support.” Comput. Environ. Urban Syst. 27 (2): 123–141. https://doi.org/10.1016/S0198-9715(01)00021-7.
Zhao, H., J. Shi, X. Qi, X. Wang, and J. Jia. 2017. “Pyramid scene parsing network.” In Proc., IEEE Conf. on Computer Vision and Pattern Recognition, 6230–6239. New York: IEEE. https://doi.org/10.1109/cvpr.2017.660.

Information & Authors


Published In

Journal of Computing in Civil Engineering
Volume 35, Issue 2, March 2021

History

Received: Apr 22, 2020
Accepted: Aug 31, 2020
Published online: Dec 3, 2020
Published in print: Mar 1, 2021
Discussion open until: May 3, 2021


Authors

Affiliations

Yalong Pi
Ph.D. Candidate, Dept. of Construction Science, Texas A&M Univ., 3137 TAMU, College Station, TX 77843. Email: [email protected]
Nipun D. Nath, S.M.ASCE [email protected]
Ph.D. Student, Zachry Dept. of Civil Engineering, Texas A&M Univ., 3136 TAMU, College Station, TX 77843. Email: [email protected]
Amir H. Behzadan, M.ASCE [email protected]
Clark Construction Endowed Associate Professor, Dept. of Construction Science, Texas A&M Univ., 3137 TAMU, College Station, TX 77843 (corresponding author). Email: [email protected]


Cited by

  • A Review of Unmanned Aerial Vehicle Applications in Construction Management: 2016–2021, Standards, 10.3390/standards3020009, 3, 2, (95-109), (2023).
  • Sensitivity Analysis of Parameters of U-Net Model for Semantic Segmentation of Silt Storage Dams from Remote Sensing Images, Canadian Journal of Remote Sensing, 10.1080/07038992.2023.2178834, 49, 1, (2023).
  • Extracting Worker Unsafe Behaviors from Construction Images Using Image Captioning with Deep Learning–Based Attention Mechanism, Journal of Construction Engineering and Management, 10.1061/JCEMD4.COENG-12096, 149, 2, (2023).
  • Post-Flood Analysis for Damage and Restoration Assessment Using Drone Imagery, Remote Sensing, 10.3390/rs14194952, 14, 19, (4952), (2022).
  • 2022 International Conference on Business Analytics for Technology and Security (ICBATS), 10.1109/ICBATS54253.2022.9759056, (1-6), (2022).
  • Computer Vision and Multi-Object Tracking for Traffic Measurement from Campus Monitoring Cameras, Computing in Civil Engineering 2021, 10.1061/9780784483893.117, (950-958), (2022).
  • Analysis of Flight Parameters on UAV Semantic Segmentation Performance for Highway Infrastructure Monitoring, Computing in Civil Engineering 2021, 10.1061/9780784483893.109, (885-893), (2022).
  • Unmanned Aerial Systems (UASs) in the AECO Industry, Journal of Computing in Civil Engineering, 10.1061/(ASCE)CP.1943-5487.0000996, 36, 1, (2022).
  • The role of unmanned aerial vehicle and related technologies in disasters, Remote Sensing Applications: Society and Environment, 10.1016/j.rsase.2022.100873, 28, (100873), (2022).
  • A survey of disaster management and SAR operations using sensors and supporting techniques, International Journal of Disaster Risk Reduction, 10.1016/j.ijdrr.2022.103295, 82, (103295), (2022).

