Publication

Rapid Computer Vision-Aided Disaster Response via Fusion of Multiresolution, Multisensor, and Multitemporal Satellite Imagery

Tim G. J. Rudner; Marc Rußwurm; Jakub Fil; Ramona Pelich; Benjamin Bischke; Veronika Kopacková; Piotr Bilinski
In: First Workshop on AI for Social Good, NeurIPS 2018, December 8, Montreal, Canada. Curran Associates, Inc., 2018.

Abstract

Natural disasters can cause loss of life and substantial property damage. Moreover, the economic ramifications of disaster damage disproportionately impact the most vulnerable members of society. In this paper, we propose Multi3Net, a novel approach for rapid and accurate disaster damage segmentation that fuses multiresolution, multisensor, and multitemporal satellite imagery in a convolutional neural network. In our method, a segmentation map can be produced as soon as a single satellite image has been acquired and subsequently improved as additional imagery becomes available. This reduces the time needed to generate satellite imagery-based disaster damage maps, enabling first responders and local authorities to make swift and well-informed decisions when responding to disaster events. We demonstrate the performance and usefulness of our approach for earthquake and flood events. To encourage future research into image fusion for disaster relief, we release the first open-source dataset of fully preprocessed and labeled multiresolution, multispectral, and multitemporal satellite images of disaster sites, along with our source code, at https://github.com/FrontierDevelopmentLab/multi3net.
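The incremental-fusion idea described above — producing a damage map from whichever sensor streams have arrived and refining it as more imagery comes in — can be illustrated with a toy sketch. This is not the authors' Multi3Net implementation: the mean filter stands in for a learned per-sensor CNN encoder, and averaging stands in for the network's learned fusion; all function names here are hypothetical.

```python
import numpy as np

def extract_features(image, kernel_size=3):
    # Toy per-sensor "encoder": a mean filter standing in for a learned
    # convolutional branch (hypothetical stand-in, not the paper's model).
    h, w = image.shape
    pad = kernel_size // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.zeros((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + kernel_size, j:j + kernel_size].mean()
    return out

def fuse_and_segment(available_images, threshold=0.5):
    # Fuse whatever sensor streams are currently available by averaging
    # their feature maps, then threshold to obtain a binary damage mask.
    feats = [extract_features(img) for img in available_images.values()]
    fused = np.mean(feats, axis=0)
    return (fused > threshold).astype(np.uint8)

rng = np.random.default_rng(0)

# A map can already be produced from a single acquisition (e.g. SAR)...
sar = rng.random((8, 8))
mask_early = fuse_and_segment({"sar": sar})

# ...and refined once further imagery (e.g. optical) becomes available.
optical = rng.random((8, 8))
mask_refined = fuse_and_segment({"sar": sar, "optical": optical})
```

The key design point mirrored here is that each sensor has its own feature branch, so inference degrades gracefully to whatever subset of inputs exists at a given moment rather than requiring all modalities up front.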