Assessing the Robustness of Deep Learning Streamflow Models Under Climate Change
Abstract
Long Short-Term Memory (LSTM) networks provide the most accurate rainfall-runoff predictions to date, but their reliability under climate change is not well understood. We explore the robustness of these models under climate nonstationarity by creating train and test data splits designed to simulate climate bias. By training on forcing data from hydrological years of high (low) aridity and testing on data from hydrological years of low (high) aridity, we can begin to quantify model performance and the relative robustness of that performance under climate nonstationarity. We benchmarked against a calibrated conceptual model (the Sacramento Soil Moisture Accounting model) and a calibrated process-based model (the NOAA National Water Model) and found that LSTMs were generally more accurate than both, even when trained on climatologically biased data splits. The process-based model did not show as large a performance gap as the conceptual and deep learning (DL) models; however, (i) this model was not calibrated on a climate-biased data split, and (ii) LSTMs always outperformed the process-based benchmark, even when the LSTM training data had climatological bias. We find that although all hydrologic models reported here degrade under nonstationarity, DL models demonstrate greater robustness. We also tested the hypothesis that dynamic climate attributes as inputs to the LSTM would improve performance under climate nonstationarity. We found no predictive value in adding dynamic, as opposed to static, climate attribute inputs.
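To make the splitting procedure concrete, the sketch below shows one plausible way to partition hydrological years by an aridity index (PET/P) and train on one half while testing on the other. This is an illustration under assumed conventions, not the authors' actual pipeline: the DataFrame layout and the column names `hydro_year`, `precip`, and `pet` are hypothetical.

```python
# Minimal sketch of an aridity-based train/test split, assuming daily
# forcing data in a pandas DataFrame with hypothetical columns
# `hydro_year`, `precip` (P), and `pet` (PET).
import pandas as pd

def split_by_aridity(forcings: pd.DataFrame, train_on_arid: bool = True):
    """Split hydrological years by aridity (mean PET / mean P) and
    return (train, test) DataFrames drawn from opposite halves of the
    aridity distribution, simulating a climatologically biased split."""
    # Aridity index for each hydrological year.
    aridity = forcings.groupby("hydro_year").apply(
        lambda yr: yr["pet"].mean() / yr["precip"].mean()
    )
    median = aridity.median()
    arid_years = aridity[aridity >= median].index
    humid_years = aridity[aridity < median].index

    # Train on high-aridity years and test on low-aridity years,
    # or vice versa, depending on the flag.
    train_years = arid_years if train_on_arid else humid_years
    test_years = humid_years if train_on_arid else arid_years

    train = forcings[forcings["hydro_year"].isin(train_years)]
    test = forcings[forcings["hydro_year"].isin(test_years)]
    return train, test
```

Splitting at the median is only one choice; a more extreme bias could be simulated by training on, say, the driest third of years and testing on the wettest third.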