https://doi.org/

Data for: Metamorphic Testing of Machine Learning and Conceptual Hydrologic Models

Predicting the response of hydrologic systems to modified driving forces, beyond patterns that have occurred in the past, is of high importance for estimating climate change impacts or the effect of management measures. Such predictions require a model, but the impossibility of testing them against observed data makes it difficult to estimate their reliability. Metamorphic testing offers a methodology for assessing models beyond validation with real data. It consists of defining input changes for which the expected responses are assumed to be known at least qualitatively, and testing model behavior for consistency with these expectations. To increase the information gained and reduce the subjectivity of this approach, we extend the methodology to a multi-model approach and include a sensitivity analysis of the predictions to training or calibration options. This allows us to quantitatively analyse differences in predictions between model structures and calibration options, in addition to the qualitative test against the expectations. In our case study, we apply this approach to selected conceptual and machine learning hydrological models calibrated to basins from the CAMELS data set. Our results confirm the superiority of the machine learning models over the conceptual hydrologic models regarding the quality of fit during calibration and validation periods. However, we also find that the response of machine learning models to modified inputs can deviate from the expectations, and that the magnitude and even the sign of the response can depend on the training data. In addition, even when all models pass the metamorphic test, the quantitative response can differ between model structures. This demonstrates the importance of this kind of testing, beyond and in addition to the usual calibration-validation analysis, to identify potential problems and stimulate the development of improved models.
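As an illustration of the metamorphic-testing idea described in the abstract, the following is a minimal sketch, not taken from the dataset itself: the model interface, the precipitation scaling factor, the toy bucket model, and the monotonicity expectation (more precipitation should not decrease long-term streamflow) are all assumptions made for this example.

```python
import numpy as np

def metamorphic_test(model, precip, temp, scale=1.2):
    """Metamorphic test: scaling precipitation up should not decrease
    simulated long-term streamflow (a qualitative expectation)."""
    q_base = model(precip, temp)         # baseline simulation
    q_mod = model(precip * scale, temp)  # modified driving force
    # Compare aggregate responses rather than single time steps,
    # since the expectation is only qualitative.
    return np.mean(q_mod) >= np.mean(q_base)

def toy_bucket_model(precip, temp, k=0.1):
    """Hypothetical linear bucket standing in for a calibrated
    conceptual or machine learning hydrologic model."""
    storage, q = 0.0, []
    for p in precip:
        storage += p
        outflow = k * storage
        storage -= outflow
        q.append(outflow)
    return np.array(q)

rng = np.random.default_rng(0)
precip = rng.gamma(2.0, 2.0, size=365)  # synthetic daily precipitation
temp = np.full(365, 10.0)               # unused by the toy model
assert metamorphic_test(toy_bucket_model, precip, temp)
```

In the study's multi-model extension, such a test would be repeated across model structures and across training or calibration options, so that both qualitative failures and quantitative disagreements between models become visible.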

Metadata

Author
  • Reichert, Peter
  • Ma, Kai
  • Höge, Marvin
  • Fenicia, Fabrizio
  • Baity-Jesi, Marco
  • Feng, Dapeng
  • Shen, Chaopeng
Curator Reichert, Peter
Contact Reichert, Peter <Peter.Reichert@eawag.ch>