
Subtle lessons from the art of model-observation confrontations

Gavin Schmidt, NASA GISS
Talk (Invited)
Since the beginning of climate modeling, there has been a focus on evaluating the skill of models in reproducing key emergent patterns and trends from observations. Despite the fact that (famously) ‘all models are wrong’, they have collectively provided skillful estimates in hindcasts, forecasts, and other out-of-sample tests. Nonetheless, the literature is replete with reported discrepancies between the models and the observations. These discrepancies have often been resolved, though some have persisted for decades. Reconciliation happens through multiple pathways: sometimes through the correction of bugs in the climate model code, updates to the forcing fields, or the addition of new relevant processes. In other cases, the observed trends were found to be erroneous, or (more kindly) did not have sufficient structural uncertainty associated with them. In yet other examples, the method of comparison itself was at fault. History suggests that these resolutions have provided deeper insight into the climate system and improved skill. Each new generation of models, with added processes and greater complexity, brings new challenges (such as the spread in equilibrium climate sensitivity (ECS) in CMIP6) and new observations, but also new opportunities to learn from them.