How Idealizations Provide Understanding
Forthcoming: In S. R. Grimm, C. Baumberger, and S. Ammon (eds.), Explaining Understanding: New Essays in Epistemology and the Philosophy of Science. Routledge, New York.
Abstract: How can a model that stops short of representing the whole truth about the causal production of a phenomenon help us to understand the phenomenon? I answer this question from the perspective of what I call the simple view of understanding, on which to understand a phenomenon is to grasp a correct explanation of the phenomenon. Idealizations, I have argued in previous work, flag factors that are causally relevant but explanatorily irrelevant to the phenomena to be explained. Though useful to the would-be understander, such flagging is only a first step. Are there further, more advanced ways in which idealized models aid understanding? Yes, I propose: the manipulation of idealized models can provide considerable insight into the reasons that some causal factors are difference-makers and others are not, which helps the understander to grasp the nature of explanatory connections and so to better grasp the explanation itself.