Can't we all just get along?, Econometrics edition


Some academic fights I understand, like the argument over whether to use sticky prices in DSGE models. Others I have trouble comprehending. One of these is the fight between champions of structural and quasi-experimental econometrics. Angrist and Pischke, the champions of the quasi-experimental approach, waste few opportunities to diss structural work, and the structural folks often fire back. What I don't get is: Why not just do both?

Each approach has its own strengths and unavoidable weaknesses. Francis Diebold explains these in a nerdy way in a recent blog post; I tried to explain them in a non-nerdy way in a Bloomberg View post a year ago.

The strength of the structural approach, relative to the quasi-experimental approach, is that you can make much bigger, bolder predictions. With the quasi-experimental approach, you typically have a linear model, and you estimate the slope of that line around a single point in the space of observables. As we all remember from high school calculus, we can always do that as long as the underlying relationship is differentiable: just take the tangent line at that point.


But as you get farther from that point, extrapolation of the curve becomes less accurate. The curve curves. And just knowing the slope of that tangent line at that one point won't tell you how quickly your linear approximation becomes useless as you move away from that point. 
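To put the same point in symbols (this is just the generic first-order Taylor expansion, nothing specific to any study or model):

\[
f(x) \;=\; \underbrace{f(x_0) + f'(x_0)\,(x - x_0)}_{\text{what the local estimate gives you}} \;+\; \underbrace{R(x)}_{\text{extrapolation error}}
\]

The local study pins down f'(x_0), but it says nothing about R(x), which depends on curvature that data near x_0 never reveal and which generally grows as x moves away from x_0.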

So this means that quasi-experimental methods have limited utility, but we can't really know how limited. Suppose we found out that the minimum wage has a very small effect on jobs when you go from $4.25 to $5.05. How much does that tell us about how bad a $7.50 minimum wage would be? Or a $12.75 minimum wage? In fact, if all we have is a quasi-experimental study, we don't actually know how much it tells us.

Quasi-experimental results come with basically no guide to their own external validity. You have to be Bayesian in order to apply them outside of the exact situation that they studied. You have to say "Well, if going from $4.25 to $5.05 wasn't that bad, I doubt going to $6.15 would be that much worse!" That's a prior.

If you want to believe that your model works far away from the data that you used to validate it, you need to believe in a structural model. That model could be linear or nonlinear, but "structural" basically means that you think it reflects factors that are invariant to conditions not explicitly included in the model. "Structural," in other words, means "the stuff that's really going on."
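A deliberately stylized way to see the contrast (the notation and the functional claim here are mine, invented purely for illustration): a quasi-experimental study of a change from x_0 to x_1 delivers a local slope, while a structural model commits to a relationship whose parameters are supposed to hold everywhere.

\[
\hat{\beta} \;=\; \frac{E[\,y \mid x_1\,] - E[\,y \mid x_0\,]}{x_1 - x_0}
\qquad \text{versus} \qquad
y = g(x;\,\theta), \quad \theta \text{ invariant to policy}
\]

The structural bet is that \theta stays put even when policy pushes x far outside the range the data ever covered; that is exactly the claim the quasi-experimental estimate declines to make.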

The weakness of structural modeling is that good structural models are really, really rare. Most stuff in economics is pretty complicated - there are a lot of ins, a lot of outs, a lot of what-have-you. When you make a structural model you assume most of that complexity away, and you assume that you've correctly specified the parts you leave in. This can often leave you with a totally bullshit fantasy model.

So just test the structural model, and if the data reject it, don't use it, right? Hahahahahahaha. That would kill almost all the models in existence, and no models means no papers means no jobs for econometricians. Also, even if you're being totally serious and scientific and intellectually honest, it's not even clear how harsh you want to be when you test an econ model - this isn't physics, where things fit the data to arbitrary precision. How good should we even expect a "good" model to be? 

But that's a sidetrack. What actually happens is that lots of people just assume they've got the right model, fit it as best they can, and report the parameter estimates as if those are real things. Or as Francis Diebold puts it:

"A cynical but not-entirely-false view is that structural causal inference effectively assumes a causal mechanism, known up to a vector of parameters that can be estimated. Big assumption. And of course different structural modelers can make different assumptions and get different results."

So with quasi-experimental econometrics, you know one fact pretty solidly, but you don't know how reliable that fact is for making predictions. And with structural econometrics, you make big bold predictions by making often heroic theoretical assumptions.

So why not do both things? Do quasi-experimental studies. Make structural models. Make sure the structural models agree with the findings of the quasi-experiments. Make policy predictions using both the complex structural models and the simple linearized models, and show how the predictions differ. 
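Here is a minimal sketch of what that "do both" workflow could look like, with everything invented for illustration: the numbers, the constant-elasticity functional form, and the idea that one local slope summarizes the quasi-experiment. The point is just the two checks: does the structural model reproduce the quasi-experimental finding near the data, and how far apart do the two predictions drift once you extrapolate?

```python
# Illustrative sketch only: made-up data, made-up functional form.
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical quasi-experimental finding: raising the minimum wage from
# $4.25 to $5.05 moved employment by about -0.5% (an invented number).
w0, w1 = 4.25, 5.05
quasi_slope = -0.005 / (w1 - w0)          # local effect per dollar

# Hypothetical structural form: employment(w) = A * w**(-eta),
# where eta is supposed to be a policy-invariant parameter.
def employment(w, A, eta):
    return A * w ** (-eta)

# Invented (wage, employment index) observations near the old policy.
wages = np.array([4.00, 4.25, 4.50, 4.75, 5.05])
emp   = np.array([1.002, 1.000, 0.999, 0.998, 0.996])
(A_hat, eta_hat), _ = curve_fit(employment, wages, emp, p0=[1.0, 0.1])

# Check 1: does the structural model agree with the quasi-experiment
# in the region where the quasi-experiment actually has something to say?
structural_slope = (employment(w1, A_hat, eta_hat)
                    - employment(w0, A_hat, eta_hat)) / (w1 - w0)
print(f"quasi-experimental slope: {quasi_slope:+.4f} per dollar")
print(f"structural local slope:   {structural_slope:+.4f} per dollar")

# Check 2: compare the two predictions far from the data, where the
# linearized and structural answers are free to diverge.
w_new = 12.75
linear_pred     = emp[1] + quasi_slope * (w_new - w0)
structural_pred = employment(w_new, A_hat, eta_hat)
print(f"linear extrapolation at ${w_new}:  {linear_pred:.3f}")
print(f"structural prediction at ${w_new}: {structural_pred:.3f}")
```

If the two checks disagree near the data, the structural model is suspect; if they agree near the data but diverge far from it, that divergence is exactly the uncertainty worth reporting.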

What's wrong with this approach? Why should structural vs. quasi-experimental be an either-or? Why the academic food fight? If there's something that needs fighting in econ, it's the (now much rarer but still too common) practice of making predictions purely from theory without checking data at all.


from Noahpinion http://ift.tt/2mC4vBm