Lack of data-generating models. A problem plaguing macro-comparative research efforts. We have strong data-generating theories. Institutions, politics, materials, procedures, conflicts and symbols help explain macro-level events, like the introduction of social security or the onset of war. But we can’t model these theories as causes of the data we observe. There is far more theoretical complexity than there are countries.
There just aren’t enough countries to provide reliable measures of central tendency given these theoretical complexities. Even if our theory were as simple as ‘Y will be significantly different from some value (e.g., zero) given X’, we should have at least 10 countries. If we were to actually conduct a power analysis, it would tell us we need more like 200 countries, depending on how big a difference we expect in Y given X. I’m just gonna leave aside that countries are not a population in the typical sense (what is a country again, exactly? Exactly.).
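To make the ‘200 countries’ claim concrete, here is a minimal power-calculation sketch in R using the pwr package. The assumed correlation of r = .2 between X and Y is hypothetical, picked only to stand in for a modest macro-level effect; a different assumed effect size changes the answer.

```r
# Minimal power-analysis sketch (assumption: a modest correlation of r = .2
# between X and Y; any other effect size changes the required n).
library(pwr)

pwr.r.test(r = 0.2,          # assumed population correlation between X and Y
           sig.level = 0.05, # conventional alpha
           power = 0.80)     # conventional statistical power
# Asks for roughly 190-200 countries -- more than exist on Earth.
```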
What should we do?
- Option A: stick to qualitative case studies.
- Option B: continue running various country-level regressions and ignore the problem.
- Option C: wait until data from 180+ countries covering a period of 50 years becomes available.
Maybe we just ask a better question. Why do we need to do macro-comparative analysis if we already have good theory?
Answer: Reliability and utility.
Without systematic evidence we don’t actually have theories, only ideas or conjectures. Logic and in-depth case studies help define data-generating theories for one country in one time period. So we want to test whether these might apply to most countries in most time periods. In testing this we get Pomeranian.
Even with the same variables, regressions on country data over time go all over the place given only tiny changes to the estimation procedure. That was a finding of the Crowdsourced Replication Initiative (CRI): the estimated effect of immigration on social policy preferences could be just about anything.
[Figure, to the right: Mr. Summerbottom showcases the size and direction of effects from macro-comparative regressions using the same data. Outliers cropped.]
Even if we run all possible model configurations, and even if we consider Bayesian posterior distributions, we still don’t know whether we have a correct model specification, because we cannot truly test it. We can only make sweeping statements like ‘in all possible model combinations, variable X was not significant, so it probably does not have an effect’. Too bad we might have some kind of suppression from an unobserved variable!
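For readers who want to see what ‘all possible model configurations’ looks like in practice, here is a hedged sketch in R. Everything in it is hypothetical (simulated data, outcome y, predictor x, controls z1–z3); it brute-forces every combination of controls and plots the resulting specification curve. The caveat above still applies: nothing in the curve tells us which specification, if any, is correct.

```r
# Hypothetical sketch: enumerate every control-variable combination and
# collect the estimated effect of x on y (a crude specification curve).
set.seed(1)
n_countries <- 30   # a typical macro-comparative sample size
dat <- data.frame(y  = rnorm(n_countries), x  = rnorm(n_countries),
                  z1 = rnorm(n_countries), z2 = rnorm(n_countries),
                  z3 = rnorm(n_countries))

controls <- c("z1", "z2", "z3")
specs <- c(list(character(0)),                      # model with no controls
           unlist(lapply(seq_along(controls), function(k)
             combn(controls, k, simplify = FALSE)), recursive = FALSE))

estimates <- sapply(specs, function(ctrl) {
  rhs <- paste(c("x", ctrl), collapse = " + ")
  coef(lm(as.formula(paste("y ~", rhs)), data = dat))[["x"]]
})

# Sort the estimates to draw the specification curve.
plot(sort(estimates), type = "b",
     xlab = "specification (sorted)", ylab = "estimated effect of x")
abline(h = 0, lty = 2)
```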
Don’t drink the water. There is no method that can substitute for human logic. Zombie modeling, where the researcher lumbers aimlessly through thousands of models looking for blood, doesn’t work. The answer is therefore Option D: none of the above.
We need new-school meta. We need to logically analyze researchers’ decisions. We can’t do much with the limited data that are out there for macro-comparative research, so we need to get down with specifications. By comparing specification curves and researcher decision trees, we can distinguish critical decisions from benign ones. Some programmers offer us R packages for this (thanks, Joachim Gassen, for rdfanalysis!).
Some models come to the same results regardless of whether the researchers apply weighting, use latent variables or correct for autocorrelation; others do not. When we identify the decisions a researcher made that carry the greatest potential to influence the results, we simultaneously identify where to focus our theoretical work. These decisions cannot be improved by running more models; instead they require, as Andrew Abbott once put it, ‘more sitting in our offices and staring at the wall’. We need to dig in and use logic, reflection and mental energy to improve our theories in small-N macro-comparative research.
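To make the critical-versus-benign distinction concrete, here is a small hypothetical follow-up to the specification-curve sketch above (it reuses the specs and estimates objects from that sketch and is not rdfanalysis code). It treats one modeling choice, whether to include the control z1, as a researcher decision and asks how far the estimate of x moves depending on that single choice.

```r
# Hypothetical decision-level comparison, reusing 'specs' and 'estimates'
# from the sketch above. The decision examined: include control z1 or not.
has_z1 <- vapply(specs, function(ctrl) "z1" %in% ctrl, logical(1))

# Compare the spread of estimates under each branch of the decision.
tapply(estimates, has_z1, range)

# If the two branches give clearly separated estimates, the decision is
# 'critical' and deserves theoretical attention; if they overlap almost
# entirely, it is 'benign' and can be set aside.
```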