New administrative datasets at ICPSR

33761 Analysis of Current Cold-Case Investigation Practices and Factors Associated with Successful Outcomes, 2008-2009

34681 Case Processing in the New York County District Attorney’s Office, 2010-2011


34903 Delivery and Evaluation of the 2012 International Association of Forensic Nurses (IAFN) National Blended Sexual Assault Forensic Examiner (SAFE) Training [UNITED STATES]

34922 Investigating the Impact of In-car Communication on Law Enforcement Officer Patrol Performance in an Advanced Driving Simulator in Mississippi, 2011

Doubling down on evidence

Academia isn’t like the public sector, private firms, or nonprofits. These days, people in those sectors are trying to read the tea leaves about what’s coming next. In a post-truth world, where everything is negotiable, it’s all about reading the fault lines of debates and figuring out who wants what.

I became an academic because I believe in evidence. Critics wrongly claim that universities are full of informational relativism, but I don’t see it. What I see instead are groups of people trying to find the best ways to discover evidence about what is true. The most bitter fights are over how we assemble that evidence, because demonstrating causality isn’t easy.

Academics also face a decision: invest time reading the political fault lines, or double down on evidence.

If I were gifted at reading political tea leaves, I would have run for office. I’m not, so I’m doubling down on evidence. I’m doing so because post-truth, like other movements, is a fad. Assuming we survive it, there will be great demand for evidence once it fades. Somewhere, sometime, people will want evidence about how to make policy or manage organizations.

In the end, this is the primary responsibility of academics: to double down on evidence, not to translate or write op-eds or whatever. If we don’t discover, who will?

“Developing knowledge states: Technology and the enhancement of national statistical capacity”

This paper, coauthored with Derrick Anderson of Arizona State University, is now forthcoming at the Review of Policy Research. Here’s the abstract:

National statistical systems are enterprises tasked with collecting, validating and reporting societal attributes. These data serve many purposes: they allow governments to improve services, economic actors to traverse markets, and academics to assess social theories. National statistical systems vary in quality, especially in developing countries. This study examines determinants of national statistical capacity in developing countries, focusing on the impact of technological attainment. Just as technological progress helps to explain differences in economic growth, we argue that states with greater technological attainment have greater capacity for gathering and processing quality data. Analysis using panel methods shows a strong, statistically significant positive linear relationship between technological attainment and national statistical capacity.
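For readers curious what a panel analysis of this kind looks like in practice, here is a minimal sketch of a two-way fixed-effects regression on a synthetic country-year panel. It is illustrative only: the variable names, the simulated data, and the use of the linearmodels package are assumptions made for this sketch, not the specification or data used in the paper.

```python
# Minimal sketch of a two-way fixed-effects panel regression on synthetic data.
# All variable names and values here are illustrative, not the paper's data.
import numpy as np
import pandas as pd
from linearmodels.panel import PanelOLS

rng = np.random.default_rng(0)
countries = [f"country_{i}" for i in range(40)]
years = range(2004, 2014)

# Build a balanced country-year panel with a built-in positive relationship
rows = []
for c in countries:
    country_effect = rng.normal(scale=5)
    for t in years:
        tech = rng.uniform(0, 100)                      # technological attainment (index)
        capacity = 20 + 0.3 * tech + country_effect + rng.normal(scale=8)
        rows.append((c, t, capacity, tech))

panel = pd.DataFrame(rows, columns=["country", "year", "stat_capacity", "tech_attainment"])
panel = panel.set_index(["country", "year"])

# Country and year fixed effects; standard errors clustered by country
model = PanelOLS(
    panel["stat_capacity"],
    panel[["tech_attainment"]],
    entity_effects=True,
    time_effects=True,
)
results = model.fit(cov_type="clustered", cluster_entity=True)
print(results.summary)
```

The country and year fixed effects absorb time-invariant country traits and common shocks, which is one standard way to isolate the within-country relationship between technological attainment and statistical capacity.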

Please feel free to contact me for a pre-publication version of the paper.

A word of caution about predictive analytics

This week’s events should be interpreted as a word of caution about predictive analytics. Clearly, many models didn’t predict the outcomes of the 2016 election. More importantly, the vast majority of models weren’t predictive. “Models of models” (averages across models) weren’t predictive either. Even models built on highly granular data (subnational polls, polls taken at regular intervals, polls conducted by different houses using a variety of methods) missed.
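To make the “model of models” idea concrete, here is a tiny sketch of averaging win probabilities across a handful of hypothetical forecasts. The numbers are invented for illustration and are not any actual 2016 forecast.

```python
# Hedged sketch of a "model of models": a simple average across several
# forecasts' win probabilities. The figures below are made up.
from statistics import mean

# Hypothetical win probabilities from individual forecasting models
model_forecasts = {
    "model_a": 0.85,
    "model_b": 0.71,
    "model_c": 0.92,
    "model_d": 0.64,
}

ensemble = mean(model_forecasts.values())
print(f"Average across models: {ensemble:.2f}")
# The average only aggregates what the underlying models already say;
# if every input leans the same way, so does the "model of models".
```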

What’s the upshot? Humility. If it’s this hard to forecast an election awash in polling data, how much harder is it to get predictions right when we’re developing policy for novel problems?

Don’t believe those who say that big data will solve everything.