Science and technology are embedded in virtually every aspect of modern life. As a result, people face an increasing need to integrate information from science with their personal values and other considerations as they make important life decisions about medical care, the safety of foods, what to do about climate change, and many other issues. Communicating science effectively, however, is a complex task and an acquired skill. Moreover, the approaches to communicating science that will be most effective for specific audiences and circumstances are not obvious. Fortunately, there is an expanding science base from diverse disciplines that can support science communicators in making these determinations.
Communicating Science Effectively offers a research agenda for science communicators and researchers seeking to apply this research and fill gaps in knowledge about how to communicate effectively about science, focusing in particular on issues that are contentious in the public sphere. To inform this research agenda, this publication identifies important influences – psychological, economic, political, social, cultural, and media-related – on how science related to such issues is understood, perceived, and used.
Academia isn’t like the public sector or firms or nonprofits. These days, people in those sectors are trying to read the tea leaves about what’s coming next. In a post-truth world, everything is negotiable, so it’s all about reading the fault lines of debates, figuring out who wants what.
I became an academic because I believe in evidence. It’s easy for critics to wrongly claim that universities are full of informational relativism, but I don’t see it. Instead I see groups of people trying to find the best ways to discover evidence about truth. The most bitter fights are about how we assemble that evidence because it isn’t easy to demonstrate causality.
Academics are also facing the decision of whether to invest time reading the political fault lines – or to double down on evidence.
If I were gifted at reading those political tea leaves, I would have run for office. I’m not, so I’m doubling down on evidence. I’m doing so because post-truth, like other movements, is a fad. Assuming we survive it, after it fades there will be a great demand for evidence. Somewhere, sometime, people will want evidence about how to make policy or manage organizations.
In the end, this is the primary responsibility of academics – to double down on evidence, not to translate or write op-eds or whatever. If we don’t discover, who will?
National statistical systems are enterprises tasked with collecting, validating, and reporting societal attributes. These data serve many purposes: they allow governments to improve services, economic actors to traverse markets, and academics to assess social theories. National statistical systems vary in quality, especially in developing countries. This study examines determinants of national statistical capacity in developing countries, focusing on the impact of technological attainment. Just as technological progress helps to explain differences in economic growth, we argue that states with greater technological attainment have greater capacity for gathering and processing quality data. Analysis using panel methods shows a strong, statistically significant positive linear relationship between technological attainment and national statistical capacity.
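The kind of panel analysis described above can be sketched as a fixed-effects ("within") regression. The sketch below uses synthetic data and invented variable names (`tech`, `capacity`) purely for illustration — it is not the paper's actual data or specification, just a minimal demonstration of the within estimator that panel methods of this sort rely on.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic panel: 50 countries observed over 10 years (illustrative only).
n_countries, n_years = 50, 10
country = np.repeat(np.arange(n_countries), n_years)

# Hypothetical "technological attainment" index, plus unobserved country effects.
tech = rng.normal(size=n_countries * n_years)
alpha = rng.normal(size=n_countries)[country]  # time-invariant country effects
capacity = 2.0 * tech + alpha + rng.normal(scale=0.5, size=tech.size)

def within_transform(x, groups):
    """Demean x within each group (the fixed-effects 'within' transformation)."""
    totals = np.zeros(groups.max() + 1)
    np.add.at(totals, groups, x)
    counts = np.bincount(groups)
    return x - (totals / counts)[groups]

# Demeaning removes the country effects, isolating the tech -> capacity slope.
y = within_transform(capacity, country)
x = within_transform(tech, country)
beta = (x @ y) / (x @ x)
print(round(beta, 2))  # close to the true slope of 2.0
```

The design choice here is the within transformation: because each country's unobserved effect is constant over time, demeaning by country removes it, so the slope is estimated only from within-country variation.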
Please feel free to contact me for a pre-publication version of the paper.
Gary Miller and I have written a short description of our recent Cambridge University Press book Above Politics: Bureaucratic Discretion and Credible Commitment. It is posted on the site of Osservatorio AIR, a center in Rome that specializes in research and studies on impact assessment, simplification, transparency, and participation as ways of improving regulation. The description can be found at http://www.osservatorioair.it/research-note-above-politics-bureaucratic-discretion-and-credible-commitment/.
This week’s events should be interpreted as a word of caution about predictive analytics. Clearly, many models didn’t predict the outcome of the 2016 election. More importantly, the vast majority of models weren’t predictive, and neither were “models of models” (averages across models). They failed even when built on highly granular data: subnational polls, polls taken at regular intervals, polls taken by different houses using a variety of methods.
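One reason averaging across models can fail is that averaging cancels only independent noise, not errors the models share. The toy simulation below illustrates this; the bias and noise values are invented for illustration, not estimates of any actual polling error.

```python
import numpy as np

rng = np.random.default_rng(1)

true_share = 0.50    # hypothetical true vote share
shared_bias = -0.03  # systematic error common to every model (e.g. nonresponse)

# 20 "models", each a noisy estimate that shares the same systematic bias.
models = true_share + shared_bias + rng.normal(scale=0.01, size=20)

average = models.mean()
print(round(average, 3))  # near 0.47, not 0.50: the shared bias survives averaging
```

However many models you average, the independent noise shrinks toward zero while the shared bias stays fixed — which is why a “model of models” inherits whatever error its inputs have in common.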
What’s the upshot? Humility. If prediction is this hard with abundant data, how much harder is it to get the predictions right when we’re developing policy for novel problems?
Don’t believe those who say that big data will solve everything.
- 34885 Police Human Resource Planning: National Surveys, 2011-2013 [United States and Canada]
- 36286 Longitudinal Study of the Second Generation in Spain (ILSEG)
- 36387 Impacts and Implementation of the i3-Funded Scale-Up of Success for All
- 36434 Current Population Survey, May 2010 – May 2011: Tobacco Use Supplement (TUS), 2010 – 2011 Wave