From a book review by Karl Weick in a recent ASQ:
But what’s missing are cases that show how learning is sustained during crises and how lessons learned after a crisis actually make a difference later. The problem with enumerating breakdowns is that it’s not obvious what drives them (e.g., stress, sensemaking, habit, perception, overload, decision making), nor is it obvious that breakdowns in learning trump everything else. Resilience is tested in novel environments, as the author says. And learning before and during novel events can promote adaptation in the face of novelty. The solutions by which people can build organizational learning seem to boil down to the creation of independent “Red Teams” that scrutinize previous breakdowns, try to cut through denials, and expose finer details of what really happened and how to prevent a recurrence. Such efforts can promote learning, but variations of this approach, in the form of after-action reviews, have been used for some time, and the associated learning can be situation-specific.
We ignore learning in organizations at our peril – especially because so many public organizations are “knowledge organizations” (full of expert professionals), and because they frequently fail.
We tend to over-analyze the successes (which are rare) and under-analyze failures.
In a new paper in Economica, Leonardo Felli and Kevin Roberts show:
In an environment in which heterogeneous buyers and sellers undertake ex ante investments, the presence of market competition for matches provides incentives for investment but may leave inefficiencies, namely hold‐up and coordination problems. This paper shows, using an explicitly non‐cooperative model, that when matching is assortative and investments precede market competition, buyers’ investments are constrained efficient while sellers marginally underinvest with respect to what would be constrained efficient. However, the overall extent of this inefficiency may be large. Multiple equilibria may arise; one equilibrium is characterized by efficient matches, but there can be additional equilibria with coordination failures.
This is why the debate about “contracting out” will never go away. We are destined to see multiple equilibria – sometimes competition works, and sometimes it doesn’t.
From John Marvel and Robert McGrath, now on the Journal of Public Policy’s FirstView:
Federal agencies perform many important tasks, from guarding against terrorist plots to mailing social security checks. A key question is whether Congress can effectively manage such a large and influential bureaucracy. We argue that Congress, in using oversight to ensure agency responsiveness to legislative preferences, risks harming agency morale, which could have negative long-run effects on performance and the implementation of public policy. More specifically, we argue that oversight’s effects on agency morale are conditional on whether oversight is adversarial or friendly. We assess our claims using a novel data set of the frequency and tone of hearings in which federal agencies are called to testify before Congress from 1999 to 2011 and merge it with data on agency autonomy and job satisfaction. Our findings suggest that agency morale is sensitive to congressional oversight attention, and thus speak to questions regarding democratic accountability, congressional policymaking and the implementation of public policy.
The National Academy of Public Administration (the Academy) and ICF International will hold the release event for the 2nd annual Federal Leaders Digital Insight Study on February 4, 2016, from 8:00-10:30 AM at the University Club in Washington, DC. The Study is based on a survey of Federal executives designed to solicit insight on their agencies’ adoption of digital technology, focusing on innovations, stakeholder engagement, security, and progress made since the 2014 study. A panel of the Academy’s expert Fellows developed the Study’s findings and recommendations. Please join us for a discussion of the 2015 Federal Leaders Digital Insight Study and an opportunity to engage with Lisa Schlosser, Deputy Federal Chief Information Officer (CIO) and Deputy Administrator of E-Government and IT at the Office of Management and Budget, and the Academy’s Fellows.
I very much enjoyed serving on the panel overseeing this study. It’s illuminating to see how fast things are changing with regard to digital services in the federal sector.
Of course, then there’s this from Federal News Radio:
The Office of Management and Budget is planning to turn up the heat once again on agency commodity IT spending. First it was on desktops and laptops, and now it will be on mobile devices. The draft policy tells agencies “effective immediately, except as provided in this policy, all agencies are prohibited from issuing solicitations for new contract awards for mobile device [sic] and services, and should look to the existing governmentwide General Services Administration wireless solution.”
Debate continues in political science on the DA-RT. I recently offered a few random thoughts on the debate; here are a few more.
- The Journal of Applied Psychology requires that “all data in their published articles be an original use”. Specifically, “Any previous, concurrent, or near future use of data reported in a submitted manuscript must be brought to the editorial team’s attention (i.e., any paper(s) previously published, in press, or currently under review, as well as any paper(s) that foreseeably will be under review before an editorial decision is made on the current submitted manuscript)”. If it’s not original, it’s not eligible for publication in the JAP. Does DA-RT equate to importance?
- How should proprietary data be handled? Is it a violation of the terms of service (TOS) to post data obtained from ICPSR (where people associated with member institutions are given access to the data)? It certainly violates the TOS to repost data from TRAC. The Swedish twins data can be accessed only within Sweden, and only by certified Swedish researchers; such issues often arise with data that include biometric measures. The trick, of course, in circumventing DA-RT (even if there is a clause for proprietary data) is to analyze only proprietary data.
As with the implementation of most good ideas, the devil is in the details.
Not all journals will sign the DA-RT, though some non-signatories will encourage archiving, etc. (We aren’t signing it on behalf of the Journal of Public Policy.) Not all disciplines have DA-RT-like mechanisms. Overall, my impression is that the way it has been handled and debated in political science says more about the discipline than about there being one right way to do “science”.
Here are two predictions: (1) this will reduce the volume of papers hitting the journals (because people will find the process to be a hassle); and (2) this will increase the volume of papers (because more data are now available for use in papers – the PSID/NES/GSS effect). Which will hold true?
As an aside, if data are so precious or difficult to obtain, why don’t people treat them like proprietary business information?
New from Holona LeAnne Ochs of Lehigh University:
Research on poverty and research on governance currently exist as largely disparate literatures without a framework for building knowledge regarding how policies and practices compare as poverty alleviation strategies. In Privatizing the Polity, Holona LeAnne Ochs examines the evolution of the governance of welfare programs across the United States. Throughout the political spectrum the trend in recent decades has been towards welfare privatization, shifting the boundaries of poverty governance from public to private actors—whether they are foundations or social entrepreneurs—whose interests in poverty governance are more obscure. The analysis of more than eighteen years of data suggests that strategies of devolution and privatization make it more difficult for people to move out of poverty. At the same time the framework for understanding the governance structures, enactment practices, and social wealth leverage presented in Privatizing the Polity offers numerous opportunities for acquiring a deeper understanding of assumptions formerly taken for granted and redirecting the system to enhance poverty alleviation.
“Efficiencies and Regulatory Shortcuts: How Should We Regulate Companies like Airbnb and Uber?”, a new working paper from the Harvard Business School NOM Unit:
New software platforms use modern information technology, including full-featured web sites and mobile apps, to allow service providers and consumers to transact with relative ease and increased trust. These platforms provide notable benefits including reducing transaction costs, improving allocation of resources, and information and pricing efficiencies. Yet they also raise questions of regulation, including how regulation should adapt to new services and capabilities, and how to correct market failures that may arise. We explore these challenges and suggest an updated regulatory framework that is sufficiently flexible to allow software platforms to operate and deliver their benefits, while ensuring that service providers, users and third parties are adequately protected from harms that may arise.
This is one of the more difficult challenges in today’s regulatory environment.
The 4th edition, by Newcomer et al., now out from Jossey-Bass:
The leading program evaluation reference, updated with the latest tools and techniques. The Handbook of Practical Program Evaluation provides tools for managers and evaluators to address questions about the performance of public and nonprofit programs. Neatly integrating authoritative, high-level information with practicality and readability, this guide gives you the tools and processes you need to analyze your program’s operations and outcomes more accurately. This new fourth edition has been thoroughly updated and revised, with new coverage of the latest evaluation methods, including:

- Culturally responsive evaluation
- Adopting designs and tools to evaluate multi-service community change programs
- Using role playing to collect data
- Using cognitive interviewing to pre-test surveys
- Coding qualitative data

You’ll discover robust analysis methods that produce a more accurate picture of program results, and learn how to trace causality back to the source to see how much of the outcome can be directly attributed to the program. Written by award-winning experts at the top of the field, this book also contains contributions from the leading evaluation authorities among academics and practitioners to provide the most comprehensive, up-to-date reference on the topic. Valid and reliable data constitute the bedrock of accurate analysis, and since funding relies more heavily on program analysis than ever before, you cannot afford to rely on weak or outdated methods.
This book gives you expert insight and leading edge tools that help you paint a more accurate picture of your program’s processes and results, including:

- Obtaining valid, reliable, and credible performance data
- Engaging and working with stakeholders to design valuable evaluations and performance monitoring systems
- Assessing program outcomes and tracing desired outcomes to program activities
- Providing robust analyses of both quantitative and qualitative data

Governmental bodies, foundations, individual donors, and other funding bodies are increasingly demanding information on the use of program funds and program results. The Handbook of Practical Program Evaluation shows you how to collect and present valid and reliable data about programs.