Tim Duy has published an interesting commentary on how economists should respond to the incoming Administration’s plans to intervene in the economy to save manufacturing jobs. It takes on traditional economic thinking about trade, growth, and regional adjustment. It argues that the frictions that resulted in losses – in jobs, in livelihoods, in tears to the local social fabric – are real, that they shaped the outcome of this year’s election, and that they can’t be ignored going forward.
Certainly. What do we do, though, about the demand for and supply of easy solutions? We could re-run history and ask what could have been done 10, 20, or 50 years ago to limit these frictions. But what does that get us?
Today we can ask whether those at the edges of the modern economy can be saved. We have considered that question for many years, and the solutions remain unclear. For example, what about subsidizing the costs of relocation to areas where workers are in greater demand? Is that an easy, simple, and effective solution?
One problem with our thinking here is that we assume that the system wants to solve hard problems. Imagine that instead politicians want to solve easy problems for which they can easily claim credit. Or that in the case of hard problems they know that symbolic gestures are often effective enough given short time horizons.
In the current environment it’s more important than ever that academics remain humble about the limits of our proposed solutions. That we recognize that we’re better at documenting problems than fixing them. That the demand for simple solutions is great. And that politicians are better at solving political problems than fixing long-run structural problems in the performance of the largest and most dynamic economy in the world.
Martin Lodge of CARR at the London School of Economics and Political Science has assembled a great group of papers on the question of whether regulation scholarship is in crisis given recent events in a number of countries. Gary Miller and I contributed a small piece that describes what our recent Cambridge University Press book Above Politics: Bureaucratic Discretion and Credible Commitment has to say on the matter. You can find the entire document online at this link.
Gary Miller and I have written a short description of our recent Cambridge University Press book Above Politics: Bureaucratic Discretion and Credible Commitment. It is posted on the site of Osservatorio AIR, a center in Rome that specializes in research and studies on impact assessment, simplification, transparency, and participation as ways of improving regulation. The description can be found at http://www.osservatorioair.it/research-note-above-politics-bureaucratic-discretion-and-credible-commitment/.
That’s a quote from Ryan Kavanaugh:
“The key is to embrace disruption and change early. Don’t react to it decades later. You can’t fight innovation.”
For one view on where we stand, consider these tweets from Ben Sasse (R-NE).
It’s hard to stop a tsunami.
Public managers at all levels face a new reality with a primary distinguishing characteristic: uncertainty. What do you do when the ground is shifting under your feet and you can’t see three steps ahead?
Those working on how cities prepare for and respond to natural disasters already know this feeling. That’s why they’ve spent so much time trying to understand what makes communities resilient at all levels – societal, administrative, logistical, infrastructural, and so on. Even the National Science Foundation has invested heavily in multidisciplinary research on what helps make communities resilient.
The answer is that we are still learning. One lesson, though, is that grand planning doesn’t work. Instead, systems that emphasize features like adaptation and modularity perform better.
This thinking runs counter to what managers like best: rational, synoptic control. It looks more like what you see in environments such as currency trading, where continuous adaptation and updating – hallmarks of Bayesian decision making – are the norm.
Many of our models assume that the environment is fixed – that we are in steady state. But we aren’t. And that requires a different kind of thinking.
This week’s events should be read as a word of caution about predictive analytics. Many models failed to predict the outcome of the 2016 election – indeed, the vast majority of them. “Models of models” (averages across models) weren’t predictive either. Even models built on highly granular data – subnational polls, polls taken at regular intervals, polls taken by different houses using a variety of methods – missed.
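The weakness of “models of models” can be seen in a toy simulation. The sketch below uses entirely made-up numbers (a hypothetical true vote share and a hypothetical polling bias, not real 2016 data): averaging many forecasts shrinks random sampling noise, but it cannot remove a systematic error that every forecast shares.

```python
import random

random.seed(0)

# Synthetic, illustrative numbers only – not real polls.
true_support = 0.52   # hypothetical true vote share
shared_bias = -0.03   # hypothetical error common to every poll
n_polls = 50

# Each poll = truth + shared bias + its own sampling noise.
polls = [true_support + shared_bias + random.gauss(0, 0.01)
         for _ in range(n_polls)]

# The "model of models": a simple average across all polls.
model_of_models = sum(polls) / n_polls

# The average lands near 0.49 (truth plus bias), not 0.52:
# the noise cancels out, the shared bias does not.
print(round(model_of_models, 3))
```

Averaging helps only against errors that are independent across models; when the models all draw on the same flawed inputs, the ensemble inherits the flaw.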
What’s the upshot? Humility. How much harder is it to get predictions right when we’re developing policy for novel problems?
Don’t believe those who say that big data will solve everything.
The United States is at a crossroads. The future is unknown. None of us has a clear view of what happens now. Rather than focus on why we are at this juncture, over the next several days I will offer a few thoughts on the state of play.
Governing is hard. It isn’t like being in opposition. The stakes are different and the goalposts have moved.
How hard is it to nominate 4,000 people to fill the upcoming presidential appointments? Especially if you lack a thick DC network? Or if you’ve never served as a governor?
Once confirmed, how hard is it to monitor those appointees for compliance with your policy positions? Do you trust your monitors?
How do you decide whether one of them has crossed the line? Do you defend every lapse in prudent action?
And these are some of the easiest decisions ….