Statism III

In my first post on statism, I defined the concept as an excessive and harmful embrace of the power of the state. In my second post, I attempted to show that statism functions as a bias in economics – the most market-oriented of academic disciplines. In this third and final post, I briefly discuss how one might apply the newest insight into human behavior – cognitive biases – to the actions of administrative agencies, to show how those actions might turn out to be harmful. One could then compare the effect of these biases on government with their effect on the market to determine which is greater.

If one were seeking to understand government agency behavior in terms of cognitive biases, a large number of these biases would seem relevant. For example, confirmation bias, self-serving bias, belief bias, attentional bias, the illusion of control, and the overconfidence effect all appear applicable. Unfortunately, though, the subject remains one that is relatively unexplored. (One article that does discuss it, however, is this one by Stephen Choi and Adam Pritchard.)

How would this approach be applied to administrative agencies? This is a tough question given the lack of work on the subject, but some answers seem straightforward enough. Imagine, for example, that an administrative agency adopts a regulation, but that it turns out to be a bad regulation. Is it likely the agency will recognize that it has made a mistake?

Under a rational actor model, the agency would be relatively likely to recognize that it made a mistake. (Whether it admitted that fact to the world would depend on the payoffs for doing so.) But if one takes cognitive biases into account, one would be tempted to conclude that the agency would not be likely to recognize its mistake. First, there is, of course, a tendency not to recognize that one has been mistaken. This is especially the case when the mistake involves ideological matters, as most regulatory issues do. I suppose the cognitive bias literature has focused on this most in terms of confirmation bias, where people tend to look much harder for evidence that confirms rather than contradicts their preconceptions.

Second, there is the issue of what is seen and what is not seen, as articulated by Bastiat. If the regulation produces certain benefits but causes harm by preventing certain beneficial actions, the benefits will be seen but the harms may not be. People will naturally focus on what is seen rather than on what is unseen. Perhaps this could be understood in terms of an availability bias. Finally, one might see the self-serving bias at work in agency regulation. Agencies would tend to view positive aspects of the regulated market as the result of their regulations, while viewing negative aspects as having other causes.

In the end, this would appear to be an area ripe for investigation. Yet economics and psychology appear to be in the grip of a statist bias that leads them to focus on market actors rather than government actors. What is needed is another Buchanan and Tullock to set the world straight.

Update: Make sure to read the first comment.  Here is an excerpt:

The information that policy makers use to propound onerous and stupid regulations is no better than that used by the rest of us to make “irrational” decisions. This fact is hidden though, by calling the surrogates used to fill in official gaps in information “assumptions,” and the same processes used by the rest of us “biases.”