Statism III

In my first post on statism, I defined the concept as an excessive and harmful embrace of the power of the state.  In my second post, I attempted to show that statism functions as a bias even in economics, the most market-oriented of academic disciplines.  In this third and final post, I briefly discuss how one might apply one of the newest insights into human behavior (the study of cognitive biases) to government action by administrative agencies, to show how their actions might turn out to be harmful.  One could then weigh the effect of these biases on government against their effect on the market to determine which is greater.

If one were seeking to understand government agency behavior in terms of cognitive biases, a large number of these biases would seem relevant: confirmation bias, self-serving bias, belief bias, attentional bias, the illusion of control, and the overconfidence effect, for example.  Unfortunately, though, the subject remains one that is relatively unexplored.  (One article that does discuss it, however, is this one by Stephen Choi and Adam Pritchard.)

How would this approach be applied to administrative agencies?  This is a difficult question given the lack of existing work, but some answers seem straightforward enough.  Imagine, for example, that an administrative agency adopts a regulation, but that it turns out to be a bad regulation.  Is it likely the agency will recognize that it has made a mistake?

Under a rational actor model, the agency would be relatively likely to recognize that it made a mistake.  (Whether it admitted that fact to the world would depend on the payoffs for doing so.)  But if one takes cognitive biases into account, one would be tempted to conclude that the agency would not be likely to recognize its mistake.  First, there is, of course, a tendency not to admit, even to oneself, that one has been mistaken.  This is especially the case when the mistake involves ideological matters, as most regulatory issues do.  The cognitive bias literature has addressed this most directly under the heading of confirmation bias: people look much harder for evidence that confirms, rather than contradicts, their preconceptions.
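To make the contrast a bit more concrete, here is a minimal sketch in Python of the intuition.  The prior belief, the strength of the unfavorable evidence, and the degree to which the biased agency discounts that evidence are all numbers I have made up purely for illustration.

```python
# A toy comparison (made-up numbers): how many unfavorable results does it
# take before an agency's belief that its regulation works drops below 50%?
# A "rational" updater applies Bayes' rule with full force; a "biased" one
# under-weights disconfirming evidence, a crude stand-in for confirmation bias.

def results_needed(prior=0.9, lr_against=4.0, weight=1.0, threshold=0.5):
    """Count unfavorable observations until belief in the regulation falls
    below `threshold`.  `lr_against` is how much more likely each observation
    is if the regulation is bad; `weight` < 1 shrinks its evidential force."""
    belief = prior
    count = 0
    while belief >= threshold:
        effective_lr = lr_against ** weight   # the biased agency dilutes the evidence
        odds = (belief / (1 - belief)) / effective_lr
        belief = odds / (1 + odds)
        count += 1
    return count

print("Unbiased agency:", results_needed(weight=1.0), "bad results")  # 2
print("Biased agency:  ", results_needed(weight=0.3), "bad results")  # 6
```

On these illustrative numbers, the biased agency requires three times as much contrary evidence before it concedes that the regulation was a mistake.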

Second, there is also the issue of what is seen and not seen, as articulated by Bastiat.  If the regulation produces certain benefits, but causes harm by preventing certain beneficial actions, the benefits will be seen while the harms may not be.  People will naturally focus on what is seen rather than on what is unseen.  Perhaps this could be understood in terms of an availability bias.  Finally, one might see the self-serving bias at work in agency regulation.  Agencies would tend to view positive aspects of the regulated market as the result of their regulations, while attributing negative aspects to other causes.

In the end, this would appear to be an area ripe for investigation.  Yet, economics and psychology appear to be in the grip of a statist bias that leads them to focus on market actors rather than government actors.  What is needed is another Buchanan and Tullock to set the world straight.

Update: Make sure to read the first comment.  Here is an excerpt:

The information that policy makers use to propound onerous and stupid regulations is no better than that used by the rest of us to make “irrational” decisions. This fact is hidden, though, by calling the surrogates used to fill in official gaps in information “assumptions,” and the same processes used by the rest of us “biases.”

Mike Rappaport

About the Author

Professor Rappaport is Darling Foundation Professor of Law at the University of San Diego, where he also serves as the Director of the Center for the Study of Constitutional Originalism. He is the author of numerous law review articles in journals such as the Yale Law Journal, the Virginia Law Review, the Georgetown Law Review, and the University of Pennsylvania Law Review.  His book, Originalism and the Good Constitution, co-authored with John McGinnis, was published by Harvard University Press in 2013.  Professor Rappaport is a graduate of the Yale Law School, where he received a JD and a DCL (Law and Political Theory).

Comments

  1. z9z99 says

    Professor Rappaport performs a necessary service in pointing out that regulators are subject to the same biases from which they seek to protect the rest of us. This observation is a limited subset of the flaws underlying regulatory power grabs premised on the biases of the regulated. If one were to boil down the psychobabble being peddled by Kahneman and Tversky as economic insight, one reaches a rather pedestrian set of principles: 1.) decisions improve with the quality of the information upon which they are based; 2.) if you bias the information that is used to make decisions, you bias the resulting decisions; 3.) in the absence of sufficient information, cognitively intact beings resort to surrogates, such as assumptions based on experience, inferences drawn from the behavior of others, intuition, and reliance on ballpark estimates of risk. None of this is earthshaking stuff. As Ecclesiastes counsels, there is nothing new under the sun. What is new is the pseudoscience that makes regulatory power grabs seem legitimate based only on linguistic sleight of hand. The information that policy makers use to propound onerous and stupid regulations is no better than that used by the rest of us to make “irrational” decisions. This fact is hidden, though, by calling the surrogates used to fill in official gaps in information “assumptions,” and the same processes used by the rest of us “biases.”

    Professor Rappaport is off to a good start, but his subject will never want for targets. It is legitimate to begin with the foundational query as to whether government busybodies are themselves afflicted with biases, but the investigation must not stop there. Every assumption, premise, simplifying model, and statistical sophism should be challenged, starting with very basic questions like:

    1.) Is liberty such a valuable thing in itself that it should not be infringed even if doing so results in more rational decisions?

    2.) What is the social overhead associated with governmental efforts to improve decisions, i.e. what affronts to liberty are necessary to statist decisional enhancement that could be avoided through other means?

    3.) Are the biases of bureaucracies time-dependent? Do they start out with one set of priorities that gradually morphs into another, unintended set over time? At what point do considerations of budget protection, political advantage, ego, risk aversion, and ease of enforcement corrupt well-intentioned “behavioral economic” policies and make them, on the whole, destructive?

    4.) What are the constraints that affect regulatory decision-making but are irrelevant to producing “better” economic decisions, e.g., due process concerns, religious neutrality, bureaucratic turf fights, federalism and preemption, etc.?

    5.) Are “better” economic decisions self-executing, or does the government need to maintain a mechanism of enforcement and threat of force, to ultimately relieve us of the task of making decisions altogether?

    • says

      Guy, yes, people with strong family ties are happier, but so are people with successful careers; the question is where to put more effort on the margin. And of course some people might care more about success beyond its contribution to happiness, and so be reasonably willing to sacrifice some happiness for more success. I’m not clear why you seek to draw our attention to whether these are empirical, objective, or normative claims. Yes, it is harder to find support (evidence or analysis) for some kinds of claims, and particular pieces of evidence support some claims more than others. But all of these kinds of claims can be in error, and so there can be biases about them. Bias is just avoidable systematic error.

    • says

      My favorite offsetting pair of biases is the gambler’s fallacy, the notion that the law of averages has a memory that induces negative correlations, countered by the hot streak fallacy, the notion that success breeds success, which induces positive correlations. And just as a proverb does not necessarily influence either thought processes or behavior, the cognitive biases of the gambler’s fallacy or the hot streak exist in someone who is already at the gambling tables, who already has a stack of chips, a free drink, and a beautiful woman. To the extent that either fallacy pops into the gambler’s mind, it’s likely to be selectively employed, and most probably after the event!

  2. thomas says

    I’m glad you’re thinking about it. It is something most of us ‘know’ but have not articulated.

    There is also self-interest. If you work for an administrative agency, will you want to give up funding and influence even if it should become clear your work does not produce net benefits? CARB in California comes to mind….

    • says

      A bias is a belief. If a proverb influences belief, then it is a bias, but it’s not obvious that the existence of a proverb influences belief, even if someone claims that it does. The hot streak and the gambler’s fallacy may be idle chatter that does not reflect the gambler’s thought processes. That is how they might not be biases. I think that’s too extreme: the gambler might stop if he ran out of excuses, so they do affect his beliefs, or at least his actions. But Carl Shulman is right that a better model is that he wants to keep gambling. If the proverbs accomplish only that, we need not worry about them. If they leak out and affect his beliefs in other ways, they might be interesting as biases.

  3. says

    Struggling to get the gist of this post. Fundamentally, it looks like you’re pointing out a crisis of non-objectivity (or more concretely put: evasion, certainly evident everywhere today). I’m not denying the points you make, but I wonder if we’d be more productive in fighting statism with the recognition that it [control] is at war with man’s nature [a rational being that survives by exercising his own judgement]. Even if statist activities were tweaked to improve them because the control freaks somehow became more objective in assessing their outcomes, they would still remain the basic evil of controlling.

    • says

      Well, we’re getting a little off-topic here, but if you think people don’t actually believe the gambler’s fallacy, I think there’s good evidence to the contrary. Kahneman and Tversky (Judgment Under Uncertainty, 1982) make it a subset of the local representativeness belief (at page 7). See also Lindman and Edwards (1961), Journal of Experimental Psychology. Ten minutes in a casino will lead to the same conclusion. Watch the people who write down long strings of roulette results and see what they do. Note that, contrary to Carl’s hypothesis, people who write down these strings before they start gambling would have no reason to bet for or against red on their first bet if there had been a long streak of reds. In fact, I’ve watched them, and their first bet under these circumstances is almost invariably black, which suggests that, if anything, the gambler’s fallacy is somewhat stronger than the hot hand fallacy. I have no doubt that people who want to gamble will create rationalizations for their behavior, but those rationalizations, so long as they are predicated on the notion that the odds can be turned in their favor in a purely mechanical game, are pretty close to the definition of bias.
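      As a rough check on the independence point, here is a small simulation I sketched; the spin count, streak length, and stake are arbitrary choices of mine. Betting black only after a run of reds fares no better than betting black on every spin: both averages hover around the house edge of roughly five percent.

```python
import random

# Rough check on the independence claim (illustrative parameters only):
# on a double-zero wheel, betting black only after a streak of reds does
# no better than betting black on every spin; both lose the house edge.

RED, BLACK, GREEN = "red", "black", "green"
WHEEL = [RED] * 18 + [BLACK] * 18 + [GREEN] * 2   # American roulette layout

def spin():
    return random.choice(WHEEL)

def bet_after_streak(n_spins=2_000_000, streak=5):
    """Bet 1 unit on black, but only when the previous `streak` spins were all red."""
    run = profit = bets = 0
    for _ in range(n_spins):
        outcome = spin()
        if run >= streak:
            bets += 1
            profit += 1 if outcome == BLACK else -1
        run = run + 1 if outcome == RED else 0
    return profit / bets          # average return per unit bet

def bet_every_spin(n_spins=2_000_000):
    """Bet 1 unit on black on every spin."""
    return sum(1 if spin() == BLACK else -1 for _ in range(n_spins)) / n_spins

random.seed(0)
print("After a streak of reds:", round(bet_after_streak(), 3))   # roughly -0.05
print("On every spin:         ", round(bet_every_spin(), 3))     # roughly -0.05
```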

  4. Robert Arvanitis says

    Markets are efficient to the extent they allocate capital to highest and best use.

    When we view markets, too often we limit our consideration to traditional markets for goods and services.

    But if we’ve learned anything from crony capitalism, it’s that we must consider the extended markets in order to see true efficiency. By “extended” we mean to add in markets for coercive government, along with traditional goods and services. Government coercion is just another commodity to be bought and sold in over-the-counter markets.

    Thus we are getting proper price information, for example, from the fact that it is cheaper to suborn the FDA than to invent new drugs…

    With that in mind, we can transfer and apply all the tools available in traditional markets, over to the extended markets.

    Recent advances in corporate strategy and risk management now include artificial intelligence simulations of game theory: we model not just corporations, clients, suppliers, and competitors, but also two new classes — elected politicians and appointed bureaucrats. All of these are independent players, with different agendas and goals, as well as unique options to cooperate, defect, ally and cheat.
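    A toy version of what such a simulation might look like is sketched below; this is purely my own illustration, not any particular commercial tool, and every payoff number in it is invented. Each class of player (firm, politician, bureaucrat) has its own propensity to seek or sell favors, and the price of favors responds to how much defection occurred in the previous round.

```python
import random
from dataclasses import dataclass

# Toy "extended market" simulation (illustrative only): firms, politicians,
# and bureaucrats are independent players who each round choose to cooperate
# (play by the rules) or defect (seek or sell favors).  Payoff numbers are
# made-up assumptions, not calibrated to anything real.

COOPERATE, DEFECT = "cooperate", "defect"

@dataclass
class Player:
    name: str
    role: str            # "firm", "politician", or "bureaucrat"
    defect_bias: float   # propensity to defect when favors are cheap
    payoff: float = 0.0

    def choose(self, favor_price: float) -> str:
        # Defect more often when buying (or selling) influence is cheap.
        p_defect = self.defect_bias / (1.0 + favor_price)
        return DEFECT if random.random() < p_defect else COOPERATE

def play_round(players, favor_price):
    """One round: everyone moves, payoffs accrue, and the favor price adjusts."""
    moves = {p.name: p.choose(favor_price) for p in players}
    n_defectors = sum(1 for m in moves.values() if m == DEFECT)
    for p in players:
        if moves[p.name] == COOPERATE:
            p.payoff += 1.0                    # ordinary market return
        else:
            p.payoff += 2.0 - favor_price      # rent-seeking return net of its price
    # Crude feedback: the more defection this round, the cheaper favors become.
    return max(0.1, favor_price - 0.1 * n_defectors)

random.seed(1)
players = [
    Player("Acme Corp", "firm", defect_bias=0.6),
    Player("Senator A", "politician", defect_bias=0.5),
    Player("Agency Chief", "bureaucrat", defect_bias=0.4),
]
favor_price = 1.5
for _ in range(50):
    favor_price = play_round(players, favor_price)
for p in players:
    print(f"{p.role:<11} {p.name:<13} payoff {p.payoff:6.1f}")
```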

  5. James Lindgren says

    Good stuff!

    I’m certainly not a Tullock or a Buchanan, but this is precisely what I am working on. I’m working on how authoritarianism was redefined in a way that bizarrely excluded statism, a development that took place in the 1940s and 1950s and has continued to dominate political psychology to this day.

    I also remember my professor James Coleman, the most eminent and influential sociologist at the U. of Chicago in the last half century, talking about approaching sociology with “methodological individualism” v. “methodological collectivism.”

    James Lindgren
    Northwestern University

    • says

      Robin, I was trying to translate some of your claims into my own conceptual framework, where they do make a genuine difference. I’m very happy to talk about normative error, and this means that there can certainly be normative biases (a primary use of the term ‘bias’ is of course to refer to such cases). But normative and empirical biases would be supported by VERY different kinds of evidence. And the notion of error involved might be very different, and possibly much weaker, on certain accounts of normative, evaluative, and some empirical claims (e.g. historical ones). It’s common to mix these up, and to support a given kind of claim with the wrong kind of evidence. And when a disagreement is at the level of ‘brute’ normative intuitions (‘deep human relations are simply more valuable than professional achievement’), it will often be extremely hard, or even impossible, to establish that one side is suffering from a bias. To focus on this particular example, thinking about it from a normative perspective raises, I think, very different questions from questions about the statistics of regret. For instance, our relation to our family involves special moral duties and concerns. And whether it was acceptable (or worthwhile) to neglect these duties and concerns for the sake of professional success may depend on whether one is, in fact, ultimately successful, and in what way (this question being the immediate topic of Bernard Williams’s ‘Moral Luck’).
