Bean Jars, Buffalo Herds, and Bubbles: The Impact of Free Markets on the Common Good (and Vice Versa)



February 21, 2024
David Goldman
Columnist, Asia Times

Summary

What can economic theory contribute to the debate between the New Right and classical liberals? We often identify “free markets” (minimum of government interference) with “efficient markets” (capable of processing all available information in real time). But if we consider how markets form long-term expectations, we find that government often plays a beneficial and even indispensable role in promoting market efficiency. The bright line between New Right dirigisme and Old Right libertarianism is at least in part the result of an oversimplified application of economic and financial theory. A richer consideration of the relevant theory helps to identify what specific government activities foster market efficiency. This may help to build a bridge across the present ideological divide.

Key Takeaways

Classical liberals frequently identify market efficiency with non-intervention by government, but some form of intervention in the market is unavoidable.

The question is how that intervention contributes to or detracts from the salutary synthesis of market freedom and efficiency.

If we consider how markets form long-term expectations, we find that government often plays a beneficial, even indispensable role in promoting market efficiency.

Conservative Perspectives by The Heritage Foundation is a series reflecting thought leadership from across the conservative movement on emerging policy topics and debates. This series provides a forum for diverse perspectives to be articulated and discussed. Nothing written here is to be construed as necessarily reflecting the views of The Heritage Foundation.

What can economic theory contribute to the debate between the New Right and classical liberals? We often identify “free markets” (minimum of government interference) with “efficient markets” (capable of processing all available information in real time). But if we consider how markets form long-term expectations, we find that government often plays a beneficial and even indispensable role in promoting market efficiency.

The bright line between New Right dirigisme and Old Right libertarianism is at least in part the result of an oversimplified application of economic and financial theory. A richer consideration of the relevant theory helps to identify what specific government activities foster market efficiency. This may help to build a bridge across the present ideological divide.

Supply-side economics—now part of the inventory of classical liberalism—began with the concern that market freedom and market efficiency cannot always be reconciled. Robert Mundell’s insight was that markets have trouble discounting the future income streams of households. Could state intervention of a certain kind actually increase both the freedom and the efficiency of markets on this point? Issuance of government bonds to cover an expected budget deficit following a pro-growth tax cut may increase the efficiency of markets by enabling them to discount future household income streams in the form of higher future tax revenues.REF

Mundell’s argument when it first appeared was anathema to the classical liberal establishment of the 1970s. It was embraced by neoconservatives who found in it a justification in economic theory for elevated government spending.REF

Mundell’s insights about capital market efficiency find a complement in Robert Merton’s explanation of asset pricing taking into account multiperiod expectations, as set forth in his Intertemporal Capital Asset Pricing Model (ICAPM).REF In an environment of rapid technological change, the price of hedges against changes in the investment opportunity set—the risk that new technologies will make today’s firms obsolete—may under some conditions become arbitrarily high. Market bubbles like the dot-com craze of the late 1990s can arise without government interference, as can market panics. Commercial markets are less prone to frenzies than financial markets are, but they can also generate perverse results. Network economies can produce natural monopolies with effects as harmful as those of monopolies created by government intervention.REF

I will argue that government support for research and development on the model of the Apollo Program and the Reagan Strategic Defense Initiative contributes to market efficiency by expanding the investment opportunity set. This result is consistent with the ICAPM proposed by Merton.

Market Freedom and Efficiency

Governments cannot help but influence long-term expectations. Classical liberals frequently draw a false dichotomy between intervention and non-intervention, but some form of intervention in the market is unavoidable. The question is how that intervention contributes to or detracts from the salutary synthesis of market freedom and efficiency. Bank regulation is a telling example; I will argue below that regulatory changes allowing augmented leverage in bank portfolios contributed mightily to the global financial crisis of 2008, although they went almost unnoticed when enacted.

A simpler case is the problem of depreciation of capital assets for tax purposes. The Internal Revenue Service cannot possibly know in advance how fast manufacturers’ capital equipment will actually depreciate. Neither can the manufacturers themselves. According to the Tax Foundation, the present system of multi-year depreciation constitutes a major obstacle to investment in manufacturing and other capital-intensive industries.REF The tax system embodies an industrial policy, but one that is prejudicial against industry. We will always have an “industrial policy” of some kind; the object of economic theory is to make explicit the consequences of the policies we choose.

If classical liberalism overlooks the ways in which public policy shapes long-term expectations, the New Right often overlooks the importance of market efficiency. If industrial policy gives politicians and bureaucrats the power to pick winners, the results will be rent-seeking, corruption, diminished wealth, and diminished incomes for the majority of citizens. Governments can spend usefully on public goods such as infrastructure and frontier scientific research, but private capital must assume the risk of particular outcomes. The concept of market efficiency is indispensable to successful industrial policy.

There is an enormous literature on the benefits of free markets; the ways in which a well-ordered polity benefits free markets are more subtle. Governments that do not interfere excessively in private transactions, legislatures that make law transparently, administrators who do not abuse their mandate, central banks that preserve the purchasing power of money, and police who ensure public safety are obvious prerequisites for well-functioning markets. Less obvious but just as significant are the ways in which governments pursuing the common good foster the functioning of free markets by helping investors to form long-term expectations.

I will discuss two examples that are highly relevant to our nation’s problems in productivity, innovation, and fiscal policy: the impact of government support for research and development (R&D) at the frontiers of science and management of the public debt.

What is a free market? No market is perfectly free. Society has a say in who can enter a market (licensing requirements for certain businesses and professions) as well as who can leave (bankruptcy law). If we passed a law requiring the public hanging of individuals who failed to pay their debts and assigned unlimited liability for corporate debt to stockholders, no one would borrow money and few would buy stocks. Some countries do not allow debtors a fresh start through bankruptcy and hence show less disposition to risky entrepreneurial ventures.REF

Governments interfere in markets in ways that may not be obvious. Below I will argue that an obscure regulatory move by the Federal Reserve set the stage for the 2008 financial crisis.

I will argue that a stylized concept of an efficient market is indispensable to the common good, even—indeed, especially—in heavily regulated areas of economic life. Regulation always brings the risk of advantage to special interests in the form of rent-seeking, suppression of entrepreneurial activity, the encouragement of herd behavior, and other undesirable results. The concept of a free market nonetheless provides a point of reference to minimize the damage of government interference in markets, even when regulation is necessary.

Proponents of free markets often are accused of fostering short-term results at the expense of long-term economic stability. We should investigate this claim to see what, if anything, it reveals about markets and regulatory policy.

Financial Regulation and the Common Good

An efficient market is one that accurately discounts the present value of future income streams. Obviously, no market is perfectly efficient; we do not, for example, assign a present value to our six-year-old’s lemonade stand. A vast venture capital industry attempts to assign values to emerging companies with varying degrees of success. The lion’s share of national income, moreover, accrues to households, whose individual future income streams are harder to discount than those of a corporation are.
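In formal terms (a generic statement of the discounting principle rather than a formula specific to this essay), a price P_0 is efficient when it approximates the discounted value of expected future cash flows:

P_0 = \sum_{t=1}^{T} \frac{E[CF_t]}{(1 + r_t)^t}

where E[CF_t] is the expected cash flow in period t and r_t is the appropriate discount rate. The practical question throughout this essay is how well markets can estimate those expected cash flows and how public policy shapes the estimates.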

I will argue that public policy plays a role in promoting the efficiency of markets and, thus, the common good by citing two examples: promoting technological innovation and managing the public debt.

How should we understand the workings of an efficient market? Portfolio-theory pioneer Jack Treynor explained the wisdom of markets in an often-cited 1987 article. Markets find the right price for securities, he argued, not because a few investors are smarter than the rest, but rather because the great majority of investors are allowed to make mistakes. It is the randomness of investors’ errors rather than the perspicacity of a minority that makes markets efficient. Although Treynor’s argument is familiar, it is worth citing at some length:

Market efficiency is a premise, not a conclusion…. The rationale asserts that investors aware of a discrepancy between price and value will expand their positions until the discrepancy disappears. The problem is that, as those positions expand, portfolio risk increases faster than portfolio return. Beyond a certain point, further expansion is irrational if the investors in question are risk-averse.
The standard rationale has another problem. It assumes that those investors who know the true value of a security expand their positions when that value exceeds the market prices, while those investors with a mistaken estimate of value don’t. But the latter also perceive a discrepancy between price and their estimate of value. In effect, the rationale assumes that those investors who are right know they are right while those investors who are wrong know they are wrong—an unlikely state of affairs.
Where does the accuracy of market prices come from, if not from a few determined investors who know they are right? It comes from the faulty opinions of a large number of investors who err independently. If their errors are wholly independent, the standard error in equilibrium price declines with roughly the square root of the number of investors.
But what assurance do we have that the investors’ errors are really independent?... Fortunately, the mechanism whereby a large number of error-prone judgments are pooled to achieve a more accurate “consensus” is not confined to finance, or even economics….
The mechanism is present even in traditional “bean jar” contests, where observers are asked to guess the number of beans filling a jar. How accurate is the mean of the guesses? How much more accurate than the average guess? Do shared errors creep into the guesses, hence into the mean?
Results of bean jar experiments conducted in the author’s investment classes indicate that the mean estimate has been close to the true value. In the first experiment, the jar held 810 beans; the mean estimate was 841, and only two of the 46 guesses were closer to the true value. In the second experiment, the jar held 850 beans, and the mean estimate was 871; only one of 56 guesses was closer to the true value….
In a second set of bean jar experiments, the observers were cautioned to allow (after recording their original guesses) for first, air space at the top of the jar and, second, the fact that the jar, being plastic rather than glass, had thinner walls than a conventional jar, hence more capacity for the same external dimensions. The means of the guesses after the first and second “warnings” were 952.6 and 979.2, corresponding, respectively, to errors of 106.2 and 129.2. Although the cautions weren’t intended to be misleading, they seem to have caused some shared error to creep into the estimates.REF

An important policy conclusion emerges from Treynor’s example: Because the efficiency of markets stems from the randomness of investors’ errors, any outside influence that causes errors to become correlated—in Treynor’s case, the warning about the thickness of the jar and the air space at the top—can undermine the functioning of markets. The merit of Treynor’s example is that it shows how easy it is to prejudice the guesses in one direction or another and distort the outcome of guesses that are no longer random.
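Treynor’s square-root logic can be illustrated with a small simulation (a hypothetical sketch, not taken from Treynor’s article; the noise and bias parameters are assumptions chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
true_count = 850      # beans in the jar, as in Treynor's second experiment
n_guessers = 56       # number of guessers in that experiment
n_trials = 10_000     # simulated repetitions of the contest

# Case 1: independent errors. Each guess is the true count plus private noise.
private_noise = rng.normal(0, 150, size=(n_trials, n_guessers))
independent_means = (true_count + private_noise).mean(axis=1)

# Case 2: correlated errors. A shared "warning" biases every guess the same way.
shared_bias = rng.normal(100, 30, size=(n_trials, 1))  # common to all guessers
correlated_means = (true_count + private_noise + shared_bias).mean(axis=1)

print(f"typical error, independent errors: {np.abs(independent_means - true_count).mean():.1f}")
print(f"typical error, shared bias:        {np.abs(correlated_means - true_count).mean():.1f}")
# Independent noise averages out: the standard error of the mean is
# 150 / sqrt(56), about 20 beans. The shared bias of roughly 100 beans
# does not shrink no matter how many guessers participate.
```

Doubling the number of guessers cuts the first error by roughly a factor of the square root of two but leaves the second essentially untouched.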

Regulation, tax policy, subsidies, and other government action can distort the market’s guesses about future returns on assets in the same way. Such influences may arise from government interference, of which countless examples come to mind. It is not too much of an exaggeration to assert that nearly all major market failures of living memory can be attributed to misguided government actions that led to correlated rather than random errors on the part of investors. Government intervention often harms the common good by impairing the efficiency of markets and the allocation of capital.

A relevant example is the financing of housing, which comprises about a third of household expenditure and supports the largest pool of lending to households. Before the great financial crash of 2008, the Federal Reserve allowed large banks to create Structured Investment Vehicles (SIVs) through which they could purchase AAA-rated assets with a paper-thin capital ratio. Banks were required to hold shareholders’ capital equal to 8 percent of their loan portfolios, except for supposedly default-proof AAA-rated securities, which required only 20 percent of the 8 percent capital requirement, or 1.6 percent. Banks could thus lever up AAA-rated securities 62.5 times (100 divided by 1.6).
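The capital arithmetic, sketched below with the figures cited in this section (the code is only an illustration), also explains the return-on-equity incentive discussed later:

```python
# Capital arithmetic behind the SIV trade, using the figures cited in the text.
base_capital = 0.08                    # 8 percent of the loan portfolio
aaa_risk_weight = 0.20                 # AAA paper carried one-fifth the requirement
aaa_capital = base_capital * aaa_risk_weight      # 0.016, i.e., 1.6 percent

max_leverage = 1 / aaa_capital                    # 62.5 times

spread = 0.003                         # 0.3 percent over the bank's cost of funds
return_on_equity = spread * max_leverage          # 0.1875, i.e., 18.75 percent

print(f"capital per dollar of AAA assets: {aaa_capital:.1%}")
print(f"maximum leverage:                 {max_leverage:.1f}x")
print(f"implied return on equity:         {return_on_equity:.2%}")
```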

As a Brookings Institution study of the origins of the financial crisis reports:

[B]anks looked for ways to circumvent the [capital] requirements. The favored means of getting around these mandated capital requirements became what were known as Structured Investment Vehicles (SIVs), an off-balance sheet SPV [Special Purpose Vehicle] set up by banks to hold MBS [Mortgage Backed Securities], CDOs [Collateralized Debt Obligations] and other long-term institutional debt as their assets. By dodging capital requirements, SIVs allowed banks to leverage their holdings of these assets more than they could on their balance sheets. To fund these assets, the SIVs issued asset-backed commercial paper (ABCP) and medium term notes as their liabilities, mostly with very short-term maturity that needed to be rolled over constantly. Because they obtained the legal title of “bankruptcy remote,” SIVs could obtain cheaper funding than banks could, and thus increased the spread between their short-term liabilities and long-term assets—and for awhile they earned high profits. SIV assets reached $400 billion in July 2007….REF

Jeffrey Friedman and Wladimir Kraus argue that the 2001 Recourse Rule “required banks to use five times as much capital for business and consumer loans as for mortgage-backed securities (MBS) that were rated AAA…. The Recourse Rule, in particular, appears to have encouraged U.S. banks to accumulate nearly a half-trillion dollars of triple-A MBS. This, we contend, was the proximate cause of the financial (i.e., banking) crisis.”REF

The problem is that true AAAs offered less yield than banks paid for deposits. Banks typically were rated single-A to double-A and therefore paid higher rates than true AAA borrowers, but the banks created “synthetic” AAAs by packing home equity loans, junk bonds, and other low-quality paper into structured products in which the lower-rated tranches absorbed the majority of defaults, protecting the higher-rated tranches. Issuing ratings for structured products became the ratings agencies’ biggest source of income, and the ratings agencies accordingly tweaked the numbers to produce artificial AAA securities that yielded slightly more than banks’ cost of funds.

In 2015, Standard & Poor’s (S&P) paid a $1.375 billion fine to the federal government to settle lawsuits alleging that:

[I]nvestors incurred substantial losses on Residential Mortgage Backed Security (RMBS) and Collateralized Debt Obligations (CDO) for which S&P issued inflated ratings that misrepresented the securities’ true credit risks. Other allegations assert that S&P falsely represented that its ratings were objective, independent and uninfluenced by S&P’s business relationships with the investment banks that issued the securities.REF

The structured credit market transformed low-quality mortgages including so-called liar’s loans (where banks allowed borrowers to file false information about income) into AAA securities. In 2014, Bank of America paid a $16.65 billion fine to the federal government for fraudulent practices in this market. According to a Justice Department press release:

The settlement includes a statement of facts, in which the bank has acknowledged that it sold billions of dollars of RMBS without disclosing to investors key facts about the quality of the securitized loans. When the RMBS collapsed, investors, including federally insured financial institutions, suffered billions of dollars in losses. The bank has also conceded that it originated risky mortgage loans and made misrepresentations about the quality of those loans to Fannie Mae, Freddie Mac and the Federal Housing Administration (FHA).REF

Altogether, American banks paid $110 billion in fines for mortgage-related chicanery, and the impact on households was devastating. The mortgage delinquency rate for single-family homes stayed between 0.4 percent and 2.5 percent from 1954 (the first year for which data are available) through 2007 but by 2010 had risen to 11 percent—one in nine single-family mortgages. The banks’ capital manipulation turned millions of Americans into criminals (filing false information on a mortgage application is a felony). According to one study, 58 percent of loans backing residential mortgage-backed securities not guaranteed by federal agencies were liar’s loans, and liar’s loans caused 70 percent of total losses in the mortgage market.REF

Of course, blaming the global financial crisis on a single regulatory tweak by the Federal Reserve is too simplistic. Other factors came into play, including the Clinton Administration’s demand that banks lower mortgage lending standards to accommodate minority borrowers with lower credit scores. That was a contributing factor, but its importance was vastly exaggerated by Republican commentators eager to shift the blame for the 2008 crash onto the previous Democratic Administration.

In my judgment, the incentive to augment leverage was a primary cause of the crisis. From 2002 to 2005, I served on the management committee of Bank of America’s investment banking division. Our business began with a management mandate to achieve an 18 percent or higher return on equity. The structured securities fraudulently rated AAA by S&P and other ratings agencies were designed to pay 0.3 percent above our cost of funds. With leverage of 62.5 times, we could buy them and earn a return on equity of 18.75 percent (0.3 x 62.5). Other banks behaved identically. The banking industry began with a short-term constraint—current return on equity—and created derivative instruments to satisfy it.

In 2008, I was the strategist for a hedge fund that created structured credit products, and I warned of the impending crisis a year in advance of the event.REF The errors of the investors were no longer random, as in Treynor’s bean jar example. Bank loan officers didn’t make random mistakes in evaluating mortgages. Instead, the errors became highly correlated as bad regulation guided bad banking practices.

This example illustrates two important caveats about Treynor’s insight.

First, the kind of information that derandomizes errors in the bean jar example can have a far greater effect in real-world examples.

Second, the difficulties that bedevil regulators in attempting to maintain efficient markets in mortgage credit—one of the largest and most important areas of economic activity—are not easily resolved by free-market principles. If the public does not trust the safety of banks, money will stay under mattresses rather than circulate. Mass withdrawals from money market funds in September 2008 nearly brought down the financial system, and a run on regional bank deposits in April 2023 nearly caused a crisis on a smaller scale. In both cases, the Federal Reserve intervened to restore trust.

Both public and corporate governance, though, have formidable obstacles to overcome. As global head of debt research at Bank of America from 2002 to 2005, I observed that bank managers had an incentive to disguise the risks they took. They were paid on an annual bonus cycle, while the risks might not emerge for several years. In effect, they defrauded the shareholders by front-loading their own compensation and back-loading risk. Bank management, mindful of market expectations for high returns on equity, actively discouraged the application of extant risk-modeling technology that would have counseled greater caution and lower profits in the short term.REF

Just how should banks be regulated? Milton Friedman famously proposed to eliminate bank regulation altogether to the point of allowing each bank to issue its own currency as in the Jacksonian era of wildcat banking.REF A Citibank dollar might be worth less than a JPMorgan Chase dollar, depending on market perceptions of their respective creditworthiness.

As a thought experiment, Friedman’s example is a helpful reductio ad absurdum. Why not let your 10-year-old set up a card table on the sidewalk, take deposits, and issue her own currency? The public doesn’t have adequate information to make credit judgments about banks, and the additional costs of calculating the value of a plethora of different bank-issued currencies would create chaos. Banks require regulation, supervision, and capital requirements; without them, the Federal Deposit Insurance Corporation could not prudently guarantee a large part of their deposits. The Silicon Valley Bank run of 2023 demonstrated just how important federal deposit insurance is.

But private equity loan funds and other “shadow banks” can do the same thing that banks do—lend to businesses—without much regulation of any kind. There is no government guarantee for investors in shadow banks, which raise funds from institutional investors, from the stock market, or by issuing bonds (mainly speculative-grade bonds). Whether such entities should be regulated and the extent to which they pose a systemic risk to the financial system are difficult questions.

In summary, the government’s ideal role in financial markets—to achieve bean jar experiment conditions in which investors’ errors are uncorrelated—is unattainable. Regulators at best can attempt to limit the degree to which errors are correlated. By allowing the banks to take on enormous amounts of leverage, the Federal Reserve generated several sets of correlated errors: Banks showed high returns on capital by suborning fraud from ratings agencies as well as individual mortgage applicants. The result was a bubble and a crash.

The Friedman alternative—leave banks and their customers to their own devices—is impractical. The only practical alternative is for regulators to observe closely how their actions may lead to correlated errors on the part of market participants and to make corrections as required. Even though a free market in financial products is unattainable in practice, the concept of the market mechanism as presented by Treynor provides a point of perspective for financial regulation.

It is clear that Treynor’s emphasis on the randomness of errors is a powerful conceptual tool that can provide guidance for regulators even in areas of economic life where a perfectly free market is a practical impossibility. In the case of financial regulation, the greatest common good—avoidance of market crashes—is attained by making the market as efficient as possible: that is, by minimizing the “correlatedness” of investor errors in response to regulation.

The Capital Asset Pricing Model

Economists like simple one-period examples—like Treynor’s bean jar example—for purposes of illustration, but investment decisions in the real world look ahead through multiple periods. That poses difficulties for the concept of market efficiency that Treynor encapsulated in the bean jar example. This is particularly clear in the Capital Asset Pricing Model (CAPM), which Treynor first introduced in 1961, three years before William Sharpe, who won a Nobel Prize for it. The multiperiod, or intertemporal, CAPM helps to clarify the role of public policy in fostering market efficiency.

Treynor’s CAPM expresses the way an efficient market prices assets, provided that transaction costs are minimal and the assets available to investors—the investment opportunity set—do not change. The portfolio of all assets in the CAPM has an expected return in excess of the risk-free rate (the interest rate on default-free assets like short-term government bonds). The price of every individual asset depends on its co-movement, or beta, with the overall market.

Treynor’s idea as elaborated by Sharpe and others is so simple and powerful that it is hard for investors today to think about portfolio selection without it. Every equity analyst on Wall Street uses the CAPM to calculate the theoretical cost of capital for a given company, depending on the covariance of its stock returns with the broad market. As Will Kenton writes in Investopedia:

Investors expect to be compensated for risk and the time value of money. The risk-free rate in the CAPM formula accounts for the time value of money. The other components of the CAPM formula account for the investor taking on additional risk.
The goal of the CAPM formula is to evaluate whether a stock is fairly valued when its risk and the time value of money are compared with its expected return. In other words, by knowing the individual parts of the CAPM, it is possible to gauge whether the current price of a stock is consistent with its likely return….
The beta of a potential investment is a measure of how much risk the investment will add to a portfolio that looks like the market. If a stock is riskier than the market, it will have a beta greater than one. If a stock has a beta of less than one, the formula assumes it will reduce the risk of a portfolio.
A stock’s beta is then multiplied by the market risk premium, which is the return expected from the market above the risk-free rate. The risk-free rate is then added to the product of the stock’s beta and the market risk premium. The result should give an investor the required return or discount rate that they can use to find the value of an asset.REF
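In symbols, the CAPM’s required return is E[R_i] = r_f + β_i(E[R_M] − r_f), where β_i = Cov(R_i, R_M)/Var(R_M). A minimal sketch of the calculation follows (the return series and the rate assumptions are hypothetical, chosen only to make the example run):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical weekly returns for the market portfolio and a single stock.
market = rng.normal(0.002, 0.02, size=500)
stock = 0.0005 + 1.3 * market + rng.normal(0, 0.01, size=500)  # built with beta near 1.3

# Beta is the covariance of the stock with the market, scaled by market variance
# (both computed with the sample, n-1, convention).
beta = np.cov(stock, market)[0, 1] / np.var(market, ddof=1)

risk_free = 0.04        # assumed annual risk-free rate
market_premium = 0.05   # assumed expected market return above the risk-free rate

required_return = risk_free + beta * market_premium
print(f"beta: {beta:.2f}  CAPM required return: {required_return:.1%}")
```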

If we assume that investors want the highest return for the lowest risk, they will select portfolios that offer the highest risk-return ratio. Harry Markowitz had shown earlier that for every given level of expected return, there exists a unique “efficient portfolio” that minimizes risk (as measured by the variance of portfolio returns) at that level of return. The set of these efficient portfolios describes a curve, “The Efficient Frontier.” The CAPM tells us that the best risk-reward ratio is obtained in the market portfolio (the portfolio of all assets in the investment opportunity set).
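The geometry can be illustrated by sampling random long-only portfolios over a few assets with assumed returns and covariances (a toy Monte Carlo approximation, not Markowitz’s exact solution; the numbers have no empirical status):

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed annual expected returns and covariance matrix for three assets.
mu = np.array([0.06, 0.10, 0.14])
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])

# Sample random long-only portfolios; record each one's return and volatility.
weights = rng.dirichlet(np.ones(3), size=50_000)
returns = weights @ mu
vols = np.sqrt(np.einsum("ij,jk,ik->i", weights, cov, weights))

# Near each target return, the efficient portfolio is the lowest-volatility one;
# tracing these minima across all return levels draws the efficient frontier.
for target in (0.07, 0.09, 0.11, 0.13):
    near = np.abs(returns - target) < 0.002
    print(f"return ~{target:.0%}: minimum volatility {vols[near].min():.1%}")
```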

CP02 Figure 1

As in the bean jar example, Treynor and other finance theorists proposed a simple and transparent idea that could not be reproduced in real life but nonetheless helped us understand markets better. As we saw, the messiness of market conditions subject to government regulation makes short work of Treynor’s premise that the randomness of errors makes it possible for market prices to embody the information available to investors.

The only problem with the CAPM is that it appears to have no explanatory power for observed stock prices. There is an enormous literature on empirical testing of the CAPM, including a 2003 paper by the prominent finance theorists Eugene Fama and Kenneth French.REF Fama and French proposed to add additional factors to the CAPM, in which beta (covariance with the overall market) is the single determinant of relative equity prices. These additional factors included market capitalization and value vs. growth. Finance theorists have offered any number of multi-factor models to enhance the CAPM, with mixed results.

There is a simple reason why the CAPM doesn’t explain stock prices. Not one of the original components of the Dow Jones Industrial Average (DJIA) is still in the index. General Electric lasted until 2018. When the DJIA began in 1896, its original 12 members included American Cotton Oil, American Sugar, American Tobacco, Chicago Gas, Distilling & Cattle Feeding, Laclede Gas, National Lead, North American, Tennessee Coal and Iron, and other long-dead enterprises. Mice are always eating the dinosaurs’ eggs. Few of the tech giants that dominate stock market capitalization today existed a generation ago. An efficient portfolio composed of stocks available for purchase in 1990 would have underperformed the market massively. We know that today’s winners will be tomorrow’s casualties, but we don’t know who the new winners will be.

In 1973, Nobel Laureate Robert Merton produced a theoretical explication of this simple idea. The CAPM is a one-period model that assumes the investible universe (the investment opportunity set) will be the same tomorrow as it is today, but a multiperiod, or intertemporal, model must take into account prospective changes in the investment opportunity set. In periods of rapid technological advance, changes in the investment opportunity set can be rapid and disruptive. Merton’s Intertemporal CAPM tells investors that they should own the market portfolio (as in the CAPM) and own a hedge against changes in the investment opportunity set.REF
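In its commonly cited two-factor form (with a single state variable such as the interest rate; this is a standard textbook rendering rather than a quotation from Merton), the ICAPM prices assets with two betas instead of one:

E[R_i] - r = \beta_{iM}\,(E[R_M] - r) + \beta_{iN}\,(E[R_N] - r)

where M is the market portfolio and N is the portfolio that hedges changes in the state variable describing the investment opportunity set. The second term, absent from the one-period CAPM, is the price of the hedge, and it is this term that can become arbitrarily large when investors stampede into an apparent hedge against technological obsolescence.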

For all of Merton’s elegant mathematics, his Intertemporal CAPM offers little practical advice to investors. Just how should investors hedge against changes in the investment opportunity set? In practice, changes in the investment opportunity set mainly mean changes in technology, which we can hedge by investing in tech startups, venture capital funds, and so forth. The only practical suggestion in Merton’s 1973 article is to “consider constructing a ‘man-made’ security (e.g., a long-term bond) which is perfectly negatively correlated with changes in the interest rate, and hence, by assumption, not correlated with any other asset, or the market.”REF In the CAPM, investors expect to earn the risk-free (short-term government bond) rate plus the equity risk premium. Long-term bonds hedge against changes in the short rate and thus provide a hedge against a change in at least part of the investment opportunity set: namely, the risk-free rate.

However, when we think of investing in hedges against changes in the investment opportunity set, the problem is that the winners start so small and grow so fast that it is impossible to identify them far enough in advance. Who knew when Amazon went public in 1997 that an unprofitable Internet bookseller with just 340,000 customer accounts would turn into America’s most powerful retailer and provider of computer services? Disruptive innovations are true singularities, not moments in a continuous process of the sort that finance theorists can put into mathematical models.

This recalls the story of the hot-air balloonist who is carried off by a storm and flies all night until he comes to rest in a cornfield. The balloonist hails a passerby and asks, “Where am I?” The passerby replies, “You’re in a cornfield.” The balloonist says, “You must be a finance professor: You’ve given me information that is entirely accurate but completely useless.” Retorts the passerby, “You must be a stock investor. You don’t know where you are, and you don’t know how you got here, but you want me to fix your problem.”

Think of a herd of buffalo under two conditions, normal and stampede. Under normal conditions, the grazing pattern of animals can be modeled as random (Brownian) motion.REF The individual ramblings of each animal seem random. Nonetheless, the herd must move in order to find new grass to consume. Somehow, a few individuals in the herd initiate movement, and a trend arises that guides the individual members of the herd to a new patch of grass. That is a close analogy to how finance theorists understand the way that the stock market processes information: A few investors with superior judgment of available information guide the market, so to speak, to greener pastures, which is possible when the errors of the vast majority of investors are random, as in the bean jar experiment.

That is the normal behavior of a herd. What happens if new information such as the appearance of predators or lightning suddenly confronts the herd? In that case, the movement of individual members of the herd becomes highly correlated, and all of the animals run in the same direction. That’s a stampede.
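A toy simulation makes the contrast concrete (purely illustrative; the parameters are arbitrary): under grazing, each animal takes independent random steps and the herd’s center stays put; under stampede, a common shock dominates every step and the whole herd moves together.

```python
import numpy as np

rng = np.random.default_rng(3)
n_animals, n_steps = 200, 100

# Grazing: each animal takes independent (Brownian-like) steps.
grazing = rng.normal(0, 1, size=(n_steps, n_animals)).cumsum(axis=0)

# Stampede: a shock common to the whole herd dominates each animal's own noise.
common_shock = rng.normal(0.5, 1, size=(n_steps, 1))
private_noise = 0.3 * rng.normal(0, 1, size=(n_steps, n_animals))
stampede = (common_shock + private_noise).cumsum(axis=0)

for name, herd in (("grazing", grazing), ("stampede", stampede)):
    center, spread = herd[-1].mean(), herd[-1].std()
    print(f"{name:9s} final center {center:7.1f}, spread {spread:5.1f}")
# Independent steps leave the herd's center near zero while individuals wander;
# the correlated shock carries the entire herd far in one direction.
```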

We observe similar stampedes in the stock market when an information shock confronts investors—for example, the dot-com bubble of the late 1990s or the generative AI fad of the spring of 2023. An early example of a stock market bubble associated with a technological shock is the so-called Railway Mania in the United Kingdom of 1843–1845, when the price of railway stocks doubled, leading to the Panic of 1847. Similar shocks can come from disease, as in the COVID panic of March 2020, and from civil unrest, natural disaster, or war.

In this case, the correlated behavior of investors in stock market bubbles and busts is not the result of government meddling in free markets, but rather is a market response to a technological shock. Fear of being left out—of failing to hedge against a change in the investment opportunity set—motivates a stampede into assets that appear to provide such a hedge. Merton’s Intertemporal CAPM under such conditions can generate arbitrarily high prices for certain tech stocks.

The failure lies not in the market (or in the herd of buffalo), but in the exogenous shock. When a single technology appears on the horizon as the agent of changes in the investment opportunity set, investors’ errors cannot help but be correlated. That is what happened during the dot-com bubble of the late 1990s. Suppose an entrepreneur invents a universal machine that can do everything that the rest of the economy does, only more cheaply—stamp metal, analyze data, assemble televisions, cook hamburgers, ship packages, and so forth. The capitalized value of that machine would approximate the enterprise value of all the firms in the world. Investors would sell all their stocks and buy shares of the Universal Machine Company. Nothing like that has ever happened, but the principle applies to the buying panics in Internet and generative AI stocks.

Public Spending and Technological Innovation

It is noteworthy that no such bubbles or panics were associated with other periods of rapid technological change. The 1980s saw the introduction of fast and cheap semiconductors, the personal computer, displays, optical networks, cellular phone service, compact discs, the camcorder, and other disruptive technologies. Arguably, the technological shock brought about by the PC and the cell phone represented a high-water mark for American innovation. Yet the volatility of the stock market was notably lower during the 1980s than during the 1990s. S&P volatility reached new highs during the dot-com boom of the late 1990s, exceeded later by the 2008 crash and the COVID shock.

 

CP02 Chart 1

The most plausible explanation for the relative tranquility of the stock market during the 1980s is that the technological shocks were widespread and diverse, unlike the 1990s when a single technology (the Internet) drove a disproportionate share of investor interest. Returns to innovations are inherently uncorrelated: The market does not know the future value of products that do not yet have customers, let alone the correlation of their future value with other products that do not yet have customers. A broad range of innovation therefore introduces a high degree of randomness of investor errors, mimicking the conditions of Treynor’s bean jar experiment.

What accounted for the diversity of technological innovation between the 1960s and the 1980s? The imperative to leapfrog the Soviet Union in the space race and the development of military technology during the Cold War were the principal drivers of scientific innovation. As I wrote in a 2023 monograph for the Claremont Institute:

The requirements of national defense have always been the great driver of American innovation. Every component of the digital age, including fast and inexpensive integrated circuits, plasma and LED displays, the GUI interface, optical networks, and the internet began with a grant by the Defense Advanced Research Projects Agency to a corporate laboratory.
CMOS chip manufacturing began with DARPA [Defense Advanced Research Projects Agency] grants to Fairchild Semiconductor and RCA Labs, originally with the aim of enabling weather forecasting in military aircraft. It became the standard process for chip manufacturing, used for 99 percent of integrated circuit chips by 2011. RCA commercialized the process in the late 1960s (when Dr. Henry Kressel was the corporate vice president in charge of RCA Labs). With a DARPA grant initially intended to improve nighttime illumination of battlefields, RCA Labs perfected the semiconductor laser as a low-power light source for optical devices. Vast increases in data transmission through optical networks became possible, launching several new industries including cable television and, eventually, the internet. The Graphical User Interface (GUI) was developed with a DARPA grant to Xerox Laboratories in Palo Alto. This made possible a new kind of software as well as the computer mouse, invented by Douglas Engelbart at the Stanford Research Institute….
The Digital Revolution teaches us one basic lesson: Industrial policy will fail if it directs public capital to specific, established technologies. None of the definitive technologies that made the Digital Revolution were understood except in embryo before DARPA funded them. Creative engineers and scientists discovered technologies that no one could have imagined prior to their discovery and launched multi-hundred-billion-dollar industries that no one could have envisioned before the technologies became available.REF

There are exceptions to the rule that defense drives major technological breakthroughs (for example, Bell’s telephone), but these are harder to find during the 20th century. Every invention of the digital age began with government funding for basic research—mainly from DARPA or the National Aeronautics and Space Administration (NASA)—but every one of these inventions led to outcomes that were quite different from the objective of the original project. The Internet was intended as an alternative communications system in the event of war. The semiconductor laser that made possible optical networks was first commissioned to illuminate battlefields for night fighting.

A bright line separated government funding for basic research from the commercialization of technologies by private firms. The government’s interest was not in picking economic winners, but in obtaining superior defense technology. When new technologies emerged as byproducts of defense R&D, private investors embraced them and brought them to market at their own risk.

Well-functioning markets in Treynor’s framework—that is, markets that properly value the future earnings of firms—depend on the randomness of investor errors. In a multiperiod model incorporating technological change, the higher the rate of innovation and the more diverse the types of innovation, the better the functioning of markets. Government support for basic research is beneficial to market functioning provided that the government does not concentrate its efforts on a few favored sectors or pick winners among firms. In practice, military and space R&D has provided a technology driver for the overall economy. It demands frontier scientific investigation as well as practical results in fields as diverse as computation, electrical engineering, materials science, and hydrodynamics.

We have seen that a high degree of diversity in innovation helps to extend the randomized errors of the bean jar experiment to a multiperiod model that includes a hedge against the investment opportunity set. Government support for innovation can help to achieve this diversity if support for innovation is uncorrelated with existing industries. Historically, high-technology defense R&D has served this purpose best; it promotes innovation across a variety of scientific and engineering disciplines in a way that is orthogonal to the present investment opportunity set.

Public Debt, Income Growth, and the Common Good

Let us return to Merton’s example of long-term bonds as a hedge against the investment opportunity set in his Intertemporal CAPM. Governments can either impair or promote the public good through management of the public debt. Long-term government bonds are a hedge against changes in the short-term risk-free interest rate and, as such, serve a valuable portfolio function. Government bonds also provide an important element of portfolio liquidity. Merton’s understanding of the beneficial role of long-term bonds is consistent with Alexander Hamilton’s 1790 presentation of the benefits of a well-funded public debt and with Robert Mundell’s understanding of public debt issuance balancing potential revenue losses from a supply-side tax cut.

Households take the largest share of national income, but in an intertemporal context, discounting the present value of household income presents a difficulty. Households do not float initial public offerings or issue bonds on the strength of their future capacity to earn, as corporations do. The prospects of individual households are too uncertain. To some extent, the markets for consumer debt, especially home mortgages, fulfill this function, but they are subject to any number of constraints and distortions, as we learned in 2008 at great cost.

However, government debt backed by future household tax revenues rests on a broad base of households; thus, uncorrelated contributions diminish the prospects for catastrophic financial failure. Mundell first proposed in 1965 to measure the general economic effect of changes in tax and monetary policy according to the way they change the market’s willingness or capacity to discount future income streams. In Mundell’s framework, markets are always more or less imperfect because they are never able to discount all future income streams. If a tax cut leads to more economic growth and with it more household income, the Treasury’s tax revenues will rise. To finance the tax cut in the short term, the government may have to issue debt, but in this case, the debt is “well-funded” in Hamilton’s usage, and the increase in federal debt constitutes an increase in wealth.

That, Mundell argued, arises from the imperfection of markets: It is easy to assign a present value to the future cash flows of corporations in the form of corporate bonds but much harder to assign a present value to the future income of households. An increase in government debt that arises from government actions that cause an increase in future household income, such as a supply-side tax cut or spending on productive infrastructure, constitutes wealth just as bonds issued by a corporation for the purpose of financing productive investments do. The “man-made” asset class (in Merton’s phrase) created by governments makes markets more perfect.
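Schematically (my rendering of the argument, not Mundell’s or Hamilton’s own notation): debt D issued to finance a tax cut is “well-funded” when the discounted incremental tax revenues it generates cover it,

\sum_{t=1}^{\infty} \frac{\tau \, \Delta Y_t}{(1 + r)^t} \ \geq\ D

where \Delta Y_t is the additional household income attributable to the policy, \tau the tax rate, and r the government’s borrowing rate. The bond, in effect, securitizes a stream of future household income that markets could not otherwise discount.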

This is another example of the way in which public policy in pursuit of the common good (here, supply-side fiscal policy) contributes to the functioning of free markets.

When fiscal policy promotes economic growth, the incremental debt generated by that fiscal policy is “well-funded,” in Hamilton’s language. In his 1790 Report Relative to a Provision for the Support of Public Credit, Hamilton does not use the terminology of 20th-century economics, but his meaning is clear: A properly funded national debt constitutes part of a nation’s wealth:

It is a well known fact, that in countries in which the national debt is properly funded, and an object of established confidence, it answers most of the purposes of money. Transfers of stock or public debt are there equivalent to payments in specie; or in other words, stock, in the principal transactions of business, passes current as specie. The same thing would, in all probability happen here, under the like circumstances.
The benefits of this are various and obvious.
First. Trade is extended by it; because there is a larger capital to carry it on, and the merchant can at the same time, afford to trade for smaller profits; as his stock, which, when unemployed, brings him in an interest from the government, serves him also as money, when he has a call for it in his commercial operations.
Secondly. Agriculture and manufactures are also promoted by it: For the like reason, that more capital can be commanded to be employed in both; and because the merchant, whose enterprize in foreign trade, gives to them activity and extension, has greater means for enterprize.
Thirdly. The interest of money will be lowered by it; for this is always in a ratio, to the quantity of money, and to the quickness of circulation. This circumstance will enable both the public and individuals to borrow on easier and cheaper terms.
And from the combination of these effects, additional aids will be furnished to labour, to industry, and to arts of every kind.REF

Capital is the discounted present value of future income flows. A country may have enormous potential income, but if capital markets cannot establish the present value of that income, it does not turn into wealth. As Hamilton observed, that was the predicament of the United States in 1790; it had vast potential wealth in the form of land, but the chaotic condition of post-Revolutionary capital markets had destroyed the present value of that asset. As he wrote in his 1790 report:

The effect, which the funding of the public debt, on right principles, would have upon landed property, is one of the circumstances attending such an arrangement, which has been least adverted to, though it deserves the most particular attention. The present depreciated state of that species of property is a serious calamity. The value of cultivated lands, in most of the states, has fallen since the revolution from 25 to 50 per cent. In those farthest south, the decrease is still more considerable. Indeed, if the representations continually received from that quarter, may be credited, lands there will command no price, which may not be deemed an almost total sacrifice.
This decrease, in the value of lands, ought, in a great measure, to be attributed to the scarcity of money. Consequently whatever produces an augmentation of the monied capital of the country, must have a proportional effect in raising that value. The beneficial tendency of a funded debt, in this respect, has been manifested by the most decisive experience in Great-Britain.REF

If the national debt is not properly funded, Hamilton added, it has a “contrary tendency.”

But these good effects of a public debt are only to be looked for, when, by being well funded, it has acquired an adequate and stable value. Till then, it has rather a contrary tendency. The fluctuation and insecurity incident to it in an unfunded state, render it a mere commodity, and a precarious one. As such, being only an object of occasional and particular speculation, all the money applied to it is so much diverted from the more useful channels of circulation, for which the thing itself affords no substitute: So that, in fact, one serious inconvenience of an unfunded debt is, that it contributes to the scarcity of money.REF

The national debt and the activity of the state that it supports may be helpful or harmful to the economy, depending on the purpose for which the debt was created.

Persuaded as the Secretary is, that the proper funding of the present debt, will render it a national blessing: Yet he is so far from acceding to the position, in the latitude in which it is sometimes laid down, that “public debts are public benefits,” a position inviting to prodigality, and liable to dangerous abuse,—that he ardently wishes to see it incorporated, as a fundamental maxim, in the system of public credit of the United States, that the creation of debt should always be accompanied with the means of extinguishment. This he regards as the true secret for rendering public credit immortal. And he presumes, that it is difficult to conceive a situation, in which there may not be an adherence to the maxim. At least he feels an unfeigned solicitude, that this may be attempted by the United States, and that they may commence their measures for the establishment of credit, with the observance of it.REF

How well has the U.S. government managed the public debt during the past 80 years? A very rough but helpful gauge is the volatility of the U.S. 10-year Treasury note. The portfolio function of long-term bonds is to stabilize returns, and good public debt management should in general correspond to low volatility.

 

CP02 Chart 2

Chart 2 shows the volatility of the 10-year note yield (the rolling annualized standard deviation of weekly percent changes) between 1963 and 2023. Yield volatility reached a then-record of 20 percent in 1979 after the Federal Reserve raised interest rates sharply to counter inflation. The Reagan tax cut and the drop in inflation due to tight monetary policy led to a long period of stable yields.
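The measure plotted in Chart 2 can be reproduced along the following lines (a sketch only; the data source and series handling are assumptions, e.g., weekly observations of FRED’s DGS10 series):

```python
import numpy as np
import pandas as pd

def rolling_yield_volatility(yields: pd.Series, window: int = 52) -> pd.Series:
    """Rolling annualized standard deviation of weekly percent changes in yield.

    `yields` is assumed to be a pandas Series of weekly 10-year Treasury
    yields indexed by date (e.g., FRED's DGS10 resampled to weekly frequency).
    """
    weekly_change = yields.pct_change()                 # weekly percent changes
    # Annualize the rolling standard deviation by sqrt(52 weeks per year).
    return weekly_change.rolling(window).std() * np.sqrt(52)
```

On this measure, a reading of 40 percent, as in 2008, means that the annualized standard deviation of weekly proportional swings in the 10-year yield was roughly 40 percent of the yield’s level.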

Yield volatility set another record in 2008 at 40 percent during the world financial crisis. Remarkably, an extreme peak of volatility—at 70 percent—followed the government’s response to COVID in the form of a massive fiscal stimulus and a similarly massive expansion of the Federal Reserve’s balance sheet. The Reagan tax cut, we may conclude, was an example of the issuance of well-funded public debt in Hamilton’s sense. The 2008 and 2021 cases were not.

Long-term expectations are difficult to form accurately and are easily subject to correlated errors that lead to market distortions and even apparent failures—the equivalent of a buffalo stampede in response to an exogenous shock. Good public policy plays a crucial role in allowing markets to function efficiently: that is, to randomize investors’ errors.

David P. Goldman is Deputy Editor of Asia Times, a Senior Writer at Law & Liberty, and a Washington Fellow at the Claremont Institute’s Center for the American Way of Life.
