Good monetary policy helps Main Street America’s workers, retirees, and savers by ensuring that the economy does not stall due to an insufficient supply of money. It also helps Main Street by safeguarding against an excessive supply of money that could overheat the economy. To accomplish this task, the Federal Reserve (the “Fed”) needs to supply the amount of money the economy needs to keep moving, no more and no less, and it needs to do so in a neutral fashion, rather than allocate credit to preferred sectors of the economy. This standard dictates that the Fed maintain a minimal footprint in the market so that it does not distort markets, crowd out private credit and investment, create moral hazard problems, or transfer financial risks to taxpayers. Finally, the Federal Reserve should conduct monetary policy in a transparent manner, with maximum accountability to citizens through their elected representatives.
Throughout much of its history, and particularly since the 2008 financial crisis, the Federal Reserve has failed on all of these measures. The Fed’s misguided policies have distorted prices and interest rates, causing people to misallocate resources in ways that have exacerbated business cycles. Those policies have also increased moral hazard, fostering over-investment in areas—such as housing—where people would not otherwise have invested. After the recent crisis, the Fed failed to supply enough money when it was most needed, contributing to one of the worst crashes and slowest recoveries on record.
The Fed’s post-crisis policies have also contributed to interest rates on safe assets remaining at historically low levels, mostly harming retirees and others who depend on such assets for their income. Simultaneously, the Fed has essentially paid large financial institutions to refrain from lending to Main Street businesses, offering them risk-free interest to sit on cash. The Fed has been able to conduct these experimental monetary policies largely because Congress has given the Fed so much policy discretion. To correct these problems, Congress must first recognize that the Federal Reserve should not be trying to fine-tune the economy, much less protect lenders and investors from the consequences of their financial decisions.
The Fed Has Not Tamed the Business Cycle
Many economists take for granted that the Federal Reserve has tamed financial crises, business cycles, and inflation—but the Fed’s long-term track record suggests otherwise. The Great Depression and the recent Great Recession—two of the worst slowdowns in U.S. history—as well as the savings and loan crisis, all happened on the Federal Reserve’s watch. Many claims of Fed success depend on comparisons of pre-WWI (World War I) data to post-WWII (World War II) data, thus omitting six separate economic downturns that occurred under the Fed’s stewardship. Regardless of the Fed’s performance during the inter-war period, several studies suggest that: (1) data deficiencies induce artificial volatility in key pre-Fed-era data; and (2) by some measures, there has been more economic instability under the Fed than there was before its creation.
Most modern macro-level data, as well as the procedures for compiling the data, did not exist before the Great Depression. The economists who began compiling these data series in the 1920s and 1930s did the best they could to estimate data from earlier time periods, and they clearly understood that their approximations were rife with potential errors. For the most part, however, their warnings have gone unheeded, as the conventional view that business cycles have been tamed solidified. Recently published research highlights the importance of those warnings. One study’s main findings can be summarized as follows:
- The official National Bureau of Economic Research (NBER) recession dates show a dramatic decline in the length of contractions over time. Accounting for data biases produces new dates that show the average length of recessionary periods in the post-WWII period is slightly longer than the average for recessions that occurred prior to WWI.
- The new dates suggest that the average loss of economic output is similar in the post-WWII era relative to the typical loss prior to WWI. However, the length of time it took for the economy to return to its previous peak level was nearly three months shorter in the pre-WWI period.
The new dates confirm that recessions were indeed more frequent in the pre-WWI era relative to the post-WWII time frame. However, when the entire Federal Reserve period is compared to the full pre-Fed period, the frequency of recessions did not decrease. Moreover, even excluding the inter-war period, the new dates suggest that economic contractions were shorter—and recoveries were faster—in the pre-Fed era than previously believed.
Another way of assessing stabilization policies is to examine the volatility in specific macroeconomic aggregates, such as unemployment and output, regardless of the official NBER business-cycle dates. Given the economic turmoil caused by the two world wars, many economists argue that the inter-war period should be ignored. Consequently, the post-WWII figure is typically used as evidence that stabilization policies—both monetary and fiscal—have reduced economic volatility. Published research suggests, however, that the apparent decline in post-war volatility (in both output and employment) is “a figment of the data.” Although many researchers use various pre-war data sets as if they were consistent with their post-war counterparts, newer studies have shown that doing so is unwise because the methods used to construct these pre-war data series accentuate cyclical movements.
Alternative Aggregates: Gross National Product
The standard pre-war Gross National Product (GNP) series is the Kuznets series, published in 1961. Another widely used pre-war series derives nearly all of its cyclical movements from the Kuznets series. The chief problem with the Kuznets series is that it derives pre-war GNP (for 1869 to 1919) by relying on disaggregated commodity output data. Kuznets assumed that the percentage deviation of GNP from its trend in any given sector of the economy was equal to the percentage deviation from trend in commodity output for a corresponding sector. As time progressed, it became possible to better evaluate this assumption, and research shows that correcting this issue results in new pre-war GNP estimates that are only slightly more volatile than the official post-war series.
For instance, the original Kuznets GNP series shows a standard deviation from trend of 4 percent for 1893 to 1927. This figure is roughly twice as volatile as the 2.1 percent variation in the U.S. Commerce Department’s official GNP series from 1951 to 1980. The estimates that adjust to account for the data bias, on the other hand, exhibit only a 2.8 percent standard deviation in GNP from trend between 1893 and 1927. Including the inter-war period in these comparisons shows a post–Federal Reserve economy that is much more volatile (5.7 percent variation from trend) than it was in the pre-Fed period.
It also is true that the data show less overall volatility beginning in the mid-1980s. In fact, the period from Fed Chairman Paul Volcker’s second term (beginning in August 1983) through the Alan Greenspan–led Federal Reserve (ending in 2006) is typically referred to as “the Great Moderation.” From 1984 to 2009, for instance, the official GNP series exhibited a standard deviation from trend of approximately 1.7 percent.
Alternative Aggregates: Unemployment Rates
The standard pre-war unemployment series, published in its completed form in 1964, is the data set constructed by Stanley Lebergott. There are several sources of excess volatility in these estimates, such as the reliance on disaggregated employment data for various sectors and types of workers. Lebergott also relied on the assumption that deviations from trend in employment were perfectly correlated with deviations from trend in output, an assumption that (it is now known) does not hold in the post-war data.
Correcting some of these issues results in unemployment rate estimates that are much less volatile than the original data set indicates. For instance, the original Lebergott series shows a standard deviation from trend of 2.5 percent for 1893 to 1927. The corrected estimates exhibit only a 1.4 percent standard deviation from trend between 1893 and 1927. The corrected figure is only moderately more volatile than the 1 percent variation from trend in the U.S. Bureau of Labor Statistics’ (BLS) official post-war unemployment rate series from 1951 to 1980.
Alternative Aggregates: Industrial Production
The main pre-war industrial production series, another measure of economic output, was compiled by Edwin Frickey for 1860 to 1914. Similar to standard pre-war GNP data, the Frickey series suggests that economic volatility has greatly declined in the post-war period. However, the Frickey series is based on a relatively small sample of commodities compared to the Federal Reserve’s official (post-war) industrial production series. Many studies have used the Frickey series as if it were the pre-war version of the Fed’s industrial production series, but research shows that these data sets are too different to combine in this manner. In contrast, an “apples to apples” comparison of pre-war to post-war periods that uses a consistent data series “[d]oes not reveal the dramatic damping of business cycle fluctuations apparent in the inconsistent series.”
Without making any adjustments for the data deficiencies, the standard Frickey series suggests that output volatility fell from 8.84 percent between 1866 and 1914, to 6.43 percent between 1947 and 1982. On the other hand, a replication of the Frickey series in the post-war period shows that the standard deviation of output growth rates fell from 8.84 percent between 1866 and 1914, to only 8.62 percent between 1947 and 1982.
Overall, these metrics show that “the common belief that the cycle has become more protracted over time is simply not borne out by either the old or the new pre-war estimates of GNP and unemployment.” Put differently, this line of research “challenge[s] the common belief that cycles in the forty years before the Great Depression were decidedly more severe than those in [the] post-war era.”
Another Look at the Fed’s Record on Inflation
The Bureau of Labor Statistics was not around in the 1700s, but the best available estimates suggest that the standard deviation of annual consumer price index (CPI) inflation was 5.96 percent from 1790 to 1912, and then fell to 4.96 percent between 1913 and 2013. However, the average rate of CPI inflation went from 0.22 percent to 3.35 percent, calling into question whether the 1 percentage point reduction in variability was worthwhile. Similarly, while the variability in inflation declined after the Fed received a formal price-stability mandate in 1977, the average rate of inflation has actually increased. For instance, the standard deviation of CPI inflation was only 2.78 percent from 1979 to 2013, but average CPI inflation was 3.74 percent during this period, even higher than its long-term average.
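The trade-off described above, slightly lower variability in exchange for a much higher average inflation rate, can be made concrete by compounding the quoted averages into cumulative price-level changes. The sketch below uses only the figures cited in the text; the period lengths are approximations.

```python
# Compound an average annual inflation rate over a period to see its
# cumulative effect on the price level (and hence on purchasing power).

def price_level_multiplier(avg_rate_pct: float, years: int) -> float:
    """Return how many times higher the price level ends up after
    `years` of inflation at `avg_rate_pct` percent per year."""
    return (1 + avg_rate_pct / 100) ** years

# Figures quoted in the text (period lengths are approximate).
pre_fed = price_level_multiplier(0.22, 122)   # 1790-1912 average CPI inflation
fed_era = price_level_multiplier(3.35, 100)   # 1913-2013 average CPI inflation

print(f"Pre-Fed era: prices roughly {pre_fed:.1f}x over the period")
print(f"Fed era:     prices roughly {fed_era:.0f}x over the period")
```

At the quoted averages, prices roughly hold steady over the entire pre-Fed period but rise more than twenty-fold over the Fed era, which is why the modest reduction in variability is a hard sell.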
The annual price data also shows that from 1790 to 2013, not counting the Civil War years, the single highest inflation rate in the nation’s history—20.49 percent in 1917—occurred on the Fed’s watch. The (nearly indistinguishable) pre-Fed maximum rate of 20.02 percent occurred in 1813. An alternative data series, consisting of quarterly inflation rates from 1875 to 2010, also shows that the highest rates of inflation in the U.S. occurred after the founding of the Fed. Some of the highest inflation rates in recent history occurred between 1973 and 1975, and between 1978 and 1982, but these rates (ranging from 6 percent to 13 percent) did not exceed the high rates of the early Fed era. From 1917 to 1920, for instance, annualized inflation rates for some quarters approached 40 percent.
Price Stability and High Inflation
The Federal Reserve currently focuses on the Personal Consumption Expenditure (PCE) index to gauge inflation, but it relied on CPI inflation prior to 2000. Regardless, high rates of inflation dilute the value of people’s cash holdings and are associated with stifled economic growth. Nevertheless, there is no objective measure of what constitutes “high” inflation, and the Fed officially “judges that inflation at the rate of 2 percent…is most consistent over the longer run with the Federal Reserve’s mandate for price stability and maximum employment.” In general, price stability refers to inflation that is low or stable enough so that people can ignore inflation when they make economic decisions, but the concept of price stability also lacks an objective measure.
In 1996, Fed Chairman Alan Greenspan stated that price stability means zero inflation “if inflation is properly measured.” Because many economists believe that official inflation numbers are biased slightly upward, Fed officials have set a positive value for its inflation target. In other words, if “true” inflation is zero, the official inflation numbers would still indicate some positive level of inflation, perhaps a bit higher than 1 percent.
Thus, consistently low rates of inflation are one type of price stability, although no particular statistical value precisely denotes low inflation. Similarly, low rates of variation in inflation are a type of price stability, but no specific value—regardless of which variability measure is used—objectively signifies that inflation is stable. Regardless, higher rates of inflation reduce purchasing power as time goes on, unless wages and rates of return adjust along with inflation. Evidence suggests that, on average, income does tend to rise along with inflation over time, although distortionary short-run effects cannot be ignored.
Relatively lower rates of inflation are clearly closer in spirit to price stability, even though there is little agreement on whether, for example, 1 percent or 3 percent is sufficiently low to declare inflation stable. Thus, many economists have no problem with the fact that the average inflation rate in the Federal Reserve era is a few percentage points higher than it was prior to the Fed’s founding. In fact, Fed policy has openly aimed at creating predictable “low” inflation to prevent a fall in the price level (deflation). Because the full Federal Reserve era includes many unique economic problems between the two world wars, many economists focus only on the post-WWII economic data.
Post-WWII vs. Post–Dual Mandate
By the end of WWII, explicitly “dealing with inflation” was a key component of the Fed’s macroeconomic stabilization policies, but the Fed did not operate under a formal price-stability mandate until 1977. Splitting the post-WWII time period into pre-mandate and post-mandate time frames, the CPI data reveal that average inflation rose from 3.56 percent (1948 to 1978) to 3.74 percent (1979 to 2013), while variability fell only slightly, from 3.03 percent to 2.78 percent. In other words, after Congress formally directed the Fed to focus on price stability, the average rate of inflation increased and its variability declined only modestly.
As these newly “stable” rates of inflation became the norm after WWII, a complicating factor known as persistence appeared in the inflation data. Generally speaking, this term indicates that any external shocks tend to influence future changes in inflation for a longer time than would be expected in the absence of persistence. This trait has important implications for monetary policy because it means that it has become very difficult to improve upon a basic, naïve forecasting model, which predicts that next period’s inflation will be equivalent to last period’s inflation.
In particular, the ability to predict inflation with various macroeconomic variables, such as “the unemployment rate, commodity prices, capacity utilization, the money supply, and interest rates,” has drastically declined since the mid-1980s. That is, there is little empirical support for using anything other than inflation itself to guide forecasts. More broadly, the persistence issue is “part of the general debate on whether the relatively stable inflation that characterized the so-called Great Moderation period (1985 until the Great Recession) was due to lower volatility of the shocks (better luck) or less persistence in the effects of the shocks, which could be partly attributed to better policy.” Regardless, this statistical trait means that the Fed has not, since at least the 1970s, had a solid empirical basis for trying to exploit a trade-off between inflation and unemployment.
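The persistence point can be illustrated with a toy simulation: when inflation is highly persistent, a naïve random-walk forecast (next period equals last period) beats a forecast based on the long-run mean. This is a hypothetical illustration with made-up parameters, not an estimate from actual CPI data.

```python
import random

random.seed(42)

# Simulate a persistent AR(1) inflation process (hypothetical parameters):
#   pi_t = mu + rho * (pi_{t-1} - mu) + eps_t
mu, rho, sigma = 2.0, 0.95, 0.5
T = 500

pi = [mu]
for _ in range(T):
    pi.append(mu + rho * (pi[-1] - mu) + random.gauss(0, sigma))

# Compare two forecasts of pi_t made at t-1:
#   naive: last period's inflation;  mean: the long-run average mu
naive_mae = sum(abs(pi[t] - pi[t - 1]) for t in range(1, T + 1)) / T
mean_mae = sum(abs(pi[t] - mu) for t in range(1, T + 1)) / T

print(f"Naive forecast MAE: {naive_mae:.2f}")
print(f"Long-run-mean MAE:  {mean_mae:.2f}")
```

With persistence this high, last period's inflation carries nearly all of the usable information, which is the sense in which the naïve model is hard to beat.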
Deflation Is Not Synonymous with Depression
A falling price level can be particularly harmful when, for example, a drop in demand leads to a deflationary spiral (widespread, rapid price decreases) from which businesses are unable to recover. Therefore, many economists argue that central banks should target positive inflation rates specifically because doing so helps to avoid deflation. This view, however, conflates deflation with depression; evidence shows that deflation and severe economic contractions are separable. One study that surveyed nearly 20 countries documents “many more periods of deflation with reasonable growth than with depression, and many more periods of depression with inflation than with deflation.” This finding is consistent with broader price theory because deflation can be the byproduct of a healthy, growing economy.
As business owners take advantage of new technology, for example, they produce more products at a lower cost, thus enabling consumers to buy more goods at lower prices. In the U.S., average prices have rarely fallen since WWII even though the Fed did not have a formal inflation target until 2012. In fact, the annual CPI has fallen from its previous level only twice since 1950 (in 1955 and 2009). Thus, to whatever extent the Fed has successfully influenced inflation, it has done so by virtually eliminating deflation—even the kind that is fully expected in a growing economy. Despite these long-term results, many economists argue that the Fed should target a higher inflation rate.
One such argument is that higher inflation helps to increase employment because it reduces inflation-adjusted (“real”) wages. According to this view, while nominal wages rarely fall, inflation lowers the “real” cost of hiring workers, thereby “greasing the wheels” of the labor market. A second argument for targeting higher inflation is that it can provide a central bank more flexibility to stimulate the economy with interest rate cuts when nominal interest rates are near zero. In the case of very low/near-zero nominal rates, this theory holds that inflation-adjusted (“real”) interest rates can be pushed down to negative values, even if the central bank simply raises the expected level of inflation.
There are several problems with these ideas. First, the Federal Reserve does not have precise control over interest rates. The Fed can certainly influence interest rates but, as the last crisis shows, it can easily lose the ability to influence even the policy rate over which it has the most influence. Aside from the question of how high nominal rates might have to be to ensure the Fed could still influence rates downward during a crisis, the Fed clearly followed rates downward after September 2007 when it began lowering its target federal funds rate from 5.25 percent to 1 percent in little more than one year. The Fed then had to scrap the idea of a single target rate in favor of a target range (from 0 percent to 0.25 percent) and nearly abandoned interest rate targeting altogether.
If the Fed did have tight control over interest rates, there would have been no such sudden drop in rates: The Fed would have prevented them from falling in a manner that jeopardized its core approach to monetary policy. Furthermore, if a nominal federal funds rate exceeding 5 percent provides insufficient room for the Fed to stimulate the economy and head off a downturn, short-term rates would have to (somehow) be kept well above their long-term average. The fact that the Fed does not have precise control over interest rates suggests that such a policy is a recipe for, among other problems, high inflation.
Another problem is that, over time, average compensation tends to rise with productivity, which suggests that nominal wages do not need to fall in order to help labor markets function smoothly. The grease-the-wheels story also ignores the possibility that higher inflation might have the opposite effect on other aspects of the labor market, thus cancelling out any possible benefit from inflation. That is, inflation could also put “sand in the wheels” of the labor market by distorting other prices. Though this issue is not completely settled, there is evidence that these two effects—grease-the-wheels versus sand-in-the-wheels—may largely cancel each other out in labor markets.
It is clear that the long-term purchasing power of the dollar has dramatically declined, so it is natural that anyone not lucky enough to receive a compensating salary increase every year does not view the reduction in inflation variability as a great improvement. People in Main Street America understand that the Wal-Mart business model of low prices benefits them, so they question a policy of steadily inducing inflation. Many citizens rightly view a policy of constantly creating inflation as one that prevents them from enjoying the good type of deflation that a growing capitalist economy would normally produce.
Even Constant Low Inflation Policies Harm Main Street
All of the arguments for constantly imposing inflation on the economy ignore that even if the Fed could consistently hit a 2 percent (or higher) inflation target, it would still distort prices throughout the economy and harm Main Street Americans. Aside from the fact that not all workers automatically receive wage adjustments for inflation, choosing the “right” inflation target depends on supply-side factors that dictate whether the overall price level should rise or fall. If, for instance, an oil shortage causes higher prices throughout the U.S. economy, it would make little sense for the Federal Reserve to shrink the money supply in hopes of lowering the inflation rate.
This type of productivity setback, due to higher input costs, and the corresponding shortage of goods at higher prices, calls for an opposite movement away from the Fed’s long-term inflation target. To tighten, rather than loosen, the money supply at such a time would exacerbate the shortage for the sake of getting to a lower inflation rate. On the other hand, if a drastic improvement in computer technology leads to lower prices throughout the economy, it would be unwise for the Fed to expand the money supply in hopes of raising the general price level. In such a case, productivity gains due to lower input costs allow firms to drop their prices, and the corresponding surplus of goods at lower prices calls for an opposite movement from the Fed’s long-term inflation target.
To expand the money supply at such a time would exacerbate the surplus of goods for the sake of getting to a higher inflation rate. Expanding the money supply in the face of such productivity gains would likely lead to inflated profits and a corresponding overinvestment in certain sectors of the economy that, eventually, would exacerbate a downward economic cycle when expected additional demand fails to materialize. It appears that the Fed made exactly this mistake in the early 2000s, exacerbating the downturn in the national housing market that began in mid-2006.
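This supply-side logic can be sketched with the equation of exchange, MV = PY (a standard accounting identity, not a formula drawn from this report). Holding the money supply and velocity fixed while real output grows forces the price level down, which is exactly the benign deflation a productivity gain should produce.

```python
# Equation of exchange: M * V = P * Y  =>  P = M * V / Y
# If money (M) and velocity (V) are held constant while real output (Y)
# grows 3 percent, the price level falls by roughly 3 percent.

M, V = 1000.0, 2.0                  # hypothetical money stock and velocity
Y_before, Y_after = 500.0, 515.0    # real output grows 3 percent

P_before = M * V / Y_before
P_after = M * V / Y_after

deflation_pct = (P_after / P_before - 1) * 100
print(f"Price level change: {deflation_pct:.2f}%")   # about -2.9%
```

Expanding M to push P back up in this scenario is precisely the overreaction the preceding paragraphs warn against.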
Excessively Easy Monetary Policy: Early 2000s
The Fed has based its monetary policy on targeting the federal funds rate for years, and one key consideration in this process is where the Fed sets its target relative to the natural (or neutral) federal funds rate. The natural rate represents an equilibrium rate, whereby the supply and demand for investments and assets are in balance. Thus, pushing interest rates above (below) the natural interest rate can cause people to make fewer (more) investments/asset purchases than they would have made, therefore throwing the economy out of balance and exacerbating business cycles. If the Fed achieves a neutral policy stance, where the federal funds rate is equal to its natural rate, monetary policy will contribute very little to either booms or busts. Aside from the fact that the Fed cannot simply adjust interest rates as it sees fit, a major problem for policymakers is that the true natural rate can only be estimated.
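One common benchmark for judging whether the policy rate sits above or below a neutral stance is the Taylor rule. The sketch below is a hypothetical illustration of that benchmark, with assumed values for the natural real rate and the output gap, and is not the method used in the studies this section draws on.

```python
def taylor_rule_rate(inflation: float, target_inflation: float,
                     output_gap: float, natural_real_rate: float = 2.0) -> float:
    """Classic Taylor (1993) prescription for the nominal policy rate:
    i = r* + pi + 0.5*(pi - pi*) + 0.5*(output gap)."""
    return (natural_real_rate + inflation
            + 0.5 * (inflation - target_inflation)
            + 0.5 * output_gap)

# Stylized early-2000s illustration (assumed inputs, not official estimates):
# inflation near 2 percent, a 2 percent target, and a closed output gap.
prescribed = taylor_rule_rate(inflation=2.0, target_inflation=2.0, output_gap=0.0)
actual_target = 1.0   # the Fed's federal funds target in mid-2003

print(f"Rule-prescribed rate: {prescribed:.2f}%")   # 4.00%
print(f"Actual target rate:   {actual_target:.2f}%")
```

A policy rate several points below the rule's prescription is the kind of gap that leads researchers to describe the stance as below the natural rate, though real-world estimates of the natural rate and the output gap vary widely.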
Based on various estimates of the natural rate, evidence suggests that the Fed kept its federal funds rate target below the natural federal funds rate in the early 2000s, thus contributing to the housing boom. During this period, the Fed recognized the exceptionally strong productivity gains in the U.S. but chose to be overly accommodative with its policy stance. Rather than allow prices to fall, the Fed expanded the money supply in the hope of being able to further boost the economy while also avoiding higher inflation. Essentially, the Fed believed the downward pressure on prices gave it a free pass to further expand the economy without causing too much inflation. Former Fed Chair Alan Greenspan explained this strategy in a 2004 speech at the American Economic Association meetings:
As a consequence of the improving trend in structural productivity growth that was apparent from 1995 forward, we at the Fed were able to be much more accommodative to the rise in economic growth than our past experiences would have deemed prudent. We were motivated, in part, by the view that the evident structural economic changes rendered suspect, at best, the prevailing notion in the early 1990s of an elevated and reasonably stable NAIRU [non-accelerating inflation rate of unemployment]. Those views were reinforced as inflation continued to fall in the context of a declining unemployment rate that by 2000 had dipped below 4 percent in the United States for the first time in three decades.
An exchange between Kansas City Fed President Thomas Hoenig and Fed economist David Stockton, during the December 9, 2003, Federal Open Market Committee (FOMC) meeting, further elaborates what FOMC members were thinking:
We think that, going into 2006, we will have some continued acceleration in underlying potential output that is being driven by the speed-up in investment spending that we expect to get over the next two years. So we believe we can enter that year with a below-equilibrium funds rate and still not generate any acceleration of inflation until later in 2006.
The FOMC was clearly aware that it was overly accommodative due to the extraordinary increase in productivity, and it was clearly willing to maintain that policy stance as long as inflation stayed (in its view) under control. Thus, the Fed’s policy mistake was that, in an effort to further boost the economy, it failed to tighten in response to productivity growth in the early 2000s.
While it would be unfair to place all of the blame for the housing crash on the Fed’s monetary policies, it is clear that the Fed accommodated the increased credit that was used to fuel the housing boom. Thus, the Fed bears some responsibility for the housing crash and its collateral damage, namely massive unemployment, millions of home foreclosures, and billions of dollars in lost wealth. So many resources—including labor—were directed into housing and housing-related markets during the boom that it has taken years for workers and capital to shift into other sectors of the economy. The BLS estimates that:
Demand for residential construction grew from supporting 5.5 million jobs, or 4.2 percent of all U.S. employment, in 1996, to 7.4 million jobs, or 5.1 percent of total employment, at the peak of the cycle in 2005. As the housing market crashed, residential-construction related employment fell substantially; it was at 4.5 million in 2008, accounting for only 3.0 percent of total U.S. jobs.
From January 2008 to December 2008, total non-farm payrolls fell from approximately 138 million to 134 million, a drop of roughly 4 million jobs. With residential-construction-related employment down by nearly 3 million from its 2005 peak, roughly 75 percent of the drop in employment was housing related. Perhaps worse, the Fed compounded its earlier policy mistakes when the crisis hit, worsening the downturn.
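The 75 percent figure follows directly from the BLS numbers quoted above, and the arithmetic can be checked in a few lines (the periods do not line up exactly, since residential employment is measured from its 2005 peak, so the share is approximate).

```python
# Housing-related share of the employment decline, using the figures
# quoted in the text (millions of jobs).

residential_peak_2005 = 7.4
residential_2008 = 4.5
payrolls_jan_2008 = 138.0
payrolls_dec_2008 = 134.0

housing_drop = residential_peak_2005 - residential_2008   # about 2.9 million
total_drop = payrolls_jan_2008 - payrolls_dec_2008        # about 4.0 million

share = housing_drop / total_drop
print(f"Housing-related share of the decline: {share:.1%}")   # roughly 72.5%
```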
Excessively Tight Monetary Policy: The Late 2000s
Pundits commonly claim that the Fed’s interest rate target cuts, which the central bank started in September 2007, prove that monetary policy could not have been too tight during the financial crisis. Such claims are simply incorrect. Although there is a stubborn fascination with interest rate target decreases and increases, even among some economists, interest rate target changes alone cannot signify whether monetary policy is excessively loose or tight. In general, the extent to which monetary policy is loose or tight simply cannot be determined only by observing changes in the fed funds target, the level of nominal interest rates, or the growth rate in the various monetary aggregates.
Nominal interest rates depend on both the demand and supply of credit, and monetary aggregates can grow too slowly or too quickly depending on the growth in demand for various types of assets. In other words, simply looking at changes in interest rates or the growth in monetary aggregates, without respect to the public’s demand for real assets, provides a misleading picture of what the monetary authority may have accomplished. Regardless of whether the Fed’s policy rate is above or below the natural interest rate, the Fed’s job is to prevent an economic collapse (a precipitous drop in aggregate demand) by providing system-wide liquidity, and if it tightens in any way during a crisis it would most likely worsen the downturn.
In fact, tightening at the wrong time is one mistake that the Fed has made repeatedly. Milton Friedman once observed that: “After the U.S. experience during the Great Depression, and after inflation and rising interest rates in the 1970s and disinflation and falling interest rates in the 1980s, I thought the fallacy of identifying tight money with high interest rates and easy money with low interest rates was dead. Apparently, old fallacies never die.” Even a cursory look at the previous trend in the Fed’s interest rate target suggests that the Fed’s policy stance could have been excessively tight. The Fed started raising its target rate in the middle of 2004, and did not lower it again until September 2007 (it rose from 1 percent all the way to 5.25 percent). Importantly, the growth rate of nominal gross domestic product (GDP), a measure of overall demand in the economy, started a downward trend in 2006, ultimately turning negative in the first quarter of 2008.
The mere fact that the Fed started lowering its target rate in September 2007 does not indicate that the policy stance was sufficiently accommodative, and the fact that aggregate demand started dropping suggests that it was not. Furthermore, there was no dramatic decline in the monetary base (currency plus reserves) from 2005 through August 2008, but the monthly rate of growth in the base was below the long-term average in 34 of 44 months (the rate turned negative in almost half of these months). Similarly, the rate of growth in the St. Louis Fed’s M1 Divisia index—an additional monetary aggregate—was below average in 38 of 44 months. Again, these sorts of measures only supply a superficial gauge of whether monetary policy was too tight or loose because they ignore the public’s demand for monetary assets, but aggregate demand did begin to fall during this period.
Beyond these measures, other Fed actions suggest that the central bank’s policy stance was excessively tight at exactly the wrong time, thus prolonging the recession. In particular, the Fed’s decision to begin paying interest on excess reserves in October 2008, a policy that was admittedly designed to “sterilize” the expansionary effects of asset purchases, was ill-timed and ill-advised. Indeed, given the Fed’s objective of preventing a deep recession (a collapse in aggregate demand), the decision to begin paying interest on excess reserves at this time was nothing short of bizarre.
In August 2007, at some of the earliest signs of a crisis, the Fed made the right move: it made net purchases of Treasury securities to ease credit conditions (that is, to avoid a general contraction in bank lending). Subsequently, through September 2008, the Fed made approximately $300 billion in emergency loans, but it chose to sterilize these loans so that an increase in bank reserves would not expand bank lending. That is, for every dollar it made in loans to financial institutions, it simultaneously sold a dollar of assets from its portfolio of Treasury securities. It did so for the sake of maintaining its federal funds rate and inflation targets. As a result, the Fed’s policies provided credit only to select firms rather than providing liquidity to the entire banking system, failed to prevent a collapse in aggregate demand, and likely prolonged the recession.
Government Credit Allocation Helps Some at the Expense of Others
In December 2008, the Fed began the first of three rounds of quantitative easing (QE), large-scale asset-purchase programs whereby the Fed purchased long-term Treasuries and the mortgage-backed securities (MBS) of Fannie Mae and Freddie Mac that were (at that time) held by private financial institutions. By the end of 2014, the Fed had purchased approximately $2 trillion each of long-term Treasuries and MBS, taking its balance sheet from less than $1 trillion to nearly $5 trillion.
These purchases, ostensibly, were designed to inject liquidity into the banking system, thus preventing a collapse in bank lending and a simultaneous collapse in the economy. However, as these purchases created excess reserves in the banking system, the Fed chose to pay interest on those excess reserves. As a result, instead of creating new money through additional lending and preventing (or lessening the severity of) a recession, the Fed's QE policies expanded only the amount of excess reserves in the banking system. Banks mostly held onto the cash that the Fed gave them when it executed all those securities purchases, so it is rather difficult to argue that these Fed policies did much of anything to expand the economy or prevent a collapse. The Fed now projects that it will pay $27 billion in interest on these excess reserves in 2017 (mostly to very large banks), with the amount rising to $50 billion by 2019.
These policies have allocated credit to the housing and government sectors: By the end of the QE programs, the Fed held roughly 25 percent of outstanding Treasuries and nearly one-third of outstanding MBS. For a bit of additional perspective, the commercial-banking sector's combined holdings of MBS and Treasuries are about $1.7 trillion, almost half the amount held by the Fed. Any private financial institution that undertook such an expansion would come under intense scrutiny by the Federal Reserve, the primary regulator of all bank-holding companies. At the very least, the Fed's actions have distorted prices in the housing market as well as the broader financial markets.
Because an increase in demand for Treasuries, all else constant, puts upward pressure on their price, it also puts downward pressure on their interest rates. Thus, the Fed’s policies, which increased the demand for low-risk financial assets, have contributed to the low-interest-rate environment experienced since the financial crisis. For instance, three-month certificate of deposit and one-year Treasury rates have been lower for the past decade than at any point since 1970. Individuals with low-risk asset preferences, therefore, have suffered lower returns than normal partly because of the Fed’s policies.
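The inverse relationship between a bond's price and its yield can be seen with a back-of-the-envelope calculation. The sketch below uses a hypothetical one-year zero-coupon security and made-up prices (not actual market data) purely to illustrate the mechanism described above: buying pressure that bids the price up pushes the implied interest rate down.

```python
# Illustrative only: the yield implied by a one-year zero-coupon security.
# Prices are hypothetical, chosen to show the price-yield mechanism.
def one_year_yield(price: float, face_value: float = 100.0) -> float:
    """Annualized yield from buying at `price` and receiving `face_value` in one year."""
    return face_value / price - 1.0

# With weaker demand the security trades at 97; with heavy buying
# pressure (e.g., large-scale central-bank purchases) it trades at 99.
low_demand_yield = one_year_yield(97.0)   # ~3.1 percent
high_demand_yield = one_year_yield(99.0)  # ~1.0 percent

# The higher price implies the lower yield.
assert high_demand_yield < low_demand_yield
```

The point is mechanical rather than empirical: for any fixed future payoff, a higher purchase price necessarily means a lower rate of return, which is why sustained official demand for low-risk assets puts downward pressure on their yields.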
This balance sheet expansion by the Fed has diverted hundreds of billions of dollars in credit from the private sector to the federal government, a twofold problem because the private sector allocates credit more efficiently than the government and because it does so without directly placing taxpayers at risk for financial losses. Aside from distorting interest rates in credit markets, these policies have not made housing prices more affordable, and it does not appear that they have appreciably decreased mortgage interest rates.
These policies exemplify why a neutral central bank, rather than an independent central bank, is desirable. For a central bank to remain neutral, it must keep a minimal footprint in the private sector. A central bank that, for instance, purchases nearly one-third of an asset class cannot remain neutral. There is a fundamental speculative nature to all financial activity, a fact that further dictates that government agencies, including central banks, should undertake as little market activity as possible to maintain liquidity in the banking system. Although the Fed has episodically adhered to providing only system-wide liquidity, the Fed’s lending policies have gone against such a sound prescription for the bulk of its history.
Failure of Lender-of-Last-Resort Policies
Judged against the classic prescription for a lender of last resort (LLR), the Fed’s long-term track record is rather poor, and it has frequently jeopardized its operational independence and placed taxpayers at risk. During the recent crisis, the Fed allocated credit directly to a select few firms and did so indirectly through several broad lending programs. For instance, the Fed provided a $13 billion loan to Bear Stearns, one of the Fed’s largest primary dealers, on March 14, 2008. The loan was repaid in days, but then the Fed provided a $30 billion loan to facilitate J. P. Morgan Chase’s acquisition of Bear Stearns. Shortly after this deal was completed, former Fed Chairman Paul Volcker remarked that this loan was “at the very edge” of the Fed’s legal authority.
Separately, the U.S. Government Accountability Office (GAO) estimates that from December 1, 2007, through July 21, 2010, the Federal Reserve lent financial firms more than $16 trillion through its Broad-Based Emergency Programs. To put this figure in perspective: Annual gross domestic product reached $16.8 trillion in 2013, an all-time high for non-inflation-adjusted GDP in the U.S. During the crisis, the Fed created more than a dozen special lending programs by invoking its emergency authority under Section 13(3) of the Federal Reserve Act. One example of the emergency-lending programs carried out by the Fed in the wake of the 2008 crisis is the Primary Dealer Credit Facility (PDCF). By 2010, the PDCF provided nearly $9 trillion in overnight cash loans to primary dealers against “eligible collateral,” as defined by the Fed.
While Bear Stearns did use the PDCF before the Fed facilitated the Bear Stearns–J. P. Morgan merger, three other primary dealers—(1) Citigroup Global Markets, Inc.; (2) Merrill Lynch Government Securities, Inc.; and (3) Morgan Stanley & Co., Inc.—relied on the PDCF for more than double the amount that Bear Stearns borrowed. Of more than 20 primary dealers, almost 80 percent of all the lending through the PDCF went to just these four firms. Furthermore, the Fed made special concessions on the type of collateral accepted for these loans, and it provided PDCF loans at below-market rates. Evidence also suggests that the Fed provided favorable rates on most of its other emergency-lending programs. Bloomberg Markets magazine estimates that the below-market rates on the Fed's emergency loans from 2007 to 2010 amounted to a $13 billion subsidy for the borrowing firms.
Charging below-market rates to select firms, on suspect collateral, is the exact opposite of the classic LLR prescription. The goal should be to lend widely, as safely as possible, at high rates so that firms have every incentive to stop relying on the Fed for funds. Instead, the Fed effectively provided financial institutions with a source of subsidized capital for up to several years. These policies encouraged more risky behavior than would have otherwise taken place because the government accepted much of the downside risks for private firms (the well-known moral-hazard problem), and they also crowded out private alternatives as the Fed essentially became a lender of first resort.
The Fed’s Failure as a Regulator
The Fed’s actions leading up to the 2008 crisis also highlight the central bank’s failure as a financial market regulator. The U.S. central bank has been involved in banking regulation since it was founded in 1913, and it became the regulator for all holding companies owning a member bank with the Banking Act of 1933. When bank-holding companies, as well as their permissible activities, became more clearly defined under the Bank Holding Company Act of 1956, the Fed was named their primary regulator. Under the 1999 Gramm–Leach–Bliley Act, the Fed alone approved applications to become a financial holding company—and only after certifying that both the holding company and all its subsidiary depository institutions were “well-managed and well capitalized, and…in compliance with the Community Reinvestment Act, among other requirements.”
Although it would be unjust to place all of the blame on the Fed, the fact remains that the United States experienced major bank-solvency problems during the Depression era, again in the 1970s and 1980s, and also during the late 2000s. At best, the Fed failed to predict these crises; at worst, it was completely unaware of any major problems until they erupted. In 2008, for example, Fed Chairman Ben Bernanke testified before the Senate that "among the largest banks, the capital ratios remain good and I don't anticipate any serious problems of that sort among the large, internationally active banks that make up a very substantial part of our banking system." Simply being mistaken about banks' capital is one thing, but the Fed played a major role in developing the very capital ratios used to measure safety and soundness.
In the 1950s the Fed developed a “risk-bucket” approach to capital requirements, and that method became the foundation for the Basel I capital accords, which the Fed and the Federal Deposit Insurance Corporation (FDIC) adopted for U.S. commercial banks in 1988. Under these capital rules, U.S. commercial banks have been required to maintain several different capital ratios above regulatory minimums in order to be considered “well capitalized.” According to the FDIC, U.S. commercial banks exceeded these requirements by 2 to 3 percentage points, on average, for the six years leading up to the crisis. The Basel requirements sanctioned, via low-risk weights, investing heavily in MBS that contributed to the 2008 meltdown. Furthermore, the Fed was directly responsible for the recourse rule, a 2001 change to the Basel capital requirements that applied the same low-risk weight for Fannie-issued and Freddie-issued MBS to highly rated private-label MBS.
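The mechanics of the risk-bucket approach can be sketched in a few lines. The example below is a stylized Basel I-style calculation, not a complete regulatory model: the balance-sheet figures are hypothetical, and the weight buckets (0 percent for Treasuries, 20 percent for agency MBS, 50 percent for whole residential mortgages, 100 percent for ordinary commercial loans) are the simplified textbook versions of the Basel I categories. It shows how low risk weights let a bank raise its measured capital ratio by shifting into favored assets without changing total assets or capital.

```python
# Stylized Basel I-style risk weights (simplified buckets, illustrative only).
RISK_WEIGHTS = {
    "treasuries": 0.0,   # sovereign debt: 0% weight
    "agency_mbs": 0.2,   # agency-issued MBS: 20% weight
    "mortgages": 0.5,    # whole residential mortgages: 50% weight
    "commercial": 1.0,   # ordinary commercial loans: 100% weight
}

def risk_based_capital_ratio(capital: float, assets: dict) -> float:
    """Capital divided by risk-weighted assets (RWA)."""
    rwa = sum(amount * RISK_WEIGHTS[bucket] for bucket, amount in assets.items())
    return capital / rwa

# A hypothetical bank with $100 in assets and $6 in capital. Shifting from
# whole mortgages (50% weight) into MBS (20% weight) shrinks RWA and lifts
# the measured ratio, with no change in total assets or capital.
mortgage_heavy = {"treasuries": 10, "agency_mbs": 10, "mortgages": 60, "commercial": 20}
mbs_heavy = {"treasuries": 10, "agency_mbs": 60, "mortgages": 10, "commercial": 20}

print(risk_based_capital_ratio(6, mortgage_heavy))  # 6 / 52, roughly 11.5%
print(risk_based_capital_ratio(6, mbs_heavy))       # 6 / 37, roughly 16.2%
```

This arithmetic is why the low weights sanctioned by the Basel rules—and extended to highly rated private-label MBS by the 2001 recourse rule—tilted bank portfolios toward mortgage securities while leaving reported capital ratios comfortably above regulatory minimums.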
Though any one of the other federal financial regulators could have made the very same mistakes, a central bank does not need to be a financial regulator in order to conduct monetary policy. Allowing the Fed to serve as a financial regulator increases the likelihood that policy decisions will be compromised as the Fed’s employees become embedded in the financial firms they are supposed to be overseeing. The fact that Dodd–Frank imposed a nebulous financial stability mandate on the Fed only increases this possibility. Aside from these recent changes, it is completely unnecessary for the U.S. central bank to serve in a regulatory capacity, and removing the Fed from its regulatory role would leave at least five other federal regulators that oversee U.S. financial markets. The Fed is now micro-managing even more firms than it was prior to the 2008 crisis, despite the fact that the central bank has repeatedly failed to predict, much less prevent, financial turmoil.
Five Steps Congress Can Take to Fix Monetary Policy
The Federal Reserve has not fulfilled the long-term promise of taming business cycles, and its overall track record on inflation is not much better. These facts alone require Congress to question the Fed’s mission and role. Given that the Fed’s credit allocation policies, regulatory failures, and monetary policy mistakes—after 100 years to gain experience—worsened the most recent boom-and-bust cycle, Congress would be derelict in its duty if it allowed the Federal Reserve to continue operating under its existing ill-defined statutory mandates. To fix the nation’s monetary policy, so that it works for Main Street Americans rather than a select few firms, Congress should, at the very least, take the following five steps.
- Normalize and end experimental policies. In 2008, the Fed began aggressively expanding its balance sheet by purchasing large quantities of long-term Treasuries and mortgage-backed securities. These asset-purchasing programs lasted for five years, ballooned the Fed's balance sheet to almost $5 trillion, and spawned the use of experimental monetary policy tools because the flood of excess reserves stalled the federal funds market. Congress should require the Federal Reserve to announce (and enact) a specific plan to normalize its operations by shrinking its balance sheet, ending the payment of interest on excess reserves, and closing down its overnight reverse repurchase facility. Each of these actions can be undertaken with minimal disruptions over, for example, a five-year period. Reversing these crisis-era policies will restore balance in credit markets and, in particular, allow market forces to once again set rates in the federal funds market.
- Replace existing liquidity operations with an open process. The Fed conducts its open-market operations—buying and selling Treasury securities to implement monetary policy—with a limited number of financial firms known as primary dealers. The current primary dealer framework was created in the 1960s when a centralized open-market system in New York offered clearer advantages. The current system requires the Fed to depend on a small number of large financial institutions, thus making system-wide liquidity provision needlessly cumbersome and perpetuating the too-big-to-fail problem. Congress should require the Fed to conduct open-market operations with all counterparties currently eligible for discount window loans, and to do so in a single flexible auction framework that preserves system-wide liquidity during financial emergencies and also in normal times. Such a facility would draw on recent experience in both the U.S. and Europe.
- Restructure the Fed’s monetary policy mandate. Congress should hold the Fed accountable for maintaining a stable inflation rate, where the target rate is conditional on the rate of productivity growth, so that inflation rises above its long-run rate only when there are productivity setbacks, such as adverse supply shocks, and falls below its long-run rate only when there are exceptional productivity gains. Congress should not require the Fed to maximize employment or moderate interest rates. At best, monetary policy can have a short-term impact on such variables while, in the process, overly politicizing the central bank. For similar reasons, Congress should remove any financial-stability mandates beyond the Fed’s role of providing system-wide liquidity.
- Reduce implicit and explicit guarantees. Congress should reduce both explicit and implicit government guarantees in financial markets by ending the Fed’s emergency-lending authority and ending the Fed’s role as a financial regulator. Allowing the Fed to serve as a financial regulator increases the likelihood that policy decisions will be compromised as the Fed’s employees become embedded in the financial firms they are supposed to be overseeing. Regardless, a central bank does not need to conduct regulatory policy to conduct monetary policy, and at least five other federal regulators currently oversee U.S. financial markets. Throughout its history, the Fed’s emergency lending has allocated credit to select firms, rather than provide system-wide liquidity, helping give rise to the concept of “too big to fail.” The Fed can provide system-wide liquidity without separate emergency-lending authority, and reforming the open-market operations process only strengthens this point.
- Allow competition to improve money. Congress should ensure that all federal policies, including those of the Federal Reserve, remain neutral with respect to whichever mediums of exchange people decide to use. Nothing can provide as powerful a check on the government’s ability to abuse money as allowing competitive private markets to provide it. Suppressing such competition, if history is any guide, only deprives citizens of beneficial innovations in the means of payments. Allowing people to hold and use whatever money they prefer will not solve all economic problems, but neither will legal restrictions and government monopoly. There is no doubt that the full record of government stewardship over money is poor, and that competitive market forces push entrepreneurs to innovate and improve products—even money—to satisfy their customers.
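One simple way to formalize the productivity-conditional inflation target described in the third recommendation—a stylized sketch of my own, not a specification from this report—is to set the target equal to a long-run rate minus the deviation of productivity growth from its trend. The numbers below (a 2 percent long-run rate and 1.5 percent trend productivity growth) are hypothetical parameters chosen only for illustration.

```python
# Stylized sketch of an inflation target conditioned on productivity growth.
# Both parameters are hypothetical, for illustration only.
LONG_RUN_INFLATION = 0.02   # assumed 2% long-run inflation rate
TREND_PRODUCTIVITY = 0.015  # assumed 1.5% trend productivity growth

def conditional_inflation_target(productivity_growth: float) -> float:
    """Target rises above the long-run rate during productivity setbacks
    and falls below it during exceptional productivity gains."""
    return LONG_RUN_INFLATION - (productivity_growth - TREND_PRODUCTIVITY)

# Adverse supply shock (productivity growth of 0.5%): target rises to ~3%.
print(conditional_inflation_target(0.005))
# Exceptional productivity gains (2.5%): target falls to ~1%.
print(conditional_inflation_target(0.025))
# Productivity on trend: target equals the long-run rate of 2%.
print(conditional_inflation_target(0.015))
```

Under a rule of this form, the central bank is held accountable for a single observable outcome, while supply-driven swings in output are allowed to pass through to prices rather than being "fine tuned" away.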
It is difficult to argue that the Fed’s recent policy actions accomplished anything other than saving a favored group of creditors at the expense of all others. Rather than hold the Federal Reserve accountable for these mistakes, policymakers appear to have put even more faith in the Fed’s ability to influence interest rates and inflation, tame business cycles, and ensure the safety and soundness of financial markets. Meanwhile, economic growth remains anemic and people depending on low-risk assets for their income remain in a precarious position. Monetary policy under the current framework is clearly not working, and it is Congress’ duty to fix it.
—Norbert J. Michel, PhD, is Director of the Center for Data Analysis, of the Institute for Economic Freedom, at The Heritage Foundation.