The existence of business cycles, with their boom-and-bust patterns, has puzzled economists. While Marx may have offered a theory of why depressions occur, there was nothing in Marxism to explain the boom period that preceded the bust. Nor was there anything in Marxism that could coherently explain the economic recoveries that came after the busts.
William Stanley Jevons of England, an outstanding economist who developed a theory of marginal utility (using calculus) in 1871, surmised that business cycles could come about because of sunspots. Jevons held that sunspots might cause changes in agricultural productivity, which in turn would lead to boom-and-bust cycles. While that might seem intriguing, given that economies at the time were largely agrarian (although England and the United States were rapidly industrializing), his sunspot theory never gained traction and today is seen as significant only in its historical context.
The first truly systematic theory of boom and bust came from Ludwig von Mises in his book The Theory of Money and Credit, published in Vienna, Austria, in 1912. Mises was part of a line of economists from Vienna, beginning with Carl Menger in 1871, who solved the gnawing problem of the source of value and, at the same time, developed a systematic theory of economics that integrated money, consumption, production, and capital.
The classical economists of England held to a “cost of production” theory of value, in which supply and demand determined value in the short run but the cost of production determined it in the long run. Menger and his followers made short work of that fallacy by pointing out that the factors of production gain their value from how consumers value the final products those factors create. In his 1871 book, Principles of Economics, Menger noted that if demand for all tobacco products were to cease, the factors used to create tobacco goods either would lose their value altogether or would have to be shifted to other uses.
Value, the Austrians argued, was determined by the marginal utility of a good, that is, by the value of the use to which the next available unit of the good would be put. For example, Adam Smith was puzzled by the diamond-water paradox, in which water, which is essential for life itself, had a low value in the marketplace, while diamonds, which at that time were ornamental at best, had very high values.
The Austrians solved this paradox by pointing out that the values of which Smith spoke were the values of those goods on the margin. In other words, the value of any given unit of water tended to be low because such units were available in vast amounts. Diamonds, on the other hand, were rare, and therefore each available diamond tended to be expensive.
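To see the logic in stylized form (the utility numbers here are hypothetical, chosen only for illustration), marginal utility is the value of the next available unit, and it diminishes as more units become available:

\[ MU(q) = U(q) - U(q-1), \qquad MU(q+1) < MU(q) \]

If successive units of water are worth, say, 100, 80, 60, and so on to a person, the millionth available unit is worth almost nothing, while the first diamond that person can obtain might be worth 50. Market prices track the marginal unit, not the total: no one ever chooses between all water and all diamonds, only between one more unit of each.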
Mises extended the theory of marginal utility to money itself (something left out of most modern monetary theories), which enabled him to gain insights into the intricacies of a non-barter economy in which standard money serves as the main medium of exchange. Money, Mises argued, was absolutely essential to modern civilization and the economies that sustained it, but manipulation of the money in circulation would create problems, including boom-and-bust cycles.
The results of government monetary manipulation, from the great Roman inflations to periodic bouts of inflation in England and Europe, were well known, but Mises was able to derive a much larger picture of what happens during inflations. Economists generally understood that increasing the amount of money in circulation would result in higher money prices for goods. Indeed, when William Jennings Bryan ran his 1896 U.S. presidential campaign on a platform of massive coinage of silver, he did so in the belief that such a policy would raise agricultural prices and farm incomes, making the American agricultural sector stronger, and that a stronger agricultural sector would make the country as a whole more prosperous.
What Mises did in Money and Credit, however, was to examine the intricate effects of monetary expansion on the factors of production, noting that increases in the supply of money would change the relative values of those factors. That meant that the profitability of certain lines of production would increase during inflation, while that of others would decrease. This was a breakthrough, because the dominant view held that while inflation might result in higher prices, its effects on production were mostly neutral. In other words, an increase in the supply of money might raise consumer prices but would have only minimal effects on the decisions that producers made.
By claiming that inflation also would drive the economy into different lines of production, Mises was able to develop a comprehensive analysis of how booms and busts came about. Far from the Marxian (and later the Keynesian) view that recessions were the result of systematic underconsumption endemic to a market economy, Mises noted that when government monetary authorities, and especially central banks (such as the Federal Reserve System), sought to expand the money supply by pushing down interest rates, a distinct pattern would emerge, a pattern of what Austrians call malinvestment.
The Austrians define malinvestments as capital investments, driven by inflation and borrowed money, that cannot be sustained; they are unsustainable because consumer preferences point investment in a different direction from the one in which the inflation has pushed it. At some point, producers find that they have invested in error, and the capital investments they have made must either be abandoned or be applied to uses that consumer choices can sustain.
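One common way to illustrate the mechanism is with a present-value calculation; the payoff and interest rates below are hypothetical, but the discounting formula is standard:

\[ PV = \frac{C}{(1+r)^{t}} \]

A payoff of C = $1,000,000 due in t = 20 years is worth about $377,000 when discounted at r = 5 percent, but about $673,000 at r = 2 percent. When a central bank pushes interest rates below the level that savers’ actual time preferences would set, long-horizon, capital-intensive projects suddenly look profitable on paper. If consumers have not in fact become more future-oriented, those projects are the malinvestments: they pass the arithmetic test at the artificial rate and fail it once credit conditions return to reality.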
For example, the United States experienced a huge boom in the late 1990s, as investors jumped on the dot-com bandwagon, assuming that the new Internet startups were going to take off and be enormously profitable. When that unalloyed optimism spread to the stock market, a number of dot-com firms had initial public offerings in which the overall value of their companies was well out of kilter with market fundamentals. In a word, their stocks became overpriced, but during the boom, people made huge amounts of money simply betting that future stock values would be higher.
However, that pattern clearly was unsustainable, as the companies ran out of investor money to spend and consumer preferences had not changed as the investors had anticipated. The dot-com boom turned into the “tech bubble” and ultimately a bust by 2001. The Austrian patterns of boom and bust were clear in this bubble, just as they would be clear in the development of the housing bubble that would dominate the first decade of the 21st century.
Time and again, Austrians in the tradition of Mises, Murray Rothbard, and F.A. Hayek (who won the 1974 Nobel Prize in Economics for his version of the Austrian theory of the business cycle) have identified the patterns of boom and bust, even if that recognition has been ignored by most of the economics profession and certainly by modern politicians and journalists. However, Austrians have not only recognized the cause of those patterns, but have also prescribed the best cure for the economy once a crisis has begun.
Indeed, if Austrians are despised for pointing out that government monetary authorities are largely responsible for engineering the boom-and-bust cycle, they are even more hated for their advice on how to make the economy recover. In a recent article in Slate, Matthew Yglesias claimed that the Austrian prescription was “despair,” because Austrians say that government actions, such as pumping more money into the economy, are doomed to make things worse.
In fact, Austrians do believe that once governments have created an economic crisis through intervention, even more intervention will only make things worse. However, one can hardly call such realistic assertions episodes of “despair.” Instead, Austrians simply recognize that one does not cure a patient who is bleeding to death by drawing out even more blood.
Austrians versus Keynesians on economic recovery
When Keynes wrote The General Theory of Employment, Interest, and Money in the 1930s, he borrowed heavily from Malthus, even acknowledging him along with other 19th-century underconsumptionists, such as Silvio Gesell. But instead of calling Gesell a crank (Gesell believed the economic solution to “underconsumption” was for government to issue currency with an expiration date), Keynes called him a prophet.
Like his predecessors, Keynes believed that recession was the ultimate end for a market economy, but he added a new term: the “liquidity trap.” If an economy fell into a liquidity trap, in which interest rates were low but there was little borrowing, then the only way it would ever recover was through massive amounts of direct government spending. To pay for such spending binges, Keynes argued, governments must borrow money and spend it directly until the economy found its way back to recovery.
It was no surprise that Keynesian theory was a hit both with politicians (especially those with a “progressive” bent) and with academic economists. The appeal of Keynesianism to politicians is easy to explain: by doing what they always want to do, spend tax money, they can become heroes. They can “put people back to work” and “create jobs” by the simple authorization of spending. Furthermore, their efforts are magnified by the praise coming from the progressive mainstream media.
The Keynesian conquest of academic economics certainly was not due to its brilliance or even its actual plausibility. Instead, academic economists jumped on the Keynesian bandwagon because it helped raise their status from simple theorists and analysts of economic activity to the lofty position of advisers to politicians, especially the president of the United States. The Employment Act of 1946, for example, created the president’s highly prestigious Council of Economic Advisers.
Furthermore, Keynesian theory gave the academic economists employed by the Federal Reserve System new status, as they became the people who helped pull the levers of the economy in order to help ensure full employment. Instead of issuing general statements about the economy (such as “Tariff rates are too high”), academic economists advising politicians could trot out mathematical models complete with multipliers and other tools that would demonstrate exactly how new policies would affect the economy (usually by helping it grow).
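The centerpiece of such models is the textbook spending multiplier. As a sketch of the standard formula (the numbers below are illustrative, not drawn from any particular Fed model):

\[ k = \frac{1}{1 - MPC}, \qquad \Delta Y = k\,\Delta G \]

If the marginal propensity to consume is MPC = 0.8, then k = 1/(1 - 0.8) = 5, and a $100 billion increase in government spending is predicted to raise total output by $500 billion. Austrian critics attacked exactly this mechanical chain, in which each round of spending becomes someone else’s income, asking where the resources for the first round of spending were supposed to come from.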
Over time, leadership of the Federal Reserve System itself fell to academic economists trained in Keynesian thought. For example, Ben Bernanke is a Ph.D. economist who once chaired the economics department at Princeton University, and Alan Greenspan and a number of previous Fed chairmen have held doctorates in economics. With the exception of Greenspan, who tended to lean with the political winds, all were well versed in Keynesian economics from their graduate-school years.
Over the years, the Austrians have emerged as the most vociferous and principled critics of the Keynesian paradigm. Mises roundly attacked it in Human Action, first published in 1949, while Henry Hazlitt a decade later wrote a detailed line-by-line refutation of Keynes’s General Theory, noting acidly that “what is true” in the book “is not original,” and what is original “is not true.” In other words, far from holding that Keynes wrote a groundbreaking treatise that uncovered economic truths, Austrians have contended from the beginning that the emperor has no clothes.
First, Austrians have pointed out that economies in past depressions, including those of 1837, 1873, 1893, and 1921, did recover, despite the fact that governments did not ramp up spending or intervene massively. That is hardly trivial, for Keynes argued that the downward spiral was contagious and self-perpetuating, so that without intervention a depression inevitably would last indefinitely. Yet history did not back up his claims.
Mises, Hayek, and Rothbard noted that during the crisis the malinvestments ultimately would be liquidated or transferred to profitable uses, and entrepreneurs would then find those areas of production that could be sustained. As the factors of production moved back into economic balance, recovery would follow, and historically those recoveries often were quite strong despite the lack of government participation.
Second, the one downturn that lasted for a decade, the Great Depression of the 1930s, also featured the greatest amount of government intervention that had ever occurred in U.S. history. One cannot exaggerate just how sweeping the interventions became, as the federal government first tried to organize the entire U.S. economy into a series of self-governing cartels, complete with administered prices and output, and then switched to vigorously enforcing antitrust laws and attempting to have most of the U.S. workforce unionized.
In fact, it can be said that there was very little economic freedom in the United States during the Great Depression and World War II. Governments at all levels ramped up spending and regulation according to Keynesian dictates, yet the nation’s unemployment rate just before the United States entered World War II was still in double digits. It fell from a high of 25 percent at the beginning of 1933 to about 15 percent in 1935, then climbed back to about 20 percent by the end of 1938. The Keynesian prescriptions to “fight unemployment” never came close to bringing the economy to full employment.
While Keynesians like to claim that World War II “ended the Great Depression,” it did no such thing, according to economist Robert Higgs. In an often-quoted 1992 paper in The Journal of Economic History, Higgs pointed out that during the war years millions of American men were shipped to battlefields in Europe and the Pacific for combat and support roles that hardly fall into the category of desired employment. Furthermore, he noted, the war years were a time of great deprivation on the home front: people might have had jobs and money in their pockets, but there was little on which to spend it, and their consumption choices were severely limited.
Like people who supported communism, especially after World War II, Keynesians tend to view prosperity as something determined by government statistics. If a nation’s reported GDP figures are high and its reported unemployment numbers are low, then that economy officially is declared to be “prosperous.” I saw that firsthand during a locally televised debate 30 years ago in Chattanooga. On one side was someone who had written his doctoral dissertation under Mises and on the other side was a Marxist who taught at a local state university.
When the free-market economist noted the very real and widespread deprivation that he observed during a recent tour of Romania, the Marxist professor retorted, “But there’s no unemployment there.” In other words, it did not matter that people in Romania were desperately poor and were living under a truly despotic and murderous regime, or that their economy functioned no better (and probably worse) than what one would expect in the Third World.
No, the only thing that mattered to the Marxist was that Romania’s government reported that everyone had a job. Therefore, on that point alone, the Marxist based his entire view of economics. Unfortunately, Keynesians who claim that World War II created “prosperity” in the United States and that all that is needed to end an economic downturn is a massive explosion in government spending really are no better than Marxists in their economic analysis. Whether the barbarians at the gate are Krugman’s imaginary space aliens or real armies carrying real weapons does not matter. All that matters to Keynesians is that someone somewhere is spending money and “creating jobs.”