Depression, War, and Cold War: Studies in Political Economy by Robert Higgs (Oxford University Press, 2006); 240 pages; $35.
During the run-up to the Iraq war, along with all the other myths circulating about U.S. foreign policy, economic misconceptions abounded. Some suggested the war might be generally good for the American economy. This included both proponents and critics of the war, interestingly enough, and the pattern was not a new one. Hawks will sometimes argue that, along with securing justice and peace, a war will give a much-needed boost to production and thus bolster the economic health of the country. Leftist cynics will also sometimes say war benefits the economy, as if the United States were a classic imperial power that wages war primarily to loot resources and divert them to Americans. This cynicism is not hard to understand — Secretary of State James Baker once said that the Gulf War was mainly about “jobs, jobs, jobs.”
This concept — that a large government project such as a war can be good for the economy — is neither new nor unique to the area of foreign policy. We often hear that government must spend more on this or that program in order to create jobs. After Hurricane Katrina and the subsequent flooding of New Orleans, some newspaper pundits even suggested that the cleanup effort might increase employment and spur productivity, as though it were better for the economy to suffer a natural disaster and then spend billions cleaning it up than never to have endured the disaster in the first place. If such reasoning were sound, the best policy would be for the government to spend large sums of money on expensive goods and rocket them into space, or even to spend money destroying American infrastructure, in order to “create jobs” and “boost productivity.”
The seen and unseen
Such naive reactions to spending on war and natural disasters are perfect examples of what the great French economist Frédéric Bastiat described as the broken-window fallacy. Bastiat asks his reader to imagine a delinquent boy throwing a rock through a store window, whereupon a presumptuous onlooker remarks that the damage might actually be good for the economy: the glazier will earn money replacing the window, money he will then spend on bread from the baker, who in turn will buy a new pair of shoes.
The economic activity will snowball and lead to greater general prosperity. (Modern Keynesian economists refer to this approvingly as “the multiplier effect.”) What this ignores, as Bastiat explains, is the unseen cost: had his window been left intact, the storeowner could have spent that money on something he valued more highly, rather than on the glazier.
With government spending, the same principles apply. Money seized from the private sector — from those who know how to make productive, profitable economic decisions — and transferred to government programs does indeed produce jobs, but focusing on those jobs ignores what the wealth could have been used for had it not been forcibly transferred. As the humorist Dave Barry so succinctly put it,
See, when the government spends money, it creates jobs; whereas when the money is left in the hands of taxpayers, God only knows what they do with it. Bake it into pies, probably. Anything to avoid creating jobs.
And yet, while the logically impeccable insights of Bastiat and the sardonic wit of Barry should have settled the matter, the myth of government spending as an economic boon persists. Why? The most likely reason is that, however sound the critique, people cling to what they see as concrete examples of the phenomenon in action. The most common examples are probably the New Deal and especially the Second World War, which are credited with ending the Great Depression, bringing about general prosperity, and finally repudiating America’s devotion to laissez faire, replacing it with a lasting commitment to central administration, military strength, and federal spending projects that, in combination, have allowed America to maintain its economic superiority for the last 60 years.
Unfortunately, even many free-market thinkers have shied away from taking on this argument. Certainly, the mainstream conservative dedication to free enterprise, if it exists at all, is not so robust as to challenge the economic fallacies underlying the World War II myth. On the Left, Right, and Center, the idea that Franklin Roosevelt dragged America out of its economic rut is by now as American as apple pie. But World War II, whatever else can be said of it, was probably the largest government program in American history, and if those of us who favor free markets and believe government spending to be generally deleterious to economic growth are not willing and able to impugn this sacred cow effectively, we have pretty much lost the battle. It is awkward, after all, to insist that a measly food-stamp operation is going to kill the economy if Roosevelt’s sweeping nationalization of the U.S. economy during World War II was its lifesaver.
Thank goodness for Robert Higgs, a historian of political economy and senior fellow at the Independent Institute. (Disclosure: I serve as a research analyst at the Independent Institute.) His recent collection of academic work, Depression, War, and Cold War: Studies in Political Economy, something of a sequel to his academic masterpiece of 20 years ago, Crisis and Leviathan, fills an empirical gap where previously we had mainly theory to guide us. He thoroughly examines the political economy of the United States from the Great Depression to the end of the Cold War, and shows in detail that modern American prosperity has an explanation of which even Bastiat could approve. In his area of focus, he has almost surely delved further than any economist before him.
The myth of the New Deal
Libertarians have long argued that it was not laissez faire that caused the 1929 stock-market crash and the Depression that followed but rather a series of government interventions, most especially the Federal Reserve’s irresponsible inflationary monetary policy and the outrageously high protectionist tariffs of the Republican administrations in the 1920s. Herbert Hoover, in fact, was a much more interventionist executive than the conventional accounts recognize. Murray Rothbard’s classic work, America’s Great Depression, is probably the best treatment of how big-government policies on Hoover’s watch brought on and then exacerbated the calamity.
Higgs has focused on another, equally important matter: Why did the Great Depression last as long as it did? The man on the street will often credit the New Deal for ending the Depression, but in fact, as any mainstream economic historian will concede, the economy remained sour until at least the end of the 1930s. Why did America suffer for a decade or more, especially with a New Dealer in charge?
Higgs’s answer is, to put it simply, because a New Dealer was in charge. In the first chapter, describing what the author has aptly called “Regime Uncertainty,” Higgs explains with meticulously gathered evidence that the erosion of property rights under the New Deal resulted in an environment where businessmen and investors were unsure of what to expect, and so abstained from investment. According to Higgs,
For the eleven years from 1930 to 1940, net private investment totaled minus $3.1 billion. Only in 1941 did net private investment ($9.7 billion) exceed the 1929 amount.
Higgs offers evidence of investors expressing their fears that the New Deal might destroy free enterprise and with it their invested capital. Surveying such policies as antitrust, tax law, and labor regulation, Higgs shows that while
Roosevelt, we now know, never became a dictator along the lines of his contemporaries Stalin, Mussolini, and Hitler … the possibility that the United States might undergo an extreme regime shift seemed to many investors in the late 1930s and early 1940s not only possible, but likely.