Economist Ludwig von Mises argued (1920) that real prices arise only from exchanges of privately owned goods; having abolished such prices, socialist systems could never calculate rationally. Economist F.A. Hayek agreed with Mises that central planning would produce poverty and totalitarianism, but made the use of knowledge in society the central weakness of socialist calculation. In his view (1945), “If we possess all the relevant information, if we can start out from a given system of preferences, and if we command complete knowledge of available means, the problem which remains is purely one of logic … the answer to the question of what is the best use of the available means is implicit in our assumptions.”
Thus full-scale socialism would fail because “‘data’ for the whole society … are never ‘given’ to a single mind.” (Italics added) Instead, the data needed for economic calculation are “bits of incomplete and frequently contradictory knowledge,” held locally and dispersed among persons. (The single mind reflects René Descartes’s fondness for the despotic Baroque monarchy.)
Somewhat later, Michael Polanyi and Paul Craig Roberts, reasoning from “polycentricity,” produced a third critique of socialist calculation. (These critiques, rightly ordered, are probably complementary.) Defending socialism, Polish economist Oskar Lange replied that central planners would oversee total social investment while letting markets set prices for consumer goods through trial and error. In the 1960s he claimed that high-speed computers had settled the controversy in favor of socialism.
Social knowledge misread
Philip Mirowski believes that Hayek unnecessarily complicated his account. Taking cybernetics too seriously, Hayek increasingly viewed the market as a giant quasi-computer accessing dispersed knowledge. But despite our reservations here, his treatment of “the use of knowledge in society” raised very important problems that not even U.S. military planners and secret services can escape. Parallels are many, and to solve the planners’ current problems, Lange’s computers are back, including computers surveying all computers everywhere. Getting at the militarist knowledge problem will require several steps.
First, this problem, like the one Hayek addressed, reflects the whole intellectual context of Western modernist thought and its quest to realize Sir Francis Bacon’s power/knowledge project of predicting and controlling literally everything. Here we find such old friends as scientism, mathematical Platonism, and logical positivism, along with faith in statistically indicated scientific “laws” and denial of natures, essences, and natural necessities. Quite happy to discuss means and matter all day, scientific practitioners have seen ends, along with qualities, as subjective and beneath consideration. (Ends, dismissed and repressed, return as goals set by the scientists’ corporate or state employers, and Freud smiles.)
Next, America’s military inheritance provides specific historical context. The habit of waging Total War, which began with the Indian Wars and spread into all other U.S. wars, commits the U.S. military to high-tech vandalism — to obsession with means, production of destruction, and decumulation of enemy capital. Even a “defensive” war, should one ever occur, will involve a comprehensive, offensive assault on the enemy’s entire society.
Third, a permanent American war party began its institutional life around 1938. World War II was its riotous adolescence. Thence, all through the long Cold War, military-industrial planners systematized American scientific research along pernicious lines. Collaboration between military planners and scientists favored theoretical trends such as operations research, game theory, and communication theory. Statistically sophisticated mathematical Platonism took great leaps into somewhere, as these new fields arrayed their problems around some smallest abstract unit susceptible to indefinite mathematical manipulation — choices, bits, signals, transactions, operations, individuals, points, and atoms (and perhaps the bodies in “body counts”). Improved computing technology raised reductionist hopes skyward. Unleashed on the world, cold abstractions deeply militarized American thought across scientific fields. (Sociologist C. Wright Mills called it “crackpot realism.”) The resulting paradigm confounds human beings with machines, and confuses thought with programmed code.
The problem stated
If socialist calculation stands under a cloud (along with “feudal” and “corporatist” calculation, as Kevin Carson points out [The Freeman, June 2007]), then, as already hinted, similar problems must likewise plague any grandiose project of social engineering and total control. Nowhere is this more apparent than in schemes for total planetary surveillance ordered to the Lone Superpower’s ends. A new, non-Kantian categorical imperative arises, under which every material substance in the world, sentient or otherwise, owes its inwardly held “data” to U.S. agencies. Under the current permanent emergency, those agencies may buy, steal, or torture needed “data” out of recalcitrant substances wherever those entities and their (unjustly hidden) “data” are found. All of reality is under American subpoena.
The NSA is hard at work with big claims and criminal methods, which of course are secretly “legal.” Reversing the normal order of “search and seizure,” it wants to know all by collecting everything there is, first, and then evaluating the totality — somehow. Where Hegel failed, they foresee success. Evaluation involves some makeshift filters, including a bureaucratic division of labor and mysterious algorithms of which we hear tall tales. Organization and computers will, it seems, provide a world of information not naturally given to a “single mind,” and also stand in for such a mind — and (biggest claim) make this “data” usable.
Lost metric of Rumsfeld
At this point the complexity and scale of the task do remind us of Soviet economic planning. So how in detail have our overlords (clothed in an impressive “mantle of science”) decided to solve their knowledge problem? Even if they seize all they want, is the raw “data,” however analyzed, likely to resemble socially usable knowledge of any kind? They can find your pizza receipts, but can they complete a practical syllogism? Can they use what they collect for their own purposes, much less ours? Do they actually know anything? (Historian Edward Ingram finds that early 19th-century field reports often “reveal more about what the British thought went on in Imperial Rome than what was going on in post-Mogul India.”)
Defense Department sciences mentioned above (e.g., operational planning) stress statistics and probability. But economic historian Fritz Redlich observes that “figures are not identical with any process whatsoever” but only “stand for … the result of a process.” Applying the views of mathematician Richard von Mises (brother of Ludwig), economist Murray Rothbard noted (1961) that statistical frequency, as illustrated by tosses of dice, does not correctly predict single throws; he concluded that “mathematical probability theory can never be applicable to economics, or to any other study of human action.” Elsewhere (1979) he described forecasts of human actions as “subjective estimates of future events.”
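Richard von Mises’s frequentist point can be illustrated with a toy simulation (the code and its numbers are purely illustrative, not part of the original argument): the class frequency of a six settles near 1/6 over a long run of throws, yet that ratio determines nothing about any single throw.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

# The "collective": a long run of throws of a fair die.
throws = [random.randint(1, 6) for _ in range(60_000)]
freq_six = throws.count(6) / len(throws)
print(round(freq_six, 3))  # hovers near 1/6

# But the class frequency is silent about the next single case:
next_throw = random.randint(1, 6)  # any face is possible; 1/6 predicts nothing here
```

The frequency is a fact about the whole series, not about the next event — which is exactly why, on Rothbard’s reading, it cannot ground predictions of singular human actions.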
Criticizing recent U.S. “Signature Targeting,” military analyst Franklin C. Spinney invokes Col. John Boyd’s Orientation Theory to show “why an algorithmic template-based artificial intelligence is so stupid, and so easy to game….” Officers in the field insert current observations into an Artificially “Intelligent” program containing behavioral “indicators” to get “a subjective probabilistic calculation” resting on “deeply buried assumptions.” This “mechanistic” and “pseudo-scientific” operation is “done mathematically” with “artificial intelligence algorithms” using “statistics,” even unto the famous Bayesian ones.
Unsurprisingly, the “objective” output (= whom to kill) reflects programmers’ original preconceptions. Forgoing (human) orientation, ignoring change, and confounding “analysis with synthesis,” the intelligence is artificial and “one-sided”; worse, the template “requires the analyst to know every possible pattern of a target’s behaviour before the fact.” Here is a doomed attempt to combine “godlike omniscience” with “mechanistic statistical analysis,” which treats “the ‘target’ [as] an unthinking automaton…. But human beings are adaptive, thinking, creative, and therefore unpredictable creatures with a will to live.”
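How thoroughly such “objective” outputs mirror the programmer’s preconceptions can be sketched in a few lines. Everything below is invented for illustration — the indicator names, the likelihood numbers, and the `hostility_score` function are assumptions, not anyone’s actual targeting software — but a naive-Bayes scorer of this general kind simply converts the analyst’s assumed priors and likelihoods into a “probability of hostility”:

```python
from math import prod

def hostility_score(observations, prior, likelihoods):
    """Toy naive-Bayes 'signature' scorer: P(hostile | observations).

    `likelihoods` maps each indicator to (P(seen | hostile), P(seen | benign)).
    Every number fed in is an analyst's assumption, not a measurement.
    """
    p_hostile = prior * prod(likelihoods[o][0] for o in observations)
    p_benign = (1 - prior) * prod(likelihoods[o][1] for o in observations)
    return p_hostile / (p_hostile + p_benign)

# Invented "behavioral indicators" with assumed likelihood pairs.
likelihoods = {
    "visits_mosque": (0.9, 0.8),    # nearly everyone in the region does
    "calls_cousin":  (0.7, 0.6),
    "drinks_coffee": (0.95, 0.95),  # carries no information at all
}

obs = ["visits_mosque", "calls_cousin", "drinks_coffee"]

# Identical observations, different preconceptions, different verdicts:
cautious = hostility_score(obs, prior=0.001, likelihoods=likelihoods)
alarmist = hostility_score(obs, prior=0.3, likelihoods=likelihoods)
print(f"{cautious:.4f}")  # stays near zero
print(f"{alarmist:.4f}")  # climbs toward a plausible "targeting threshold"
```

The observed behavior never changes; only the buried prior does. The “subjective probabilistic calculation” Spinney describes is subjective in exactly this sense: the conclusion was smuggled in with the assumptions.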
With such science-based hubris, “terrorists” are discovered and American V-1 buzz bombs drone forth to kill — no doubt, artificially.
Thus, except for its lethal consequences, U.S. military practice often resembles Lange’s “trial and error” socialist price-setting. (A much simpler “algorithm,” sometimes used, states, after the fact, that the dead bodies on hand “must have been” enemy combatants.) Statistics-based targeting relies on speculative prediction resting on swatches of recent, barely “historical” data. Yet Michael Hayden, a general and former director of the CIA and NSA, proudly helped choose human targets on such terms. In fair trade, I offer a highly reliable, predictive statement from Daniel Ellsberg (1972): “If you invite us [Americans] in to do your hard fighting for you, then you get bombing and heavy shelling along with our troops.” Ungrounded in statistical analysis, Ellsberg’s proposition falls short of being a probabilistic physical “law” of science, much less pretentious “metadata,” but it is damned likely, given things we know about Americans at war. His superior generalization derives from real-world empirical and historical evidence.
But how do historians find their “facts”? Does a historian of the United States use as his chief sources a mountain of Chicago phonebooks from 1896 to present? Do old ticket stubs from Midwestern county fairs often count as “facts” for historical purposes? (No and no.) As Charles A. Beard tried to explain in the 1930s, historians begin work within interpretive frames of reference which, by providing context, help them decide what items — out of potentially everything — are useful facts for historical purposes.
And now we have come back to filters. Assessing his famous psychedelic experiment, Aldous Huxley (1972) quoted Cambridge philosopher C.D. Broad: “The function of the brain and nervous system is to protect us from being overwhelmed and confused by this mass of largely useless and irrelevant knowledge, by shutting out most of what we should otherwise perceive or remember at any moment, and leaving only that very small and special selection which is likely to be practically useful.” Mescaline removed those filters, Huxley says, leaving him overwhelmed with in-world “data.” For everyday purposes the filters were necessary and good. They helped provide context within which knowledge was relational and meaningful and therefore useful. Their absence was unnerving.
Perhaps with a sufficient number of very clever algorithms acting as filters, all the emails, phone calls, and every last electron in the world can become — for U.S. bureaucrats — effective sources of usable, goal-directed knowledge. More likely, our minders will find themselves drowning in information, like Huxley on mescaline, and flailing about wildly like Nixon’s “pitiful, helpless giant.”
And now we can spot some crucial differences separating sciences of human action from physical sciences. Very messily, human beings show free will and other traits that greatly hamper scientific prediction of their actions. This is very rude of us, but we persist, despite repeated urging from scientists to behave as if we live inside those artificially closed systems on which science thrives and where experimental variables can be controlled. But humans and human societies are probably not such systems.
King Midas and the data
Let us assume (quite impossibly) that Science, the NSA, or Some Other Agency, has acquired all the “facts” or “data” across some entire range of human endeavor (or all endeavors). What can such people do then? (My suggestion: They might found an Interdisciplinary Review of Knowing Everything, edited from the ruins of Carthage by Donald Rumsfeld, since illusions of omniscience survive all disappointment.)
Here let us recur to our analogy. If it was impossible in principle to plan a whole economy, what did socialist “planners” in fact do? Soviet bureaucrats pretended: they allocated and misallocated resources, “met” supposed quotas, got shoddy results, and probably knew they weren’t planning an economy. On this precedent, what might we expect militarist planners to do with their specialist know-it-all-ism? Perhaps they can speculate, as Spinney seems to imply, that 30 percent of all men drinking coffee outside a mosque will react badly to a U.S. invasion. Combined with the coffee, further information “linking” these “subjects” to their cousins (very shocking) may suffice to make them targets. They need not actually do anything. And for this meager result we must let U.S. securitarians steer their cyber-trawler through all the oceans of communicative life.
Here we have moved beyond the availability of massive data to some bureaucratic single mind (as per Hayek), only to arrive at Mises’s view that the data, even if collected, do not constitute useful knowledge at all. (I thank David Gordon for this crucial point.)
Now if military-security bureaucracies do not quite know what they claim they know, what are the uses of their efforts? Alas, they are many — but mostly negative and counterproductive. Grasping for dead-certain knowledge, these people will do great harm and be happy doing it. Falling far short of their goals, they can make life very difficult for (or unavailable to) many people for unknown time to come.
Here is a test that seems “scientific” enough: since 1945 all the king’s explosives and all his big science have not saved his many overseas projects. Historical knowledge of earlier adventures would have been a better guide to new ones offered us. This seems rather decisive, practically, and if moral terms intrude, even more so.
So why have U.S. securitarians assigned themselves a hopeless task? Apparently they cannot yet give up American empire and the habit of intervention. Our science-based militarists’ grasp after total power/knowledge is bad enough, conceptually, morally, and legally. As the advertised “solution” to problems arising from a foreign policy itself problematic (conceptually, morally, and legally), such Cartesian-militarist overreaching can only add novel abuses to existing madness.
This article was originally published in the February 2015 edition of Future of Freedom.