Wednesday, July 9, 2008

The threat of nonlinear history

I've often wondered, in a lazy sort of way, in what way or to what extent political science is supposed to be a science. From doing some cursory looking around on the internet--and from casting my memory back to that one undergraduate PoliSci course that I took--it seems like wide swaths of political science look like your standard social science, doing things like finding correlations between levels of wealth and frequency of political revolutions and the like. But then there are the other parts of political science that get more play, the ones that boldly impose models on politics and history and try to make long-run predictions. So, for instance, you have a debate like Huntington's "Clash of Civilizations" (the world will fracture along cultural lines) versus Fukuyama's "Last Man" (liberal capitalism will win History), where two different theories offer competing outlooks for the next several decades (or more), and these are the kinds of views that inform the decision-making of the world's foremost power-wielders.

Now, I think it's fair to say that a whole lot of the heavy lifting that these theories do is done by the particular understanding of history and causation in history that they are founded on. For example, Fukuyama's contention that liberal capitalism is the final historical phase of humanity relies on the notion that history is a procession of battles between ideologies, and with this conception comes all of the theory-concepts (e.g., "ideology", "an ideology triumphing and continuing through history") and models he needs to identify historical patterns and then extrapolate those patterns into the future, forming (something like) a prediction (the "ideology" liberal capitalism has "triumphed" over Fascism and Communism, and so it "is the big winner of History"). And Huntington's idea about cultural conflict relies on notions about how human beings behave on a large scale, about how their cultural identity governs this behavior. And so Huntington will have a different set of theory-concepts going to work for him, not looking at "ideologies" so much as "civilizations" and their various "features", and perhaps having a different idea about what constitutes a "conflict" between them. But the point of all this is that even though these theories impose different models on the world--models that use different sets of concepts, whose concepts causally interact in different ways, and that thus offer not only different predictions of the future but different sets of conceptual levers of history for power-wielders to pull--both theories still have concepts, both have some causal scheme for how those concepts interact, and both make long-term predictions of some kind.

The thing I want to explore is: what kinds of assumptions are made by these theories such that they can have these formal features about them (theory-concepts or models, causal scheme, long-term predictions)? To me, it seems like they necessarily view history as something like a linear system that can be lassoed with broadly applicable concepts, that these concepts interact with each other in a regular mechanistic way, and that this mechanistic motion of broadly applicable concepts is what provides the basis for extrapolations of the system's behavior into the future (i.e., long-term predictions). When I say a "linear system", I mean a system that has this property: if you start the system in nearly the same state this time as you did last time, then the results will be nearly the same this time as they were last time (or at least, the difference in results between the two runs will be predictable/regular and proportional in some way to the difference in the starting states). A system's being linear is critical for the formation of broadly applicable concepts, because of the pragmatic limitations of being able to precisely repeat an experiment over and over again. For example, take Newtonian physics. At some point someone wanted to be able to predict how far a ball would travel given that it was thrown with a certain force at a certain angle. Of course, we know now that there is an elegant little algebraic formula that will yield the right answer. But that first person doing the investigation had to start off by taking a ball, propelling it with some force at some angle, and measuring how far it travelled. And he ran this same experiment over and over, and recorded all the measurements. Of course, the initial conditions for all of the different runs were not precisely the same. With each throw, there were arbitrary small variances in the amount of force applied and the precise angle thrown, brought about by shaky hands or uneven springs or whatever. 
But because of the linearity of the system, this didn't matter too much--the differences in starting state were small, and so the differences in end states were also (commensurately) small, and these small differences could be cleverly disappeared, either by deliberately reducing the granularity of the measurements so that the differences could not be detected ("significant figures"), or by leveraging the randomness of the differences by averaging the results together, thus cancelling out the "noise" and converging on the "true answer". What this means is that, at the end of the day, our experimenter has a long list of what can for all intents and purposes be considered the same exact test run over and over again. And he can take the regularities found here, and see if they hold with other objects--and if they still hold, he can try tweaking different variables in the initial state (halve the force, double the angle, etc.) and see if the results are altered in a predictable and proportional way. The broader the set of scenarios that he finds his regularities in, the more general and broadly applicable will be the concepts that eventually populate his finished theory. And in this case of Newtonian physics, the concepts are very general and broadly applicable indeed: he gets to say that "an object" weighing W will travel X distance when propelled with Y force at Z angle. But of course, none of this would be possible if Newtonian physics of this type were nonlinear--that is, if even imperceptible differences in starting conditions caused erratic and unpredictable differences in the result. If this were the case, then on the first test run, the ball might go 5 feet--and on the next run, might fly 18 feet, all due to a very slight difference in the number of air molecules that happen to strike the ball as it accelerated through the air. 
And then on the next test the ball might fly a measly .2 inches, because on the last run a few molecules were shaved off the ball when it hit the ground, and so the weight variable changed ever so slightly. Under these conditions, there are no tricks the experimenter could deploy to cancel out the "noise" of the results--because of the imperfect fidelity of each test to the previous one and the system's nonlinearity, his results would amount to nothing more than a list of random numbers (reflecting the randomness of the variances in the initial conditions of each test), and he could no more average the results than one can average the results of a television screen full of static and get an image. With no regularities to extract from his results, he cannot broaden his theory by generalizing it to other scenarios, and so cannot form an appropriately general theory with appropriately general theory-concepts, like "an object" and "X distance". Rather than a very useful and interesting generally applicable statement of a causal relation that can be used in the future to make predictions in all sorts of different situations involving objects being propelled through space, the experimenter has a very not-useful and uninteresting dead historical record that describes some meaningless events (ten separate occasions of some ball being thrown in some guy's backyard) that will have no bearing or impact on anything that comes after it. He will have not a theory, but a collection of arbitrary data. And of course, with no broadly applicable and causally interactive concepts with which to formulate a theory, there can be no mechanistic fast-forwarding of a model to make predictions about the future (indeed, if Newtonian physics were nonlinear, the world would be craaaaazy).
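The experimenter's averaging trick can be sketched in a few lines of code. This is only an illustration under assumed numbers--a drag-free projectile model, a made-up launch speed and angle, made-up noise levels--not any actual historical experiment:

```python
import math
import random

G = 9.81  # gravitational acceleration, m/s^2

def throw(v, theta_deg):
    """Distance travelled by a drag-free ball launched at speed v (m/s)
    and angle theta_deg (degrees): range = v^2 * sin(2*theta) / g."""
    return v ** 2 * math.sin(2 * math.radians(theta_deg)) / G

random.seed(0)  # make the "shaky hands" reproducible
ideal = throw(10.0, 45.0)  # the "true answer" with perfectly steady hands

# Repeat the experiment 1000 times with small random wobbles in the
# force and angle (standing in for shaky hands and uneven springs).
trials = [throw(10.0 + random.gauss(0, 0.05),
                45.0 + random.gauss(0, 0.5))
          for _ in range(1000)]
average = sum(trials) / len(trials)

# Because the range varies smoothly with speed and angle, the noise
# largely cancels and the average lands very close to the ideal value.
print(ideal, average)
```

Because outputs here depend smoothly on inputs, the thousand noisy throws cluster tightly around the ideal range and their average recovers it to within a fraction of a percent--exactly the "cancelling out the noise" move described above.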

So you can see where I'm going with this. Could it be that the engine of history--the vast system of human beings that interact with each other, each with its own idiosyncratic motives and behaviors and tendencies--is essentially nonlinear? Could it be that if you were to somehow model all of humanity on a computer and run it in fast forward, that if you went and removed, say, some insignificant English peasant in 1132 AD, then the result in 2008 would be a vastly different, Sliders-esque world with totally different leaders, countries, and dominant ideologies? Or we can go even smaller: if a butterfly gives its wings an extra flap in 540 BC somewhere in Greece, does the same kind of wild and erratic variation in world-makeup occur? You may think: well what does a butterfly have to do with human affairs? But as we all know by now, the weather is famously sensitive to changes of even this scale. If the butterfly did this, it would unleash a whole alternate history of weather patterns, weather patterns that would affect human society in countless different ways over the centuries.
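To get a concrete feel for this kind of sensitivity, here is a minimal sketch using the logistic map, a textbook chaotic system--it stands in for the weather (or history) purely by analogy, and the starting value and the size of the nudge are arbitrary:

```python
def trajectory(x0, steps):
    """Iterate the chaotic logistic map x -> 4x(1-x) starting from x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(0.2, 60)
b = trajectory(0.2 + 1e-12, 60)  # the "butterfly": a one-in-a-trillion nudge

# Early on the two runs track each other; the tiny difference then
# compounds (roughly doubling per step) until the runs are unrelated.
print(abs(a[5] - b[5]))                       # still minuscule
print(max(abs(x - y) for x, y in zip(a, b)))  # eventually macroscopic
```

After a few dozen iterations the one-in-a-trillion difference has grown to the full scale of the system--the computational version of the extra wing-flap in 540 BC.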

How can we tell if human history is, indeed, massively nonlinear? This I do not know. Maybe the surest way we would ever be able to tell is by somehow coming up with a computer model of humanity that everyone could agree really does model humanity accurately. Then we could simply experiment by running the model (a computer model has the benefit that it can be run in precisely the same way as many times as you like. If the model is deterministic, then even a nonlinear system will spit out the same result over and over). But it seems to me that you could also make a fairly educated guess at humanity's nonlinearity by applying some armchair common sense. Just by reflecting on your own life, you can easily see how incredibly and delicately contingent are so many basic features of your life. For example, when my mother was a little girl and her family was moving out West, they were originally planning on settling down in Seattle. However, things changed when they decided that they would take a detour to Los Angeles to take the kids to Disneyland--they ended up taking a liking to the then-undeveloped San Fernando Valley, and lived there ever since. So if it wasn't for the decision to visit Disneyland--and if it wasn't for Disneyland being located in LA--and if it wasn't for Walt Disney having the idea for Disneyland--and if it wasn't for Walt Disney's mother missing a train and having to share a taxi with a charming fellow by the name of John Disney--and on and on and on, branching through history in a million parallel branches going as far back as you want to go--if it wasn't for all those meaningless contingencies, I wouldn't have been born. 
And when you take this and multiply it by every little event and fact and all the people in all of history, and factor in on top of this other nonlinear systems like the weather that affect everyone all throughout the world through history, I think there can only be one sobering conclusion: that humanity, that history, is massively, terrifyingly, unendingly, unsolvably nonlinear.

But hold on--all is not lost. Because some theory-concepts just so happen to be durable in the face of nonlinearity. For example, it may be impossible to predict a year from now exactly which days will be rainy in May in San Francisco due to the nonlinearity of weather, but it is possible to make a prediction like: next year there will be more rainy days in May than in September. And certainly we can safely predict that it will be colder here in the summer than in the fall. So it is not as if nonlinearity automatically equals total unpredictability. There could be some local parts of the system that show regularities, perhaps because some outside thing is significantly constraining the possible outputs of the nonlinear system (in our case, it would be the shape of the land and the effects of ocean temperature on barometric pressure and the fog and whatnot). And so the question for political scientists is: are the concepts that populate your theory durable in the face of nonlinear history? Is a "winning ideology" subject to the frail infinitude of contingencies, like my birth? Or will that "winning ideology" stubbornly remain no matter what the contingencies are, like cold summers in San Francisco?
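This distinction--individual trajectories that are unpredictable, statistics that are stable--can itself be sketched with the logistic map (again a stand-in, with arbitrary numbers). Two runs whose starting points differ by a billionth end up in completely unrelated states, yet their long-run averages agree, because both runs sample the same underlying distribution:

```python
def run(x0, steps):
    """Iterate the chaotic logistic map x -> 4x(1-x) for `steps` steps;
    return the final state and the time-average of the trajectory."""
    x, total = x0, 0.0
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
        total += x
    return x, total / steps

final_a, mean_a = run(0.2, 100_000)
final_b, mean_b = run(0.2 + 1e-9, 100_000)

# The final states are unrelated (the contingency won), but the
# long-run averages agree closely: a statistical regularity that
# survives the nonlinearity, like cold summers in San Francisco.
print(final_a, final_b)
print(mean_a, mean_b)
```

The "cold summers" prediction is of this second kind: it says nothing about any particular day, but it holds no matter which of the wildly divergent trajectories actually unfolds.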

Perhaps any theory in political science is, ultimately, a statement about what holds steady in the face of the mass contingencies of history. Still, though, one can worry about the potential damage nonlinear history can wreak on our finely tuned theories, and wonder if there might be some way of verifying that we are indeed engaging in some kind of prediction-yielding science, and not an obscure form of historical interpretation that comments on past events without shedding light on future ones.
