The First WWW
Before there was the World Wide Web, there was the World Weather Watch. The World Weather Watch grew out of scientific cooperation among climate scientists and weather forecasters that arose from Cold War competition. As Edwards describes in Chapter 9, the WWW was not the inevitable result of new scientific technologies (such as satellites or computers) but of new social technologies: the “successful transfer of standard-setting and coordinating powers from national weather services to a permanent, globalist intergovernmental organization” (Edwards 2010:242). Implicit in his argument is that centrally planned systems do not scale well; instead, a patchwork of smaller systems that grows through greater connectivity centered on an installed base, complete with its tensions, frictions, etc., can lead to the development of infrastructure. Edwards calls this the internetwork strategy:
“a network of networks that behaves, at least for many relevant purposes, as if it were a single unified system. An internetwork gives you reliability and redundancy, checks and balances. It permits some parts to fail without affecting the performance of the whole. It also gives you tensions: friction among the many connected systems; a parliament of instruments, individuals, and institutions, all with their potentials for disagreement, resistance, and revolt” (2010:229).
The problem is how to piece together heterogeneous systems into a seamless infrastructure. This process (what Edwards calls ‘infrastructural globalism’) worked because it did not proceed top-down; instead, existing systems were “kludged” together into a single system (kludge is jargon from early computer science for an inelegant but workable fix). Top-down systems inherently do not work: there is too much friction to overcome due to individual idiosyncrasies, cultural patterns, etc. The problem is ‘the machine in the middle.’
While the WWW was not top-down, it was hierarchical. There were three World Meteorological Centers (in the US, the Soviet Union, and Australia), a distribution that reflected the geopolitics of the time: parity between the two superpowers, plus one center that was part of ‘the south.’ These centers gathered meteorological data, then redistributed the data to regional and national weather organizations. The WWW did not displace national systems.
The strategy for the WWW was lightweight central coordination combined with decentralized design, testing, and implementation. While there were standards and guidelines from the World Meteorological Organization, they functioned more like gateways reducing friction in the global data network. This lightweight coordination with decentralized implementation should sound familiar – it is essentially the same approach used by the other WWW, the one you use more often: the World Wide Web.
The Global Atmospheric Research Program (GARP) was the research arm of the WWW. FGGE (the First GARP Global Experiment) was referred to as ‘the largest scientific experiment ever conducted’ (Edwards 2010:245). The key point is that GARP led to the institutionalization of experimenting with models.
Something interesting to note in this discussion of GARP and FGGE is the use of satellite imagery. When satellite imagery first became available, scientists did not really know how to use it for analysis; that came much later, as more scientists worked with it and found more analytical uses for it. Satellite imagery, however, was more politically useful than technically useful. The results of GARP and FGGE show that satellites did not help observation-saturated areas; they only helped in data-sparse areas (i.e., the “south”).
Making Data Global
Earlier, we talked about making global data. In Chapter 10, Edwards instead focuses on making data global. A simple definition of what Edwards means by making data global is: building complete, coherent, and consistent global data sets from incomplete, inconsistent, and heterogeneous data sources (2010:251). This is a top-down project, shaped by friction. Making data global means making standards, which are applied or actualized locally, from below. New standards must therefore always struggle against some combination of the following:
- institutional inertia
- funding constraints
- technical difficulties of application
- problems of integration with other instruments, systems, and standards
- operator training deficits leading to incorrect implementation
- differences among local interpretations of the standard
- passive and/or active resistance from organizations and individual practitioners (Edwards 2010:251-252)
This makes the perfect implementation of standards unlikely. But if standards are so hard to implement, how do they ever get implemented? Edwards argues that in actual practice, people find ways to ignore or work around minor deviations so that large-scale projects can succeed.
This is how computer models became the way to make data global. Computer models are an idealized depiction of an ideal Earth (not the real world): the world was placed in a machine. This was possible because of the professionalization of meteorology, discussed earlier, which led to quantitative methodologies.
“Human pattern recognition, a principal basis of all earlier forecasting techniques, would give way to mathematics. Collectively, these processes began subtly to change the way in which meteorologists understood both the meaning and the purpose of data analysis” (Edwards 2010:259).
Another way to interpret the use of models and their relation to data is through an anthropological explanation of cosmology in the study of religion: the ‘turtles all the way down’ issue. Some people call it the infinite regress problem, but as an anthropologist, I like to see this issue the way Clifford Geertz described it in “Thick Description: Toward an Interpretive Theory of Culture.” An Englishman hears an Indian religious explanation of the world as resting on a platform on the back of an elephant, which in turn stands on the back of a turtle. The Englishman then wants to know what the turtle is on, and is told another turtle. And when he asks what is underneath that turtle, he hears “Ah, Sahib, after that it is turtles all the way down.” For us, it’s models all the way down: statistical interpolation and models of data – this is what makes data global.
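To make the "statistical interpolation" piece of this concrete, here is a minimal sketch of one simple interpolation scheme, inverse-distance weighting, which fills in a complete grid from sparse, scattered station observations. This is an illustration of the general idea, not Edwards' account of any specific historical algorithm; the station locations and values are made up for the example.

```python
def idw_interpolate(stations, grid_points, power=2):
    """Estimate a value at each grid point from sparse (x, y, value) stations
    using inverse-distance weighting: nearer stations count for more."""
    estimates = []
    for gx, gy in grid_points:
        num, den = 0.0, 0.0
        for sx, sy, val in stations:
            d2 = (gx - sx) ** 2 + (gy - sy) ** 2
            if d2 == 0:  # grid point coincides with a station: use it directly
                num, den = val, 1.0
                break
            w = 1.0 / d2 ** (power / 2)  # weight falls off with distance
            num += w * val
            den += w
        estimates.append(num / den)
    return estimates

# Three hypothetical stations reporting temperature (degrees C),
# interpolated onto three grid points (coordinates are arbitrary units).
stations = [(0.0, 0.0, 10.0), (10.0, 0.0, 20.0), (5.0, 10.0, 15.0)]
grid = [(5.0, 0.0), (5.0, 5.0), (0.0, 0.0)]
print(idw_interpolate(stations, grid))
```

The point of the sketch is the "turtles" structure: the gridded values are not observations but a model of observations, and every downstream analysis rests on that modeled layer.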
Edwards calls this reproductionism: “Traditionally, scientists and philosophers alike understood mathematical models as expressions of theory – as constructs that relate dependent and independent variables to one another according to physical laws. On this view, you make a model to test a theory (or one expression of a theory). You take some measurements, fill them in as values for initial conditions in the model, then solve the equations, iterating into the future. … Yet from the point of view of operational forecasting, the main goal of analysis is not to explain weather but to reproduce it. You are generating a global data image, simulating and observing at the same time, checking and adjusting your simulation and your observations against each other” (2010:279-280).
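The "checking and adjusting your simulation and your observations against each other" cycle can be sketched as a toy analysis step: a model forecast field is blended with sparse observations wherever they exist, and the blended "analysis" becomes the global data image (and the starting point for the next forecast). This is a deliberately simplified illustration with a single hypothetical blending weight, not any actual operational assimilation scheme.

```python
def analysis_step(forecast, observations, gain=0.5):
    """Blend a forecast field with observations where they exist.

    forecast:     model values at each grid point
    observations: observed values, or None where no observation exists
    gain:         how far to nudge the model toward each observation
    """
    analysis = []
    for f, obs in zip(forecast, observations):
        if obs is None:            # no observation here:
            analysis.append(f)     # the model value fills the gap
        else:                      # observation available:
            analysis.append(f + gain * (obs - f))  # nudge toward it
    return analysis

forecast = [10.0, 12.0, 14.0, 16.0]       # the model's global data image
observations = [11.0, None, None, 15.0]   # sparse, incomplete data
print(analysis_step(forecast, observations))  # → [10.5, 12.0, 14.0, 15.5]
```

Note how the output is neither pure observation nor pure simulation: where data are missing, the model speaks; where data exist, model and data are reconciled. That hybrid field is what Edwards means by a global data image.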