Saturday, August 22, 2009

Location of Golgotha (Part 1)

One of the most important recurring elements in Cryptonomicon is the location (latitude and longitude) of the entrance to Golgotha, the excavation at Bundok Site, in which a large quantity of gold was stored near the end of World War II. On pages 789-792 is a description of the method, used by Lieutenants Goto Dengo and Ninomiya, to determine that location in 1944. On pages 1039-1040, Randy Waterhouse recalls the values he had decrypted from the Arethusa intercepts: "In the seconds figure, the Golgotha numbers have one digit after the decimal point, which implies a precision of ten feet. GPS receivers can give you that kind of precision. Randy's not so sure about the sextants that the Nipponese surveyors presumably used during the war."


This stated precision of the Global Positioning System (GPS) was consistent with Randy's earlier experience. On page 601, Randy is told the coordinates of a different site, accurate to hundredths of a second. On page 633, Randy mentions this encounter in an email to his colleagues, saying "... , implying a maximum positional error on the order of the size of a dinner plate." On page 655, Randy reports finding a stack of gold bars there, with the help of his GPS receiver.


On page 1064, Randy and Goto Dengo verify that they both know the coordinates of Golgotha, as of 1944. On pages 1089-1090, Randy reaches that point, as shown by his new GPS receiver.


Let us start out with the simplest question: What distance on Earth corresponds to one second of arc? The original definition of the meter was that the Paris meridian, from pole to equator, should measure ten million meters. (They didn't get it quite right.) That quarter circle contains 90 x 60 x 60 = 324,000 arc-seconds, so that one arc-second corresponds to 30.9 meters or 101.3 feet. There is no point in keeping more decimal places in this discussion, because Earth's surface is approximately an ellipsoid of revolution. The polar radius b is shorter than the equatorial radius a. [This relationship is often expressed by the flattening parameter f, in the form b = a (1 - f).] The distance corresponding to one arc-second depends upon the latitude, and upon the direction of the displacement. Thus Stephenson was reasonable in both of his statements about the precision of GPS.
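That arithmetic is simple enough to check directly. Here is a short Python sketch, using the historical ten-million-meter definition of the quarter meridian (not the modern measured value):

```python
# Distance on Earth corresponding to one second of arc,
# from the original definition of the meter:
# pole-to-equator along the Paris meridian = 10,000,000 m.
quarter_meridian_m = 10_000_000
arc_seconds = 90 * 60 * 60            # 324,000 arc-seconds in a quarter circle

meters_per_arcsec = quarter_meridian_m / arc_seconds
feet_per_arcsec = meters_per_arcsec / 0.3048   # 1 ft = 0.3048 m exactly

print(f"{meters_per_arcsec:.1f} m per arc-second")   # 30.9 m
print(f"{feet_per_arcsec:.1f} ft per arc-second")    # 101.3 ft
```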


Prior to the development of GPS, there were two ways in which the position of an arbitrary point could be determined. One method involves astronomical observations, to measure the latitude and longitude directly. It works at any place on Earth, on land or water. The other method is by survey, to measure the distance and direction to that arbitrary point, from some point whose latitude and longitude are already known. It works just as well in an archipelago such as the Philippines, as on a continent such as North America, so long as water gaps can be spanned by lines of sight between triangulation stations on land.


There are two levels at which one can ask whether the 1944 measurement was possible, with that stated precision. At the higher level, what must you have and what must you know, in order to measure latitude and longitude to 0.1 arc-second, without using GPS, but matching GPS? At the lower level, what precision could Goto and Ninomiya reasonably have attained, with their equipment and technique?


The problem of measuring longitude by astronomical observations has historically involved measuring the time of some event with sufficient accuracy. [Stephenson even mentioned "the longitude problem" in The System of the World (e.g., pages 345-348).] In order to know longitude to 1/10 arc-second, the time must be measured to the nearest 1/150 second. This is completely impossible if human reaction time is involved.
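The 1/150 second figure follows directly from Earth's rotation rate. A Python sketch, using the mean solar day for simplicity (the sidereal day is about 0.3% shorter, which does not change the conclusion):

```python
# Time precision required to fix longitude to 0.1 arc-second.
# Earth turns 360 degrees = 1,296,000 arc-seconds in one day.
seconds_per_day = 24 * 60 * 60        # 86,400 s (mean solar day)
arcsec_per_rev = 360 * 60 * 60        # 1,296,000 arc-seconds

time_per_arcsec = seconds_per_day / arcsec_per_rev    # = 1/15 s
time_for_tenth_arcsec = time_per_arcsec / 10          # = 1/150 s

print(f"{time_per_arcsec:.4f} s per arc-second")        # 0.0667 s
print(f"{time_for_tenth_arcsec:.4f} s per 0.1 arc-sec") # 0.0067 s, i.e. 1/150 s
```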

The best astronomical clocks before World War II were typically based on a vibrating quartz crystal in a controlled environment. The frequency of the crystal was divided electronically, and used to control an electronic alternating-current source at some convenient frequency. The alternating current drove a synchronous electric motor, and reduction gears from the motor shaft drove analog second, minute, and hour hands. Such a clock was much more accurate over long periods than any clock with a mechanical escapement, but it is not obvious how one could pull out the times of external events, to this desired precision.


Purely electronic clocks, which essentially count the oscillations of some atomic or molecular system, are a product of the development of radar during World War II. It is almost trivially easy to pull out the time, to much better precision than this, without disturbing the clock itself. However, such clocks did not exist in 1944.


The astronomical event itself must appear to be no larger than 1/10 arc-second. That typically means that it must involve a star, which acts as a point source to be viewed by a telescope. The standard relationship, for the diffraction pattern produced at a circular aperture, is that the first zero occurs at the angle such that the path difference, through points across the diameter of the aperture, is 1.22 wavelengths. For light of wavelength 550 nanometers (the peak of the response of the human eye), and an angle of 0.485 microradian (1/10 arc-second), the aperture must exceed 1.38 meters (54 inches). This is a major astronomical instrument.
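The aperture figure comes from solving the Rayleigh criterion, theta = 1.22 lambda / D, for the diameter D. A Python sketch:

```python
import math

# Rayleigh criterion: first diffraction zero at theta = 1.22 * wavelength / D.
# Solve for the aperture D needed to resolve 0.1 arc-second.
wavelength = 550e-9                        # m, peak of the eye's response
theta = 0.1 * math.pi / (180 * 3600)       # 0.1 arc-second in radians

D = 1.22 * wavelength / theta
print(f"{theta * 1e6:.3f} microradian")    # ~0.485
print(f"{D:.2f} m aperture")               # ~1.38 m
print(f"{D / 0.0254:.0f} inches")          # ~54 inches
```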


The event would typically be the passage of the star across the local celestial meridian, defined by the local vertical and the celestial pole. The longitude of the telescope would be determined by the time of passage, as the star image moves behind a cross-hair, which is aligned with the meridian. The latitude of the telescope would be determined from the angle of elevation of the star image above the local horizontal, or equivalently, by the angle between the local vertical and the star image.


Of course, the coordinates of the star (right ascension for longitude and declination for latitude) would have to be known to a precision of 1/10 arc-second. This was completely unavailable in 1944. According to my Encyclopedia Britannica, star atlases even in 1989 (the date of publication) were typically good to only 1/4 arc-second.


One remaining problem in making very accurate positional measurements by astronomical observations, and comparing them to GPS measurements, is hidden in the above mentions of "local celestial meridian" and "local horizontal or vertical". (The following discussion is taken from the texts which I used for teaching an introductory course in Earth Science.)

If the mass of Earth were distributed with rotational symmetry, and with density decreasing from the center to the surface, then Earth's surface could indeed match the reference ellipsoid of GPS. At any point on such a homogeneous planet, the local vertical (as revealed by a plumb line) would be perpendicular to the reference ellipsoid. The local horizontal (as revealed by an undisturbed liquid surface) would be tangential to the reference ellipsoid. The local celestial meridian would be defined by the axis of the reference ellipsoid and the point itself.
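For concreteness, here is a Python sketch of the arc-second lengths on the WGS 84 reference ellipsoid, showing the dependence on latitude and direction mentioned earlier. The choice of 16 degrees North (roughly Luzon) is my own illustration, since Bundok Site is fictional:

```python
import math

# Length of one arc-second on the WGS 84 reference ellipsoid,
# north-south and east-west, at a given latitude.
a = 6378137.0                  # equatorial radius, m (WGS 84)
f = 1 / 298.257223563          # flattening (WGS 84)
e2 = f * (2 - f)               # square of the first eccentricity

def arcsec_lengths(lat_deg):
    phi = math.radians(lat_deg)
    w = 1 - e2 * math.sin(phi) ** 2
    M = a * (1 - e2) / w ** 1.5        # meridian radius of curvature
    N = a / math.sqrt(w)               # prime-vertical radius of curvature
    arcsec = math.pi / (180 * 3600)    # one arc-second in radians
    return M * arcsec, N * math.cos(phi) * arcsec

ns, ew = arcsec_lengths(16.0)
print(f"north-south: {ns:.2f} m, east-west: {ew:.2f} m per arc-second")
# ~30.74 m north-south, ~29.73 m east-west at 16 degrees N
```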


In the actual Earth, the mass distribution has considerable lack of homogeneity. The scale of the inhomogeneities ranges from continents versus oceans, to mountain ranges versus oceanic trenches, to ore bodies versus petroleum deposits. One effect of inhomogeneity is gravitational anomalies. Directly above a region of greater (lesser) density, the measured acceleration of gravity would be stronger (weaker) than on a homogeneous planet. Another effect is variation of sea level. The ocean water would tend to pile up near a positive anomaly, but would tend to sag near a negative gravitational anomaly.

The remaining effect is deflection of the vertical. The acceleration of gravity g would tend to point toward a region of greater mass density, and away from a region of lesser mass density. This effect is obviously involved in position determination. Any north-south component of g would cause the astronomical latitude to differ from the GPS latitude, and any east-west component would similarly affect the astronomical longitude.


All of these effects can be combined in the concept of the 'geoid'. This is defined as a surface of constant gravitational potential, which matches Earth's mean sea level at every point. It can be specified by its elevation, at every point, relative to the reference ellipsoid. A plumb line is everywhere perpendicular to the geoid. The geoid is the zero for measuring elevations using 'bubble' instruments. Once the geoid is known, then g can be calculated for any point on or outside it.


Early attempts to determine the geoid were based on gravimetric surveys, in which the magnitude of g and the elevation were measured at a grid of points on Earth's surface. A complete determination would have required the grid to extend over the entire surface of Earth. However, that requirement was eased with the launch of artificial satellites in near-Earth orbits. The orbit of a satellite depends upon the exact strength and direction of g at every point of the orbit. When satellites have been tracked in enough different orbits, that information can be combined with gravimetric surveys to give a complete geoid, typically in the form of an expansion in spherical harmonics.


The deflection of the vertical at any point could be found from the slope of the geoid there (relative to the reference ellipsoid), but it can also be found directly from the expansion of the gravitational acceleration g in spherical harmonics. I have not found online any report of determination of the deflection of the vertical for the Philippines. Such a report is available for survey stations in Canada (see http://www.geod.nrcan.gc.ca/hm/pdf/evaluationofegm08_e.pdf ), where both the north-south and east-west deflections are as large as 23 arc-seconds.

A contour map of the gravitational anomaly, which is nearly equivalent to that for the geoid, was recently published [O. Andersen et al., Physics Today 62, 4, 88 (April 2009)]. It shows a texture near the Philippines comparable to that across Canada, so that I would expect the deflections of the vertical there to be comparable. Of course, the deflection must itself be known to the same precision as the astronomical position to be adjusted, here 0.1 arc-second.
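To see what such deflections would do to an astronomical position, multiply by the roughly 31 meters per arc-second found earlier. A sketch of the scale, using the Canadian maximum quoted above:

```python
# Position error caused by deflection of the vertical.
# A deflection of d arc-seconds displaces the astronomically
# determined position by roughly d * 30.9 m from the GPS position.
meters_per_arcsec = 30.9

for deflection_arcsec in (0.1, 1.0, 23.0):   # 23" is the Canadian maximum cited above
    error_m = deflection_arcsec * meters_per_arcsec
    print(f"{deflection_arcsec:5.1f} arc-sec -> {error_m:6.1f} m")
# 0.1 arc-sec -> ~3 m; 23 arc-sec -> ~711 m
```

So an uncorrected deflection of even one arc-second would already swamp the ten-foot precision claimed in the novel.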


The final problem in comparing astronomical positions to GPS positions, is that the two systems do not share the same 'datum', or coordinate system. In particular, a GPS receiver does not read longitude zero, in the fundamental datum of GPS (WGS 84), when it is at the meridian telescope of the Greenwich Observatory. The position of that instrument historically defined zero longitude, for astronomical determinations.

Essentially, each system is compatible within itself, but it should not be expected to be compatible with the other system.


Even if all of the above problems could have been anticipated in 1944, so that the location of Golgotha was correctly known to within three meters, it still might not have been found there fifty years later. The notion of continental drift, or plate tectonics, had been proposed earlier, but the evidence to support it was developed after World War II. I don't know the actual speed of the Philippine platelet, but that distance and time represent a speed of 6 centimeters per year. That is exactly in the range of speeds reported for other plates, e.g., India colliding with Asia, to produce the 2008 earthquake in China.
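The speed quoted is just the displacement divided by the elapsed time; a quick check in Python (the fifty-year interval is approximate):

```python
# Plate-motion speed implied by a 3-meter position shift over fifty years.
displacement_m = 3.0       # roughly the GPS precision at 0.1 arc-second
years = 50                 # interval between the 1944 survey and the GPS visit

speed_cm_per_year = displacement_m * 100 / years
print(f"{speed_cm_per_year:.1f} cm/year")   # 6.0 cm/year
```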


All in all, I am forced to conclude that no astronomical method available in 1944 could have produced a position for Golgotha matching the one given by GPS later.


A separate posting will consider how well Lieutenants Goto and Ninomiya might actually have done, in determining the location of Golgotha.

Friday, August 21, 2009

Tides in the Mediterranean

Jack Shaftoe saw a "high tide mark" on the beach at Algiers (page 4 of the Confusion). Months later, "the tide was quite low", when he visited Malta (page 209). When I was reading Julius Caesar's De Bello Gallico in the original Latin in 1944, I learned that there were no significant tides in the Mediterranean Sea. Caesar's unfamiliarity with the phenomenon of tides caused him problems with military operations, on the Atlantic coasts of Gaul and of Britain. [Undoubtedly, that lack of knowledge also contributed to the failure of his attack on Qwghlm (page 256 of Cryptonomicon).] I cannot recall now whether the absence of Mediterranean tides was mentioned by Caesar himself, or only in the classroom discussion. For sure, there is no Latin equivalent to the English word 'tide'. (See an online English-Latin dictionary, such as http://www.freedict.com/onldict/lat.html .)

As a matter of fact, there are noticeable vertical tides at the head of the Adriatic Sea (near Venice), and at the 'corner' of the Gulf of Gabes. Both places were nearly uninhabited at the time of Caesar. Vertical tides have been measured at Malta, but they are inches at most. The scuba diving establishments there advertise: "Throw away your tide tables." (See http://www.aquatours.com/gozo/malta_bugibba-diving.htm .) There are also significant horizontal tidal currents near Malta, and especially in the straits of Messina and of Sicily.

Unfortunately, Neal Stephenson built this 'invisible' tide into the story. It is only implicit at Algiers, in that the galley, on which Jack was a slave, had been beached to have its barnacles scraped. That is very easy to do on a tidal beach, but without tides, the slaves would have to pull it out of the water by hand. When finished, they would either drag it back into the water, or dig a channel to bring the water to the galley. None of this was mentioned by Stephenson. At Malta, he explicitly needed the piers to be higher than the galleys, in order to advance the story.

Surely, if the basin of the Mediterranean were different (in length, width, or depth, or in some combination), there could be significant tides almost anywhere in it. My knowledge of proper oceanography is so scanty that I can't even start working on this problem.

Thursday, August 20, 2009

Introduction

This blog will analyse various topics of science and technology, which Neal Stephenson incorporated into his four books of historical science fiction. That composite genre offers pitfalls to the unwary author, as well as opportunities to the readers. Typical science fiction is set in the future, so that authors can invent almost any new scientific "facts", and their technological applications, that they want. However, Stephenson chose to put part of the action of Cryptonomicon in World War II, and all of the action of his trilogy, The Baroque Cycle, in the period 1655 to 1715. In both of those eras, actual technologies existed, and were recorded and preserved, along with the science that inspired or explained them.

Any author, who puts himself into this position, should avoid displaying mistakes in history or in science/technology, or in the combination. The combination errors are often anachronisms, which may be trivial if the scale is a few years, but may be more serious if the scale is centuries. Of course, the author can also score successes, where everything fits perfectly.

For the readers, the opportunities are new inspirations to learn or re-learn science and technology. One can do a free association while reading, asking mental questions such as: "Does this fit with what I already know?"; or "Would it really work that way?"; or "Did it actually happen that way?" For me, it has meant that I have read each one of these books several times, and I have learned something new every time. In fact, for some of the "case studies" I have considered for this blog, I have changed my mind about what I had learned, after a subsequent rereading. On a sobering side, I now wonder how many students I had managed to baffle, while trying to teach this stuff. But most importantly, I have enjoyed every one of Stephenson's books, every time I have read it.

As my UserName implies, I received a B.S. degree in Engineering Physics in 1951. In that era, introductory engineering courses typically included lots of historical material. I had also paid attention to things which happened during World War II. All of those memories were available as I did my own free associations. Unfortunately, I have disposed of my personal professional library, so that most of the analysis in these case studies is done "off the top of my head". Please feel free to catch me out on any mistakes I make here.

The subject matter in this blog will be at the introductory undergraduate level in physics, mathematics, astronomy, earth science, and branches of engineering. Specific topics will include optics, oscillatory systems, planetary motions, tides, surveying, deflections of beams, machinery, weapons, and units of measure.

I will cite pages in Stephenson's books as they become involved. To make it easy for me, I will refer to the editions which I happen to have. My copy of Cryptonomicon is in the first printing of Avon Books (HarperCollins) paperback. My copies of Quicksilver, the Confusion, and The System of the World, are all in first printings of First Editions of William Morrow (HarperCollins) hardcover.