Friday, December 18, 2009
The analysis of the first observation requires estimates of two different times. One of these is the elapsed time after sunset. The first bout happened soon after sunset, in that the galley slaves could still see light in the sky as they were walking to the banyolar, but the courtyard had to be illuminated so that the crowd of spectators could see the activities. The first bout lasted perhaps 20 to 30 minutes, what with the preliminary activities, the fighting itself, and the settling of bets afterward. If the “long series” included about eight bouts of about the same length, then the lunar observation took place about three to four hours after sunset. The “merest crescent” moon would have been waxing after New Moon, visible in the west to southwest after sunset. An equivalent waning crescent moon would be visible in the east to southeast before sunrise, or well after midnight.
The second time to be estimated is the 'age' of the new moon on the evening in question, or equivalently, the angle between moon and sun as viewed from earth. Unfortunately, the term “merest crescent” is poetic but imprecise. A rough table relating age, angle from sun, fraction of the apparent radius illuminated at the western limb, and time between sunset and moonset follows.
Age      Elongation   Illumination   Time in Sky
(days)   (degrees)    (fraction)     (hours)
  1          12           0.023          0.8
  2          24           0.09           1.6
  3          37           0.19           2.4
  4          49           0.31           3.3
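A table like this can be reproduced, approximately, from a simple model: the moon moves eastward from the sun about 12.2 degrees per day, the illuminated fraction of the apparent radius at the limb is 1 − cos(elongation), and the time between sunset and moonset is about elongation/15 hours. This sketch is my reconstruction, not necessarily the model behind the table above; its illumination values run a few hundredths high for the older crescents.

```python
import math

SYNODIC_MONTH = 29.53   # days, mean lunar synodic month

def crescent(age_days):
    """Rough geometry of a young moon, assuming a circular orbit at constant speed."""
    elongation = age_days * 360.0 / SYNODIC_MONTH              # degrees east of the sun
    illumination = 1.0 - math.cos(math.radians(elongation))    # fraction of radius lit at the limb
    hours_in_sky = elongation / 15.0                           # the sky turns 15 degrees per hour
    return elongation, illumination, hours_in_sky

for age in (1, 2, 3, 4):
    e, f, h = crescent(age)
    print(f"{age} d:  {e:4.1f} deg  {f:.3f}  {h:.1f} h")
```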
I would say that a four-day-old moon is already a fairly “fat” crescent, but a three-day-old moon may qualify as “mere” if not “merest”. Unfortunately, it would probably already have set before the galley slaves reached the roof. There would be no real problem if the moon had been described as “setting”, rather than “creeping across the sky”.
Stephenson’s description of the “half-moon” is also poetic, and rather good. It would be about 90 degrees east of the sun, and could be seen until about midnight. At that time of year, the moon’s declination would be well south of the sun’s. As viewed from the latitude of the Gulf of Cadiz, the illuminated half of the moon would be the lower right, rather than exactly “its underside”. The problem here is the exact date. The site http://eclipse.gsfc.nasa.gov/phase/phasecat.html offers tables of eclipses and lunar phases over a 6000-year period. In 1690, First Quarter actually happened on 11 August Gregorian [1 August Julian]. Throughout The Baroque Cycle, Stephenson uses Julian dates, as used in Great Britain during those years. On 5 August Julian, the moon would actually have been about midway between First Quarter and Full Moon, with about 3/4 of its face illuminated.
Sunday, December 6, 2009
When Günter Bischoff announced the birth weight of Günter Enoch Bobby Kivistik (page 1057 of Cryptonomicon), he almost certainly did not say, “Eight pounds, three ounces – superb for a wartime baby.” He would have used the metric units that he, Rudy von Hacklheber, and Otto Kivistik used every day. He would say, "3,714 grams", or "3.7 kilograms". I doubt that there was a scale anywhere in Norrsbruck (where the baby had been born), calibrated in English units, and who would have bothered to convert the weight? (Bobby Shaftoe would have wanted English units, but he had died without meeting Günter or Enoch in the Philippines.)
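The conversion is easy to check:

```python
GRAMS_PER_POUND = 453.59237
GRAMS_PER_OUNCE = GRAMS_PER_POUND / 16   # about 28.35 g

grams = 8 * GRAMS_PER_POUND + 3 * GRAMS_PER_OUNCE
print(f"8 lb 3 oz = {grams:.0f} g = {grams / 1000:.1f} kg")
```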
Saturday, December 5, 2009
A few days later, Daniel arrived at Woolsthorpe to assist Isaac Newton in observations of Venus (pages 150-156). The house, “made of ... soft pale stone”, had a “clear sunny exposure” at its southern end, with “almost no windows there–just a couple of them, scarcely larger than gunslits, ...” As seen inside the house, “The southern half–with just a few tiny apertures to admit the plentiful sunlight–was Isaac’s: ...” “The sun was going down, and they were preparing for Venus to wheel around into the southern sky.” “Isaac had worked out during which hours of the night Venus would be shining her perfectly unidirectional light on Woolsthorpe Manor’s south wall, and he’d done it not only for tonight but for every night in the next several weeks.” “When Daniel looked, he realized that he could see not only the spectrum from Venus, but tiny, ghostly streaks of color all over the wall: the spectra cast by the stars that surrounded Venus in the southern sky.” “The earth spun and the ribbons of color migrated across the invisible wall, an inch a minute, ...”
Finally, in the early evening of 28 July 1714 (page 539 in The System of the World), Lord Ravenscar is with Lord Bolingbroke, preparing to use Bolingbroke’s telescope. Bolingbroke says, “Presently night shall fall and Venus shall shine forth.”
Let us consider whether or not Venus could have done those things at those times.
Venus, like Mercury, is an inferior planet relative to Earth, i.e., closer to the Sun. The first thing that means is that Venus, as viewed from Earth, can never get very far from the Sun. The direct result is that the simplest description of its motion is 'synodic', i.e., relative to the Sun, rather than relative to the fixed stars. The plane of the orbit of Venus is inclined at an angle of about 3.4 degrees from the plane of Earth’s orbit. Therefore, Venus is never very far from the ecliptic, the path which the Sun appears to follow throughout the year. If the point of maximum excursion of Venus from the ecliptic happens to coincide with its closest approach to Earth, then it can appear to be as much as about 9 degrees north or south of the ecliptic.
To get a reasonable approximate description of Venus’s motion along the ecliptic, let us make the simplifying assumptions that Earth and Venus move in circular orbits (rather than the actual ellipses), and at constant speeds (rather than faster when closer to the Sun). We can also ignore the inclination of Venus’s orbit, so that everything of interest happens on one plane. Earth’s sidereal period of 365.25 days (i.e., relative to the stars), and Venus’s sidereal period of 224.7 days, combine to produce a synodic period of 584 days. During that period, Earth performs 1.599 revolutions, and Venus performs 2.599 revolutions, or exactly one additional revolution. Thus, the original configuration of Sun, Venus, and Earth is exactly reproduced after any integer number of synodic periods.
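The synodic period follows directly from the two sidereal periods:

```python
EARTH_YEAR = 365.25   # days, sidereal period of Earth (approximately)
VENUS_YEAR = 224.7    # days, sidereal period of Venus

synodic = 1.0 / (1.0 / VENUS_YEAR - 1.0 / EARTH_YEAR)
print(f"synodic period = {synodic:.0f} days")
print(f"Earth performs {synodic / EARTH_YEAR:.3f} revolutions; "
      f"Venus performs {synodic / VENUS_YEAR:.3f}")
```

Note that the difference of the two revolution counts is exactly one, by construction.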
In evaluating the synodic description, Earth and Sun are considered fixed at a separation of one astronomical unit (AU). Venus revolves eastward (counter-clockwise as viewed from ecliptic north) in a circle of radius 0.723 AU. Its angular speed is 0.6164 degrees per day, so that it goes once around the circle in 584 days. The 'elongation' of Venus, which is its angular displacement from the Sun as viewed from Earth, can be found at any time by solving the Sun-Earth-Venus triangle.
Let us start with zero elongation, increasing eastward. Venus is then directly on the far side of the Sun from Earth ('superior conjunction'). At that time, it has its maximum velocity in elongation, at 0.259 degrees per day. The apparent angular speed decreases continuously for about 217 days, at which point the line of sight from Venus to Earth is tangential to Venus’s orbit. This is the condition of 'greatest eastward elongation', and Venus is 46.3 degrees east of the Sun. (Note that sine 46.3 degrees = 0.723, the ratio of orbit radii.) Venus’s synodic motion is very slow at this time. It spends about 24 days within 0.5 degree of greatest elongation. (That angle is the apparent width of Sun or Moon.)
After greatest eastward elongation, Venus moves westward (toward the Sun). Its speed in elongation increases continuously for about 75 days, until it is directly between Earth and Sun ('inferior conjunction'). The maximum westward speed is about 1.609 degrees per day. During this half-synodic-period of 292 days, Venus is east of the Sun, and is visible in the western sky after sunset (an 'evening star'). As viewed through a telescope, it exhibits phases, similar to those of the moon. Near superior conjunction, it appears 'full' (circular). It is 'waning gibbous' until greatest eastward elongation, when it appears 'half full'. Thereafter it is 'waning crescent', until it is 'dark' near inferior conjunction.
The second half-period of the synodic motion is essentially the mirror image of the first half-period. For about 75 days the apparent angular speed decreases continuously until Venus reaches greatest westward elongation at 46.3 degrees west of the Sun. It then starts moving eastward, until after another 217 days of continually increasing angular speed it reaches superior conjunction again. During this entire time it is visible in the eastern sky before sunrise (a 'morning star', or the “Dawn star”). Its phases are successively 'dark', 'waxing crescent', 'half full' (at greatest westward elongation), 'waxing gibbous', and 'full'.
The motion of Venus relative to the stars can be found by superimposing the Sun’s average motion (about 0.986 degrees per day eastward along the ecliptic) upon this synodic motion. The effects of the eccentricity of the orbits of Venus and Earth are to change the angles of greatest elongation by a degree or two, and to change the time intervals between unique configurations by a few days.
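The figures above (greatest elongation of 46.3 degrees, and the speed in elongation of 0.259 degrees per day at superior conjunction) can be verified numerically under the same circular-orbit assumptions, by solving the Sun-Earth-Venus triangle at each time step:

```python
import math

R = 0.723              # radius of Venus's orbit, AU (Earth-Sun distance = 1)
OMEGA = 360.0 / 584.0  # synodic angular speed, degrees per day (0.6164)

def elongation(days):
    """Elongation of Venus (degrees, unsigned), measured from superior conjunction."""
    # Sun at the origin, Earth fixed at (1, 0); Venus starts opposite Earth.
    phi = math.radians(180.0 - OMEGA * days)
    vx, vy = R * math.cos(phi), R * math.sin(phi)
    to_sun = (-1.0, 0.0)                      # unit vector, Earth toward Sun
    to_venus = (vx - 1.0, vy)
    cos_e = (to_sun[0] * to_venus[0] + to_sun[1] * to_venus[1]) / math.hypot(*to_venus)
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_e))))

greatest = max(elongation(t) for t in range(292))   # scan the evening-star half-period
print(f"greatest elongation about {greatest:.1f} degrees")
print(f"arcsin(0.723) = {math.degrees(math.asin(R)):.1f} degrees")
print(f"speed at superior conjunction about {elongation(1) - elongation(0):.3f} deg/day")
```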
It is obvious that the first described appearance of Venus is quite impossible. Any planet which is about 180 degrees from the Sun must be superior to Earth, i.e., further from the Sun. That configuration is 'opposition', and it is the time at which such a planet is closest to Earth, and hence appears brightest. The two best candidate planets are Jupiter and Saturn, because they appear white, like Venus. Mars is less bright, and appears reddish rather than “blazing”. I tried online to find where the superior planets were at that time, but I gave up (perhaps too soon) on finding a free historical ephemeris. Apparently there is still so much money to be made in astrology, that the proprietors of ephemerides insist on being paid.
The description of Woolsthorpe Manor, given by Stephenson, is not a very good match to the photographs easily found in a search on that place name. (See e.g., Wikipedia.) The photos show several good-sized windows on the sunlit broad front of the house. However, there could have been extensive alterations in the intervening centuries.
The first problem with viewing the spectrum of Venus lies in getting the light through the wall of the house, as described by Stephenson. My guess is that the masonry might be eight inches thick. If the apertures were made like “gunslits”, they might be relieved at an angle of perhaps 45 degrees on each side. In such a case, light from only those directions between south-east and south-west could get in.
Normal window openings, perhaps two feet wide, would relieve the problem almost completely. Opaque panels over the sashes could darken the room. If such a panel were near the midpoint of the thickness of the wall, then an aperture in the center of the panel would admit light from any direction less than about 71 degrees from south.
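That half-angle follows from simple trigonometry, assuming the eight-inch wall guessed above, with the panel at mid-thickness:

```python
import math

panel_depth_in = 8.0 / 2   # aperture panel at the mid-thickness of an 8-inch wall
half_window_in = 24.0 / 2  # half of a two-foot window opening

half_angle = math.degrees(math.atan(half_window_in / panel_depth_in))
print(f"light admitted up to {half_angle:.1f} degrees either side of south")
```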
It is difficult to reconcile Stephenson’s statement, about Venus wheeling around into the southern sky, with the above description of Venus’s synodic motion. If Venus is an evening star, then at sunset it is already as far south as it will ever be during that night. (The only possible exception is if greatest eastward elongation nearly coincides with winter solstice. At that high latitude (about 52.8 degrees), the sun then sets well south of west. Calculations of the azimuths of sunset and of Venus at that time are left to the interested reader.)
On the other hand, if Venus is a morning star, then sunset has nothing to do with the situation. The observers should go to bed early, so that they can get up before sunrise, when Venus is sufficiently south of east.
Newton’s calculations of Venus’s visibility on various nights were no big deal. Except close to inferior conjunction, Venus moves so sedately that times for visibility would change by only a few minutes over the course of a week.
In order to see a spectrum of Venus’s light with the unaided eye, Venus should be as bright as possible. Its brightness depends upon both the phase, which determines how much illuminated area is exposed, and the distance from Earth, which determines the solid angle subtended by that illuminated area. The maximum brightness occurs between greatest elongation and inferior conjunction, but is only slightly greater than the brightness at greatest elongation.
The really big question remains: where was Venus relative to the sun in the early spring of 1666? Fortunately, the free site http://www.fourmilab.ch/images/venus_daytime/ presents the article “Viewing Venus in Broad Daylight”, by John Walker. It includes a calculator for dates of greatest elongations of Venus, for arbitrary years. It does not offer dates of conjunctions. The dates which span the time of interest are as follows.
1665 Sep 11    46.461 deg W
1666 Nov 29    47.473 deg E
Midway between those two dates was 1666 Apr 21. That should be within perhaps 5 days of the true date of superior conjunction. Unfortunately, that website does not identify which calendar is used for those dates. I would guess that it is Gregorian (“New Style”), which was then used in most of continental Europe. In Britain, they still used the Julian calendar (“Old Style”), and called it April 11.
I don’t know exactly when apple trees bloom in Lincolnshire, but it almost has to be rather close to April 21. Unfortunately, superior conjunction is the worst possible time to make observations of Venus. The planet is as far away from Earth as it ever gets, so that it has only about one-third its maximum brightness. It is lost in the glow of the twilight sky, until it gets far enough away from the sun. At a guess, it should be at least 10 degrees away, in order to give any useful viewing time. That would take about six weeks after conjunction. As the final blow, Venus would be very close to directly west at sunset. Its light would not shine on the south wall of Woolsthorpe Manor at all, during hours of darkness.
Finally, the dates of greatest elongation which span midsummer of 1714 are as follows.
1713 Aug 29    45.459 deg W
1714 Nov 16    47.473 deg E
The rather close match of these dates to the previous case arises because a difference of 30 synodic periods amounts to just less than 48 Earth years. The superior conjunction for this case happened close to 1714 April 8 (Gregorian).
The evening in question [28 July (Julian) or 8 August (Gregorian)] was 122 days later, and Venus’s elongation would have been about 30.5 degrees Eastward. It was indeed an evening star on that date, and would have been clearly visible in the western sky after sunset. (Unfortunately, the seeing was probably very bad after all the bonfires were started.)
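Using the same circular-orbit model described above for the synodic motion, the elongation on that evening works out as follows (the 122-day interval runs from 8 April to 8 August, Gregorian):

```python
import math

R = 0.723              # radius of Venus's orbit, AU
OMEGA = 360.0 / 584.0  # synodic angular speed, degrees per day

days = 122.0           # from superior conjunction (8 April 1714) to the evening in question
phi = math.radians(180.0 - OMEGA * days)
vx, vy = R * math.cos(phi), R * math.sin(phi)   # Venus; Sun at origin, Earth at (1, 0)
to_venus = (vx - 1.0, vy)
cos_e = -(vx - 1.0) / math.hypot(*to_venus)     # dot product with the unit vector toward the Sun
elong = math.degrees(math.acos(cos_e))
print(f"elongation about {elong:.1f} degrees east of the Sun")
```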
Thus Stephenson put Venus in the right place one time out of three.
Saturday, November 21, 2009
During the production process, the gold stock was carried on a leather "skid" between work stations, apparently to protect it from damage. The ultimate pieces were called variously foils, leaves, sheets, and cards, names which have connotations suggesting different thicknesses. These statements should raise the questions: What were the mechanical properties of these pieces? In particular, how massive and sturdy were they? They were asserted to be made of "Solomonic gold". In the absence of detailed information about the properties of such material, let us assume that the specific gravity was 19.3, the same as normal pure gold.
The only dimensions given in The System of the World were the thickness, as "thinner than a fingernail", and the edge length, as "about a hand-span" (pages 412, 413). In Cryptonomicon, the dimensions were in mixed units: "maybe eight inches on a side and about a quarter of a millimeter thick, with a pattern of tiny neat holes punched through it, like a computer card" (page 569). Thus each had a volume of about ten cubic centimeters and a mass of about 200 grams. They were indeed heavy and valuable. Each contained enough gold to make more than two dozen gold guinea coins of Queen Anne (see Wikipedia). At $400 per troy ounce ($12.86 per gram), as was current in Cryptonomicon (page 655), they provided long-term data storage at about $20 per byte.
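These figures are easy to verify. The per-byte price assumes a 32 x 32 grid of punch positions (1024 bits, or 128 bytes, per card), which is my inference from the 32-digit rows and the grid spacing discussed in this post:

```python
side_cm = 8 * 2.54        # eight inches
thick_cm = 0.025          # a quarter of a millimeter
density = 19.3            # g/cm^3, ordinary pure gold (assumed for Solomonic gold)

volume = side_cm**2 * thick_cm
mass_g = volume * density
print(f"volume about {volume:.0f} cm^3, mass about {mass_g:.0f} g")

# a 32 x 32 grid of punch positions holds 1024 bits = 128 bytes
price = mass_g * 12.86    # $400 per troy ounce = $12.86 per gram
print(f"about ${price:.0f} of gold per card, or ${price / 128:.0f} per byte")
```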
These overall dimensions allow an estimate of other dimensions of the cards. The maximum spacing of the grid lines for the holes would be 6 millimeters, leaving about 8 millimeters between the outer grid lines and the edges. The holes could be 2 millimeters in diameter, thereby looking small, if not "tiny", relative to the spacing between them. This diameter is indeed rather small, because the sensing rods in the ultimate mechanical "Logic Mill" would have to be even smaller in diameter, to ensure free passage despite possible misalignment. The rods could almost be called 'needles'. On the other hand, if the holes were much larger in diameter, they might seriously weaken the cards.
For diameter 2 millimeters, each "bit" of gold punched from a card would have a volume of about 0.8 cubic millimeters and a mass of about 15 milligrams. That would be easy enough to weigh on a balance scale of the day, but another factor should be considered, in using total weight to count the bits (page 423 of The System of the World). The problem arises because Stephenson does not state, specifically enough, how many bits were to be weighed at once. If the bits from only a single row were weighed, the problem does not arise, because the bits would almost certainly have been the same mass within a few percent. However, if all the bits from an entire card were weighed together, there might be hundreds of them. It is not clear that the technology of 1714 could have guaranteed that all bits had the same mass within a fraction of a percent. Let us start by estimating how many holes might have been in an average card.
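The volume and mass of a single punched bit follow from the cylinder formula:

```python
import math

hole_diameter_mm = 2.0
thickness_mm = 0.25
density_g_mm3 = 19.3e-3   # ordinary pure gold

volume_mm3 = math.pi * (hole_diameter_mm / 2)**2 * thickness_mm
mass_mg = volume_mm3 * density_g_mm3 * 1000
print(f"each bit: about {volume_mm3:.2f} mm^3 and {mass_mg:.0f} mg")
```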
The information punched into the cards resembled random prime numbers (page 711 of the Confusion). One row of 32 binary digits can hold any integer between 0 and 4,294,967,295. From the leading term of the prime-number theorem, that range includes about 190 million primes. It seems unlikely that Daniel Waterhouse had assembled that many logical concepts, during his years in North America.
An alternative approach is to consider the number of cards in the set. There were "thousands" of cards, which were in five crates when transferred from Gertrude to V-Million. If we assume 1000 cards per crate, each crate would contain 200 kilograms of gold, for a total of one tonne. For a very small crew in Gertrude, that might have taken "a whole day to load in." Either they had to empty and refill each crate, carrying the cards in small batches, or they had to work slowly and carefully, so as not to drop a full crate through Gertrude's bottom.
These 5000 cards could hold 160,000 prime numbers. This seems rather on the small side for Daniel Waterhouse’s years of work. If those primes had few duplications or omissions, the largest must be about 1.9 million, from the leading term of the other form of the prime-number theorem. It would take 21 binary digits to express these largest primes, and the average number of significant binary digits over this entire set of primes is more than 19. These significant binary digits would average about half ones and half zeros, so that a typical card should yield about 300 bits.
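Both prime-number-theorem estimates (the count of primes below 2^32, and the size of the 160,000th prime) can be checked with the leading terms:

```python
import math

# primes below 2^32 (the range of one 32-digit row): leading term x / ln x
x = 2.0 ** 32
primes_below = x / math.log(x)
print(f"about {primes_below / 1e6:.0f} million primes below 2^32")

# if the set holds 160,000 primes, the n-th prime is roughly n ln n
n = 160_000
largest = n * math.log(n)
bits = math.ceil(math.log2(largest))
print(f"largest prime about {largest / 1e6:.1f} million, needing {bits} binary digits")
```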
Note that doubling the number of cards would double the amount of physical work for anyone who ever had any contact with the cards. However, it would increase the maximum and average numbers of binary digits by only about one. The only way to use all 32 binary digits would be to change the nature of the information they were recording. Either they were not prime numbers, or else Daniel Waterhouse skipped over many more primes than he included.
The bits could have different masses for two reasons: the 32 punches in the machine at Bridewell could have different effective diameters; and different cards could have different thicknesses. The difference in diameters might tend to average out for different patterns of holes across a row, but a thin (thick) card would consistently seem to have fewer (more) freed bits by weight than by actual count. For simple weighing to be correct, the card thickness (about 250 microns) would need to be reproducible to better than 0.8 micron. The hand-cranked rolling mill in the "Court of Technologikal Arts" (described on page 412 of The System of the World) demanded discussion about its usage, even without this concern about the consistency of its output. It may not be enough that Daniel Waterhouse claimed "perfectly uniform thickness" (page 424).
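The 0.8-micron figure appears to come from requiring that the weighing miscount by less than one bit out of a whole card's worth; here is my reading of that criterion:

```python
thickness_um = 250.0   # a quarter of a millimeter
bits_per_card = 300    # the typical yield estimated above

# to miscount by less than one bit when weighing a whole card's punchings,
# the relative thickness (hence mass) error must stay below 1 / 300
tolerance_um = thickness_um / bits_per_card
print(f"thickness must be reproducible to about {tolerance_um:.1f} micron")
```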
It is extremely unlikely that a single pass through any mill could reduce the thickness and increase the area of the original plate by a factor of about 12 (from an eighth of an inch to a quarter of a millimeter). If the mill could be made to feed at all, it would probably just tear pieces off the leading edge. Even a modern rolling mill, working hot metal, typically reduces thickness by perhaps ten percent at a pass, so that many passes would be required. For the total reduction wanted for this gold, 16 passes would each be -15%, 20 passes would each be -12%, or 24 passes would each be -10%.
I consider multiples of 4 so that the stock could be rotated 90 degrees between each pass, thereby keeping the stock roughly square, if it started that way. A rolling mill mainly increases the length of the material, with much less effect on the width. If Daniel Waterhouse’s original "squarish plate" was 16 inches x 18 inches (one third of a full plate from Minerva) (see below), it would end up about 57 inches by 64 inches. If the rollers weren’t long enough to handle this size, some of the crosswise passes through the mill would have to be replaced by extra lengthwise passes. The frame of the skid to carry the rolled gold would indeed be "the size of a dining table" (page 412).
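The per-pass reductions and the final sheet dimensions both follow from the overall thickness ratio:

```python
import math

start_mm = 25.4 / 8    # an eighth of an inch
final_mm = 0.25        # a quarter of a millimeter
ratio = start_mm / final_mm          # about 12.7

for passes in (16, 20, 24):
    per_pass = 1.0 - ratio ** (-1.0 / passes)
    print(f"{passes} passes at -{per_pass:.0%} each")

# the area grows by the same factor the thickness shrinks,
# so each linear dimension grows by the square root of that factor
growth = math.sqrt(ratio)
print(f"a 16 x 18 inch plate ends up about {16 * growth:.0f} x {18 * growth:.0f} inches")
```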
Yet another complication is that this cold-rolling would work-harden the gold. It would have to be annealed rather often, perhaps after every two passes, in order to restore its ductility. Eventually this would involve a rather large oven. At any rate, rolling out the gold would require resetting the rollers of the mill many times over several days.
Achieving the desired accuracy for the final pass through the mill has two separate phases. First of all, the cylindrical surface of each brass roller would need to be made with radial run-out of a fraction of a micron, over its entire length and circumference. Otherwise, the sheet of gold could not possibly come out with the same thickness throughout its entire area. This is hard to believe for the technology of 1714. However, if that could be done, it might thereafter be possible to reset the final spacing between rollers accurately enough. The rollers could be closed down on a feeler gauge, until a specified force (measured with a spring balance) was needed to pull it out. That measurement should be repeated at many points on the rollers, to verify that the small run-out has been maintained. The final resetting of the rollers might take more time than the actual rolling of the last pass.
Note that the use of feeler gauges would make setting the intermediate spacings rather easy, even without knowing the exact thickness of any gauge. For example, to achieve a reduction of thickness by 20%, one need only require that a stack of four identical thicker gauges had the same total height as a stack of five identical thinner gauges. Those intermediate passes need not be set so carefully as the final pass.
This emphasis on uniformly reproducible thickness may be overkill for counting bits by weighing. However, it would help ensure that the mechanical Logic Mill could shift its cards around without jamming. Even an "Electrical Till Corporation" card reader demanded cards of uniform thickness. The area of each golden card was easier to control by the technology of 1714, than was its thickness. The fence on the "shearing-machine" that cut the cards could possibly be reset to an accuracy of 0.1 millimeter (one part in 2000 of the edge length), so that every card had the same area, to one part in 1000. However, the jaws of the shear would have to be quite rigid, in order to control distortion over their length of several feet. The average thickness of an unpunched card would be proportional to its mass, to sufficient accuracy. Thus one needed only to compare the total mass of the bits to the initial mass of the particular card from which they came.
The mass of the cards was also involved in their packaging. The inner container resembled "... a hat-box, about a foot in diameter and half that in height" (page 801), so that each had a typical volume of about 11 liters. They were said to "... float, at least for a little while" (page 802), so that the contents could not exceed 11 kilograms. One must allow several kilograms for the wood shavings, so that perhaps only 8 kilograms of gold could be put in each box. Those 40 cards would be a stack only 1 centimeter thick, easy to be "all wrapped in paper". The outer barrels were rather small, with each holding only six hat-boxes. Something as mundane as salt cod, the intended disguise for this gold (page 804), might be shipped in larger barrels. Note that the 5000 cards we have assumed would need only 125 hat-boxes and 21 barrels.
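The packaging arithmetic checks out:

```python
import math

# hat-box: a foot in diameter, half that in height
radius_cm, height_cm = 30.48 / 2, 30.48 / 2
volume_l = math.pi * radius_cm**2 * height_cm / 1000
print(f"hat-box volume about {volume_l:.0f} liters")   # buoyancy limits contents to ~11 kg

gold_kg = 8.0                           # after allowing several kg for box and shavings
cards_per_box = round(gold_kg / 0.200)  # about 200 g per card
stack_cm = cards_per_box * 0.025
boxes = math.ceil(5000 / cards_per_box)
barrels = math.ceil(boxes / 6)
print(f"{cards_per_box} cards per box, a stack {stack_cm:.0f} cm thick; "
      f"{boxes} boxes in {barrels} barrels")
```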
Pieces of almost any metal, rolled or beaten to this thickness, can be mutilated rather easily bare-handed. Because this gold is so heavy, one should ask whether it can be handled at all, or whether it 'mutilates itself' under its own weight. We could pose almost any simple problem involving one of these cards as a beam, and calculate the resultant deflection.
The minimum model calculation requires equilibrium at each point along the beam, between the external torque applied by loads and reactions, and the internal torque across a section of the beam, due to elastic stresses arising from the curvature of the beam. At the inside of the curve, the material has been shortened from its free length, and there is a compressive stress in it. At the outside of the curve, the material has been extended, and there is a tensile stress in it. Again we will admit ignorance of the mechanical properties of "Solomonic gold", and assume the value 79 giga-pascal for Young’s modulus, the same as for normal pure gold.
As a particular simple beam problem, let us consider placing one of the cards across a smooth horizontal rod. The center of the card face would be touching the rod, and the edges of the card would be parallel and perpendicular to the rod. (This problem is mathematically identical to that of cantilevering half the length of the card out of a horizontal clamp.) The effect of punched-out holes upon the response of the beam would depend upon the pattern of the punching, so we will consider only an unpunched card.
The solution for this problem (to be verified by the interested reader) is that the free edge of the card would be 6 millimeters below the support, and the slope of the card there would be -0.08. That value for the slope is small enough to validate the linear approximation used, i.e., that the horizontal position of any point in the card is indistinguishable from the distance measured along the curve of the card. It also means that overall tension in the card can be ignored as helping to hold it up.
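For readers who would rather not rederive the beam equations, the standard formulas for a cantilever under its own distributed weight reproduce both numbers:

```python
# Uniform-load cantilever of length L (half the card, hanging past the rod):
#   tip deflection d = w L^4 / (8 E I),  tip slope = w L^3 / (6 E I)
# with per-unit-width values I = h^3 / 12 and w = rho * g * h.
rho = 19300.0        # kg/m^3, ordinary pure gold (assumed for Solomonic gold)
E = 79e9             # Pa, Young's modulus of gold
g = 9.81             # m/s^2
h = 0.25e-3          # m, card thickness
L = 8 * 0.0254 / 2   # m, half the 8-inch edge length

I = h**3 / 12
w = rho * g * h
deflection = w * L**4 / (8 * E * I)
slope = w * L**3 / (6 * E * I)
print(f"tip deflection {deflection * 1000:.1f} mm, tip slope {slope:.3f}")
```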
It is possible to model the 'feel' of one of these cards, by using some different material cut to the same edge length. The expression for the deformation of the card incorporates the information about the material in the combination: Young’s modulus x square of thickness / mass density. I found that an 8 inch square of cardboard, cut from a file folder labeled "11 pt stock", behaves rather similarly to the golden square calculated above, each under only its own weight. It is obvious that the cardboard has an initial 'set' and a 'grain'. The deflection depends upon which side is up, and which edges are parallel to the rod. It averages about 4 millimeters, so that this cardboard acts somewhat stiffer than the gold.
This square of cardboard can then be handled in various ways, and should assume about the same deformed shape that one of the golden squares would. Of course, the mass to be held up is only a few grams, rather than 200 grams. The square can be picked up between the fingertips of both hands, at opposite edges. (You can’t do that with a piece of paper that size.) It can be held by one hand in many different positions, without bending appreciably. It seems appropriate to call one of these golden squares a 'card'; it acts like a card. (It also happens to be close to the thickness of an IBM card. A dial caliper showed my last remaining IBM card to be 1/5 millimeter thick [0.008 inch].)
An obviously dangerous maneuver is to try to hold the cardboard square approximately horizontal, by grasping a single corner between thumb and forefinger. It can be made to assume a curve with radius as small as about one centimeter, next to the fingers, but it flattens out again when released.
For a very ductile material like gold, the yield point is apparently poorly defined, or at least, hard to find online. Wikipedia does give an "ultimate strength" of 100 mega-pascal, which would occur at a strain of about 1/8 percent. For the thickness of 1/4 millimeter, this would be a radius of curvature of about 10 centimeters. A gold card, bent to a radius of one centimeter like the cardboard, would almost certainly be permanently deformed in this maneuver.
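The strain and the corresponding bend radius follow from the outer-fiber relation for a bent sheet:

```python
strength = 100e6   # Pa, the Wikipedia "ultimate strength" cited above
E = 79e9           # Pa, Young's modulus of gold
h = 0.25e-3        # m, card thickness

strain = strength / E          # about 1/8 percent
# outer-fiber strain of a sheet bent to radius R is h / (2R)
radius_m = h / (2 * strain)
print(f"strain {100 * strain:.2f} percent at a bend radius of {100 * radius_m:.0f} cm")
```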
Several other points about these golden cards were suggested by the description of the arrival of Tsar Peter "the Great", accompanied by Solomon Kohan (pages 601, 602 of The System of the World.) One of these was that assay samples had been cut from corners of the cards of the original (incomplete) set, which had been sent to Leibniz at the Tsar’s court.
An important omission from the functionality of the cards was an orientation marker. These square cards, with a square pattern of hole locations, have eight-fold symmetry, but only one proper way to be inserted into the Logic Mill. Of course, it would be possible to determine the orientation by a close inspection of each card. The punches probably raised burrs on the backside of the card, which could be seen or felt. If the data indeed had the many place-holding zeros as mentioned above, it would be obvious which way to read across the rows.
In comparison, Hollerith/IBM cards are rectangles, which offer four-fold symmetry. The proper orientation for them is indicated by a cut-off corner. (I have never seen ETC cards, but I assume that they were rather similar.) The same sort of asymmetric corner cut would similarly show at a glance that golden cards in a stack were all oriented correctly. A related problem would be to show that the golden cards were in the proper order.
The other problem is most charitably treated as a typographic error. One of the golden plates, from which Daniel Waterhouse produced the cards, had just been carried out of Minerva by a barefoot seaman, as a burlap-wrapped bundle. "The package was perhaps a foot and a half wide, four long, and an inch thick." The last dimension should read "..., and an eighth of an inch thick." Even Isaac Newton had learned that the Solomonic gold entered England as hand-hammered sheets of that thickness (page 145 of The System of the World). The difficulty is obvious: a full inch thickness of gold with that area would have a mass of about 270 kilograms. That would be about double the mass of Peter "the Great", but with his great strength he might have been able to handle it. However, it would be about four times the mass of the barefoot seaman, and beyond his capabilities. One eighth of that mass could be a one-hand load for Peter, or a two-hand load for the seaman.
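The masses of the two candidate plates work out as follows:

```python
width_m = 1.5 * 0.3048    # a foot and a half
length_m = 4.0 * 0.3048   # four feet
rho = 19300.0             # kg/m^3, ordinary pure gold

mass_1in = width_m * length_m * (1.0 * 0.0254) * rho
mass_8th = mass_1in / 8
print(f"one inch thick: {mass_1in:.0f} kg; an eighth of an inch: {mass_8th:.0f} kg")
```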
This eighth-inch plate of gold offers another example for beam problems, to determine how sturdy it was. However, a far more interesting question concerns the nail holes in it. They must have been there, although not mentioned at this point in the story. This plate, and its mates, had been the sheathing on the hull of Minerva (page 796 of the Confusion). To hold such a plate in place, against its weight, the drag of the water, and the working of the hull planks as the ship pitched and rolled, would require at least dozens of nails. As a piece of such a plate was passed through the rolling mill, did its nail holes tend to disappear, or did they tend to become larger?
Personally, I feel that both Leibniz and Daniel Waterhouse showed some conceit when they argued for using gold as the storage medium (page 711 of the Confusion and page 424 of The System of the World). Indeed gold is ductile and does not tarnish, but the information is in the holes, not on the surface. Pure silver and copper are also rather ductile, and 1/4 millimeter is probably not too thin to roll out those metals. Silver may turn black and copper may turn green, but either of them would preserve the information of the holes for many decades, or even centuries.
In fact, the information of the holes was effectively lost in less than three centuries. Daniel Waterhouse had spent years writing out paper cards, each with a number in binary notation along one edge, which represented the information on the card in other formats. That number was what Miss Spates, or other operators, punched into a golden card. Those paper cards were surely included as part of the paperwork, which accompanied a set of golden cards into their hat-box. That paperwork did not survive, becoming lost either during transfers or in the sinking of V-Million.
Wednesday, November 18, 2009
“Each pipe is four inches in diameter and thirty-two feet long. There must be a hundred of them, ...” “Stuck into one end of each pipe is a little paper speaker ripped from an old radio.” “The speaker plays a signal–a note–that resonates in the pipe and creates a standing wave.” “That means that in some parts of the pipe, the air pressure is low, and in other parts it is high.” “These U-tubes are full of mercury.” “... several U-shaped glass tubes ... are plumbed into the bottom of the long pipe.” “I put an electrical contact into each U-tube–just a couple of wires separated by an air gap. If those wires are high and dry (like because high air pressure in the organ pipe is shoving the mercury down away from them), no current flows. But if they are immersed in the mercury (because low air pressure in the organ pipe is sucking the mercury up to cover them), then current flows between them, because mercury conducts electricity! So the U-tubes produce a set of binary digits that is like a picture of the standing wave–a graph of the harmonics that make up the musical note that is being played on the speaker.” “... all of those pipes come alive playing variations on the same low C.” “The crescents of mercury in all those U-tubes are shifting up and down, opening and closing the contacts, but systematically: ...”
This system has enough problems with physics that there are several possible orderings for the things to be considered. Let us arbitrarily start with things that are stated to be seen, followed by things that are stated to be heard, to determine whether they are reasonable. We end with things that cannot be seen, but which are the actual behavior.
Each U-tube is a manometer, which responds to the difference between the air pressures applied to the two arms. In every case, one arm is open to the ambient air of the room, which can be taken as having the same pressure at all points. The manometer is itself rather symmetric, and its response can also be described symmetrically. The higher pressure in one arm acts so as to displace the mercury toward the arm with lower pressure. If the open bore of the tube is uniform along its length, which is the simplest way to do the glass-working, then the mercury level changes by the same distance in the two arms, down in one and up in the other. It is perfectly good English, even if not perfectly good physics, to speak of “sucking”. Whether in a soda straw or in a U-tube, it is actually the larger ambient pressure which makes the fluid rise.
It is admittedly nitpicking to point out that the U-tubes are only partially full of mercury. There has to be air space in both arms of each tube, so that mercury can move without spilling out the end of the tube. That air space also allows for the electrical contacts. Stephenson’s description of the motion of the mercury emphasizes the tube arm connected to the pipe, and the contacts there which are activated. He seems to overlook the simultaneous motion in both arms, which could be called 'seesawing'.
It might be useful to put another pair of contacts in the room-air arm of each tube, to provide an unambiguous signal for high pressure. As it stands, the contacts in the pipe-air arm cannot distinguish between the conditions of high pressure and of zero pressure difference there. Both conditions would leave that circuit open.
The motion of the mercury, as described by Stephenson, can be followed by the human eye. It occurs on a time scale of seconds, in response to changes in the audio signals, which can be heard simultaneously by the human ear. This implies that the changes in pressure being detected are quasi-static. The pressure has a constant value for a while, then changes to another value, etc. Unfortunately, a simple manometer will not respond in the manner described, even to quasi-static changes in pressure, because it is itself an oscillatory system. This was recognized in Quicksilver (page 107), when Daniel Waterhouse was visiting Gresham’s College, home of the Royal Society. Among many other things, he saw “A U-shaped glass tube that Boyle had filled with quicksilver to prove that its undulations were akin to those of a pendulum.”
The frequency of oscillation for an idealized manometer depends upon only one adjustable parameter. The open cross-sectional area of the tube is assumed to be uniform along its length, and the arms of the U-tube to be vertical. The length of tube filled with mercury (measured along the centerline) is L. Then the frequency of oscillation is exactly the same as that of a small-angle simple pendulum of length L/2, in the same gravitational field. The density of the fluid and the area of the tube drop out. The factor 2 arises because the restoring force, at an instant when the mercury has been displaced along the tube by x, is the weight of a column of mercury of height 2x. Reasonable values for L might be 10 or 20 centimeters, so that the natural frequency would be about 2 or 1.5 hertz.
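The pendulum equivalence above can be checked numerically. A short sketch, assuming the stated mercury column lengths of 10 and 20 centimeters:

```python
import math

G = 9.81  # m/s^2

def manometer_frequency_hz(column_length_m):
    # The manometer oscillates like a simple pendulum of length L/2:
    # a displacement x along the tube leaves an unbalanced column of
    # height 2x, so the restoring force is the weight of that column.
    return math.sqrt(2 * G / column_length_m) / (2 * math.pi)

for L in (0.10, 0.20):
    print(f"L = {L*100:.0f} cm -> f0 = {manometer_frequency_hz(L):.2f} Hz")
# L = 10 cm -> f0 = 2.23 Hz
# L = 20 cm -> f0 = 1.58 Hz
```

Both values agree with the "about 2 or 1.5 hertz" quoted above.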
Consider a manometer initially in equilibrium. This can be either with equal pressures in the two arms, or with unequal pressures. The manometer itself reveals, by its levels of mercury, the difference in pressures. (Note that the most sensible units to express the pressures are mm-Hg or cm-Hg.) If the pressure in one arm is suddenly changed, the mercury in the manometer cannot instantaneously shift to the new equilibrium position. The system is effectively an oscillator which has just been released from rest, at some displacement from its new equilibrium position. It would then oscillate about that new equilibrium position, for a considerable length of time. Dissipation must be introduced into the manometer, in order to damp the oscillation before the next change in pressure.
The dissipation could be provided by a constriction of the open area of the tube near the bottom of the U, or by a porous plug filling the tube there. In either case, the viscosity of the mercury moving past the structure results in a damping force on the mercury, which is proportional to and opposing the velocity of the ends of the mercury column. The proportionality constant is effectively a 'mechanical resistance'.
If the manometer is under-damped, its motion is oscillatory at a somewhat decreased frequency, with an amplitude that decays exponentially. The decay constant is proportional to the mechanical resistance. The system exhibits mechanical 'ringing', exactly analogous to the electronic ringing in computer circuits, which Stephenson discusses on pages 436 ff in Cryptonomicon. A graph of the position of a mercury surface (versus time) would somewhat resemble the graph of electronic ringing on page 437. (The interested reader is invited to discover any problems in the details of that graph.) The ringing might cause the contacts to be closed and opened several times for a single change of pressure.
This can be avoided by increasing the mechanical resistance until the condition of critical damping is achieved. Then there is no oscillation at all, and the constant of the exponential decay is equal to the angular frequency of the undamped oscillator. This is the condition under which the system reaches equilibrium in the minimum time, after a displacement. For example, after a time equal to one period of the undamped oscillator (1/2 or 2/3 second), a displacement from the new equilibrium position would decay away to about 0.014 of its initial value. That behavior should open or close the contacts cleanly.
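The decay figure of 0.014 follows directly from the critically damped solution. A sketch, using the standard result that a critically damped oscillator released from rest at displacement x0 follows x(t)/x0 = (1 + wt) exp(-wt), where w is the undamped angular frequency:

```python
import math

def critical_decay_fraction(periods):
    # Critically damped oscillator released from rest at displacement x0:
    # x(t)/x0 = (1 + w*t) * exp(-w*t).  One undamped period means w*t = 2*pi.
    wt = 2 * math.pi * periods
    return (1 + wt) * math.exp(-wt)

print(f"{critical_decay_fraction(1.0):.3f}")  # 0.014 after one undamped period
```

After one undamped period the remaining displacement is about 1.4 percent, as stated.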
Thus, we have found a way to have mercury levels “shifting up and down”. The U-tubes are critically damped, and are responding to quasi-static pressure changes.
Unfortunately, resonant pipes, as described here, cannot sustain a quasi-static pressure different from the ambient pressure. Stephenson never actually says whether the ends of the pipes are open or closed. However, open ends are suggested by the statement about a small speaker stuck into one end, and by the frightfulness of the sound escaping when the computer is operating. If the pipes were closed at both ends, they could indeed support a difference between interior and ambient air pressures. However, that internal pressure would be the same at every point along a pipe. Moreover, there is no mention of a method to change the interior pressure of a pipe quasi-statically. At the very least, it would require large nearby reservoirs of high-pressure air and of near-vacuum, which could be joined to the pipe by large valves. Of course, this is completely ridiculous here, but Daniel Waterhouse’s card-punching machine at Bridewell in 1714 worked in a similar manner, including a mercury manometer to monitor the pressure in the reservoir (pages 420 ff of The System of the World).
Let us finally consider the sound in the pipes as the source of the pressure on the manometers. The terminology used to describe the audio signals supplied to the pipes by the speakers is sufficiently confused that it is impossible even to determine how much information can be stored in one pipe at a given instant. It seems to be more than one binary digit, which could be indicated by sound being simply off or on. The first mention of “note” as something “... that resonates in the pipe, and creates a standing wave”, implies that a single frequency of oscillation is involved. A standing wave is composed of two identical sinusoidal waves traveling in opposite directions, added together. It is called “standing”, because its characteristic features do not move. Those are the nodes, where the amplitude of the oscillating quantity (here the sound pressure) is zero, and the intervening anti-nodes, where that amplitude is maximum. Resonance is achieved by selecting the frequency so that nodes occur at the effective ends of the pipe. The fundamental mode has a single anti-node at the mid-point of the length of the pipe. The harmonic modes have successively 2, 3, 4, etc. anti-nodes between the effective ends of the pipe.
On the other hand, the later statement about “... the harmonics that make up the musical note ...”, implies that several frequencies are involved in the pipe simultaneously. The same situation is also implied by the statement about “... playing variations on the same low C”. Confusion is especially likely here, because the word “variations” is standardly applied to a musical theme or tune, rather than to a single note. However, if one changes the mix of harmonics which accompany the fundamental, the resultant sound is different. This is, in fact, one of the ways in which one can identify the instrument which is making the sound.
The formal mathematical description (wave function) of a true standing wave is the product of two factors. One factor depends only on time, and is typically sinusoidal at the frequency of the wave, with either constant or exponentially decreasing amplitude. The other factor depends only on position, and is also typically sinusoidal. The zeros are the nodes of the wave, and the maxima and minima are the anti-nodes. This factor contains the information about the relative amplitude and phase of the oscillating quantity along the length of the wave medium. For a superposition of standing waves, the total wave function cannot be factored, so that the condition should not be called a standing wave.
Stephenson’s major misconception here was confusing the space factor for a standing sound wave, with the pressure itself. The space factor is only a mathematical construction, and can be changed quasi-statically. The pressure at any point (other than exactly at the nodes), must oscillate as described by the time factor. In particular, the only difference between a maximum and a minimum in the space factor, is in the phase of the pressure oscillations at those points. It is quite inappropriate to say that the pressure is “high” at one and “low” at the other.
The assertion that the fundamental mode is always present, means that it carries no usable information whatever. Its only function seems to be the frightfulness of the sound itself. If one harmonic mode at a time was produced in each pipe, the sound might best be described as “a cacophony of bugle calls.” (The notes standardly produced by a bugle are harmonics 2 through 6 of the fundamental, which is typically not excited.) We have already seen that Lawrence P. Waterhouse could produce bugle calls in an enclosed staircase at the train station on Inner Qwghlm (page 284 of Cryptonomicon). If several harmonic modes are produced simultaneously in each pipe, it might be possible to store more than one bit (binary digit) per pipe. Note, however, that the value zero for any particular bit must be clearly distinguishable from the value zero for any other bit.
The remaining question about audible qualities is how the note fits into the musical scale. If the ambient temperature was about 25 degrees Celsius (77 degrees Fahrenheit), the speed of sound in dry air would be 346.1 meter/second. The effective length of a pipe, including the end correction at both open ends, was 32.2 feet. The wavelength of the fundamental would be twice as long, or 19.63 meters. The fundamental frequency would be 17.63 hertz, and all integer multiples of it would appear as harmonics. If we consider modern orchestral tuning, with A at 440 hertz, then this fundamental was definitely C-sharp, not C (16.35 hertz). There is no obvious physical reason why Waterhouse chose this particular length of pipe. Any shorter length of pipe would have worked as well (or as poorly), except for the perceived frightfulness of notes in this frequency range.
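The pitch calculation is simple enough to verify. A sketch, assuming the values given above (speed of sound 346.1 m/s, effective length 32.2 feet, equal temperament with A4 = 440 Hz):

```python
import math

SPEED_OF_SOUND = 346.1      # m/s in dry air at 25 C
EFFECTIVE_LENGTH_FT = 32.2  # 32-ft pipe plus end corrections at both open ends
FT = 0.3048                 # meters per foot

L = EFFECTIVE_LENGTH_FT * FT       # ~9.81 m
wavelength = 2 * L                 # fundamental of an open-open pipe
f0 = SPEED_OF_SOUND / wavelength
print(f"fundamental: {f0:.2f} Hz")  # 17.63 Hz

def pitch_hz(semitones_from_a4):
    # Equal-tempered pitch relative to A4 = 440 Hz.
    return 440.0 * 2 ** (semitones_from_a4 / 12.0)

print(f"C0  = {pitch_hz(-57):.2f} Hz")  # 16.35 Hz
print(f"C#0 = {pitch_hz(-56):.2f} Hz")  # 17.32 Hz
```

The fundamental lands at 17.63 hertz, much closer to C-sharp (17.32 Hz) than to C (16.35 Hz).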
At long last, we are ready to consider the actual response of a manometer to the sound in one of the pipes. The fundamental frequency of the sound is about 10 times the natural frequency of the manometer, and any harmonic frequency is some integer multiple larger than that. Thus we have the generic problem of a mechanical oscillator driven by an oscillatory applied force, at a frequency much larger than its natural frequency. A further simplification can be achieved by assuming that the damping of the manometer is much less than critical damping, which would make the response as large as possible.
The response of such a system always has the form of an oscillation at the frequency of the applied force, with an amplitude and phase which depend upon the applied frequency. Let us specify the amplitude of the sound pressure oscillation by the height H of a mercury column which would produce the equivalent static pressure. Then the amplitude of the oscillating displacement of the mercury surfaces in the manometer is given approximately by H divided by twice the square of the ratio of applied frequency to natural frequency. Thus the useless fundamental frequency would produce an amplitude of about H/200, the possibly useful harmonic of order 2 (the octave) would produce an amplitude of about H/800, etc.
As a ridiculously large estimate of the sound pressure, let us assume that the sound level inside the pipe at one of the U-tubes was 160 decibels absolute. This is ridiculous, because 3-inch diameter paper speakers, from 1940's era AM radios or 78 RPM record players, were very poor transducers at low frequencies. They were certainly not woofers, as used for FM radios and LP record players of later eras. That 160 decibels above the reference level of 20 micro-pascal would be a sound-pressure amplitude of about 2 kilo-pascal, or about 15 mm-Hg. This would produce an oscillation of the mercury with amplitude 0.02 millimeter or less, which is too small to see with the unaided eye. It is also too small to make a reliable contact with wires which had been pushed into the manometer tube by hand. Thus Lawrence P. Waterhouse could not have gotten useful information out of his “sewer pipe RAM”, in the manner described by Stephenson.
Another point which can be mentioned is the phase of the oscillation of the mercury in the manometer. I still find it amusing, because it is initially counter-intuitive. The displacement of a mechanical oscillator, driven at a frequency much larger than its natural frequency, is almost exactly opposite in phase to the applied force. Thus, at the instant when the pressure in the pipe-arm of the U-tube is greatest, the mercury in that arm is at, or close to, its highest point. This is exactly the opposite of Stephenson’s description, which would apply only to (non-existent) quasi-static pressure changes.
To be complete, I must acknowledge one quite accurate piece of historic science reporting, which Stephenson supplied. On page 744 of Cryptonomicon appears: “Pea-sized drops of mercury are scattered around the floor like ball bearings. The flat soles of Comstock’s shoes explode them into bursts rolling in all directions.” I was familiar with five different educational institutions, which were active before, during, and after World War II. One feature they had in common, was mercury in the cracks in the floors of lecture rooms and laboratories, used for introductory courses in physics or chemistry. It came with the territory. Mercury was always being spilled, and the cleanup was always casual. The instructors knew that mercury was dangerous, but they didn’t worry about it. They wouldn’t drink it, or boil it openly, but almost any other manipulation was OK. Military personnel would have been even more casual than educators, because they would not have been concerned about how to pay to replace the spillage.
Tuesday, November 10, 2009
Another rocket was delivered preassembled to All Hallows Church (page 247). “He diverted his glass a few arc-seconds down into the adjoining churchyard, where the funeral had taken a macabre turn; the lid of the coffin had been tossed aside to reveal a helmet-shaped object with a long stick projecting from its base.”
They seemed to be remarkably reliable and accurate (pages 282-284). None of them missed fire, and the rockets from The Monument and All Hallows Church both passed directly over their intended targets. The first two rockets from the barge on the Thames went short or wide, but the third hit directly on the roof of the White Tower. (That was the only launch site that had spares.)
One ought to wonder where Jack obtained these rockets. Rockets had been used in both Asia and Europe by that time, as display fireworks and as weapons. However, according to my Encyclopaedia Britannica, 1714 fell in the middle of a 100-year period of only sporadic use of rockets in wars. Most rockets of that time had heads made of organic materials (e.g. paper, cloth, or wood), which could not withstand high internal pressure. The descriptions by Stephenson unfortunately do not mention the material, which might help identify the source.
The first thoroughly developed rockets were probably those of William Congreve, but they came later. (See Wikipedia.) They were based on the iron-headed rockets which had been used against the British in the Mysore wars (1792 and 1799). Congreve rockets were first employed against the French in 1806, and notably in the War of 1812. The U.S. National Anthem mentions “the rockets’ red glare” over Fort McHenry, near Baltimore, in 1814. Those were Congreve rockets, and their shape roughly matched Stephenson’s description. If only Jack’s rockets had been Congreve rockets, this would have afforded another example of the continuity which is a notable feature of these four novels. As Lawrence P. Waterhouse was playing the glockenspiel part of the National Anthem on the deck of USS Nevada at Pearl Harbor on December 7, 1941 (Cryptonomicon, page 77), he took advantage of the fact that “... The Star Spangled Banner is much easier to ding than to sing.”
However, Congreve rockets were rather inaccurate. Even launched from a standard frame, they could start off at a wide angle from the intended direction. They could also change direction in mid-flight. This erratic behavior actually increased their effectiveness as weapons of terror against exposed enemy forces. Congreve advised that they should be launched in volleys of 20 to 50. Surely the rockets of a century earlier were even more erratic than that.
Stephenson seems to be enamored of the arc-second as a unit of angle. To find the actual angle through which Jack depressed his spyglass in shifting from the roof of All Hallows Church to the adjacent graveyard, we need to estimate some lengths. The four stated distances along a somewhat zig-zag line from The Monument to All Hallows add up to about 1600 feet. However, from my only map of London with a scale (Baedeker’s), that direct distance seems to be about 1200 feet. I can only guess the roof of that church to have been about 40 feet high. Thus the vertical angle as seen from The Monument was about 1/30 radian, or roughly seven thousand arc-seconds, or two degrees. The mere tremor in Jack’s arms, due to his pulse-beat, surely deflected the spyglass through an angle larger than “a few arc-seconds”.
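The small-angle arithmetic can be sketched directly, using the guessed 40-foot roof height and 1200-foot distance:

```python
import math

height_ft = 40.0      # guessed height of the church roof
distance_ft = 1200.0  # straight-line distance from The Monument

angle_rad = height_ft / distance_ft   # small-angle approximation
angle_deg = math.degrees(angle_rad)
angle_arcsec = angle_rad * 206265     # arc-seconds per radian
print(f"{angle_rad:.4f} rad = {angle_deg:.1f} deg = {angle_arcsec:.0f} arc-sec")
# ~0.033 rad, ~1.9 degrees, ~6900 arc-seconds -- not "a few"
```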
Sunday, November 8, 2009
One specific example of the practice of Natural Philosophy, or the scientific method, appeared in Eliza’s journal of her trip across the boundary between eastern France and the western parts of Germany. In the entry of 20 August 1688 (pages 830, 831 of Quicksilver), she described her situation and immediate objective. "For several days we have been working our languid way up the Marne." "This vessel is what they call a chaland, a long, narrow, cheaply made box with but a single square sail ..." "... there is nothing for a spy to look for, except, perhaps, certain military stocks." "... certain items, such as gunpowder, and especially lead, might be shipped up the river from arsenals in the vicinity of Paris." "So I peer at the chalands making their way upriver and wonder what is stored down in their holds. To outward appearances they are all carrying the same sort of cargo as the chaland of M. LeBrun, viz., salted fish, salt, wine, apples, and other goods ...."
In the entry of 25 August 1688 (pages 831, 832), she described her thoughts and experimental observations. "... was there any outward sign by which I could distinguish a chaland loaded as M. LeBrun’s is, and one that had a few tons of musket-balls in the bilge with empty barrels above ...?" "Even from a distance it is possible to observe the sideways rocking of one of these chalands by watching the top of its mast– ..." "I borrowed a pair of wooden shoes from M. LeBrun and set both of them afloat .... Into one of these I placed an iron bar, which rested directly upon the sole of the shoe. Into the other, I placed an equal weight of salt, ... Though the weights of the shoes’ cargoes were equal, the distributions of those weights were not, for the salt was evenly distributed through the whole volume of the shoe, whereas the iron bar was concentrated in its ‘bilge.’ When I set the two shoes to rocking, I could easily observe that the one laden with iron rocked with a slower, more ponderous motion, because all of its weight was far from the axis of the movement." "... I timed one hundred rockings of the chaland I was on, and then I began to make the same observation of the other chalands on the river. ... I noticed one or two that rocked very slowly. ..., the first one turned out to be laden with quarried stones."
The method appears to be excellent, with steps of modeling, theoretical explanation, and observation. Unfortunately, the real world doesn’t match any of the steps as described.
Rather weirdly, I happen to own a pair of wooden shoes. Mine are of Dutch design, rather than French, but perhaps that is irrelevant. Each shoe weighs 14 ounces, and 2 pounds of rock salt filled one, to a depth of about 1 1/2 inch and an interior width of about 3 inches. The other held 2 pounds of steel, cut from flat stock. The stack of pieces was 5/8 inch deep and 1 1/2 inch wide. Those heavy loads left only about 1/4 inch of 'freeboard', when the shoes were floated, so that the angular amplitude had to be small. I didn't have a stopwatch, and the shoes tended to drift against the edge of the basin, thereby quickly damping the motion. However, the shoe loaded with steel rocked at more than twice the frequency of the one loaded with salt. This is exactly the opposite of Eliza's report!
The theoretical explanation as given by Stephenson is grossly oversimplified. It matches too closely the simple pendulum, in which a point mass supplies the inertia, and a restoring force is supplied by the combination of its weight and the tension in the supporting string. For a simple pendulum, a longer pendulum does indeed oscillate with a longer period.
Here the shoe or the barge is rotating, and must be considered as a special case of a 'physical pendulum'. In this situation, the inertial factor is the 'moment of inertia', which must be taken about the appropriate axis. The moment of inertia can also be expressed as mass x ('radius of gyration') squared. The radius of gyration is the root-mean-square distance of the mass from that appropriate axis. For a true physical pendulum, that axis passes through the fixed point of suspension. For a floating object, that axis passes through the center of mass, which effectively does not move as the object rotates. (Any acceleration of the center of mass would involve hydrodynamic forces, whereas this analysis is limited to hydrostatic forces.)
For a true physical pendulum, the restoring action is the torque about the suspension point, due to the weight of the body, acting at its center of mass (or 'center of gravity'). For a floating object, the torque is supplied by the pressure of the surrounding water. The effect is always in the form of an upward buoyant force, equal to the weight of the displaced water, acting at the centroid of the volume of the displaced water (the 'center of buoyancy'). Perhaps Stephenson considered that "the axis of the movement" passes through the center of buoyancy, but that is not a fixed point.
For an object floating at rest, the weight and the buoyant force must act along the same vertical line, or equivalently, the center of gravity and the center of buoyancy are both on that same vertical line. If the center of buoyancy is above the center of gravity, this equilibrium position is absolutely stable. If the center of gravity is above the center of buoyancy, the equilibrium must be investigated more carefully.
Boats, barges, and ships are typically bilaterally symmetric, and shoes are approximately so. When a barge is level (side to side), the center of buoyancy is on the vertical plane of symmetry. The load is typically adjusted so that the center of gravity is also on that plane of symmetry. Let us consider a virtual displacement by a rotation about the roll axis, which is the horizontal line in the plane of symmetry and through the center of gravity. In this displacement, the center of buoyancy typically shifts sideways, away from the plane of symmetry, because the displaced water has changed its shape. The line of action of the buoyant force intersects the plane of symmetry at the 'metacenter'. If the metacenter is above the center of gravity, the equilibrium is stable, at least for small rotations. If the center of gravity is above the metacenter, the equilibrium is unstable, and the barge or shoe will roll over.
For any physical pendulum, including a floating object, the period of oscillation about a position of stable equilibrium is always the same as that of a simple pendulum of length = (radius of gyration) squared / (length of arm associated with restoring torque). Here that length of arm is the distance between the metacenter and the center of mass. The concept which Stephenson overlooked is that a physical pendulum has two characteristic lengths, both of which depend upon the distribution of its mass. The simple pendulum has only one characteristic length, because its mass is not distributed at all, so that radius of gyration = length of arm.
In order to examine the effect of the density (or specific gravity) of the load upon the motion of a barge, we need to model the system. The description as "a long, narrow, cheaply made box" serves to inspire a model. A "box" should have a uniform rectangular cross-section, with an outside width and an overall depth from bottom of hull planks to top of deck (if any). That it is "long", suggests that we can ignore the effects of the ends as an initial approximation. That it is "cheaply made", means that the load should be uniformly distributed over the entire area of the bottom and of the deck. That way the load is supported almost directly by the pressure of the water against the hull planks, and there is no need for beam strength to support concentrated loads. Thus we can describe the entire barge by a single cross section, which can also be considered to represent a unit length ('one meter') of the barge.
We can now evaluate the mechanical properties of such a barge, under various loads. Every item shown in the cross section, whether part of the barge or of the load, can be represented by a rectangle, and can be treated in the same manner. (The bottom, sides, and deck would actually be longitudinal planks over spaced transverse ribs, but the mass of the ribs can be 'averaged' over the unit length.) The mass of a unit length of the item is its density times its area, and its individual center of mass is at the center of its rectangle. The mass and the center of mass of the entire system can be found by summing the contributions.
The moment of inertia of every rectangular item, about its individual center of mass, is given by 1/12 of its mass x the square of its diagonal (or the sum of the squares of its edge lengths). Its moment of inertia about any other point, such as the overall center of mass, is the sum of that central moment and its mass x the square of the distance between the desired point and the center of the item. The moment of inertia of the entire system about the overall center of mass, is the sum of the individual contributions. A further significance of "cheaply made", is that the mass and moment of inertia are dominated by the load, rather than by the wooden structure of the barge.
The calculation of the metacenter is particularly simple for this rectangular cross section. When the barge is level, the displaced water for a unit length appears as a rectangle, the width of the hull x the 'draft'. The draft is the vertical distance from the bottom of the hull to the waterline, and it depends upon the load in the barge at the time. The center of buoyancy is thus at half of the draft above the bottom of the hull. When the barge has a virtual displacement such that the deck has a particular (small) slope relative to the waterline, the displaced water for a unit length appears as a trapezoid, having the same area as the original rectangle. The centroid of the trapezoid is displaced horizontally from the plane of symmetry, by 1/12 x slope x width squared / draft. Thus the height of the metacenter above the hull bottom is 1/2 draft + 1/12 width squared / draft.
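That formula is easy to put into code. The hull width and loading below are assumptions chosen only to give concrete numbers.

```python
# Metacenter height above the hull bottom for a rectangular section:
# z_M = draft/2 + width^2 / (12 * draft). Hull width and loading are invented.

def box_draft(mass_per_meter, width, rho_water=1000.0):
    """Draft of a box hull carrying the given mass per unit length."""
    return mass_per_meter / (rho_water * width)

def metacenter_height(width, draft):
    """Center of buoyancy (draft/2) plus the trapezoid-shift term."""
    return draft / 2.0 + width**2 / (12.0 * draft)

w = 2.5                          # hull width, meters (assumed)
d = box_draft(2000.0, w)         # 2000 kg per meter of length -> 0.8 m draft
z_m = metacenter_height(w, d)    # about 1.05 m above the hull bottom
```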
Let us then consider identical barges, each loaded with cargo of uniform but different density. The total load will have the same mass in each case, so that the draft will be the same in each case. Thus the position of the metacenter will also be the same in each case. The width of the load will fill the space between the inner faces of the walls. The height of the load will be inversely proportional to the density of the load. The center of mass of the system will be close to the centroid of the load (the "cheaply built" condition). Similarly, the central moment of inertia of the system will be dominated by the central moment of inertia of the load, which has the factor (1/12) [(inner width) squared + (load height) squared].
The overall radius of gyration obviously decreases as the load height decreases, or as the load density increases. At the same time, the length of the torque arm, from the overall center of mass to the metacenter, increases as the load density increases. Thus the load of greatest available density corresponds to a simple pendulum of the shortest possible length, or shortest period. This is exactly opposite to Stephenson’s assertion.
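The whole argument can be condensed into one function. Everything here is the load-dominated model described above, with invented dimensions; the point is only the trend of the roll period with load density.

```python
import math

# Roll period of the loaded barge, treating mass and moment of inertia as
# dominated by the load ("cheaply made"): T = 2*pi*sqrt(k^2 / (g * GM)),
# where k is the radius of gyration and GM the metacentric height.
# All dimensions are invented.

def roll_period(rho_load, mass=2000.0, width=2.5, rho_water=1000.0, g=9.81):
    d = mass / (rho_water * width)          # draft: same for every load density
    z_m = d / 2 + width**2 / (12 * d)       # metacenter above hull bottom
    h = mass / (rho_load * width)           # load height for this density
    z_g = h / 2                             # center of mass of the load
    gm = z_m - z_g                          # metacentric height (torque arm)
    if gm <= 0:
        return None                         # unstable: the barge rolls over
    k2 = (width**2 + h**2) / 12             # squared radius of gyration
    return 2 * math.pi * math.sqrt(k2 / (g * gm))

t_light = roll_period(500.0)     # wood-density cargo: tall load, longer period
t_dense = roll_period(2000.0)    # dense cargo: low load, shorter period
```

The denser load gives the shorter period, which is the reversal of Stephenson's claim argued above.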
Some other conditions of equilibrium should be mentioned. An empty barge of reasonable width would have the smallest possible draft and the largest possible height of metacenter. It would be quite stable, and would not require any ballast before being loaded with useful cargo.
There is a smallest critical load density which allows the system to be stable, and to oscillate. When the load has density just greater than that critical value, the overall center of mass is below the metacenter, but close to it. The equivalent simple pendulum is very long, and the period of oscillation is correspondingly large. This is the condition which produces "ponderous motion", and not high density as Stephenson asserted.
When the load has any density less than that critical value, the overall center of mass is above the metacenter, and the barge would roll over. If the load has exactly the critical density, the system would be in neutral equilibrium. It ought to 'hang' at any angle of displacement, but it would be in danger of capsizing if anything on board were moved.
The exact value of this critical load density depends on the dimensions of the barge, but it ought to be roughly half the density of the wood used in the barge. The barge would appear dangerously overloaded, with cargo piled on the deck to about the height of the hull itself. Thus the most dangerous cargo for a barge is enclosed air! It could take the form of uncompressed fiber (wool or tow), or of partly empty barrels or crates. In the modern era, it could be empty shipping pallets, which actually represent a dangerous load for a flatbed semitrailer.
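In the same load-dominated toy model, the critical density falls out of setting the metacentric height to zero. The dimensions are again invented, so the particular number matters less than its order of magnitude.

```python
# Critical load density: the metacentric height vanishes when the load's
# center of mass (at h/2) reaches the metacenter. Invented dimensions.

mass, width, rho_water = 2000.0, 2.5, 1000.0   # assumed, per meter of length
d = mass / (rho_water * width)                 # draft
z_m = d / 2.0 + width**2 / (12.0 * d)          # metacenter above hull bottom
h_crit = 2.0 * z_m                             # load height putting its COM at z_M
rho_crit = mass / (width * h_crit)             # critical density
```

With these invented numbers the critical density comes out near 380 kg/m^3, of the same order as half the density of common boat lumber, and the load is piled to well over twice the draft.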
Stephenson should have known better than to ignore these effects of load distribution when he was writing Quicksilver, because on page 302 appears: "Once loaded, the carronnades are being run out to the gunwales–hugely increasing the ship’s moment of inertia, accounting for the change in the roll period–" Unfortunately, he used perhaps the most inappropriate specialized form of naval artillery as his example. The carronade was not invented until about 1769 (see Wikipedia), so that it could not have been available on Minerva in November 1713. One of its special features, beyond those described on that same page, was its very low mass. (It was typically used on upper decks of ships, without raising the center of mass too high.) The ship’s long guns would have made a greater increase in the moment of inertia by being run out, but probably not by enough to be called "hugely".
It is likely that Stephenson knows even better now, because in his latest book Anathem, the narrator (Fraa Erasmas) remarks on page 683: "Then I went back to work estimating the inertia tensor of the Geometers’ ship." What I have done above amounts to considering a single component of that tensor, for a chaland. None of the other components are needed here, when considering only rotation about the roll axis. However, Erasmas did have to allow for simultaneous rotations about all three axes. Essentially, Stephenson knows many of the right words, but he seems not to have the full significance of "moment of inertia" at his fingertips.
Friday, October 2, 2009
Saturday, September 26, 2009
He counted seven seconds between seeing the fireball of the crash, and hearing the explosion. Using the 'five seconds per mile' algorithm, which he had learned in the Boy Scouts (that is where I learned it), the crash must have been about 1.4 miles from where he then stood. Already, there is a problem; the two estimates don't agree very well, because the airplane had kept going beyond that point on the shoreline.
Bobby walked three more kilometers into Norrsbruck, where he told Günter Bischoff what he had seen (page 583). Bobby specified the distance as "... seven kilometers from where I was standing. So, ten clicks from here."
There are several problems with that statement. The numerical problem may be the most obvious. Just from knowing that a distance of three miles is approximately five kilometers, the metric version of the algorithm must be 'three seconds per kilometer'. The accepted value for the speed of sound (about 330 meters/second in dry air at zero degrees Celsius) means that this metric form of the algorithm is a much better approximation to reality than 'five seconds per mile' is. Thus, either the time delay was 21 seconds, or the distance was 2.3 kilometers, or perhaps neither number was correct.
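The arithmetic can be laid out explicitly; the speed of sound is the rounded value quoted above.

```python
# Comparing the two rules of thumb against the actual speed of sound
# (about 330 m/s in cold dry air).

def miles_from_delay(delay_s):
    return delay_s / 5.0                 # 'five seconds per mile'

def km_from_delay(delay_s):
    return delay_s / 3.0                 # 'three seconds per kilometer'

d7 = km_from_delay(7.0)                  # a 7 s delay -> about 2.3 km
t7km = 7.0 / 0.330                       # true delay for 7 km -> about 21 s

# Absolute error of each rule against the true travel time per unit distance.
err_mile = abs(5.0 - 1609.344 / 330.0)   # seconds of error per mile
err_km = abs(3.0 - 1000.0 / 330.0)       # seconds of error per kilometer
```

The relative error of the metric rule (about 1 percent) is several times smaller than that of the mile rule.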
The visibility problem arises only if the time delay was indeed 21 seconds, so that the distance was properly 7 kilometers, or about 4.2 miles. I have never been to Sweden, but maps (e.g., http://www.sverigeturism.se/smorgasbord/smorgasbord/service/sweden-map.html ) show several rivers flowing into the Gulf of Bothnia, which implies erosion of valleys. I have lived many years in central Illinois (where Neal Stephenson has also lived) and central Kansas. Both are considered rather flat, and effectively treeless. I would not count on seeing the immediate fireball in either place, at a distance of seven kilometers, because the crash could have happened in a valley. The trees in Sweden would make it even less likely for the fireball to be visible. The ultimate column of smoke would certainly become visible, but it is hard to say just how long that would take.
This military slang usage of "click" for kilometer (better "klick"), was either an anachronism, or a separate creation which died with Bobby and Günter. I served in the U.S. Army during the Korean War, and I never heard that usage. It was widespread during the Vietnam War, and some dictionaries (e.g., http://www.urbandictionary.com/ ) suggest that it arose during the 1950s. I would guess that it was invented during joint training exercises, involving U.S. forces and other NATO forces. Based on numbers of countries, if not numbers of individual soldiers, the U.S. military was outvoted on the question of yards and miles, versus meters and kilometers. Everyone needed to agree on maps, road marches, and firing tables for artillery. (Even the British, who invented the yard and the mile, abandoned them in favor of metric units.) Perhaps "klick" started as a face-saving joke.
Tuesday, September 1, 2009
I don't know the details of a Nipponese sextant of vintage 1940, but it was unlikely to have been better than modern instruments. The best (or at least, most expensive) sextants for amateur navigators that I have found online are the Cassens & Plath Horizon Ultra ( http://www.cassens_plath.de/catalog_web/sextantse_n.htm ) and the Tamaya Spica (http://www.stanleylondon.com/TamayaSpica.htm ). Their specifications are similar. The arc is stated to be accurate within 9 or 10 arc-seconds. The vernier on the drum of the tangent screw reads to 0.2 arc-minute. The aperture of the largest available telescope is 40 mm. Much less expensive sextants (made mostly of plastic) have similar specifications, except that the arc is typically accurate only to 30 arc-seconds.
The angular resolution of such a telescope is about 16.8 micro-radian, or 3.5 arc-second. Thus the precision of angular measurements is limited primarily by the vernier and the arc, both of which might contribute. With good technique, and information about the sun's declination and apparent radius from a nautical almanac, the astronomical latitude could be measured to about the nearest 15 arc seconds, or about 460 meters. This is better than one needs for navigation on the ocean surface. You can see much further than that over open water, and can correct your landfall.
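Those numbers follow from the Rayleigh criterion and the definition of the arc-second; a short Python check:

```python
import math

# Rayleigh criterion for the 40 mm sextant telescope, and the ground
# distance corresponding to the resulting 15 arc-second precision.

wavelength = 550e-9                          # m, peak of the eye's response
aperture = 0.040                             # m
theta = 1.22 * wavelength / aperture         # radians, ~16.8 microradian
theta_arcsec = math.degrees(theta) * 3600.0  # ~3.5 arc-seconds

meters_per_arcsec = 1.0e7 / (90 * 3600)      # ~30.9 m per arc-second
latitude_error = 15.0 * meters_per_arcsec    # ~460 m for 15 arc-seconds
```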
I do happen to know about "a pretty good German watch" of vintage 1940, because I used to own one. Mine was actually made in about 1946, but it undoubtedly used a prewar design. It ticked 5 times per second, and the sweep-second hand jumped at every tick. However, I quickly realized that I couldn't do anything useful with the fifths of seconds. If I was looking at some event, and then looked at the watch, it was almost impossible to determine the time of the event, to any better than the nearest second.
It also wasn't good enough that the watch was "... zeroed against the radio transmission from Manila this morning, ..." If my watch went more than a few months from its last visit to the jeweler's shop, it would be gaining or losing several seconds per day. It was at least as important to determine its current 'rate of going', by checking the watch against a standard for several days in a row.
I have never personally tried 'shooting the sun' with a sextant, but I have read about the technique used at local noon. (I have read every story in the Hornblower series by C. S. Forester, even more often than I have read Stephenson's stories.) One turns slowly, following the sun in azimuth, while rotating the tangent screw to keep the bottom edge of the sun's reflected image in apparent contact with the horizon. The sun's angle of elevation increases as it approaches the celestial meridian, stops increasing exactly as it crosses the meridian, and thereafter decreases. I don't know how easy it is to see these stages, and to say "Mark!" to the timekeeper, exactly when the elevation is maximum.
I would guess that the very best that one can do, is to get the meridian crossing time within one second on the chronometer. [This would apply in the tropics (e.g., the Philippines), because the maximum elevation would be large, and the sun's elevation would change rapidly. At high latitudes, the maximum elevation and the rate of change would be smaller. It might not be possible to identify the time of the maximum, to closer than several seconds.] If the zero and the rate of going of the chronometer are both good enough, then the Greenwich time is also known within one second. The longitude can then be calculated to within 15 arc-seconds, using the information about the sun's right ascension from the almanac.
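The conversion from timing error to longitude error is simple enough to show directly:

```python
# The sun's hour angle advances 360 degrees in 24 hours, which is
# 15 arc-seconds of longitude per second of time. A one-second timing
# error therefore limits longitude to about 15 arc-seconds.

arcsec_per_time_sec = 360 * 3600 / 86400.0       # exactly 15
lon_err_arcsec = 1.0 * arcsec_per_time_sec       # one-second chronometer error
lon_err_meters = lon_err_arcsec * (1.0e7 / 324000)   # ~463 m at the equator
```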
Unfortunately, Stephenson does not allow Goto and Ninomiya to use this good standard navigational technique, although he does not give an adequate description of what they must have done instead. He specifically states: "They reach it [the highest summit] at about two-thirty in the afternoon, ..." At that time (well after noon), the sun's azimuth and elevation are both changing continuously, and both must be measured, as nearly simultaneously as possible. The best choice would probably be to measure the elevation with the sextant, and the azimuth with the transit, using the compass needle in the transit. This might be dangerous for the observer using the transit, because transits may not have provision to protect the user's eyesight from the sunlight, as most sextants do.
Each observer would continuously track an edge of the sun, but there was no third person to be a timekeeper. One of the observers would say "Mark!" when his own observation was in good alignment, to tell the other observer to stop tracking. The one with the watch would then look away from his telescope, in order to read the time. I doubt that this could be done reliably to better than two seconds in time.
The time of the observation and the almanac identify the latitude and longitude of the sub-sun point, i.e., the point on Earth for which the sun is then at the zenith. The zenith angle, i.e., the complement of the elevation angle, defines the angular separation between the observation point and the sub-sun point. The azimuth angle runs from the observation point to the sub-sun point. One then solves the spherical triangle of those two points and the pole, in order to determine the coordinates of the point of observation.
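As a sketch of that solution, here is a Python version of the pole-zenith-sun triangle, with a self-check that invents an observer, computes the sun forward, and then recovers the position. The branch handling assumes a high afternoon sun (meridian angle under 90 degrees), which fits the scene; a production sight reduction would need more care, and all the numbers in the self-check are invented.

```python
import math

def sun_alt_az(lat, lon_e, decl, gha):
    """Forward problem: sun altitude and azimuth for a known observer."""
    rl, rd = math.radians(lat), math.radians(decl)
    h = math.radians((gha + lon_e) % 360.0)        # local hour angle
    sin_alt = (math.sin(rl) * math.sin(rd)
               + math.cos(rl) * math.cos(rd) * math.cos(h))
    alt = math.asin(sin_alt)
    cos_az = ((math.sin(rd) - math.sin(rl) * sin_alt)
              / (math.cos(rl) * math.cos(alt)))
    az = math.degrees(math.acos(max(-1.0, min(1.0, cos_az))))
    if math.sin(h) > 0.0:                          # sun west of the meridian
        az = 360.0 - az
    return math.degrees(alt), az

def fix_from_sun(zenith, azimuth, decl, gha):
    """Inverse problem: observer latitude and east longitude from the
    zenith angle, true azimuth, and the almanac's sub-sun point."""
    z, rd = math.radians(zenith), math.radians(decl)
    a = math.cos(z)                                    # coefficient of sin(lat)
    b = math.sin(z) * math.cos(math.radians(azimuth))  # coefficient of cos(lat)
    r = math.hypot(a, b)
    lat = math.degrees(math.asin(math.sin(rd) / r) - math.atan2(b, a))
    # Sine rule gives the meridian angle; azimuth > 180 means afternoon (+).
    sin_t = abs(math.sin(math.radians(azimuth))) * math.sin(z) / math.cos(rd)
    t = math.degrees(math.asin(min(1.0, sin_t)))
    if azimuth < 180.0:
        t = -t
    lon_e = (t - gha) % 360.0
    return lat, lon_e if lon_e <= 180.0 else lon_e - 360.0

# Self-check: invent an observer on Luzon, compute the sun, recover the fix.
alt, az = sun_alt_az(15.0, 122.0, 10.0, 250.0)
lat, lon = fix_from_sun(90.0 - alt, az, 10.0, 250.0)
```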
The near-disaster, which Stephenson imposed upon his characters, was the necessity for using a magnetic azimuth. (If they had stayed on the summit over a night, they could have determined astronomical north by observing circum-polar stars.) The compass needle, between the brackets for the telescope of the transit, might be 12 cm long, or 6 cm from pivot to tip. A protractor of that radius could be divided into degrees, with marks about one mm apart. Even with a decimal vernier on the needle, an azimuth could be measured only to the nearest 6 arc-minutes. That uncertainty in solar azimuth would introduce an uncertainty in the position of the observation point of about 3 arc-minutes, mainly in latitude.
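The sensitivity to the compass reading can be estimated numerically from the latitude equation of that triangle. The zenith angle and azimuth below are invented round numbers for a tropical afternoon.

```python
import math

def lat_from_sun(zenith, azimuth, decl):
    """Latitude from the pole-zenith-sun triangle (principal branch)."""
    z, rd = math.radians(zenith), math.radians(decl)
    a = math.cos(z)
    b = math.sin(z) * math.cos(math.radians(azimuth))
    return math.degrees(math.asin(math.sin(rd) / math.hypot(a, b))
                        - math.atan2(b, a))

z, az, decl = 30.0, 240.0, 10.0          # assumed afternoon geometry
d_az = 6.0 / 60.0                        # 6 arc-minutes of compass error
d_lat = abs(lat_from_sun(z, az + d_az, decl) - lat_from_sun(z, az, decl))
d_lat_arcmin = d_lat * 60.0              # a few arc-minutes of latitude
```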
A generous reader could suggest that Stephenson had merely made a typographic error. He may have intended to say, "They reach it at about twelve-thirty, just before local apparent noon, and immediately wish they hadn't because the sun is beating almost straight down on top of them." Notice that this also makes the geometry of the sun's rays much better. Two hours later, the zenith angle of the sun would be about 30 degrees, or even more.
Normally, places in the tropics never bother with daylight saving time. However, the Nipponese armed forces typically maintained the standard time of Tokyo (GMT + 9 hours), and enforced it upon conquered areas ( http://www.absoluteastronomy.com/topics/Japan_Standard_Time ). The standard time of Manila is GMT + 8 hours, so that the watch was effectively keeping the equivalent of daylight saving time.
Stephenson carefully avoided mentioning the degrees and minutes for the location of Golgotha, and rarely mentioned dates in Cryptonomicon. However, the pile of gold bars (also on Luzon) was at about 122 degrees east longitude, so that the sun crosses the meridian there about 8 minutes before it crosses the center of time zone +8. Without knowing the date, we cannot guess how fast or slow sun time was, relative to mean time (the equation of time).
At any rate, with an appropriately chosen earlier arrival time, it would have been possible for Lieutenant Ninomiya to shoot the sun in the standard manner. However, the resulting precision of 15 arc-seconds, in both latitude and longitude, is insufficient to allow him to say, "I have the peak exactly -- ..."
There was a corresponding typographic error in another time specification. Stephenson may have intended to say, "At one-o'clock sharp, the enlisted man down in the tree begins to flash his mirror at them, a brilliant spark from a dark rug of jungle that is otherwise featureless." Unfortunately, the very next sentence includes another inadequacy: "Ninomiya centers his transit on the signal and takes down more figures."
What Ninomiya needs to determine at this time is the displacement (distance and direction) from the summit to the tree near the entrance of Golgotha. A modern transit with a laser rangefinder can do that in a single operation. However, in 1944, there was no such thing as a laser. His transit could only measure a (magnetic) azimuth to the spark. To get a distance, he must do triangulation, using both ends of a baseline of measured length and azimuth. At each end, he must measure the angle between the spark and the marker, which defines the other end of the baseline. The soldier in the tree had to be instructed to keep flashing for a long enough period of time, so that the transit could be carried along the baseline and realigned for the second measurement.
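The triangulation step itself is the law of sines. The baseline length and the two angles below are invented; the point is how sensitive the distance is when the triangle is thin.

```python
import math

def triangulate(baseline_m, angle_a, angle_b):
    """Distance from station A to the target, by the law of sines.
    angle_a, angle_b: interior angles (degrees) at the ends of the baseline."""
    angle_target = 180.0 - angle_a - angle_b     # apex angle at the target
    return (baseline_m * math.sin(math.radians(angle_b))
            / math.sin(math.radians(angle_target)))

# A 100 m baseline, with the spark seen 80 and 95 degrees off the baseline.
d = triangulate(100.0, 80.0, 95.0)               # a thin triangle, ~1.1 km
```

With only a 5 degree apex angle, an error of a few arc-minutes in either measured angle changes the distance by roughly one percent, which is why the flashing had to continue long enough for careful pointing at both stations.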
Fortunately, the next sentence offers a possible way out of these difficulties: "In combination with various other data from maps, aerial photos, and the like, this should allow him to make an estimate of the main shaft's latitude and longitude." The mention of "aerial photos" is in the nature of a bad joke. Stephenson has gone to great length to make the point, that in this area of a multiple-canopied tropical forest, it is impossible to see through the foliage from either above or below. Radar imaging from aircraft would be ideal, penetrating the forest to show the ridges and stream valleys of the solid ground. However, that application of radar was a post-war development.
The last opportunity lies in the mention of "maps". We know that a large-scale map of Bundok Site existed, drawn on a linen bed sheet (pages 730-734). Because it was a reasonably accurate representation of the Site, it was probably generated from a well made map at smaller scale.
The history of mapping in the Philippines was intimately associated with the fact that the U.S. controlled the Philippines, from after the Spanish-American War until World War II, and for a few years afterward. (See a brief history by Joseph F. Dracup: http://www.ngs.noaa.gov/PUBS_LIB/geodetic_survey_1807.html .) The surveying and mapping was done with the advice of the U.S. Coast and Geodetic Survey, using the same type of instruments and techniques as had been employed to survey and map the continental U.S. (Not all of the triangulation surveys in the Philippines were done to the same high standard of accuracy as in the U.S.)
The data were presented using Luzon Datum 1911, which incorporated a reference ellipsoid with axes of exactly specified lengths, a single surface point whose latitude and longitude had exactly specified values, and an orientation specified by the azimuth to another point. I have not seen the specifications of Luzon Datum 1911, but it is easy to find the specifications of the analogous North American Datum 1927 ( e.g.: http://www.discoverosborne.com/Document.aspx?id=5572 ). The Nipponese captured all of the data from these surveys in 1942, so that they could produce maps that were the equivalent of what the U.S. could produce.

I have not seen U.S. military maps of World War II, but I saw and used maps of Korea, for that war. Almost everything else used by the U.S. in the Korean War was essentially identical to that used in World War II, so I would guess that the maps were equivalent also. The largest-scale Korean maps were at 1:50,000 ( http://www.koreanwar.org/html/korean_war_topo_maps.html ). Each map quadrangle had its edges labeled by latitude (north and south) or longitude (east and west). A grid of latitude and longitude lines subdivided the interior of the map. The smallest things printed on the map (e.g., contour lines of elevation and the grid lines) were about 0.1 millimeter wide, which represents 5 meters on the ground.
Thus, if one makes the highly optimistic assumption, that every single landscape feature printed on the map is in the correct position within the width of those lines, then its latitude and longitude can be determined to a precision of 0.2 arc-seconds (6 meters), by interpolation between the grid lines. (Personally, I am not nearly that optimistic about the accuracy of any maps of that era.)
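The map arithmetic, for the record:

```python
# Precision implied by a 1:50,000 map whose finest printed lines are
# about 0.1 millimeter wide.

scale = 50000
line_width_m = 0.0001 * scale                # 0.1 mm on paper -> 5 m on ground
meters_per_arcsec = 1.0e7 / (90 * 3600)      # ~30.9 m per arc-second
precision_arcsec = line_width_m / meters_per_arcsec   # ~0.16, call it 0.2
```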
This happens to be just good enough, because Stephenson stated that the tenths digits were even, for both coordinates (page 1064). Thus they could have represented fifths of an arc-second, rather than tenths. Ninomiya would have looked for the bends in the Tojo River, as shown on the map, because the entrance was very close to the river. (Note that this means that the observations on the summit of Mount Calvary were totally unnecessary.)
The final consideration concerns styles in GPS receivers. In the present generation (2009), GPS receivers are typically aimed at urban explorers, who might want to find the nearest restaurant or night club, and who need to be told which way to turn at the next intersection. Some previous generations of GPS receivers were aimed at wilderness explorers. They might have wanted to record or relocate mineral deposits, or populations of particular biota, or farm cemeteries with ancestral graves, or sunken submarines, or gold bars.
Some of those receivers came with the capability of changing the datum, to be used to display the positions, at the user's choice. Thus, if Randy's receiver offered Luzon Datum 1911, he could indeed get to the same point, in the map grid, which Lieutenant Ninomiya had selected in 1944.
If everything else had gone well since 1911, Golgotha would be there.
Saturday, August 22, 2009
This stated precision of the Global Positioning System (GPS) was consistent with Randy's earlier experience. On page 601, Randy is told the coordinates of a different site, accurate to hundredths of a second. On page 633, Randy mentions this encounter in an email to his colleagues, saying "... , implying a maximum positional error on the order of the size of a dinner plate." On page 655, Randy reports finding a stack of gold bars there, with the help of his GPS receiver.
On page 1064, Randy and Goto Dengo verify that they both know the coordinates of Golgotha, as of 1944. On pages 1089-1090, Randy reaches that point, as shown by his new GPS receiver.
Let us start out with the simplest question: What distance on Earth corresponds to one second of arc? The original definition of the meter was that the Paris meridian, from pole to equator, should measure ten million meters. (They didn't get it quite right.) That quarter circle contains 90 x 60 x 60 = 324,000 arc-seconds, so that one arc-second corresponds to 30.9 meters or 101.3 feet. There is no point in keeping more decimal places in this discussion, because Earth's surface is approximately an ellipsoid of revolution. The polar radius b is shorter than the equatorial radius a. [This relationship is often expressed by the flattening parameter f, in the form b = a (1 - f).] The distance corresponding to one arc-second depends upon the latitude, and upon the direction of the displacement. Thus Stephenson was reasonable in both of his statements about the precision of GPS.
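Here is the arithmetic, using the WGS 84 values of a and f for the flattening example:

```python
# One arc-second of meridian arc, from the original definition of the
# meter, plus the flattening of the WGS 84 reference ellipsoid.

meters_per_arcsec = 1.0e7 / (90 * 3600)      # quarter meridian / 324,000
feet_per_arcsec = meters_per_arcsec / 0.3048

# WGS 84 reference ellipsoid: b = a * (1 - f).
a = 6378137.0                                # equatorial radius, meters
f = 1.0 / 298.257223563                      # flattening
b = a * (1.0 - f)                            # polar radius, ~21 km shorter
```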
Prior to the development of GPS, there were two ways in which the position of an arbitrary point could be determined. One method involves astronomical observations, to measure the latitude and longitude directly. It works at any place on Earth, on land or water. The other method is by survey, to measure the distance and direction to that arbitrary point, from some point whose latitude and longitude are already known. It works just as well in an archipelago such as the Philippines, as on a continent such as North America, so long as water gaps can be spanned by lines of sight between triangulation stations on land.
There are two levels at which one can ask whether the 1944 measurement was possible, with that stated precision. At the higher level, what must you have and what must you know, in order to measure latitude and longitude to 0.1 arc-second, without using GPS, but matching GPS? At the lower level, what precision could Goto and Ninomiya reasonably have attained, with their equipment and technique?
The problem of measuring longitude by astronomical observations has historically involved measuring the time of some event with sufficient accuracy. [Stephenson even mentioned "the longitude problem" in The System of the World (e.g., pages 345-348).] In order to know longitude to 1/10 arc-second, the time must be measured to the nearest 1/150 second. This is completely impossible if human reaction time is involved.
The best astronomical clocks before World War II were typically based on a vibrating quartz crystal in a controlled environment. The frequency of the crystal was divided electronically, and used to control an electronic alternating-current source at some convenient frequency. The alternating current drove a synchronous electric motor, and reduction gears from the motor shaft drove analog second, minute, and hour hands. Such a clock was much more accurate over long periods than any clock with a mechanical escapement, but it is not obvious how one could pull out the times of external events, to this desired precision.
Purely electronic clocks, which essentially count the oscillations of some atomic or molecular system, are a product of the development of radar during World War II. It is almost trivially easy to pull out the time, to much better precision than this, without disturbing the clock itself. However, such clocks did not exist in 1944.
The astronomical event itself must appear to be no larger than 1/10 arc-second. That typically means that it must involve a star, which acts as a point source to be viewed by a telescope. The standard relationship, for the diffraction pattern produced at a circular aperture, is that the first zero occurs at the angle such that the path difference, through points across the diameter of the aperture, is 1.22 wavelengths. For light of wavelength 550 nanometer (the peak of the response of the human eye), and an angle of 0.485 microradian (1/10 arc-second), the aperture must exceed 1.38 meter (54 inches). This is a major astronomical instrument.
The event would typically be the passage of the star across the local celestial meridian, defined by the local vertical and the celestial pole. The longitude of the telescope would be determined by the time of passage, as the star image moves behind a cross-hair, which is aligned with the meridian. The latitude of the telescope would be determined from the angle of elevation of the star image above the local horizontal, or equivalently, by the angle between the local vertical and the star image.
Of course, the coordinates of the star (right ascension for longitude and declination for latitude) would have to be known to a precision of 1/10 arc-second. This was completely unavailable in 1944. According to my Encyclopedia Britannica, star atlases even in 1989 (the date of publication) were typically good to only 1/4 arc-second.
One remaining problem in making very accurate positional measurements by astronomical observations, and comparing them to GPS measurements, is hidden in the above mentions of "local celestial meridian" and "local horizontal or vertical". (The following discussion is taken from the texts which I used for teaching an introductory course in Earth Science.)
If the mass of Earth were distributed with rotational symmetry, and with density decreasing from the center to the surface, then Earth's surface could indeed match the reference ellipsoid of GPS. At any point on such a homogeneous planet, the local vertical (as revealed by a plumb line) would be perpendicular to the reference ellipsoid. The local horizontal (as revealed by an undisturbed liquid surface) would be tangential to the reference ellipsoid. The local celestial meridian would be defined by the axis of the reference ellipsoid and the point itself.
In the actual Earth, the mass distribution has considerable lack of homogeneity. The scale of the inhomogeneities ranges from continents versus oceans, to mountain ranges versus oceanic trenches, to ore bodies versus petroleum deposits. One effect of inhomogeneity is gravitational anomalies. Directly above a region of greater (lesser) density, the measured acceleration of gravity would be stronger (weaker), than on a homogeneous planet. Another effect is variation of sea level. The ocean water would tend to pile up near a positive anomaly, but would tend to sag near a negative gravitational anomaly.
The remaining effect is deflection of the vertical. The acceleration of gravity g would tend to point toward a region of greater mass density, and away from a region of lesser mass density. This effect is obviously involved in position determination. Any north-south component of g would cause the astronomical latitude to differ from the GPS latitude, and any east-west component would similarly affect the astronomical longitude.
All of these effects can be combined in the concept of the 'geoid'. This is defined as a surface of constant gravitational potential, which matches Earth's mean sea level at every point. It can be specified by its elevation, at every point, relative to the reference ellipsoid. A plumb line is everywhere perpendicular to the geoid. The geoid is the zero for measuring elevations using 'bubble' instruments. Once the geoid is known, then g can be calculated for any point on or outside it.
Early attempts to determine the geoid were based on gravimetric surveys, in which the magnitude of g and the elevation were measured at a grid of points on Earth's surface. A complete determination would have required the grid to extend over the entire surface of Earth. However, that requirement was eased with the launch of artificial satellites in near-Earth orbits. The orbit of a satellite depends upon the exact strength and direction of g at every point of the orbit. When satellites have been tracked in enough different orbits, that information can be combined with gravimetric surveys to give a complete geoid, typically in the form of an expansion in spherical harmonics.
The deflection of the vertical at any point could be found from the slope of the geoid there (relative to the reference ellipsoid), but it can also be found directly from the expansion of the gravitational acceleration g in spherical harmonics. I have not found online any report of determination of the deflection of the vertical for the Philippines. Such a report is available for survey stations in Canada (see http://www.geod.nrcan.gc.ca/hm/pdf/evaluationofegm08_e.pdf ), where both the north-south and east-west deflections are as large as 23 arc-seconds.
A contour map of the gravitational anomaly, which is nearly equivalent to that for the geoid, was recently published [O. Andersen et al, Physics Today 62, 4, 88 (April 2009)]. It shows a texture near the Philippines comparable to that across Canada, so that I would expect the deflections of the vertical there to be comparable. Of course, the deflection must itself be known to the same precision as the astronomical position to be adjusted, here 0.1 arc-second.
The final problem in comparing astronomical positions to GPS positions, is that the two systems do not share the same 'datum', or coordinate system. In particular, a GPS receiver does not read longitude zero, in the fundamental datum of GPS (WGS 84), when it is at the meridian telescope of the Greenwich Observatory. The position of that instrument historically defined zero longitude, for astronomical determinations.
Essentially, each system is compatible within itself, but it should not be expected to be compatible with the other system.
Even if all of the above problems could have been anticipated in 1944, so that the location of Golgotha was correctly known to within three meters, it still might not have been found there fifty years later. The notion of continental drift, or plate tectonics, had been proposed earlier, but the evidence to support it was developed after World War II. I don't know the actual speed of the Philippine platelet, but that distance and time represent a speed of 6 centimeters per year. That is exactly in the range of speeds reported for other plates, e.g., India colliding with Asia, to produce the 2008 earthquake in China.
All in all, I am forced to conclude that it was impossible for any astronomical method in 1944 to have produced a position for Golgotha which matched that given by GPS later.
A separate posting will consider how well Lieutenants Goto and Ninomiya might actually have done, in determining the location of Golgotha.