Our Surface Navy in Danger of Extinction


David Boslaugh

They were young, they had hardly any training, and they had almost no flying experience, so they were easy prey for ships’ gunners and new high-performance Allied fighters. Furthermore, the Kamikazes’ principal weapon, the Zero fighter, was relatively light and often did not penetrate the target ship’s structure on impact. Sometimes, in the excitement of the attack, the young pilots forgot to pull the lever that armed their 550-pound bomb, so the bomb did not explode when they hit their target ship. Nevertheless, in their simultaneous massed attacks against Allied task forces, from all directions and at all altitudes, the Kamikazes sank or fatally damaged 363 ships and killed 6,600 sailors. A sign of even more potent weapons to come, the Japanese rocket-propelled, pilot-controlled Ohka standoff bomb had more than one and one half times the impact speed of the Zero, which made it much harder to shoot down and gave it more than two and a half times as much energy to penetrate a target ship’s structure. An automatically guided Ohka-type standoff weapon would be a definite threat to the surface fleet in the years following World War II.
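The relationship between the two figures follows from simple kinetic-energy scaling: energy grows with the square of impact speed. A minimal arithmetic check, assuming illustrative dive speeds (the specific mph values below are assumptions for the sake of the example, not figures from the text):

```python
# Kinetic energy E = 1/2 * m * v^2, so for comparable masses the energy
# ratio between two diving aircraft is the square of their speed ratio.
# The speeds below are illustrative assumptions, not documented figures.
zero_speed_mph = 350.0   # assumed terminal dive speed of a Zero
ohka_speed_mph = 570.0   # assumed terminal dive speed of an Ohka

speed_ratio = ohka_speed_mph / zero_speed_mph
energy_ratio = speed_ratio ** 2   # masses assumed equal for comparison

print(f"speed ratio:  {speed_ratio:.2f}x")   # ~1.63x: "more than one and one half times"
print(f"energy ratio: {energy_ratio:.2f}x")  # ~2.65x: "more than two and a half times"
```

A speed advantage of a bit over 1.5x is thus enough, by squaring, to yield the "more than two and a half times" energy figure cited above.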

The Allies responded to the Kamikaze threat with new tactics, new fighters, and new fleet dispositions. These included specially equipped, and specially protected, radar picket destroyers with new height-finding search radars, stationed a long distance from the main formation in the direction of the expected threat to give early warning of an air attack. It was found that the best way to deal with the Ohka standoff bombs, given early radar warning of the approach of the bombers carrying them, was to deploy fighters to shoot down the mother airplanes before they came within weapon release range of the main formation. U.S. naval radar, invented only in the mid-1930s, thus proved itself over and over again in the Pacific conflict. More than one historian noted that the atomic bomb may have ended the war with Japan, but it was radar that won the war. Yet even with radar’s great contribution, it had its problems.

The shipboard radars in a WW II task force were capable of generating a massive amount of data about air targets on their glowing screens. This flood of rapidly changing radar data called for a large, well-trained, smooth-functioning team of human radar plotters, voice radio target tellers, fighter director officers, and gunnery coordination officers to use it effectively. Normally each ship in a formation was assigned a wedge-shaped bearing sector to survey, and plotted the range, bearing, and height of every radar target in that sector. Plotters then had to manually calculate the course and speed of each assigned target, determine whether it was friendly, hostile, or unknown, and make all that data and derived information, along with an identifying track number, available to the target tellers, who broadcast the data on their ship’s assigned targets to all the rest of the ships in the formation. Other plotters aboard each ship then built comprehensive summary plots of all radar targets for use by the gunnery coordinators, fighter directors, and command decision makers on each ship. It was a system prone to human error, noisy radio circuits, fatigue, and lapses of short-term memory. There were times in massed Kamikaze attacks when the system neared saturation and began to falter.

Chief of Naval Operations Admiral Ernest J. King, in an October 1945 letter to the chiefs of the Navy’s material bureaus for ordnance, aeronautics, and ships, summarized what he wanted of them with regard to improvements in the ability to use radar effectively. Specifically, he called for “A method of presenting radar information automatically, instantaneously, and continuously and in such a manner that the human mind…may receive and act upon the information in the most convenient form; [plus] instantaneous dissemination of information within the ship and force” (Bryant 141).

This was a task calling for some form of information automation; there would not be a quick or easy solution.

In July 1944 a new twin-jet German Messerschmitt Me 262 attacked an RAF de Havilland Mosquito reconnaissance airplane, which escaped its high-speed pursuer only by hiding in a cloud bank in the nick of time. Before then, no airplane in the Luftwaffe could catch a Mosquito (Boyne 41).

By the end of the conflict both the RAF and the U.S. also had operational military jet aircraft, and on 14 October 1947 Captain Charles E. Yeager, piloting the Bell X-1, removed all doubt that military jets would soon be flying at speeds faster than sound.

In 1948 the Royal Navy conducted practice fleet air defense exercises against multiple high-speed jet attackers and found that, thanks to the increased attack speeds, even with the best, most experienced men, the air defense radar plotting teams generally fell apart when the number of simultaneous air attacks exceeded twenty. In similar exercises in 1950, the U.S. Navy found that about half of the new high-speed jet attackers penetrated the fleet’s fighter defense zone unengaged. Had these been real Soviet-style massed air attacks, there was concern that the task forces would have been slaughtered (Bailey). To some senior officials the survivability of the U.S. surface Navy was in doubt. Urgently needed were new shipboard defensive weapons with far greater engagement ranges than existing AA guns, and new ways of assimilating radar data and managing the deployment of those weapons in concert with friendly high-speed interceptor aircraft.

The “Three Ts”

Work on devising a new shipboard long range air defense weapon had actually begun in 1944, as an anti-Kamikaze weapon. It was to be a 65-mile range, ramjet-powered supersonic missile which would be guided by the launching ship to the vicinity of the attacking aircraft; the target’s proximity would then activate the missile’s proximity fuse and set off its high explosive charge. An elaborate, high-precision electro-mechanical analog fire control computer aboard the launching ship would take inputs from the ship’s long range air search radar to lock a pencil-beam fire control radar onto the target, and would then generate intercept guidance orders to the missile.

The Bureau of Ordnance (BUORD) named the missile “Talos”, and at war’s end the Office of the Chief of Naval Operations (OPNAV) directed BUORD to keep working on the missile as a future defense against hostile jet aircraft (King 288).

The Talos missile was an expensive bird, and one was expended in each of the numerous firings needed to test the elaborate, complex system. However, many of the test flights were intended to evaluate some part of the system other than the expensive missile itself. The solution was a much cheaper, shorter-range expendable test platform powered by two-stage solid propellant rockets. The less expensive test platform could achieve the needed supersonic speeds and could reach out about 20 miles, far enough to support the test program.

By 1949 BUORD managers noted that the solid rocket test platform was quite reliable, and that if given an explosive warhead it could be a potent short range air defense missile. OPNAV authorized turning the test platform into a shorter range missile which could be fielded as a tactical weapon even sooner than Talos, and could be installed in smaller ships than Talos, which needed a heavy cruiser sized platform. The new missile was called “Terrier.” In a final beneficial management decision, BUORD fitted the Terrier missile with an improved sustainer rocket motor and no launch booster, producing a ten-mile range single-stage missile small enough to be carried on destroyer sized ships to give them an air defense capability beyond the range of existing guns. The bureau named the smallest missile “Tartar”, and the three missile systems together became known as the “Three Ts.”

The thirty-foot long Talos missile needed huge shipboard magazines to carry a significant warload of missiles, and cruisers were the only appropriately sized ships to carry that missile system. Therefore, in the late 1950s a number of World War II cruisers were stripped of some, or in some cases all, of their six or eight-inch gun turrets to make room for Talos missile batteries. Three heavy cruisers, Albany, Chicago, and Columbus, not only had their 8-inch turrets removed fore and aft to be replaced by Talos missile systems, but also had their three and five-inch gun mounts removed in order to install two Tartar batteries port and starboard (Navy Department 823). The smaller Tartar missile systems were slated for fitting aboard new-construction guided missile destroyers.

The medium range Terrier missile system, however, was too large for conventional destroyers. Some Terrier systems were fitted in former WW II six-inch gun cruisers and a few were slated for aircraft carriers, but finally the supply of suitable WW II veteran cruisers ran out. A new class of air defense ship, having a displacement midway between that of a destroyer and a light cruiser, was needed to carry the Terrier missile system. The first of the new class, the ‘guided missile frigate’ USS Coontz (DLG-9), was designated as an offspring of the new, larger destroyer leader (DL) type ship, with a ‘G’ added to the type designation to indicate that it carried guided missiles.

The Navy has usually, for good reasons, shunned purely special purpose ships, and so, even though the new guided missile frigates’ primary mission was fleet air defense, they were also armed with five-inch guns and anti-submarine weapons backed up with a state-of-the-art sonar system. Each succeeding class of guided missile frigate leading up to USS Biddle (DLG-34) gained more in size, displacement, and capability, until many wondered why the U.S. Navy did not call a spade a spade. Or in this case, why these cruiser-sized, cruiser-capable ships were not called cruisers. More about this later.

Still, An AAW Battle Management Problem

The Saddest Words of Tongue or Pen, Are those Words ‘It Might Have Been’

During World War II the Royal Canadian Navy (RCN) provided 48% of all Allied escort ships for Atlantic convoys, but in spite of this lion’s-share contribution to convoy antisubmarine forces, the RCN got virtually no respect from the Royal Navy or the US Navy, which called all the shots regarding how the RCN would use its considerable anti-submarine warfare (ASW) assets (Vardalas 66). At war’s end the Canadians vowed: never again! In any future global conflict they intended to put themselves in a position to be in charge of transatlantic convoy management rather than just a silent junior partner; and what better way than to develop an automated fleet-wide ASW management system.

The Canadians began work in 1948 on such a digital automated shipboard system, to have the capability to “capture, extract, display, communicate, and share accurate tactical information in a timely manner” (Vardalas 67). Though primarily oriented to processing sonar data and closely coordinating tactics among convoy escort ships and hunter-killer ASW groups, it was also designed to accept and process radar data. The RCN named their concept the Digital Automated Tracking and Resolving System (DATAR). In 1948 the U.S. Army’s Electronic Numerical Integrator and Computer (ENIAC) was the only all-electronic computer in the world, but many academic and business organizations had been inspired by the Army machine to start designing even more capable and more versatile general purpose digital computers, including Ferranti Limited of Great Britain. In 1950 the RCN selected Ferranti-Canada to build a shipboard version of their vacuum-tube computer, with rotating magnetic drum memory, to be the computing power behind DATAR.

Three systems, complete with the Ferranti computers, sonar and radar displays from which targets could be manually entered into the computer, and, perhaps most importantly of all, a digital data link with 80-mile range, were installed in two minesweepers and a shore station. The system could accommodate 64 targets with 40-yard resolution (Friedman 49). The RCN began testing DATAR on Lake Ontario in August 1953, and it worked! It needed improvements and tweaking, but it worked. The RCN was well on its way to being the world leader in automated seaborne combat management systems, and they had good reason to believe they would never again be the junior partner in Atlantic convoy management.

The two shipboard systems filled the entire after half of the two minesweepers, and major overheating, brought on by 3,800 vacuum tubes in tight space confines, was one of the problems to be addressed. Unfortunately, fire broke out on one of the test ships and the system was destroyed. Lack of funds to rebuild the system resulted in project termination, and perhaps the only lasting benefit was the knowledge and experience gained by Mr. Stanley F. Knights, chief scientist on DATAR, who would later give invaluable consulting support to the U.S. Navy’s Naval Tactical Data System project (E. Swenson, 29 Sept. 1987, Page 5). Even though other nations’ emphasis would remain on the more familiar analog computing technology for solving the AAW battle management problem, the Canadians were truly bold and prophetic in selecting one of the world’s earliest digital computers for their system. In the end digital technology would prove to be the correct route.

Analogs: Too Heavy, Too Complex, Too Unreliable, and Too Little Capability

By the late 1950s the U.S. Navy was hard at work fitting the Three T missile systems into a variety of existing and new-construction ships. Also, new supersonic carrier-based fleet air defense interceptors were coming on line. Furthermore, radar, the means of long range warning and precision tracking of air attackers, was already there and improving every day. But there was still a missing ingredient in the mixture. It was being repeatedly demonstrated that WW II style manual radar plotting teams, voice radio target telling among ships in a task force, and human fighter direction and gunnery coordination, no matter how expert and well trained, were not up to the volume, speed, and precision demands of managing individual target selection and deployment of task force missile batteries, guns, and fighters in the face of a saturation air attack by new high-speed aircraft. Some form of information automation was still the obvious answer; but how?

Analog computers were the conventional approach to information automation, and they had served the fleet well since before WW II, primarily in the form of electromechanical fire control computers wherein the turn of a shaft might represent target speed and mechanical component resolvers could represent target location. Although the mechanical computers were accurate and reliable, they could process only one target at a time, whereas WW II massed Kamikaze saturation attacks had involved as many as 900 attackers coming from all directions and altitudes.

Nevertheless, most initial attempts to automate AAW battle management, apart from the Canadians’, relied on analog computing methods. In 1951 the Royal Navy, facing the same problems as the U.S. Navy, began work on its Comprehensive Display System (CDS), which featured 96 target tracking channels fed by operators moving cursors to pick targets from search radars and enter them into tracking channels. Each target tracking channel was the electromechanical equivalent of a rudimentary analog fire control computer, and operators could also set switches indicating track number, estimated altitude, identity (friendly, hostile, or unknown), whether the target was assigned to weapons, and other target parameters. The information stored in the tracking channels was then written as a synthetic air situation picture back on top of the operators’ radar scopes.

Evaluation of CDS aboard the British aircraft carrier HMS Victorious showed that, when the system worked, it was a great step forward from manual plotting teams. CDS showed enough promise to warrant installation aboard another R.N. aircraft carrier and four guided missile destroyers (Howse 264). It also piqued the interest of the U.S. Navy, which tasked the Naval Research Laboratory (NRL) at Washington, D.C. to acquire one CDS system and evaluate it for USN use. NRL concluded that CDS was indeed a great improvement over WW II style AAW battle management methods; however, the Lab also found the system bulky, expensive, unreliable due to its large number of mechanical components, and lacking in accuracy due to the effects of temperature changes on those many components. They did not propose the system for USN use, but rather proposed an improved all-electronic version, though still in analog computer form (Gebhart 381).

The Naval Research Lab named their all-electronic system the Electronic Data System (EDS), and began design work in 1953. EDS featured 24 electronic target tracking channels which not only stored and displayed the same target parameters as the British CDS, but also, by virtue of its electronic circuitry, was able to compute target velocities. This feature not only aided fighter interceptor controllers, but also moved the artificial target symbols on the operator’s radar scopes which enabled a single EDS operator to simultaneously manually update eight target tracks as compared to the two-tracks-per-operator in the Royal Navy CDS. Furthermore, in a significant step forward, a teletype data link automatically broadcast the stored information on each target to other similarly equipped ships in a task force. On board the receiving ships the teletype data was changed back to electrical voltages to show the remotely transmitted track information on the receiving ship’s radar scopes.
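The rate-aiding that set EDS apart can be sketched in a few lines: estimate a target's velocity from two successive operator position updates, then "coast" the displayed symbol between updates. This is an illustrative reconstruction of the principle only; the function names, units, and update interval are assumptions, not EDS specifics:

```python
# Sketch of electronic rate-aiding as EDS performed it: velocity is derived
# from two successive manual position fixes, and the synthetic symbol is
# dead-reckoned between fixes so one operator can keep many tracks alive.

def estimate_velocity(p0, p1, dt):
    """Velocity (yd/s) from two (x, y) position fixes in yards, dt seconds apart."""
    return ((p1[0] - p0[0]) / dt, (p1[1] - p0[1]) / dt)

def dead_reckon(p, v, t):
    """Predicted position t seconds after fix p, moving at velocity v."""
    return (p[0] + v[0] * t, p[1] + v[1] * t)

# Two manual updates 10 s apart on a target closing from the northeast:
fix0, fix1 = (20000.0, 15000.0), (19000.0, 14000.0)
v = estimate_velocity(fix0, fix1, dt=10.0)   # (-100.0, -100.0) yd/s
predicted = dead_reckon(fix1, v, t=5.0)      # where to draw the symbol 5 s later
print(v, predicted)
```

Because the symbol moves on its own between fixes, an operator only needs to correct drift occasionally, which is why EDS could manage eight tracks per operator against CDS's two.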

Evaluation of the Naval Research Laboratory system at sea in 1955 aboard four destroyers showed that the system’s reliability and usefulness warranted installation of 16 more systems aboard selected major combatant ships – primarily guided missile cruisers. Even though a great improvement, EDS needed more track storage capability, greater accuracy, and the capability to store and display even more information about each target to fully resolve the fleet anti-air battle management problem. More improvement was needed, and no more than those twenty systems were acquired (Gebhart 384) (Graf III-2).

A shipboard missile system had to be more than just a magazine full of missiles, a launcher, fire control computers, search radars, and fire control radars. A central control station was needed to direct all the other elements, and to feed selected high-threat air targets from the air search radars to the fire control computers, which would slew the pencil-beam fire control radars into position to search for and lock onto the target. The control station was termed a ‘weapons direction system,’ or WDS. The WDS needed an inventory of high threat air targets ready to feed to the fire control computers, and so was equipped with a limited number of target tracking channels, usually eight, in the Talos, Terrier, and Tartar systems. The WDSs were, in effect, small versions of the Electronic Data System, but without a data link that could have integrated their combat picture with that of other task force ships.

The weapons control officers were thus forced to make their own value judgments as to which targets on a manually grease-penciled transparent vertical summary plotting board should be selected on the search radar screen and fed to the tracking channels, with no knowledge, other than by inter-task force voice radio, of whether a selected target might already be engaged by another gun or missile system, or perhaps by an interceptor. EDS was installed in a few missile cruisers to serve as the weapons direction system, and with its data link partially solved the problem. But there was still a serious problem in fleetwide AAW battle management: making best use of the potent new shipboard missile systems and high-speed carrier-launched interceptors.

SAGE, The US Air Force’s Answer

In March 1950 the US Air Force had begun work on a digital automated system that addressed virtually the same air battle management problem, on a nationwide scale, with which the Navy was struggling. They called it the Semi-Automatic Ground Environment, or SAGE for short. In some ways the Air Force challenge was simpler than the Navy’s because SAGE would be located at fixed ground sites (27 in total) situated around the country. These sites would not move, whereas a Navy ship attempting to data link the positions of tactical air targets to other units must know not only where the target is with respect to the ship, but also precisely where the constantly moving ship is, or the target coordinates will be in error. The second Air Force advantage was large, four-story, warehouse-sized air defense centers, each having the room to house two of the physically largest computers ever built – IBM AN/FSQ-7 computers, each having 25,000 vacuum tubes and occupying 40,000 square feet (Watson 231-233).
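The own-ship-motion problem can be made concrete with a small coordinate conversion: a shipboard radar measures a target only in range and bearing relative to the ship, so before a track can be shared over a data link it must be referred to a common grid using the reporting ship's own position. The following flat-earth sketch is an illustration of the principle, with assumed names and coordinates:

```python
# Why own-ship position matters on a data link: convert a ship-relative
# range/bearing measurement into common-grid coordinates. If the ship's
# movement were ignored, the same target would appear to jump on the grid.
import math

def to_grid(own_x, own_y, rng, brg_deg):
    """Convert range (yd) and true bearing (deg) to grid coordinates (yd).
    Navigation convention: bearing is measured clockwise from north (+y)."""
    brg = math.radians(brg_deg)
    return (own_x + rng * math.sin(brg), own_y + rng * math.cos(brg))

# Same stationary target seen from a ship at two moments; the ship has
# steamed 1,000 yd east between measurements, so the range has closed:
t1 = to_grid(own_x=0.0,    own_y=0.0, rng=20000.0, brg_deg=90.0)
t2 = to_grid(own_x=1000.0, own_y=0.0, rng=19000.0, brg_deg=90.0)
print(t1)  # (20000.0, ~0.0)
print(t2)  # (20000.0, ~0.0) -- same grid position, despite the ship's motion
```

A fixed SAGE site could skip this step entirely, since its own coordinates never changed; a ship had to apply it continuously and precisely.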

By a strange twist of fate the initial inspiration for the prototype of these massive computers had also come from the Army’s ENIAC computer. It had started life in 1946 as a Navy project called WHIRLWIND, the purpose of which was to be the heart of a sophisticated aircraft flight simulator for the Navy’s Bureau of Aeronautics. Just when the Bureau had become terminally frustrated with the WHIRLWIND computer’s growing costs, the Air Force recognized it as exactly what they needed for their SAGE system. By 1953 the SAGE project had progressed far enough for the Navy to take a close look at it as a possible solution to the task force AAW battle management problem. The Navy study postulated a new special type of ship – a combined radar site, floating computer center, and command ship.

Each of the command ships would be fitted with massive vacuum-tube computers similar to the SAGE machines, but hardened to survive the shock, vibration, and corrosion rigors of shipboard life. Two computers would be required on each ship, as at the SAGE sites, so that one could always be in maintenance, having its ever-failing vacuum tubes methodically replaced, or in hot standby ready to take over whenever the operating machine failed. The special ships would have to be at least cruiser-sized, and one would operate with each task force, building a fleet-wide air battle picture and transmitting it over a digital data link to all air defense ships. Navy planners finally rejected the concept because it would leave the task forces vulnerable to a single point of failure: destruction of that one ship would leave the task force blind. Furthermore, Navy ships must often act alone, and it was decreed that each air defense ship, from guided missile frigate on up, should have similar self-contained automated capabilities (Graf III-2).

The Navy Solution

In 1954 Rear Admiral Rawson Bennett, Chief of Naval Research, temporarily detached Lieutenant Commander Irvin L. McNally from the Navy Electronics Laboratory at San Diego, California, to take another, closer, longer look at SAGE and to report back to him on possible ways to extend the SAGE concept to sea. McNally was to spend six months on a tri-service study team, called Project Lamplight, at the Massachusetts Institute of Technology, which was managing SAGE development. He would be in company with approximately 100 other civilian engineers and Air Force and Army officers working on ways to improve continental air defense. McNally and civilian engineer Everett E. McCown were to be the only Navy representation, and to help increase his clout with the many senior officers from the other services, McNally was spot promoted from Lieutenant Commander to Commander (Graf III-3).

This was not McNally’s first spot promotion, for almost all his promotions had been thus because he had been continuously put into new jobs above his pay grade. Entering the Navy in 1932 as a way of riding out the Great Depression until he could get a civilian job where he could use his degree in electrical engineering, he had started as an enlisted radioman, and by 1936 he had risen to Radioman First Class. By late 1937 he was a Warrant Radio Electrician, and in 1940 he was assigned to the Naval Research Laboratory in Washington, D.C. to attend the first US Navy course in radar – taught by the Laboratory’s inventors of U.S. naval radar.

Later, in 1941, McNally was assigned to be one of the instructors in the first course in radar taught to American naval officers. Prophetically, one of his students was a young ensign, Edward C. Svendsen, about whom more will be said in this narrative. By early December 1941, McNally was living aboard USS Pennsylvania lodged in Drydock 1 at Pearl Harbor Naval Shipyard. His mission at Pearl Harbor, to set up and run a radar maintenance school for Pacific Fleet sailors, was almost prematurely terminated when a Japanese bomb destroyed the ship’s medical aid station where he had volunteered to assist on the morning of 7 December. He was spared only because the doctor in charge had asked him to go below for a supply of gas masks and battle helmets. When he returned, all were dead.

By June 1942, the radar maintenance school was in operation, and McNally had been spot promoted to Lieutenant (Junior Grade) to match his responsibilities in operating the school. A year later, after McNally had been spot promoted to full Lieutenant, he happened to show Vice Admiral Lockwood, Commander of the Pacific Fleet Submarine Force, a rudimentary, but working, radar antenna which could be fitted to a submarine periscope. Lockwood was flabbergasted, and the next day McNally found himself aboard a Pan American clipper bound for San Francisco with orders to proceed to the Radar Design Branch in the Bureau of Ships in Washington, D.C. Here his first priority was to get his periscope antenna into production. McNally was soon spot promoted to Lieutenant Commander, and eventually took charge of the Bureau’s shipboard radar design group, which he headed until 1949, when he was posted to the Navy Electronics Laboratory in San Diego to be radar program manager.

While working on Project Lamplight, McNally had, perhaps, one advantageous piece of knowledge that none of the 100 other technical participants possessed. While in charge of the Bureau of Ships shipboard radar design group, he had done considerable work with engineers of the Bell Telephone Laboratories where the new technology of transistors had been invented. One of the Bell Labs engineers had even given McNally one of their infant transistors and he had tried it in a number of kinds of circuits in place of a vacuum tube. By virtue of his experiments, he likely knew more about transistors than any other person on the study team, regardless of their level of academic degree or rank.

He was convinced that a digital computer having the computing power of the SAGE computers could be built of transistors which would not only allow the computer to be packaged into a small shipboard compartment, but also would run on only a few thousand watts of electrical power as compared to the one and one half million watts of power needed by one SAGE computer – mainly to heat incandescent vacuum-tube filaments. The SAGE managers considered transistors to be an immature, unreliable laboratory curiosity that would soon pass, but nevertheless said they would be willing to endorse McNally’s system concept back to the Chief of Naval Research with some reservations.

McNally conceived of a computer-based seaborne battle management system wherein every combatant ship from guided missile frigate on up would be equipped with the system, and all participants would have similar capabilities including the ability to assume task force command functions in an emergency. The primary difference between smaller ships and major combatants would be more computer processing power and more operator positions on the larger ships. In remembrance of WW II saturation Kamikaze raids he called for larger ships to be able to process as many as 1000 targets at one time. (This would later have to be cut back). A key component of the system would be a fleetwide automatic digital data link that would allow all participating units to share in the task force target tracking load, and allow all to see the same composite air battle picture including ‘pairing’ lines that would show which ship or interceptor was engaging what target.

As in SAGE, the operator positions would be special radar consoles able to show not only the raw radar picture from the search radars, but also computer-generated symbols indicating, for each target: whether it was friendly, hostile, or unidentified; whether it was an aircraft, surface ship, or submarine; its computed speed and heading; whether it was experiencing emergency conditions; whether it was assigned to a defensive weapon, and if so what kind of weapon and on what ship; and, if it was assigned to fighter interceptors, which interceptor and which ship was controlling that interceptor.

The system would also assess all hostile targets and compute which targets seemed to be most threatening, which should be assigned to weapons first, and to which weapon. If assigned to airborne interceptors, the system would compute heading, speed and altitude orders for the interceptor, and might even automatically steer the interceptor into firing position with a ship-to-air data link. He also called for new shipboard radars designed specifically to interface with the new digital system, and for airborne early warning radar aircraft that worked as equal participants with the ships on the data link.
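One classic way to compute "which targets seem most threatening" is to rank tracks by their closest point of approach (CPA) to own ship. The text does not specify the threat logic McNally envisioned, so the following is an illustrative sketch of the general technique, with hypothetical track names and values:

```python
# Threat evaluation sketch: rank hostile tracks by closest point of approach
# (CPA) to own ship, taken to be at the origin. Straight-line target motion
# is assumed; the track data below is invented for illustration.
import math

def cpa(pos, vel):
    """Return (cpa_distance_yd, time_to_cpa_s) for a track at position pos
    (yd) moving at constant velocity vel (yd/s), own ship at the origin."""
    px, py = pos
    vx, vy = vel
    v2 = vx * vx + vy * vy
    if v2 == 0.0:
        return (math.hypot(px, py), 0.0)        # stationary: CPA is right now
    t = max(0.0, -(px * vx + py * vy) / v2)     # closest approach, never in the past
    return (math.hypot(px + vx * t, py + vy * t), t)

tracks = {
    "hostile-1": ((40000.0, 0.0), (-300.0, 0.0)),      # inbound, aimed at own ship
    "hostile-2": ((30000.0, 30000.0), (0.0, -250.0)),  # crossing, CPA 30,000 yd
}
ranked = sorted(tracks, key=lambda name: cpa(*tracks[name])[0])
print(ranked)  # hostile-1 first: its CPA is 0 yd
```

Tracks with the smallest CPA (and shortest time to reach it) would be offered to weapons first, which is the essence of the automatic threat ordering described above.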

McNally condensed his concept into 15 typewritten pages and sent it off to the Chief of Naval Research. Rear Admiral Bennett forwarded the paper to the Office of the Chief of Naval Operations (OPNAV) without change, with the recommendation that the Navy start work on it at once. OPNAV summarily passed the paper on to the Chief of the Bureau of Ships with direction to expand it in more technical and operational detail, with special emphasis on assessing the state of the new technologies, especially transistors and computers, which would be essential to making the system a reality (McNally, CDR Irvin L., Interview with D. L. Boslaugh, 20 April 1993).

McNally was detailed to report to the head of the Electronics Design and Development Division, Captain W. F. Cassidy, in the Bureau of Ships, where he was given the task of fleshing out his concept. McNally knew the capabilities and future potential of radar like the back of his hand, but he had been only lightly exposed to digital computers. He told Cassidy he desperately needed technical help from someone well versed in digital computers, especially transistorized computers – a request he feared was impossible to fulfill. Amazingly, Cassidy told him he would have help there that same day. To McNally’s further amazement, his new collaborator was none other than the former young ensign Edward Svendsen – now a commander – to whom he had taught the fundamentals of radar in 1941, some 14 years previously.

Svendsen was not allowed to tell McNally how he had become an expert in computers, but he was probably as well steeped in the cutting edge of digital computer state of the art as any person alive. He was allowed to tell McNally, to his added astonishment, that he was at that very time directing the development of an experimental large scale transistorized computer.

After Svendsen had completed McNally’s radar course at NRL, he returned to the battleship Mississippi, where he became radar officer; then, with the advent of shipboard combat information centers (CIC) during the course of the Pacific conflict, he became the ship’s CIC officer, and he and his men largely built and equipped their new CIC themselves. He therefore knew the technology of radar almost as well as McNally, and CIC functions perhaps even better. He grasped intuitively how digital computers could help automate the laborious, manually and intellectually intensive CIC battle management processes, and he was excited at the prospect.

Svendsen had remained aboard Mississippi until the fall of 1944, when he was ordered to the U.S. Naval Postgraduate School at Annapolis to get a master's degree in electrical engineering. Upon graduation he entered a strange and arcane new world. He was assigned to the Naval Computing Machine Laboratory, at St. Paul, Minnesota, as technical officer. In this case 'computing machine' was a euphemism for 'electronic code breaking device', and Svendsen found himself in charge of developing electronic code breaking aids for the Navy's cryptologists at the Naval Security Group Command.

The Computing Machine Lab was physically collocated with a small company named Engineering Research Associates (ERA), which had been secretly helped into being by the Navy's code breaking community to be their material support arm. They were beginning a transition from special purpose code breaking machinery, such as the World War II electromechanical devices used to break the German submarine force's Enigma code, to new general purpose digital computers, which they adjudged would be even better at codebreaking. Here again, the inspiration had been the Army's ENIAC computer. ERA would eventually become the Univac Division of Sperry Rand Corporation.

Svendsen had been technically in charge of building the Navy's first two large-scale vacuum-tube codebreaking computers, named Atlas I and II, which were located at the Naval Security Station in Northwest Washington, D.C. He had subsequently been transferred to the Bureau of Ships, where in 1955 he was in charge of the 'Special Applications Branch', which included the secret Computer Design Section, physically located at the Naval Security Station. At the time they were hard at work developing a transistorized version of the Atlas II, which was intended to be a desk-sized full-scale computer that could fit directly in the workspace of a Navy cryptologist for his personal use. A personal computer! (Svendsen, CAPT Edward C., Interview with D. L. Boslaugh, 3 Feb. 1995.)

Captain Cassidy assigned the two commanders a small room in the Main Navy building from which he even ripped out the telephone to enable complete privacy. They specified a system built of standardized ‘building block’ equipment units whereby large or small ship systems could be assembled with multiples of the standard computers, operator displays, and data link equipment. Furthermore, the system on any ship could be expanded at any time by simply plugging in more computers or ‘multiple function’ operator consoles.

The system would be hard to kill. They would minimize susceptibility to 'single point' system failure by having at least two of every critical equipment type, so that if one failed the system could continue to run at reduced capability with the remaining unit. For example, even the smallest system, as installed on a guided missile frigate, would have two computers and two of most other essential components. Graceful degradation in the face of component failure, rather than outright stoppage, would be a hallmark of system design. Even if a system lost both computers, the operator consoles would continue to function as standard radar repeaters, so that the ship would be no worse off than a ship not equipped with the system.
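The graceful degradation idea can be sketched in modern terms. The following is an illustrative sketch only, not the actual NTDS design: a system tracks its working redundant units and reports a reduced operating mode as units fail, falling back to a radar-repeater-only mode rather than stopping outright. All class and method names here are hypothetical.

```python
# Illustrative sketch of graceful degradation with redundant units.
# Not the actual NTDS logic; names and thresholds are assumptions.

class ShipSystem:
    def __init__(self, computers: int, consoles: int):
        self.computers = computers  # count of working computers
        self.consoles = consoles    # count of working operator consoles

    def fail_computer(self) -> None:
        # A component failure reduces capability but never halts the system.
        if self.computers > 0:
            self.computers -= 1

    def mode(self) -> str:
        # Full capability with both computers; reduced with one;
        # with none left, consoles act as plain radar repeaters.
        if self.computers >= 2:
            return "full"
        if self.computers == 1:
            return "reduced"
        return "radar-repeater fallback"

ship = ShipSystem(computers=2, consoles=4)
ship.fail_computer()
assert ship.mode() == "reduced"
ship.fail_computer()
assert ship.mode() == "radar-repeater fallback"
```

The point of the design choice is that every failure path ends in a degraded but usable state, never in a dead system.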

Svendsen specified that, rather than using existing rotating magnetic drum memories, or mercury delay lines, or memories that used charged spots on the face of a cathode ray tube, the system's general purpose, stored program computers would have magnetic core memories. These new 'core' memories were just coming into experimental use, and Svendsen was already using them in his new transistorized cryptographer's 'personal computer.' The two commanders researched, calculated, and wrote for about a month, during which they expanded McNally's original 15-page concept paper into a fifty-page document having not only more technical and operational detail, but also convincing rationale that the system could be built (Svendsen Interview 3 Feb 1995).

In late August 1955 the Bureau of Ships sent McNally’s and Svendsen’s Technical and Operational Requirements document for a ‘Navy Tactical Data System’ off to the Chief of Naval Research who positively endorsed it to OPNAV, stating that system development should start immediately. OPNAV responded by finding project start-up funds and tasking the Bureau of Ships to establish a project office to build the new system in the shortest possible time.

The two commanders had convinced the Navy to start building a major AAW battle management system that would form the automated anti-air battle management aid of every ship in the Navy from guided missile frigates on up, based on two new technologies, digital computing and transistors, that barely existed, and furthermore on new equipment types that existed only in their minds. In reality, because the new system was going to depend on new, very large, and extremely complex computer programs, it was also going to depend on a third new technology, that of large-scale computer programming, which was so new that no one even recognized it as an art, science, or technology in its own right. This third unseen technical challenge would almost become the project's undoing.

A Fourteen-Year Task Compressed to Five

The Chief of the Bureau of Ships assigned Commander McNally as manager of the new project and Svendsen as assistant. The office was set up with a technical staff that would never exceed six naval officers and civilian engineers, supported by the equivalent of three full time engineering specialists in the Bureau’s radar, communications, and computer design offices. OPNAV realized that the new command and control system, which had an undeniable flavor of command decision making by a giant electronic brain, was probably going to be highly controversial among naval officers. In recognition of the urgency of system development, its criticality to the fleet, and probable controversial nature, OPNAV also set up a small project office in the Pentagon composed of four officers headed by a Captain. Among many other jobs, prime missions of the OPNAV office would be justifying and defending project funding, setting up the needed operator and maintenance training schools, setting up two fleet computer programming centers on the East and West coasts, and selling the new system to a highly skeptical user community.

The normal paradigm for managing a project for a new, highly complex Navy weapon system called for selecting a prime contractor having the needed expertise in all technical areas to be the primary system designer and integrator. The prime contractor would then obtain subcontractors to design and build the needed new equipment in their areas of competence. There were many who said that Bell Laboratories was the only institution in the nation that could possibly pull off such a complex project. McNally and Svendsen, however, thought differently. They reasoned that the Navy, by virtue of its codebreaking computer design experience, its background as the inventor of U.S. Naval radar, and established expertise in radio communications, should be its own prime contractor and system integrator.

The two commanders set up a plan whereby Navy laboratories and engineering activities under task to the project office would develop detailed technical specifications for the many new needed equipment types, and they would monitor and guide selected contractors in designing and building prototype equipment. The prototype equipment would be assembled into an engineering test system at the Navy Electronics Laboratory where engineers and technicians would integrate the system and wring it out in a realistic ship simulation environment. They realized that trying to assemble and test the complex new system for the first time aboard an active Navy ship would be a predictable disaster.

The small company in St. Paul, Minnesota, that had just changed its name from Engineering Research Associates to the Univac Division of Sperry Rand Corporation, and was building the Navy codebreaking computers, was selected to design the new transistorized shipboard digital computers. Univac selected as their project leader a young engineer, Seymour Cray, who had a growing reputation as a genius in the use of transistors. Univac was also contracted to develop the prototype computer programs for later turn-over to the new Fleet Computer Programming Centers, and Cray would also be in charge of computer program development. Cray would go on to gain a reputation as a world leader in supercomputer design and manufacturing.

To provide the system's data link equipment the project office picked Collins Radio, which not only had a reputation as a leader in high frequency single sideband communications, but had also invented a very capable high frequency digital data link technology. Hughes Aircraft Co. was working on operator displays for the Army's Nike missile system that seemed to have applicability to the Navy system, and they were contracted to design and provide the new half analog/half digital radar display consoles (E. Swenson, 3 May 1988, Pages 53-60).

In the mid 1950s the Navy was accustomed to a schedule of about 14 years from authorization of a new electronics system development project to installing service test units in the first receiving ships. Much of this time was to accommodate the many needed tests, qualifications, and project reviews by higher authorities inside and outside of the Navy. McNally and Svendsen agreed to a seemingly impossible five-year schedule with the proviso that disruptive reviews and interference in project management by higher authority had to be minimized. They worked out an agreement that, "If we need help we will ask for it, but otherwise just trust our judgment and let us keep going." It was agreed that the OPNAV project office would not require the usual plethora of formal project reviews and status briefings from BUSHIPS, but instead would stay abreast of progress, and problems, through their day-to-day involvement in the project. Chief of Naval Operations Arleigh A. Burke issued an edict that attempts by any senior official to 'micromanage' the project would not be tolerated. He further stated that all project officers in BUSHIPS and OPNAV would be assigned for the duration of the project and would be moved out of the project only with his approval (Svendsen, CAPT Edward C., Interview with D. L. Boslaugh, 3 Feb. 1995).

It seems that the first thing traditionally done when the armed forces start a new project is to give it a name, preferably something that will form a snappy acronym. In this case, however, for some reason the new project did not have an official name or even a popular name. Maybe they were too busy to stop and think about it. McNally had called it the Navy Tactical Data System in his concept paper, but other project officers called it such things as the Consolidated Electronic Display System, the Fleet Data System, or sometimes the Naval Tactical Data System. Project secretary Frances Bartolomew pointed out to the project officers their inconsistencies in naming the new system in their correspondence, and stated she was becoming embarrassed that nobody seemed to know what the name of the project was. She did not ask them what name they were going to use, but instead declared that she liked "Naval Tactical Data System," and no matter what they wrote in their correspondence, that was what she was going to type. The name stuck.

Irvin McNally had risen to commander rank by a series of spot promotions during his 24-year Navy career; however, his permanent commissioned rank was only that of lieutenant. In a 1956 letter to the Bureau of Naval Personnel he inquired whether his position on the lineal list of officers could be advanced so that he could compete for captain rank with his contemporary commanders. The Bureau's response indicated that he would not have a chance to compete for promotion to captain during the remaining six years of a normal 30-year Navy career, convincing him that his best option was resignation from active duty.

Just prior to his retirement in June 1956, McNally wrote two more technical specification documents that would have long-lasting impact on future shipboard combat systems. They described two new shipboard search radars that would be designed specifically to work with NTDS. One would be a three-dimensional search radar of unusual range and accuracy for air intercept control and designating targets to missile systems; the second would be a two-dimensional search radar for detecting air targets at long ranges and passing them to the three-dimensional radar for precision tracking. The two new radars would eventually become realities, designated respectively AN/SPS-48 and AN/SPS-49, and Biddle would be in the first group of ships to receive the combination of the Naval Tactical Data System and the new SPS-48 three-dimensional search radar. McNally's final contribution would be, as the head of Raytheon Corporation's Search Radar Laboratory, to design and build the AN/SPS-49 search radar, which would be acknowledged as the most capable U.S. Navy long range two-dimensional radar for over three decades (McNally Interview 20 April 1993).

The Sea – The Monster That Ate Science

Naval ship commanders are of necessity highly conservative when it comes to accepting new weapon systems, because the consequences of failure are so severe in the seagoing environment. Failure of a new weapon system, or substandard performance, can mean the death of comrades in battle and the loss of ships or aircraft. They understandably want to stick with familiar weapons and techniques that they know from experience will work. As this narrative will point out, the sea, even in peacetime, can be a dangerous place. When news started circulating that traditional shipboard combat information centers were going to be replaced with an automated system built from two immature technologies of which even most of the nation's academic, scientific, and engineering communities knew virtually nothing, it was too much for most seagoing officers.

What made it worse was the reality that the new system was going to be interposed in the direct route of their AAW battle decision making. Some future commanders of the ships to be so equipped even visited the NTDS project office in the Bureau of Ships to tell the project officers face-to-face that even if the system was installed in their ship, they would refuse to turn it on. They declared that no damned computer was going to tell them what to do. An informal survey taken by the OPNAV NTDS project office revealed that naval officers opposed the system by a ratio of twenty to one (Graf IV-23).

Reliability experts, adding fuel to the future users' arguments, calculated that, based on the high count of transistors in NTDS, it could run, at best, for a few hours before failure set in. Even many officers and engineers in the Bureau of Ships shunned the NTDS project because they were convinced of the project's eventual failure, and they did not want to be caught in the undertow when the project sank.

The conviction that Navy electronic technicians would not be able to cope with the exotic new computer technology was another favorite argument of system critics. In Commander Svendsen's mind, however, there was no doubt that sailors could handle the new equipment, because he had seen the communications technicians of the Naval Security Group Command readily master their codebreaking computers. The only doubt on Svendsen's part was whether the equipment responsibilities of Navy electronic technicians should be expanded to include new digital devices, or whether that would give them too many equipment types to train on and care for. He convinced OPNAV and the Bureau of Naval Personnel to create a new enlisted equipment maintenance rating, the Data Systems Technician, to care for NTDS. The outstanding performance of this new breed of sailor during the crucial years of selling the system to its future users would prove to be most effective in silencing the critics (E. Swenson, May 1988, 59).

By April 1959 all of the prototype NTDS equipment had been delivered and installed at the Navy Electronics Laboratory land based test site. Furthermore, most of the new, hand-picked data system technicians (DSs) who would be assigned to the service test ships were at the Laboratory, where they took classes from the NEL engineers and helped the engineers exercise and test the system during the day, while other shifts of DSs trained on the test system at night (Bureau of Ships, Technical Development Plan for the Naval Tactical Data System (NTDS) – SS 191, 1 Apr. 1964, Page 4-6). Much was learned. Many problems were found and corrected.

Most significantly, it was found that the freezer-chest-sized prototype NTDS computers, with their 11,000 point-contact transistors per computer, were of only marginal reliability and performance for a seagoing system. In October 1959 Svendsen and his assistant project officers made the gut-wrenching decision to design and build a new computer regardless of the project's tight schedule. The year before, Fairchild Instrument Corporation had invented the planar transistor, which offered better reliability and higher computer speed, and the project decided to fashion a new computer with the new transistor technology.

The new computer would retain Seymour Cray's architecture, instruction set, and input/output conventions, but every circuit would be of new design to best use the new technology. In nine months Univac designed, built, and tested the new machine, and in tests it vindicated the project officers' decision. Reliability was improved by an order of magnitude, and it ran twice as fast as the prototype machine (Lundstrom 55-57). Two of the new computers, resembling upright refrigerators, had the same processing power as one 20,000-square-foot vacuum-tube SAGE computer.

System testing continued at the Navy Electronics Laboratory until November 1961, but as early as April 1959 the fast moving pace of the project had called for starting the three equipment contractors on production runs to build equipment for three service test ships. To achieve the fast paced five-year project schedule, Collins Radio, Hughes Aircraft, and Univac engineers were forced to design the service test equipment following a moving target as test results were fed into their design process.

At the end of September 1961, San Francisco and Puget Sound Naval Shipyards completed installing the service test equipment in three ships, including the two new-construction guided missile frigates USS King (DLG-10) and USS Mahan (DLG-11). The third ship, an attack carrier, had been the subject of great controversy in the Navy because most senior naval officers were aghast at the idea of turning over one of their newest attack carriers to a six-month service testing project. The issue had been settled, however, by a chance meeting between Vice Admiral H. G. Hopwood, Commander-in-Chief of the Pacific Fleet, and assistant NTDS project officer Erick N. Swenson in the fall of 1959 following a wedding reception. The enthusiastic young lieutenant described the capabilities of the new system in such a convincing manner that the admiral returned to his headquarters and dispatched a message offering up the attack carrier USS Oriskany (CVA-34) for NTDS service test (A. Swenson, Interview 17 April 1993).

By October 1961 the new data system technicians for the three service test ships had been in training for two years at contractor plants and at the NEL test site, and they were now clearly the experts when it came to taking care of NTDS equipment. Even though Navy practice was to provide contractor engineers to maintain new equipment during at-sea service testing, in order to give the new equipment a fair chance, Svendsen, now a captain, decided that only Navy data system technicians would maintain the new systems during the at-sea evaluation. No one else would be allowed to work on the equipment (Graf IV-23).

Service test task force operations began off San Diego in late October 1961, and the project officers learned a new reality about large, complex digital systems. The prototype NTDS computer programs had been well debugged at the land based test site, and the project managers expected them to function well when installed in the ship systems. Therefore, hardly any time had been scheduled to test the programs in their new seagoing environment. To the project officers’ horror the programs failed over and over again. They found that when a computer program is shifted to a new environment, no matter how realistic the shore-based system might have been, the programs must be extensively tested and verified as if they were brand new computer programs.

The six-month at-sea operational evaluation by the Operational Test and Evaluation Force (OPTEVFOR) was a continuing nightmare. It was as much a computer program debugging evolution as it was a system test. The testers did find that, when the programs worked, NTDS was far more effective in task force defense than existing manual AAW battle management procedures. Furthermore, the new NTDS equipment, under the care of the data system technicians, performed flawlessly. The operational evaluation was completed on 1 April 1962, and Rear Admiral Charles K. Bergin, the commander of OPTEVFOR, concluded in his report that if the NTDS equipment had performed as poorly as the computer programs he would have had no choice but to give the project a thumbs down (Swallow Letter 16 Nov 1994).

As it was, OPNAV issued a provisional approval for service use for the Naval Tactical Data System. The proviso was that the computer programs had to be debugged and stabilized. A year later, after a team of testers, engineers, and computer programmers from OPTEVFOR, Univac, and the Fleet Computer Programming Center, San Diego, had sailed aboard the three service test ships for six more months, the computer programs were as solid as the equipment. In March 1963 the Chief of Naval Operations issued final approval for service use for the Naval Tactical Data System (Buships 12-1).

In the meantime, there was no time to lose. In March 1962, a full year before service approval, OPNAV, taking a calculated risk, had directed the NTDS project to begin production of NTDS equipment to outfit seventeen existing and new construction ships. One of those equipment suites was destined for the new guided missile frigate Biddle (DLG-34) that would be laid down on the building ways at Bath Iron Works, Bath, Maine, on 9 December 1963.
