Atomic weapons and the arms race were inseparable from the start: Developments in physics in the 1930s led physicists to believe that nuclear fission could be used as a weapon, and when World War II began, scientists stopped publishing on fission in order to avoid sharing information with the enemy. No one was yet sure what form a fission-based weapon would take, but the Allied nations were concerned that Nazi Germany would develop it first. In the United States the Manhattan Project was supported by enormous resources beginning in 1942. Research occurred at various sites across North America and was overseen and organized at Los Alamos, New Mexico, where the surrounding desert provided safe sites for weapons testing. Though British scientists participated, as did many European exiles, the Soviet Union was excluded from the project.
Not until after Germany’s surrender did the Manhattan Project finish its work. The first test, code-named Trinity, was conducted on July 16, 1945. The first nuclear explosive, a nondeployable bomb nicknamed the Gadget, was a sphere of high explosive covered with surface detonators that directed the explosion inward, compressing a plutonium core in order to start a nuclear chain reaction that grew at an exponential rate. The Gadget exploded with a blast equal in force to about 18,000 tons (18 kilotons) of TNT—tonnage of TNT became the standard measure of nuclear yields thenceforth.
The test was a success: Aural and visual evidence of the explosion reached as far as 200 miles away. Almost immediately two bombs were prepared for the ongoing war in the Pacific. On August 6 at Hiroshima, Little Boy, a uranium “gun-type” bomb that worked by shooting one piece of uranium into another to start the chain reaction, was dropped; three days later, on August 9, Fat Man, a plutonium bomb like the Gadget, was dropped on Nagasaki, Japan. Little Boy was the first gun-type nuclear bomb used, and while it seemed likely to work, it was at that time untested. Hundreds of thousands died at Hiroshima and Nagasaki, prompting a Japanese surrender a week later.

[Figure: U.S. troops witness an atomic bomb test. Atomic weaponry shaped the international political landscape of the cold war.]
Future warfare would have to reckon with the existence of nuclear weapons. Though the Soviets had been excluded from the Manhattan Project and the United States was the only country capable of producing nuclear arms, the Soviet Union had been receiving information about the project throughout its duration thanks to its espionage efforts. Development of Soviet nuclear weapons had to proceed without the extraordinary brain trust of Los Alamos, but it had the advantage of requiring less innovation. Penal-labor mining provided uranium, and on August 29, 1949, the Soviets successfully detonated First Lightning, a 22-kiloton, Fat Man–style fission bomb. Four years after the start of the “Atomic Age,” and years before U.S. military intelligence had predicted the Soviets would succeed, the nuclear arms race was under way.
In the aftermath of World War II the United States and the Soviet Union emerged as the world’s two superpowers, each commanding enormous resources. New international alliances like the North Atlantic Treaty Organization (NATO) and the Warsaw Pact formed along ideological lines as much as geographical ones. The arms race was, on one level, simple one-upmanship: a competition through which tensions could be worked out, as they were in the Olympics and the space race. Though both the United States and the Soviet Union quickly acquired the means to do catastrophic damage to their opponents, escalation continued as the arms race drove them both. The United States countered the Soviet acquisition of “the bomb” by developing the hydrogen bomb—also called the fusion bomb or the thermonuclear bomb. While the first generation of nuclear weapons used fission, the hydrogen bomb relied on nuclear fusion: the process of light nuclei fusing into a larger nucleus and releasing energy as a by-product, the same process that fuels the Sun.
On May 9, 1951, in the United States, Operation Greenhouse detonated a thermonuclear device code-named George, with an explosive yield of 225 kilotons. Like the Gadget, George was a nondeployable device used to test the basic principles that would be involved in the design of its successors; a year later, Ivy Mike was detonated with a yield of 10.4 megatons (10,400 kilotons), and the hydrogen bomb officially became part of the U.S. nuclear arsenal. The Soviets kept pace, detonating a preliminary fusion device in the summer of 1953 and a full-scale thermonuclear bomb in 1954. The destructive force of these new bombs was commonly measured in megatons, making the first atomic bombs seem almost trivial in comparison. A Fat Man–type bomb could eliminate a smaller city like Nagasaki; a hydrogen bomb could eliminate a major city and its infrastructure and produce considerably more fallout.
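The jump from kiloton to megaton yields can be made concrete with the standard energy convention for TNT equivalence (one kiloton of TNT is defined as 4.184 × 10¹² joules). The sketch below uses only the yields quoted in this article; it is an illustration, not historical data beyond those figures:

```python
# Yield comparison using the standard convention:
# 1 kiloton of TNT = 4.184e12 joules (definition).
KILOTON_J = 4.184e12

# Yields as quoted in the article, in kilotons.
yields_kt = {
    "Gadget (Trinity, 1945)": 18,
    "First Lightning (1949)": 22,
    "George (1951)": 225,
    "Ivy Mike (1952)": 10_400,  # 10.4 megatons
}

trinity_kt = yields_kt["Gadget (Trinity, 1945)"]
for name, kt in yields_kt.items():
    energy_j = kt * KILOTON_J
    print(f"{name}: {kt:>7,} kt = {energy_j:.2e} J "
          f"({kt / trinity_kt:.0f}x Trinity)")
```

Run as written, this shows Ivy Mike releasing roughly 4.4 × 10¹⁶ joules, almost 580 times the Trinity blast, which is why megatons rather than kilotons became the working unit for thermonuclear weapons.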
Secrecy was part of the world of nuclear weaponry from the start. In the cold war years, new policies regulated information relevant to the design of nuclear arms: The 1946 Atomic Energy Act put nuclear technology under civilian control and banned divulging such information to any foreign nation. Eight years later, the Atomic Energy Act of 1954 went substantially further: All nuclear technology was “born secret,” which is to say that it was automatically classified without need for evaluation. Nuclear technology was deemed to be a matter of national security. It is widely argued that the born secret policy is unconstitutional, but the Supreme Court has yet to hear a case pertaining to it.
Throughout the 1950s much of the innovation of the arms race concerned methods of deployment. Early examples included the B-47 Stratojet and B-52 Stratofortress—strategic U.S. bomber jets designed to penetrate Soviet borders—and the interceptor aircraft designed to find and destroy such bombers. Bomb deployment was also simplified, requiring fewer specialists and bringing the utility of nuclear weapons closer to that of conventional explosives, which required limited instruction on the part of the soldiers deploying them. Intercontinental ballistic missiles (ICBMs) allowed rival nations to deliver nuclear payloads without needing a pilot at all; the United States proceeded to build missile installations throughout Europe, while the threat of Soviet missiles in Cuba sparked the Cuban missile crisis of 1962.
Some attention, of course, was paid to defense against nuclear attacks, not only the fallout shelters and cautionary films that became prevalent in the 1950s, but also antiballistic missiles to shoot down ICBMs before they struck their target, anti-aircraft artillery and fighter jets to intercept bombers, and increasingly sophisticated radar systems to detect incoming attacks. These preventative measures could not keep up with the offensive capabilities of a nuclear arsenal, though, and the development of nuclear submarines, which could launch a missile from the ocean—far from tactical targets—provided each side in the cold war with second-strike capability: the ability to ensure a retaliatory attack in the event of the other side’s first strike. Given the destructiveness of megaton bombs and the amount of fallout that would result from their wide-scale implementation, second-strike capability led to a state of what was called mutually assured destruction (MAD).
As a defense strategy, MAD calls for the development and stockpiling of weapons of mass destruction in order to force a situation in which it is infeasible for either side to attack, because of the certainty of devastating retaliation. What may have at first seemed counterintuitive was nevertheless a critical component of cold war thinking that led to the détente, or eased tensions, of the 1970s. Meanwhile, as the United States and the Soviet Union remained dominant in the nuclear field, other nations developed programs of their own: Among the NATO allies, the United Kingdom and France both became nuclear powers by the end of 1960, while the People’s Republic of China followed suit in 1964, at a time when Sino-Soviet relations were at enough of an ebb that China was a potential threat to either the United States or the Soviet Union.
During détente, the 1968 Nuclear Non-Proliferation Treaty (NNPT) was signed by a number of states, though it was not until 1992 that France and the People’s Republic of China signed. The NNPT limited the spread of nuclear capability by permitting only the five states that already possessed nuclear weapons—which also happened to be the five permanent members of the United Nations Security Council—to own them. It further permitted the use of nuclear power by other states, but only under conditions that would limit their ability to manufacture nuclear weapons. Any states not explicitly granted rights under this treaty would have to apply to the International Atomic Energy Agency, a regulatory branch of the United Nations, to pursue any nuclear technology activity.
The easing of tensions also led to armament control treaties in the late 1960s and early 1970s. SALT I (Strategic Arms Limitation Talks), held in Helsinki, Finland, between the Soviet Union and the United States, restricted the production of strategic ballistic missile launchers and submarine-launched ballistic missiles, and further treaties limited nuclear testing and forbade nuclear weapons in space. Détente ended when the Soviets invaded Afghanistan in 1979. When Ronald Reagan was elected president in 1980 he returned anti-Soviet rhetoric to pre-détente levels, calling for massive escalations in order to force the Soviet Union into economic collapse as a result of defense spending.
One of his initiatives threatened the balance of MAD: The Strategic Defense Initiative, nicknamed Star Wars, would employ a space-based system to deflect missiles en route to the United States, thus limiting the Soviet second-strike capability. Though the system was never fully developed or employed, aspects of it were adopted by every subsequent administration, even after the cold war ended.
The Strategic Arms Reduction Treaties (START) further limited nuclear arms, and periodic treaties continue to reduce the number of nuclear warheads in operation. The arms race effectively ended when the Soviet Union collapsed in 1991. Though no one possesses the resources of the cold war superpowers, the rest of the world has begun to catch up to the nuclear states: In the post–cold war years India, Pakistan, and North Korea have all tested nuclear devices (North Korea withdrew from the NNPT in 2003; India and Pakistan never signed), and more are sure to follow. The International Atomic Energy Agency estimates that, as of 2006, 40 nonnuclear countries possessed the capability to manufacture nuclear weapons if they desired to.