From marbles to missiles: a new approach to warhead verification

Posted by Andreas Persbo on Jul 25 2014
Russell Moul and Alberto Muti, London
 
Sitting at the heart of nuclear verification is a paradox: international inspectors need to gain the highest possible confidence that an object provided for inspection is what it is claimed to be – for example, a nuclear warhead. At the same time, however, the inspectors must not learn anything about the object that could be considered classified, as any information about the design of nuclear weapons can potentially be used for proliferative purposes. 
 
Currently accepted methods for limiting inspectors’ exposure to proliferation-sensitive data – for example, the mass and isotopic composition of the weapon’s fissile component, or its configuration and shape – involve the use of ‘information barriers’. In an information barrier device, the information received through detectors (usually neutron and gamma-ray spectrometers) is analysed by an automatic system to assess whether the examined object corresponds to the profile of a nuclear weapon. The entire device is isolated from external controls and inputs (save for the data collected by the detectors), and only emits a binary output – a 0 or a 1, representing a simple ‘yes’ or ‘no’.
 
When applied practically, information barriers can be deployed in one of two ways. The first is known as the ‘template’ approach, in which the information barrier essentially checks whether the examined item fits a pre-programmed template, supplied by the host to serve as a reference point. The second is called the ‘attribution approach’, which checks whether certain characteristics are present in the examined item: for example, is plutonium present (yes/no); is it of a certain isotopic composition (yes/no); and is it of a certain mass (yes/no)?
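 
To make the attribution approach concrete, the sketch below strips an information barrier down to its essential logic: sensitive measurements go in, and only a single bit comes out. This is a minimal illustration, not a description of any fielded system – the attribute names and threshold values are invented.

    # Minimal sketch of an attribute-type information barrier.
    # All attributes and thresholds are hypothetical, for illustration only.

    def information_barrier(pu_mass_kg: float, pu240_fraction: float) -> int:
        """Return 1 if every attribute test passes, 0 otherwise.

        The sensitive inputs never leave this function; the inspector
        sees only the single binary output."""
        plutonium_present = pu_mass_kg > 0.0      # attribute 1: plutonium present?
        weapons_grade = pu240_fraction < 0.10     # attribute 2: isotopic composition
        above_threshold = pu_mass_kg >= 2.0       # attribute 3: minimum mass
        return int(plutonium_present and weapons_grade and above_threshold)

    print(information_barrier(4.0, 0.06))  # 1 - all attribute tests pass
    print(information_barrier(4.0, 0.25))  # 0 - isotopic test fails

In a real device this logic would run on isolated, jointly certified hardware; the point here is only that the output channel carries a single yes-or-no bit.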
 
Information barrier systems allow inspectors to conduct evaluations based on sensitive data without ever coming into direct contact with the data itself. However, these systems still collect sensitive information, and a device could be secretly altered to relay the information via wireless transfer, or to store it for later retrieval.
 
In order to address this risk, a team of scientists from Princeton University, led by Dr Alexander Glaser, has developed a new approach based on principles of cryptography. Their work was recently published in Nature, the international journal of science.
 
The method
Glaser and his colleagues have devised a method that overcomes the inherent vulnerability of collecting sensitive information: it never measures that information directly. The method, known as ‘zero-knowledge proofs’ or ‘zero-knowledge protocols’, enables a verifier to confirm the correctness of a proposition made by a host without learning why it is true – the art of knowing more while learning nothing.
 
Zero-knowledge proofs were invented in 1985 by Shafi Goldwasser and fellow cryptographers, and have since become an important feature of modern cryptography. Originally, these proofs were digital protocols for proving statements about mathematical objects. Today, they are commonly used to secure online transactions, ensure privacy in data mining and provide anonymity in electronic elections.
 
Glaser and his colleagues provided a simple explanation of the zero-knowledge protocol by way of a game where Alice, a host, must prove to Bob, a verifier, that two cups contain the same number of marbles (a variable we will call X), without revealing the number itself.
 
Alice can prove her statement by pouring the contents of the cups into two different buckets, each of which she has previously loaded with 100 minus X marbles. By verifying that both buckets now contain precisely 100 marbles, Bob can indirectly confirm that both cups contained X marbles, without ever having to count the marbles originally held in the cups (or in the buckets).
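 
In code, an honest round of the game is nothing more than the addition below – a minimal sketch, with the value of X chosen arbitrarily for illustration.

    # One honest round of the marble game. Alice knows X; Bob only ever
    # sees the final bucket totals, never X itself.
    X = 37                    # Alice's secret: the number of marbles in each cup
    preload = 100 - X         # Alice prepares each bucket with 100 - X marbles

    bucket_1 = preload + X    # cup 1 poured into bucket 1
    bucket_2 = preload + X    # cup 2 poured into bucket 2

    # Bob's entire view of the exchange: both totals equal 100, so the
    # cups must have held the same number of marbles.
    assert bucket_1 == bucket_2 == 100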
 
If Alice is feeling particularly duplicitous, she can attempt to deceive Bob by preloading different numbers of marbles into each bucket, so that both buckets still total 100 marbles at the end even though the two cups do not contain the same number of marbles. Bob, however, can specify which cup is emptied into which bucket, giving him a 50 per cent chance of uncovering Alice’s deception in any single round. If Alice continues to cheat during repeated games, Bob will quickly discover her deception. As Glaser writes: ‘If Alice and Bob repeat this game, say, five times, then if Alice consistently cheats she will be caught with a probability (1 − 2⁻⁵) > 95%.’
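 
A quick simulation makes this arithmetic concrete. The sketch below pits a consistently cheating Alice against a Bob who assigns cups to buckets at random; over many five-round games the catch rate converges on 1 − 2⁻⁵ ≈ 97 per cent. The marble counts are arbitrary illustrative values.

    import random

    def cheating_game(rounds: int) -> bool:
        """Play one game against a cheating Alice; return True if Bob
        catches a mismatched bucket total in at least one round."""
        X, Y = 37, 41                   # the two cups differ: Alice is cheating
        preloads = [100 - X, 100 - Y]   # Alice tailors one bucket to each cup
        for _ in range(rounds):
            cups = [X, Y]
            random.shuffle(cups)        # Bob decides which cup goes into which bucket
            totals = [p + c for p, c in zip(preloads, cups)]
            if totals != [100, 100]:    # a wrong total exposes the cheat
                return True
        return False

    games = 100_000
    caught = sum(cheating_game(5) for _ in range(games))
    print(f"caught in {caught / games:.1%} of five-round games")  # ~96.9%, i.e. 1 - 2**-5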
 
Hypothetically, in the case of weapons verification, a host demonstrates to an inspector that an unknown object, hidden within a container, is the same as a known warhead. The principle is the same as in the marbles example. Neutrons (the marbles) are transmitted through the object, producing a neutron radiograph (N). Neutron radiographs of warheads contain highly classified information, but – and this is where the zero-knowledge protocol comes in – they are never measured directly. Instead, they are recorded on detectors (an array of 367 bubble detectors) that have been preloaded with the negative (-N) of the radiograph (the buckets). If the host has been honest, the measurement will reveal the same count in every detector (-N + N = 0). The preloaded sets of detectors are shuffled and assigned to the objects at random, so if the host has attempted to cheat, there is a significant probability that the resulting image will not be uniform – and a mismatch would expose the attempt.
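 
The sketch below reduces this scheme to its arithmetic core. A radiograph is modelled as a list of per-detector neutron counts, the ‘negative’ preload is taken relative to an assumed ceiling M (physical detectors cannot hold negative counts), and a uniform summed image signals a match. The array size, the value of M and the example radiographs are all invented for illustration.

    # Toy model of the zero-knowledge radiograph comparison.
    # A radiograph is a list of neutron counts, one entry per detector.
    M = 1_000  # assumed ceiling count; the preload is taken relative to it

    def preload(template):
        # The host prepares detectors with the 'negative' of the template radiograph
        return [M - n for n in template]

    def expose(detectors, item):
        # Transmitting neutrons through the item adds its radiograph to the preload
        return [d + n for d, n in zip(detectors, item)]

    def is_uniform(image):
        # A flat image reveals nothing about N itself, only that -N + N cancelled
        return all(count == image[0] for count in image)

    template = [120, 340, 560, 210]      # classified radiograph of the reference warhead
    honest_item = [120, 340, 560, 210]   # an identical object
    fake_item = [120, 340, 500, 270]     # a fake with the same total but a different shape

    print(is_uniform(expose(preload(template), honest_item)))  # True  - accept
    print(is_uniform(expose(preload(template), fake_item)))    # False - mismatch exposed

The random shuffling of the preloaded detector sets plays Bob’s role in the marble game: because the host cannot know in advance which preloaded set will face which object, a preload tailored to a fake is caught with high probability.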
 
Glaser and his team’s solution ostensibly provides statistical assurance that an inspected object is identical to a chosen reference warhead. It offers an interesting method for ensuring that proliferation-sensitive information is not released, either by accident or by design, while simultaneously raising confidence in a host’s declaration.
 
However, this solution is not without its problems, and further work is needed before it can be used on real weapons material. For example, questions remain over how best to ensure that the neutron source is stable enough for both the reference object and the unknown object to be exposed to the same number of neutrons – that is, the same neutron flux. Systematic measurement errors, or even small misalignments caused by environmental factors, may also cause problems that the team has not yet identified.
 
In addition, the zero-knowledge approach still suffers from what is known as the ‘initialisation problem’, since the approach works by comparing the examined object with a ‘template’ object whose features are already known. It follows that it can only verify that the two objects are the same, or at least that they share the same key characteristics. Such an approach cannot, by itself, establish that the template object (and, by extension, the examined object) is what it is declared to be – for example, a nuclear weapon.

The initialisation problem has long been recognised as one of the most significant issues in working with template-based information barrier devices, and unfortunately the zero-knowledge protocol does not offer a solution to it. Nevertheless, if progress is made on practical implementation, zero-knowledge verification protocols could play an important role in preventing the release of proliferation-sensitive information in the future. 
 
