From the beginning of time, there have been three ways to "authenticate" people:
The first way is to identify who they are: by verifying some suitably unique biometric feature, we recognise the person. The most common example in day-to-day life is the straightforward recognition of friends, family and colleagues on the basis of their familiar features (face, voice, etc.). This is actually a very secure method of authentication. It would be virtually impossible, for example, for anyone to take the place of your spouse, child or parent and fool you into believing their impersonation for more than a few seconds. Unfortunately, when communicating on the web, it is a) often the case that you'll be communicating with someone you don't know and cannot personally identify, and b) technically difficult (to date) to let you see or hear the person well enough for such simple and effective biometrics to be used.
Hence, if we use biometrics on the web (or anywhere amongst strangers), it must be on the basis of submission of the biometric data to some kind of non-human check which we all agree to trust. This is a much larger problem than it may seem at first glance. If the authentication is remote, then the following questions arise:
- How do we know that the biometric data is coming from the person it belongs to? (it could have been captured electronically from a previous session and stored for feeding to the system)
- How do we know that the person is alive? (fingerprints, for example, will work just as well if taken from dead or unconscious hands)
- How do we know that the person is not being forced to use their biometric identifier against their will?
- What are the risks of false positive and false negative identification? These are a particular problem with biometrics as it is close to impossible to get a consistent biometric reading. Human beings have an innate ability to recognise, for example, faces under a huge range of visibility conditions and from almost any angle. Complex software can emulate that skill to a degree, but it is by no means as powerful as our own inbuilt recognition systems.
Note, for example, how these questions (apart from the last, to some extent) do not arise in the context of iris checking at Heathrow airport. In that situation, the airport's own personnel are able to witness that the person being checked is obviously alive and consenting to the check. It is the remote checking required for web transactions that carries these problems.
Biometric technologies may eventually solve these problems, but they haven't done so yet. The best claim we can currently make for biometric tools used remotely, therefore, is that they are equivalent to the second method of authentication.
The second way to authenticate people is by what they have. Early examples included the missing piece of a broken object; more recently, a good one has been the other half of a banknote. Again, these examples work well if the person requiring authentication is physically present and in possession of the authenticating token. Today we try to extend that to cover the need for remote authentication, so we provide allegedly unforgeable smart cards and the like, or, as intimated above, biometrics.
Note, though, that if we rely on smart cards for remote authentication, a similar set of questions arises:
- How do we know the smart card signals or responses are coming from the smart card and not from a spoofed copy?
- How do we know that the smart card is in the possession of the right person?
- Even if it is in the right hands, how do we know it's not being used under duress?
False negatives and positives are less of a problem with a digital system. Unlike the inconsistent data we get with biometrics, there is no need for interpretation with digital data: it is either the right response or it is not. Nevertheless, although false positives should be very rare, false negatives can arise from damage to or malfunction of the card. If that happens at the wrong time, it could be disastrous.
The third way to identify people is by what they know. PINs and passwords are the examples we are all familiar with. PINs tend to be limited to four digits, allowing only 10,000 possible combinations, a trivial task for a computer to crack if the protected system allows unlimited attempts. Passwords in themselves are not necessarily insecure. The trouble is that human beings can't remember good, secure passwords and thus fall back on daughters' names, phone numbers, vehicle registration numbers and other easy-to-attack examples.
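To see just how trivial 10,000 combinations are, the sketch below exhausts the entire four-digit PIN space; on ordinary hardware this completes in milliseconds. The checking function is a hypothetical stand-in for any system that allows unlimited attempts.

```python
import itertools

# Hypothetical oracle standing in for a system with unlimited attempts.
SECRET_PIN = "4831"

def check_pin(candidate: str) -> bool:
    return candidate == SECRET_PIN

# Exhaust the entire four-digit space: only 10,000 candidates.
for digits in itertools.product("0123456789", repeat=4):
    candidate = "".join(digits)
    if check_pin(candidate):
        print(f"PIN found: {candidate}")
        break
```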
As you can see, none of these methods offers a foolproof way to authenticate someone at a distance. Strong authentication tries to improve security by using more than one of these methods simultaneously. For example, a system might insist on both a smart card and a password (and perhaps even a session PIN). The logic is that a stolen card can't be used by anyone who doesn't also know the password/PIN, and no-one who only knows the password/PIN but doesn't have the card can fool the system either.
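As an illustration of that logic, the sketch below grants access only when both factors pass. The card check and password store are hypothetical stand-ins, not a description of any particular product.

```python
import hashlib
import hmac
import secrets

# Hypothetical stores, provisioned out of band.
CARD_SECRET = secrets.token_bytes(32)          # shared with the genuine card
PASSWORD_HASH = hashlib.sha256(b"correct horse").digest()

def card_responds_correctly(challenge: bytes, response: bytes) -> bool:
    # The genuine card computes an HMAC over a fresh challenge, so a
    # response replayed from an earlier session won't match.
    expected = hmac.new(CARD_SECRET, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

def password_correct(password: str) -> bool:
    # A real system would use a slow password hash; SHA-256 keeps the
    # sketch short.
    return hmac.compare_digest(
        hashlib.sha256(password.encode()).digest(), PASSWORD_HASH)

def authenticate(challenge, card_response, password) -> bool:
    # Both factors must pass: a stolen card without the password fails,
    # and the password without the card fails.
    return (card_responds_correctly(challenge, card_response)
            and password_correct(password))

# Usage: a genuine card answering a fresh challenge, plus the password.
challenge = secrets.token_bytes(16)
response = hmac.new(CARD_SECRET, challenge, hashlib.sha256).digest()
print(authenticate(challenge, response, "correct horse"))  # True
```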
A1's strongest authentication requirement is to secure the channel between manufacturers and A1 prior to the upload of VRs. We have one advantage over most remote authenticators: we will always have visited the manufacturer's site prior to any VR upload, in order to set up the software and systems necessary for them to create VRs in the first place. This means we don't have a problem with key distribution, which in turn means our own trusted staff, monitored by the manufacturer's own trusted staff, can securely install sufficient one-time keys (and even a large one-time pad) to deal with several thousand authenticated sessions.
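A minimal sketch of that on-site installation step, assuming each authenticated session simply consumes the next key from a pre-generated schedule; the file names and record format are hypothetical.

```python
import json
import secrets

def generate_key_schedule(n_sessions: int) -> list[str]:
    # One fresh 256-bit key per future session, generated on site.
    return [secrets.token_hex(32) for _ in range(n_sessions)]

# Identical copies are installed at the manufacturer and at A1;
# each side also tracks how many keys it has consumed so far.
schedule = generate_key_schedule(5000)
with open("manufacturer_keys.json", "w") as f:
    json.dump({"next_index": 0, "keys": schedule}, f)
with open("a1_keys.json", "w") as f:
    json.dump({"next_index": 0, "keys": schedule}, f)
```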
This doesn't provide cast-iron protection against the wrong people using the system to upload VRs, but it does at least allow us to be sure that the data is coming from the right machine. Given that we will always feed back the results of an upload to another (randomly selected) authorised person trusted by the manufacturer, and that we will also use the best local authentication tools available, including biometrics and strong passwords, we feel reasonably confident that we will at least be somewhat harder to fool than most systems.
We will use a four-key exchange system which effectively validates the manufacturer's system to A1 and validates A1 to them.
The manufacturer wishing to upload VRs will need to take the following steps (a sketch of the exchange follows the list):
- Log on to their local workstation and identify themselves to the machine. This will be by means of a smart card and/or biometric identifier combined with a strong password.
- In order to initiate an upload, they will send the next "common" key.
- When A1 receives this key, it checks that the key is valid, has not previously been used, and is the next key in the sequence.
- If the key is valid in all respects, A1 returns a matched key which the manufacturer can use, together with the other half of their pair, to extract a missing key which, when hashed, matches the common key. This proves to the manufacturer that they are communicating with A1, because we should be the only people on the planet holding the relevant key.
- They then send either the missing key or their second key to A1 as proof that they too possess the right keys.
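The text above doesn't pin down the exact key construction, so the sketch below makes two assumptions: the "pair" is an XOR split of the missing key, and the common key is its SHA-256 hash. Every name here is hypothetical; the point is only to show that the exchange is mechanically realisable.

```python
import hashlib
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# --- On-site provisioning (one record per future session) ---
# missing : random secret, never transmitted until step 5
# half_m  : manufacturer's half of the pair
# half_a1 : A1's matched key (half_m XOR half_a1 == missing)
# common  : SHA-256 of missing, held by both sides
def provision(n_sessions):
    mfr, a1 = [], []
    for _ in range(n_sessions):
        missing = secrets.token_bytes(32)
        half_m = secrets.token_bytes(32)
        half_a1 = xor(missing, half_m)
        common = hashlib.sha256(missing).hexdigest()
        mfr.append({"half": half_m, "common": common})
        a1.append({"half": half_a1, "common": common, "used": False})
    return mfr, a1

mfr_keys, a1_keys = provision(3)
session = 0

# 1. Manufacturer initiates the upload by sending the next common key.
common_sent = mfr_keys[session]["common"]

# 2. A1 checks it is valid, unused, and next in sequence.
record = a1_keys[session]
assert record["common"] == common_sent and not record["used"]
record["used"] = True

# 3. A1 returns its matched key.
half_from_a1 = record["half"]

# 4. The manufacturer recombines the halves to extract the missing key
#    and hashes it; a match with the common key proves the responder
#    holds A1's half of the pair.
missing = xor(half_from_a1, mfr_keys[session]["half"])
assert hashlib.sha256(missing).hexdigest() == common_sent

# 5. The manufacturer sends the missing key back as proof that it,
#    too, held the right key material.
assert hashlib.sha256(missing).hexdigest() == a1_keys[session]["common"]
print("mutual authentication complete for session", session)
```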
The keys are each used once only. There is thus little risk from interception of the keys because, having been used, they can't be used again. This is a dual authentication process as, theoretically, neither side can authenticate itself to the other unless they both share the relevant keys.

Of course, it is possible that an attacker will have stolen copies of the keys. However, if the attacker pretends to be a manufacturer, the exploit should fail on later verification (when A1 sends details of the successful upload to a different, randomly selected, authorised representative of the manufacturer). If the attacker is pretending to be A1, they will have to be in a position to access secure space on our system to which all the manufacturer's messages are sent. They will also have to defeat or disable the automatic logging devices and software. All of which a skilled insider could probably do, but then we have to ask the question: why bother?

Given that the data being uploaded will not contain significant value, what would be the point of the attack? There are two potential answers. First, straightforward commercial espionage: granted, the attacker can't get hold of VRs, but he can get bulk shipping data which, over time, could amount to data of significant value to competitors. Second, many hackers might be tempted to carry out the attack simply for the challenge.
Incidentally, in either case, if an attacker has managed to compromise the keys, the attack will be exposed the next time the manufacturer attempts a real upload. The manufacturer, not knowing their keys have been copied, will re-use a key and the alarm will be triggered. Alternatively, at the A1 end, the manufacturer will appear to have skipped a key and that too will trigger the alarm.
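A sketch of how A1's end might raise those two alarms, continuing the hypothetical data structures from the exchange sketch above:

```python
class SecurityAlarm(Exception):
    """Raised when the one-time key sequence shows signs of compromise."""

def check_common_key(a1_keys, expected_index, common_sent):
    # a1_keys is the per-session list from the exchange sketch above.
    for i, record in enumerate(a1_keys):
        if record["common"] == common_sent:
            if record["used"]:
                raise SecurityAlarm("key re-used: keys may have been copied")
            if i > expected_index:
                raise SecurityAlarm("key skipped: an earlier key was "
                                    "consumed by someone else")
            return i
    raise SecurityAlarm("unknown key presented")
```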
Finally, assuming that the authentication proceeds without an alarm, the common key used to initiate the transaction is stored alongside the audit-trail record of each successful upload. This can be checked, periodically, at both ends, to ensure that both the manufacturer and A1 have recorded the same keys alongside the same uploaded batches of data.
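That periodic cross-check might look something like the sketch below, comparing both sides' records of batch and common key; the record format is again a hypothetical illustration.

```python
def reconcile(mfr_audit, a1_audit):
    # Each audit trail is assumed to map a batch identifier to the
    # common key recorded for that upload, e.g. {"batch-0017": "ab12..."}.
    mismatches = []
    for batch_id in sorted(set(mfr_audit) | set(a1_audit)):
        if mfr_audit.get(batch_id) != a1_audit.get(batch_id):
            mismatches.append(batch_id)
    return mismatches

# Any batch where the recorded keys differ (or exist on only one side)
# is flagged for investigation.
assert reconcile({"batch-1": "ab12"}, {"batch-1": "ab12"}) == []
assert reconcile({"batch-1": "ab12"}, {"batch-1": "ff00"}) == ["batch-1"]
```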