Why Authenticity Is Not Security

Code signing aims to solve two specific problems. The first problem is how to identify the author of code. The second problem is how to verify that code has not been modified after release. Truly solving one or the other would be a good step forward. That both may be solved simultaneously is a great step forward. There is a common misconception, however, that code signing represents a giant leap forward by making the signed code itself "secure". There is not a little danger in trusting code simply because it is signed, without regard to the security of its development before release.

First, a primer on how code signing works. Code signing calculates a cryptographic hash of the code, which is then digitally signed by the author and appended to the code itself. After release, the appended hash can be compared to a newly calculated hash in order to detect modification of the code. The digital signature can be compared to the author's certificate in order to also detect modification of the appended hash. Optionally, the author's certificate may be compared against a certificate authority to detect an inauthentic certificate. Remember that this process was only designed to guarantee the authenticity of the code: that it has not been modified after release and that it was released by the expected author. None of those steps, however, can attest to the safety of the signed code; conversely, code that is not signed is not necessarily less safe.
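The flow above can be sketched in a few lines of Python. This is a simplified sketch, not a real implementation: the standard library has no asymmetric signing, so an HMAC stands in for the author's digital signature, with the hypothetical AUTHOR_KEY playing the role of the author's private key. The structure of the two checks is the same as in real code signing.

```python
import hashlib
import hmac

# Hypothetical stand-in for the author's private signing key.
AUTHOR_KEY = b"author-private-key"

def sign(code: bytes) -> bytes:
    """Hash the code, 'sign' the hash, and append both to the code."""
    digest = hashlib.sha256(code).digest()                              # 32 bytes
    signature = hmac.new(AUTHOR_KEY, digest, hashlib.sha256).digest()   # 32 bytes
    return code + digest + signature

def verify(blob: bytes) -> bool:
    """Recompute the hash to detect modified code, then check the
    signature to detect a modified hash."""
    code, digest, signature = blob[:-64], blob[-64:-32], blob[-32:]
    if hashlib.sha256(code).digest() != digest:
        return False  # code was modified after release
    expected = hmac.new(AUTHOR_KEY, digest, hashlib.sha256).digest()
    return hmac.compare_digest(signature, expected)
```

Here `verify(sign(code))` succeeds, and flipping any byte of the signed blob makes it fail. Note what the check proves: only that the code is unmodified and came from the key holder, nothing about whether the code is safe.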

Safety is introduced into the code signing process as a byproduct of an organization's criteria for signing code and revoking certificates. For example, developers may request that Microsoft sign their drivers through Microsoft's Windows Logo program, to indicate that the drivers have passed Microsoft's tests for reliability and compatibility. In addition, Windows drivers on the x64 architecture must be signed by the author or they will be prevented from loading, and a driver whose signing certificate has been revoked is likewise blocked. The signing certificate could be revoked in the event the driver is observed to be misbehaving, but by then the damage is already done and the most one can do is limit its scope. Microsoft could decline to sign a driver that fails to pass their tests, but security is not something for which those tests check. For signed code to be trusted as secure, and not merely as authentic, it must be reviewed for vulnerabilities.

In fact, the code signing process could incorporate an actual level of security. Large software vendors could require security reviews as part of their code signing process. In addition, large organizations with their own security team could take on the responsibility of reviewing the security of applications they use, regardless of whether the author signed them, and re-distribute them internally under the organization's own signature. Smaller organizations might outsource this responsibility to a security vendor. Finally, security vendors could counter-sign code as a seal of approval to encourage adoption of a product.
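A counter-signature of this kind is just a second, independent signature over the same code. The sketch below uses the same HMAC stand-in for asymmetric signatures as before, and both keys are hypothetical; the point is that trust now requires the security team's attestation in addition to the author's:

```python
import hashlib
import hmac

# Hypothetical keys; an HMAC stands in for each party's signature.
AUTHOR_KEY = b"author-private-key"
SECURITY_TEAM_KEY = b"security-team-private-key"

def signature(key: bytes, code: bytes) -> bytes:
    """'Sign' the SHA-256 hash of the code with the given key."""
    return hmac.new(key, hashlib.sha256(code).digest(), hashlib.sha256).digest()

def countersign(code: bytes) -> dict:
    """The author's signature attests authenticity; the security team's
    counter-signature attests that the code passed a security review."""
    return {
        "code": code,
        "author_sig": signature(AUTHOR_KEY, code),
        "review_sig": signature(SECURITY_TEAM_KEY, code),
    }

def trusted(pkg: dict) -> bool:
    """Trust requires both signatures to cover the same code."""
    return (
        hmac.compare_digest(pkg["author_sig"], signature(AUTHOR_KEY, pkg["code"]))
        and hmac.compare_digest(pkg["review_sig"], signature(SECURITY_TEAM_KEY, pkg["code"]))
    )
```

An authentic-but-unreviewed package would carry only the author's signature and fail the check, which is exactly the distinction the code signing process by itself does not make.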

The primary advantage of this approach is enabling end-users to decide on the trustworthiness of an application based on whether it has been signed by a trustworthy security team, and not merely by the author, if it is signed at all. The secondary advantage is the possibility of automatically enforcing these trust decisions at an administrative level, rather than leaving them to careless users on a case-by-case basis. That, however, is a post for another time....