In security, there are three types of unpleasant experience. You can be attacked and compromised, you can be attacked and severely compromised, and you can be given the once over by a cryptologist.

The latest example of the third is the slowly crumbling edifice of SHA-1, the rather inscrutable hash function widely used in digital signatures and VPN systems. After a year of slow battering, a team of Chinese researchers confirmed a few weeks ago that SHA-1 can be subjected to a collision attack more quickly than previously demonstrated.
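To see what a collision attack means in practice, consider a toy sketch (not the researchers' technique): a collision is simply two different inputs that produce the same digest. Finding a full SHA-1 collision takes enormous computation, so the example below truncates the digest to three bytes, shrinking the search space until a birthday-paradox search finds a clash in moments. The function names here are illustrative, not from any standard library.

```python
import hashlib
from itertools import count

def truncated_sha1(data: bytes, nbytes: int = 3) -> bytes:
    """Return the first nbytes of the SHA-1 digest of data."""
    return hashlib.sha1(data).digest()[:nbytes]

def find_collision(nbytes: int = 3):
    """Birthday search: hash successive messages until two share a digest."""
    seen = {}
    for i in count():
        msg = str(i).encode()
        d = truncated_sha1(msg, nbytes)
        if d in seen:
            return seen[d], msg, d  # two distinct messages, one digest
        seen[d] = msg

m1, m2, digest = find_collision()
assert m1 != m2
assert truncated_sha1(m1) == truncated_sha1(m2)
```

A brute-force birthday search against the full 160-bit SHA-1 output is hopeless; the significance of the cryptanalytic results is that they reduce the work needed for a genuine collision well below that brute-force bound.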

It’s the cryptographic equivalent of burning Rome with a series of small bonfires, but it ends up mattering in a complicated sort of way.

The consolation is that, inconvenient as they appear, these mathematical attacks serve a useful purpose. They expose the insecurity, however hypothetical, of cryptographic schemes on whose strength so much of computing depends. Few people worry about cryptography on a day-to-day basis, but take it away and the commercial Internet would become untenable.

The remarkable thing is that this white hat mathematical debunking goes back to the very birth of modern computing. Before there were PCs, minicomputers, and even mainframes, there were the machines that helped encode (and famously decode or “crack”) secure communications in World War 2.

The whole concept of the computer was born out of the application of cryptography, and, whether we realise it or not, cryptography remains fundamental to computing today.

Nonetheless, the work of the World War 2 cryptologists did underscore one thing that has had profound consequences for computing culture: any such system can be broken, hacked, cracked, decrypted. The notion of subversion and counter-subversion was written into computing from the word go.

The key issue was stealth. If a cipher had been cracked then this fact had to be kept secret or the knowledge gained would be useless. As the troubles of SHA-1 demonstrate, this dynamic is still at work.

It does not matter whether a cryptographic scheme is actually known to be compromised; what counts is whether it could be broken under specified conditions. This sensitivity to the theoretical is a complicated form of intellectual insurance, a necessary check on the complacency of a system’s creators.

As the Germans found to their cost in World War 2, securing a system with complex mathematical coding schemes is a powerful tool only if you have some way of knowing when that tool has been blunted for good.