How do you choose a cryptographic algorithm and key size – Part 1

This is one of those subjects that you could argue about every day, but the basics are actually very simple. It is generally agreed that the strength of a cryptographic algorithm can be defined by the level of computational effort necessary to break it. If the algorithm is perfect, that effort is the work required to exhaust the key space: in practice you try the algorithm with every possible key until you find the one that maps a known plaintext to its matching ciphertext, the brute-force attack.
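As a back-of-the-envelope illustration of what exhausting the key means, the sketch below counts the 56-bit key space and estimates the search time at an assumed trial rate (the rate is purely illustrative, not a measured figure).

```python
# Rough cost of a brute-force attack on a 56-bit key.
# The trial rate is an illustrative assumption, not a measured figure.
keyspace = 2 ** 56                               # number of possible 56-bit keys
trials_per_second = 1e9                          # assumed attacker speed
avg_seconds = keyspace / 2 / trials_per_second   # on average half the keys are tried
print(f"{keyspace:.2e} keys, about {avg_seconds / (3600 * 24 * 365):.1f} years "
      f"at {trials_per_second:.0e} keys/second")
```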

So in 1975 the US National Bureau of Standards (now NIST) published the proposed DES specification with a key size of 56 bits. For years the security community argued over whether the NSA had tampered with the algorithm to introduce a back door for its own convenience. Some twenty years or so later it became clear that the NSA had in fact helped improve the design to resist differential cryptanalysis, an attack that was not publicly known at the time, and that in practice a brute-force search of the key was the only practical way of breaking the algorithm.

What that allows us to do is take 1975 as a reference point and argue that 56 bits of complexity was, at that time, deemed adequate for typical commercial applications, which usually have relatively short lifetimes compared, for example, with sensitive government data. If we assume the famous Moore's law for improvements in computational processing power, a doubling every 18 months for the same cost, then we can calculate the number of bits of complexity required for each successive year. In 2015 the equivalent figure would be about 82 bits of computational complexity.
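The arithmetic behind that figure is simple enough to sketch: one extra bit of complexity is needed for every 18-month doubling of processing power.

```python
# Moore's-law extrapolation of the 1975 baseline of 56 bits:
# one additional bit of key complexity per 18-month doubling period.
def equivalent_bits(year, base_year=1975, base_bits=56, doubling_years=1.5):
    """Key strength (in bits) equivalent to 56 bits in 1975."""
    return base_bits + (year - base_year) / doubling_years

print(equivalent_bits(2015))   # 82.67, i.e. about 82 bits as stated above
```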

The table below shows the computational complexity of the well-known algorithms along with the recommendations from NIST (SP 800-57, Recommendation for Key Management – Part 1: General; July 2012). I have added a 2015 row to show the level at which designers might operate.
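For reference, the comparable-strength figures that NIST publishes in SP 800-57 can be sketched as a simple lookup; the values below are taken from that publication, while the 2015 design point is the roughly 82-bit level derived above rather than a NIST row.

```python
# Comparable security strengths as published in NIST SP 800-57 Part 1.
# Each entry maps a strength in bits to the symmetric algorithm, RSA modulus
# size and ECC key size that NIST rates as roughly equivalent.
NIST_COMPARABLE_STRENGTHS = {
    80:  {"symmetric": "2-key TDES", "rsa_bits": 1024,  "ecc_bits": 160},
    112: {"symmetric": "3-key TDES", "rsa_bits": 2048,  "ecc_bits": 224},
    128: {"symmetric": "AES-128",    "rsa_bits": 3072,  "ecc_bits": 256},
    192: {"symmetric": "AES-192",    "rsa_bits": 7680,  "ecc_bits": 384},
    256: {"symmetric": "AES-256",    "rsa_bits": 15360, "ecc_bits": 512},
}
# The 2015 design point discussed above (~82 bits) sits just above the 80-bit row.
```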

In analysing this table there are some fundamental concepts to be appreciated:

  • Picking a figure and doubling it is not the right answer. Increasing the key size of an algorithm carries an overhead that can be quite deceptive, not only in the required processing power but also in memory storage and in the size of the messages being protected. You need to consider both the client end, which might be a smart card or mobile phone, and the server end, which will normally involve a Hardware Security Module (HSM).
  • Far more important than key size is the ability to change the key size (and/or algorithm) transparently, as shown in the sketch after this list.
  • The security vulnerability of a transaction-based system is rarely a function of the algorithm itself; it is invariably weakened by the particular implementation of the cryptographic algorithm or of the transaction protocol.
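To make the second point concrete, the sketch below shows one illustrative way of keeping the algorithm and key size behind a named suite so that either can be changed by configuration rather than by touching application code. The suite names and registry are hypothetical, not a real API.

```python
# Illustrative "crypto agility" sketch: applications ask for a named suite
# instead of hard-coding an algorithm and key size, so both can be changed
# transparently by updating the registry.
from dataclasses import dataclass

@dataclass(frozen=True)
class CipherSuite:
    symmetric: str      # e.g. "AES"
    key_bits: int       # e.g. 128
    signature: str      # e.g. "ECDSA-P256"

SUITES = {
    "legacy":  CipherSuite("3TDES", 168, "RSA-1536"),
    "current": CipherSuite("AES", 128, "ECDSA-P256"),
    "future":  CipherSuite("AES", 256, "ECDSA-P384"),
}

def negotiate(requested: str) -> CipherSuite:
    """Look up the suite by name; callers never hard-code key sizes."""
    return SUITES[requested]

print(negotiate("current"))
```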

Based on the table, if you were creating a new design today you would almost certainly use AES-128 and ECDSA at 256 bits to give you a very future-proof design (ignoring the likelihood of quantum computers). However, in practice most organisations will have legacy systems built up over many years, in which case DES and RSA are likely to dominate. As the table shows, you can carry on using these algorithms, but as of today you should move to 3TDES (three-key triple DES) and RSA with at least 1536 bits, though you would be hard pushed to justify going bigger than 2048 bits because of the extra overheads.
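As an illustration of those recommendations, here is a minimal key-generation sketch using the pyca/cryptography package (an assumption; any equivalent library would do): an AES-128 key for new symmetric designs, an ECDSA key on a 256-bit curve, and a 2048-bit RSA key for legacy compatibility.

```python
# Minimal key-generation sketch at the sizes discussed above,
# using the pyca/cryptography package (pip install cryptography).
import os
from cryptography.hazmat.primitives.asymmetric import ec, rsa

aes_128_key = os.urandom(16)                         # 128-bit AES key
ecdsa_key = ec.generate_private_key(ec.SECP256R1())  # ECDSA on a 256-bit curve (P-256)
rsa_key = rsa.generate_private_key(                  # 2048-bit RSA for legacy systems
    public_exponent=65537,
    key_size=2048,
)

print(len(aes_128_key) * 8, ecdsa_key.curve.key_size, rsa_key.key_size)
```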

Dr David Everett
