Cryptography - Systems, Standards, Tools

Originally published August 1998
by Carlo Kopp
© 1998, 2005 Carlo Kopp

Cryptography is a technological area of growing importance, and within the next two decades will become a ubiquitous aspect of the networked computing environment. In the first part of this feature we discussed the basic issues in cryptography and some of their direct implications. In this second part we will explore some of the most common algorithms and standards in use.

To place cryptographic standards and algorithms into a context, it is first necessary to understand where and how they are used in a networked environment. The best starting point therefore is to briefly review the commonly used model of a network protocol stack. Most networks, be they proprietary or open, employ four basic layers to carry information. The lowest, "physical" layer, is defined by signalling levels on the transmission medium, and signal modulations used to encode the data.

Traditionally this layer has been insecure, since the signals carried conformed to widely known open standards. More recently, some effort has focussed on strengthening security in this area, particularly in wireless networking. Spread spectrum modulations for data transmission require the equivalent of a secret key encryption scheme for a receiver to be able to demodulate the output of a transmitter.

In cryptographic terms, the spread spectrum code lengths used with current wireless technology are equivalent to very weak ciphers, with very small key spaces, and therefore provide security only against a "casual" eavesdropper. Because this technology is bound to hardware, and an installed hardware base means very slow and expensive changes, we should not expect to see significant improvements in cryptographic security at this level, certainly not in the near future.
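
To put "very small key spaces" in perspective, a back-of-envelope comparison can be made in a few lines of Python. The attacker speed below is a purely hypothetical figure chosen for illustration:

    # Hypothetical brute-force attacker testing one billion keys per
    # second; the figure is arbitrary, chosen only for illustration.
    TRIALS_PER_SEC = 1e9

    for bits in (16, 32, 56):
        seconds = 2 ** bits / TRIALS_PER_SEC
        print(f"{bits}-bit key space: about {seconds:.2e} seconds to exhaust")

At this rate a 16-bit key space falls in well under a millisecond, while a 56-bit DES key space takes over two years; short spreading codes sit firmly at the weak end of that scale.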

In the longer term there is, however, much potential for improvement; the existence of many relatively secure military radio datalink schemes indicates that the technology to do this does exist. The next layer up the stack is the "datalink" layer, which is the first layer of digital encapsulation of data, the typical examples being Ethernet or HDLC packets.

This layer has also been traditionally "non-secure", in that the imperative has been to provide for widely used and commonly understood packet formats to facilitate ease of connectivity. There is no fundamental technological reason why this layer in a network cannot also be made cryptographically secure, by suitable encryption of packet headers and contents. However, to date the imperative has been to secure the higher layers, and this in turn means that we will have to wait some time yet to see secure datalink protocols.

Stepping up the stack, the next layer is networking, responsible for carrying traffic over multiple routers through a WAN, and typified by the ubiquitous IP protocol. This layer has also been an area of cryptographic non-security in times past, although much effort is now being put into support for secure connections under the Secure IP standard (IPsec). A term now in increasing use is the Virtual Private Network (VPN), in which a group of users connects over an insecure WAN or LAN, preferably using securely encrypted traffic at this level.

Importantly, cryptographic security at the physical, datalink and networking levels defeats the realtime sniffer, since supercomputer class performance is hard to fit into a portable device. Moreover, if the scheme in use also securely hides the packet header information, selective filtering of even recorded data can become extremely difficult.

While cryptographic schemes piggybacked on existing technology will not be able to readily exploit this, in the longer term we may see such developments. Sitting above the networking layer will be a diverse range of transmission level protocols, either connection or datagram oriented. Again, this is an area where encryption can be applied, although in current practice it mostly is not.

The interface to the networking layer has however been used for encryption, by inserting a layer of encryption/decryption code between the application interface (eg socket or stream) and the virtual pipe/stream to its peer on another host. A number of schemes, such as SSL or SSH, provide such a facility. Once a secure stream is established, non-secure applications can run over it, and the arrangement will provide the level of cryptographic security characteristic of the algorithms in use.
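
As a minimal sketch of this layering idea (illustrative only, not any particular protocol), a hypothetical wrapper class can sit between the application and a socket-like transport, encrypting on send and decrypting on receive; the cipher object here is an assumed stream cipher exposing matching encrypt/decrypt methods:

    class SecureStream:
        # Hypothetical sketch: the application calls send()/recv() as it
        # would on a plain socket, while the wrapper transparently
        # encrypts and decrypts the byte stream in between.
        def __init__(self, transport, cipher):
            self.transport = transport   # underlying socket or pipe
            self.cipher = cipher         # assumed encrypt()/decrypt() pair

        def send(self, data: bytes) -> None:
            self.transport.send(self.cipher.encrypt(data))

        def recv(self, nbytes: int) -> bytes:
            return self.cipher.decrypt(self.transport.recv(nbytes))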

The final point at which encryption can be used in transmission is at the application level, by employing programs which encrypt and decrypt data before transmission and after reception, respectively.

The classical example here is email encryption, its popularity attested to by the wide range of tools and standards available or proposed for this purpose, such as PGP, S/MIME, MOSS, S-HTTP, or PEM. Whatever level we choose to apply encryption at, several caveats must be observed.

The first is that the more layers or levels at which we can provide cryptographic protection, the more difficult it will be for an opponent to crack our security. Diversity in protection complicates things significantly for an attacker, since multiple keys must be broken, although the penalty is that bandwidth may be impaired and key management can become a headache. Interoperability between sites may also become an issue, since both parties must employ protocols and tools which are reliably interoperable.

To date this has been one of the biggest obstacles to the broader use of encryption for purposes such as email. The next issue is that of the choice of algorithm and key/modulus size. Whatever level we choose to encrypt at, we need to select a specific algorithm, or several algorithms, and employ a key size which will make decryption uneconomical for an opponent in a useful timescale.

With a number of algorithms and schemes in wide use, such as DES, DH, RSA, the RC series ciphers and the MD series hash functions, plus a host of more exotic ciphers, there are ample means of making things hard for an opponent. The issue then becomes, yet again, one of cost to implement.

Common Algorithms

The area of cryptographic algorithms or ciphers is enormous, and pretty much the domain of professional cryptology scholars. However, within the computer industry only a small subset of the available choices is used, since the interoperability constraint tends to produce massive inertia in the deployment of new techniques. This is aside from the reluctance of most governments to allow the ready proliferation of encryption tools, particularly "strong" encryption tools which are painful even for governments to deal with. In practice therefore, the computer industry uses well understood and publicly known algorithms, with key sizes periodically increased to defeat Moore's Law in code cracking hardware.

By far the most common block cipher used in the US industry is the US Data Encryption Standard (DES), devised by IBM and adopted by the US government during the late seventies. DES is a secret key or symmetric block cipher, which converts a 64-bit block of raw data into a 64-bit block of encrypted data, using a 56-bit key. DES is an iterated or Feistel cipher, in which a particular transform algorithm is applied repeatedly to the data block to produce the encrypted block. DES is generally regarded to be a reasonably secure cipher, since a brute force attack requires of the order of 2^55 operations.
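
As a minimal sketch of DES in use, assuming the third-party PyCryptodome package is installed (a single 64-bit block in the simplest ECB mode; real traffic would use a chaining mode):

    from Crypto.Cipher import DES   # PyCryptodome, assumed installed

    key = bytes.fromhex("133457799BBCDFF1")   # 64 bits stored, 56 effective
    block = b"8bytes!!"                        # exactly one 64-bit block

    cipher = DES.new(key, DES.MODE_ECB)
    ciphertext = cipher.encrypt(block)
    assert DES.new(key, DES.MODE_ECB).decrypt(ciphertext) == block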

Further security may be provided by "Triple DES", which involves three consecutive encryptions. There are numerous permutations available; the most common are listed below (a code sketch of the EDE3 variant follows the list):

  • DES-EEE3 with three encryptions using three distinct keys
  • DES-EDE3 using encrypt-decrypt-encrypt operations and three distinct keys
  • DES-EEE2 like EEE3 but using the same key for the first and third operation
  • DES-EDE2 like EDE3 but using the same key for the first and third operation
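
As a sketch of the EDE3 variant, again assuming PyCryptodome, whose DES3 module applies the encrypt-decrypt-encrypt sequence internally:

    from Crypto.Cipher import DES3              # PyCryptodome, assumed installed
    from Crypto.Random import get_random_bytes

    # Three distinct 64-bit keys (24 bytes), parity bits adjusted to the
    # DES convention; DES3.new() rejects keys degenerating to single DES.
    key = DES3.adjust_key_parity(get_random_bytes(24))

    cipher = DES3.new(key, DES3.MODE_ECB)
    ciphertext = cipher.encrypt(b"8bytes!!")
    assert DES3.new(key, DES3.MODE_ECB).decrypt(ciphertext) == b"8bytes!!"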

The most secure DES variant is triply encrypted with three distinct keys, requiring 2^112 operations to find the key for known plaintext data. The US government has traditionally been loath to permit export of cryptographic software and DES has not been an exception.

A number of other block ciphers are in use or have been proposed for use. IDEA (International Data Encryption Algorithm) was designed by Lai and Massey, and is a 64-bit iterative cipher with a 128-bit key.

IDEA is still considered secure, although there is a set of 2^51 "weak" keys which are easier to crack. SAFER (Secure and Fast Encryption Routine) was devised by Massey for Cylink Corp, and is another 64-bit block cipher with either 64 or 128 bit keys.

RSA Labs developed the RC series of ciphers. RC2 is a drop-in 64-bit block replacement for DES, with a range of possible key sizes, and is generally considered to be 2-3 times faster to compute than DES. RC5 is a more advanced block cipher than RC2, providing for 32, 64, or 128 bit blocks, and key sizes from 0 to 2048 bits.

Probably the most controversial cipher of recent times is the US NSA designed Skipjack algorithm, intended for use in the Clipper chip. Since the algorithm is classified, there has been some argument about its security, quite aside from the massive civil liberties arguments provoked by the US government's proposal to enforce the use of Skipjack/Clipper in a key escrow scheme, whereby the government would keep the secret keys to a user's Clipper chip in escrow, and use them to decrypt the user's messages without the court order otherwise required to eavesdrop on people.

The next important category of ciphers is the stream cipher, which operates on a stream of bits rather than on discrete blocks. Whereas a block cipher always produces the same ciphertext for a given block of text and key, the output of a stream cipher depends upon the preceding data in the stream.

Stream ciphers are typically considered much faster than equivalent block ciphers, and since they share some properties with the highly secure one time pad, are regarded highly by many cryptographers.
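
To make the principle concrete, here is a toy sketch of a keystream generator in the style of the long-since-published RC4 algorithm; this is a teaching illustration only, not production cryptography:

    def rc4_keystream(key: bytes):
        # Key scheduling: permute the state array S under the key.
        S = list(range(256))
        j = 0
        for i in range(256):
            j = (j + S[i] + key[i % len(key)]) % 256
            S[i], S[j] = S[j], S[i]
        # Keystream generation: state evolves with every byte emitted.
        i = j = 0
        while True:
            i = (i + 1) % 256
            j = (j + S[i]) % 256
            S[i], S[j] = S[j], S[i]
            yield S[(S[i] + S[j]) % 256]

    def rc4_crypt(key: bytes, data: bytes) -> bytes:
        # XOR with the keystream; the same call both encrypts and decrypts.
        return bytes(b ^ k for b, k in zip(data, rc4_keystream(key)))

    secret = rc4_crypt(b"key", b"attack at dawn")
    assert rc4_crypt(b"key", secret) == b"attack at dawn"

Because the internal state evolves with every byte, identical plaintext bytes at different positions encrypt differently, unlike a block cipher in its simplest mode.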

Examples of recent stream ciphers are the RSA RC4, or SEAL. Finally, cryptographic hash functions are growing in importance, for purposes of validation and authentication, such as digital signatures. Every computer scientist is familiar with the more trivial hash functions used to speed up table searches - cryptographic hash functions are a specific category designed to produce collision-free, constant-length hash values for variable-sized inputs.

This means that a chunk of data can be processed by a hash function to produce an algorithm-specific, constant-size output, termed a message digest, usable in a signature. The best known hash functions are RSA Labs' MD2, MD4 and MD5 schemes, and the US government's SHA/SHA-1 (Secure Hash Algorithm).
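
Python's standard hashlib module implements the common digests; the brief sketch below shows that the digest length stays fixed no matter how large the input is:

    import hashlib

    for message in (b"short", b"a considerably longer chunk of input data"):
        print(hashlib.md5(message).hexdigest())    # always 128 bits
        print(hashlib.sha1(message).hexdigest())   # always 160 bits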

Public key cryptographic techniques were discussed in Part 1. The most commonly used scheme at this time is the patented RSA algorithm, standardised in ITU-T X.509, the banking industry SWIFT standard, the ANSI X9.31 standard, and our local AS 2805.6.5.3, and used in standards such as S/MIME, PEM-MIME, S-HTTP and SSL. The RSA algorithm, characteristically for public key algorithms, is relatively slow.

In software it is about 10^2 times slower than DES, and in hardware up to 10^4 times slower. Therefore it is typically used for short messages, such as exchanging keys in a secure manner, as part of a more complex encryption scheme. While algorithms are the engines at the heart of cryptography, alone they are cumbersome to use. Therefore, they are typically incorporated in cryptographic protocols, which facilitate their transparent usage.
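
A minimal sketch of this hybrid "digital envelope" pattern, assuming PyCryptodome; the modern OAEP padding is used purely for illustration and stands in for whatever padding a given protocol actually specifies:

    from Crypto.PublicKey import RSA
    from Crypto.Cipher import PKCS1_OAEP, DES3
    from Crypto.Random import get_random_bytes

    # Recipient's key pair; in practice the public key would arrive in a
    # certificate rather than being generated on the spot.
    rsa_key = RSA.generate(2048)

    # Slow public-key step: encrypt only the short session key.
    session_key = DES3.adjust_key_parity(get_random_bytes(24))
    wrapped_key = PKCS1_OAEP.new(rsa_key.publickey()).encrypt(session_key)

    # Fast symmetric step: bulk-encrypt the message itself.
    bulk = DES3.new(session_key, DES3.MODE_CFB)
    ciphertext = bulk.encrypt(b"the actual message body")

    # Receiver: unwrap the session key with the private key, then decrypt.
    recovered = PKCS1_OAEP.new(rsa_key).decrypt(wrapped_key)
    plain = DES3.new(recovered, DES3.MODE_CFB, iv=bulk.iv).decrypt(ciphertext)
    assert plain == b"the actual message body"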

Common Protocols, Standards and Tools

Cryptographic protocols are no different from other protocols, in the sense that they provide a standardised and agreed upon scheme for the exchange of messages. Two programs using a compatible protocol will therefore be able to securely exchange messages, hiding the encryption, decryption and key management functions within the protocol. A protocol will typically be published as a standard, for incorporation in specific products.

The areas where protocol development has seen the most activity in recent times are secure email, electronic commerce, and the IP protocol suite. A number of new protocols exist in all areas, but it is unclear at this time which will become the prevalent standards longer term. Starting from the bottom of the protocol stack, we have the IPSEC and DNSSEC projects under the Internet Engineering Task Force (IETF).

Both are aimed at adding a good measure of cryptographic security to the Internet, which was originally designed for a mutually cooperative user base. The DNS Protocol Security Extensions are intended to provide authentication for hosts making DNS queries, and storage of RSA public keys for those hosts. The intent is that a host making a name resolution query cannot be spoofed by a third party.

The IPSEC working groups are aimed at producing mechanisms for the management of keys and payload encapsulation. The ESP (Encapsulated Security Payload) is aimed at protecting data in transit, and the AH (Authentication Header) is aimed at providing authentication of packets. These are to be supplemented by a key management scheme to support ESP/AH; at the time of writing the leading contenders for this were the IKE and SKIP protocol proposals. An issue in the context of IPSEC will be the programmer's API to these services, and several implementations have been produced with the intent of becoming the standard.

The next step up the stack is the secure socket/stream level interface, intended for electronic commerce, and championed by the browser vendors. Netscape's SSL (Secure Socket Layer) and Microsoft/VISA's PCT (Private Communications Technology) provide both authentication and traffic encryption, and a secure channel for unprotected higher level protocols such as telnet or FTP.

SSL has an initial negotiation phase, in which it uses RSA encryption techniques to exchange secret keys, and a transmission phase, in which ciphers including RC2, RC4, IDEA, DES and Triple DES are used to protect traffic. SSL employs the MD5 message digest algorithm in its signatures, and uses key certificates compliant with X.509. PCT bears considerable similarity to SSL, but includes many additional features and supports a wider range of algorithms.
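
SSL has since evolved into TLS, so as a present-day illustration only, Python's standard ssl module can display the outcome of the negotiation phase; www.example.com below is merely a placeholder host:

    import socket
    import ssl

    ctx = ssl.create_default_context()
    with socket.create_connection(("www.example.com", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="www.example.com") as tls:
            # The negotiation phase has already completed by this point.
            print(tls.version())   # negotiated protocol, e.g. 'TLSv1.3'
            print(tls.cipher())    # (cipher name, protocol, key bits)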

The message lengths are shorter, negotiation provides for more alternatives, signatures and bulk encryption use different keys, and the authentication scheme is slightly stronger than SSL's. Stepping a little further up the stack, we have S-HTTP (Secure HTTP), which is an extended variant of the HTTP browser protocol. S-HTTP provides for key exchange using RSA, in-band, out-of-band or Kerberos schemes, and provides for bulk encryption using DES, Double and Triple DES, DESX (enhanced DES using additional XORs), IDEA, RC2 and the CDMF scheme.

Unlike SSL and PCT, which authenticate only on a per-session basis, S-HTTP is built to authenticate each and every message sent, with signatures. Email is another area where encryption is becoming increasingly popular, with a wide range of proposed standards and a number of tools available.

The first major proposal was the PEM (Privacy Enhanced Mail) standard, which is built around the RFC 822 email formats and covered in RFCs 1421 through 1424. Like most modern schemes, PEM uses RSA or DES for key transfer and DES for bulk encryption, with an expanded mail header containing specifiers, hash functions etc. Two more evolved mailing schemes are now contending for primacy as the official standard, these being S/MIME (Secure MIME) and MOSS/PEM-MIME (MIME Object Security Services). MOSS is a MIME variant which incorporates features of PEM to provide a very flexible scheme for supporting MIME and non-MIME compliant recipient mailers.

The drawback of MOSS, it is argued, is that the protocol is so flexible that it is possible to have two MOSS compliant mailers which cannot communicate. More flexible than PEM, and less flexible than MOSS, is the S/MIME standard which is likely to be the longer term winner in the user base.

S/MIME appears to be preferred by the major vendors, which means that it is likely to win the near term race for installed user base. S/MIME exploits the MIME structured email message model, and adds a PKCS #7 (Public Key Cryptography Standard) message to specify the embedded cryptographic components of the message. The S/MIME protocol uses the conventional digital envelope model, with RSA for key transfer and the choice of DES, Triple DES and RC2 for the bulk components of the message.

The X.509 certification standard is supported, but the signature is carried inside the encrypted components, concealing it from eavesdroppers. A public domain implementation of S/MIME exists, as well as a large number of vendor proprietary S/MIME enhancements to established mailers. Should S/MIME become the accepted standard, we can expect it to be incorporated into most mainstream mailers.

No discussion of secure mailers would be complete without Phil Zimmermann's somewhat controversial PGP (Pretty Good Privacy) tool. PGP is a freeware tool using RSA key management, IDEA bulk encryption, and RSA and MD5 signature and digest formats. Since PGP was placed in the public domain, it found its way out of the US, and Zimmermann fell somewhat into disfavour with the US government over the matter, being called to explain himself before the customary inquisition of a congressional committee.

Zimmermann was later cleared, and founded PGP Inc., which is now part of Network Associates, Inc. However, the episode illustrates the deep fears which pervade many governments when it comes to publicly available strong encryption tools.

Having browsed the layers of the protocol stack, and reviewed mailers, the remaining toolset of interest is the Secure Shell (SSH) package, a public domain toolset written by Tatu Ylonen in Finland, and quite widely used. SSH is a secure replacement for rlogin, rsh, rdist and rcp, the popular but security-wise problematic Berkeley toolset, and also provides facilities for protecting X11 sessions.

The SSH scheme uses RSA and MD5 for authentication and key exchange, and IDEA, DES, or Triple DES for bulk encryption over the secure channel, once established. A server daemon is run to support incoming requests. I have used SSH on occasion and was favourably impressed with the package, since it provides a robust and simple-to-use toolset. If you need to log into hosts on other people's sites, over a public channel, there is much to be said for using SSH.
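
The same usage pattern survives in today's SSH implementations; as a minimal sketch using the third-party paramiko library (assumed installed; host and user names are placeholders):

    import paramiko  # third-party SSH library, assumed installed

    client = paramiko.SSHClient()
    client.load_system_host_keys()   # trust hosts in ~/.ssh/known_hosts
    client.connect("remote.example.com", username="me")  # placeholders
    stdin, stdout, stderr = client.exec_command("uname -a")
    print(stdout.read().decode())    # command output over the secure channel
    client.close()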

The Berkeley remote toolset is great to use, but you always have this gnawing sensation which goes with the knowledge that any sniffer out there has just got your password....

Clearly there is much happening, and much yet to happen in the area of network cryptography. It is therefore important that all system administrators gain a solid understanding of the basic issues, and serious users do the same. We can expect to see our hosts loaded up with increasing amounts of cryptographically enabled tools in coming years, and not understanding the strengths, weaknesses and idiosyncrasies of this technology could prove to be costly indeed. (Readers interested in more detail should consult the RSA FAQ and website, the SSH website, and the wide range of other topical material on the web).







