Exploitable weaknesses in a cipher system
Deliberate weakening of a cipher system,
commonly known as a backdoor,
is a technique that is used by, or on behalf of,
intelligence agencies such as
the US National Security Agency (NSA) to make
it easier for them to break the cipher and access the data. It is often
thought that intelligence services have a master key
that gives them instant access to the data, but in reality it is usually much
more complicated, and requires sophisticated computing resources and
cryptanalytic skill.
In the past, intelligence services like the NSA weakened a cipher just
enough to allow it to be broken with the computing power that was available
to them (e.g. their vast array of Cray supercomputers),
assuming that other parties did not have that capability.
Implementing a backdoor is difficult and dangerous,
as it might be discovered by the user
— after which it can no longer be used —
or by a malicious party or adversary, in which case it can be exploited.
Below is a non-exhaustive overview of known backdoor constructions and examples:
Weakening of the algorithm
One of the most widely used types of backdoor is weakening of the
algorithm. This was done with mechanical cipher machines – such as
the CX-52 – with electronic ones
– such as the H-460 –
and with software-based encryption.
NSA often weakened the algorithm just enough to break
it with the help of a supercomputer (e.g. a Cray), assuming that
adversaries did not have that capacity.
This solution is universal. It can be applied to mechanical, electronic
and computer-based encryption systems. One of the first known examples
is the weakening of the
Hagelin CX-52
by Peter Jenks of the NSA,
in the early 1960s [1].
From NSA's point of view, the Hagelin CX-52
had the problem that it was theoretically secure when
used correctly. It was however possible to configure the device in such
a way that it produced a short cycle, as a result of which it became
easy to break. Jenks modified the
CX-52 in such a way that it always
produced a long cycle, albeit one that he could predict.
The modified product was designated CX-52M and was marketed by
Crypto AG
as a new version with improved security, which customers immediately
started ordering in large quantities.
He repeated the exercise in the mid-1960s, when
Crypto AG
moved from mechanical to electronic designs.
The first electronic cipher machines were built around (non)linear
feedback shift registers – LFSR or NLFSR – built with the (then)
newest generation of integrated circuits (ICs). This part is commonly
known as the crypto heart or the cryptologic.
Jenks
manipulated the shift registers in such a way that the cipher seemed
robust from the outside. Nevertheless, NSA could break it, as they knew
the exact nature of the built-in weakness.
Manipulating the cryptologic – or rather the cryptographic algorithm –
requires considerable mathematical ingenuity, and is not trivial at all.
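The effect of the feedback tap selection on an LFSR can be illustrated
with a minimal sketch in Python (purely illustrative – it does not
reproduce any actual Crypto AG or NSA design). Well-chosen taps give the
maximum period of 2ⁿ − 1 steps, whereas a degenerate choice produces a
short, predictable cycle:

```python
def lfsr_period(seed: int, taps: tuple, nbits: int) -> int:
    """Count the steps a Fibonacci LFSR needs to return to its seed state."""
    state, period = seed, 0
    while True:
        # Feedback bit = XOR of the tapped bit positions (0 = LSB).
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & ((1 << nbits) - 1)
        period += 1
        if state == seed:
            return period

# Maximum-length taps for a 16-bit register: full period of 2^16 - 1.
print(lfsr_period(0xACE1, (15, 13, 12, 10), 16))   # 65535
# Degenerate taps: feedback from one bit merely rotates the register.
print(lfsr_period(0xACE1, (15,), 16))              # 16
```

A manipulated design would sit somewhere in between: a cycle long enough
to look robust from the outside, but with a structure that the designer
can predict.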
During the 1970s, the weaknesses were discovered by several (unwitting)
Crypto AG employees and even by customers.
Crypto AG usually fended them
off with the excuse that the algorithm had been developed a long time
ago, and that an improved version would be released soon. It should be
no surprise that hiding the weaknesses became increasingly difficult over
the years.
The same principle can be applied to software implementations of
cryptographic algorithms as well, but it has become extremely difficult
to do so in such a way that it passes existing tests,
such as the NIST entropy tests, and withstands the peer review of
the academic community.
Another popular method for weakening a cipher system is shortening the
effective length of the cryptographic KEY, typically specified in bits.
In the 1980s, the keys of military cipher systems were typically 128 bits long,
which was about twice the length that was needed at the time.
The DES encryption algorithm
– which was used for bank transactions – had a key length
of 56 bits. It had been developed by Horst Feistel at IBM as
Lucifer, and had been improved by NSA.
In 1983, the small Dutch company
Text Lite introduced the
PX-1000 pocket terminal. It had
a built-in text editor and an acoustic modem, with which texts could be
uploaded in seconds. The device used
DES encryption for the protection
of the text messages, which was thought to be useful for journalists
and businessmen on the move.
DES
was considered secure at the time. Although it might have been
breakable by NSA, doing so would cost a lot of resources (i.e. computing
power). With DES available in a consumer product at an affordable price,
NSA faced a serious problem,
and turned to Philips Usfa for assistance.
Philips
bought the entire stock of DES-enabled devices and shipped it
to the US. The product was then re-released under the Philips brand,
with an algorithm that was supplied by NSA.
The new algorithm was a stream cipher with a key length of no less than
64 bits. This is more than the 56 bits of DES, and suggested
that it was at least as strong as DES, and probably even
stronger. By reverse engineering the algorithm, Crypto Museum has
meanwhile concluded that of the 64 key bits, only 32 are
significant. This means that the effective key length has been halved.
Does this mean that it takes only half the time to break the key?
No: as each key bit doubles the number of combinations,
removing 32 bits means that it has become 2³² = 4,294,967,296 times easier
to break the key. For example:
if we assume that it takes one full year to break a 64-bit key, breaking a
32-bit key would take just 0.007 seconds. A piece of cake for NSA's
supercomputers.
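The arithmetic is easily verified with a few lines of Python (figures
rounded):

```python
year_seconds = 365 * 24 * 3600        # one year of continuous key search
speedup = 2 ** (64 - 32)              # dropping 32 key bits: 4,294,967,296 x
print(year_seconds / speedup)         # ~0.0073 seconds for the 32-bit key
```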
A more recent example is found in the encryption algorithms of
TETRA radio networks, which are used in more than
100 countries by police and other emergency services, as well as
by intelligence and military services. To avoid eavesdropping,
the TETRA Encryption Algorithms (TEA) are used.
Several flavours of the TEA were developed, such as
TEA2 for EU/US services, and TEA1, which was approved for export
and civil use. Although the actual algorithms were classified — TEA
was never subjected to public scrutiny — they were all specified as having
an 80-bit key length.
In 2021, researchers from the Dutch cyber security firm
Midnight Blue discovered that the
key length of the TEA1 algorithm was internally reduced from
80 to 32 bits, and demonstrated that they could break it in one
minute on a regular laptop, and in 12 hours on a 1998 laptop.
The weakness was published, along with four more vulnerabilities,
as the TETRA:BURST revelations.
Hide the KEY in the ciphertext
It is sometimes suggested that the cryptographic key might be hidden
in the output stream (i.e. in the ciphertext). Not in
a readable form, of course, but when you know where to look, the key
will reveal itself.
Although this method is prone to discovery, it has in fact been used
in the past.
A good example of this technique is the
Hagelin CSE-280 voice encryptor,
which was introduced by Crypto AG in the early 1970s. The product had
been developed in cooperation with the German cipher authority ZfCh
(part of the BND),
and used forward synchronisation to allow late-entry sync.
The key was hidden in the preamble that was inserted
at the beginning of each transmission. If one knew where to look,
the entire key could be reconstructed. A few years after the device had
been introduced, Crypto AG's chief developer Peter Frutiger suddenly
realised how it was done.
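As the actual CSE-280 construction has never been published, the sketch
below is a hypothetical illustration of the general idea only: the key
bits are scattered over fixed, secret positions of an otherwise
random-looking synchronisation preamble. The positions used here are
arbitrary.

```python
import secrets

PREAMBLE_BITS = 128
# Positions known only to the party that planted the backdoor
# (chosen arbitrarily for this demonstration).
SECRET_POSITIONS = [5, 11, 19, 28, 34, 41, 50, 58, 67, 73, 82, 90,
                    97, 104, 113, 121]          # 16 positions = 16 key bits

def make_preamble(key_bits: list) -> list:
    """Random-looking preamble with the key bits planted at secret positions."""
    assert len(key_bits) == len(SECRET_POSITIONS)
    pre = [secrets.randbelow(2) for _ in range(PREAMBLE_BITS)]
    for bit, pos in zip(key_bits, SECRET_POSITIONS):
        pre[pos] = bit
    return pre

def extract_key(preamble: list) -> list:
    """What the eavesdropper does: read the key straight out of the sync data."""
    return [preamble[pos] for pos in SECRET_POSITIONS]

key = [secrets.randbelow(2) for _ in range(16)]
assert extract_key(make_preamble(key)) == key   # key recovered from preamble
```

To anyone without the position table, such a preamble is statistically
indistinguishable from ordinary sync filler.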
It was only a matter of time before customers would discover it too.
In 1976, the Syrians became aware of the (badly hidden) key in the
preamble and notified Crypto AG, where Frutiger provided them
with a fix that made it instantly unbreakable. NSA was furious,
and Frutiger was fired for it.
Another example of hiding hints in the output stream is the
T-1000/CA,
internally known as Beroflex, that was the civil version of the
NATO-approved Aroflex,
a joint development of Philips
and Siemens. It was based on a
T-1000 telex.
Whilst the Aroflex was highly secure, Beroflex (T-1000/CA) was not.
With the right means and the right knowledge, it could be broken.
This was not a trivial task however, and required the use of a special
purpose device – a super chip – that had been co-developed by experts
at the codebreaking division of the Royal Dutch Navy.
The exploit was based on redundancy in the enciphered message preamble,
which caused a bias that was an unnecessary shortcoming of the design.
Breaking it involved solving a set of binary equations an exponentially
large number of times, for which the special purpose device was developed.
➤ More about Aroflex
In some cases, the cipher can be weakened by manipulating the manual.
This was done for example with the manuals of the
Hagelin CX-52 machine.
Although the CX-52
was in theory a virtually unbreakable machine, it could
accidentally be set up in such a way that it produced a short cycle
(period), which was easy to break.
The manipulated manual appeared to give guidelines for 'proper' use
of the machine, but in reality it instructed the user to configure the
machine in such a way that it generated a short cycle, which was easy
for the NSA to break.
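The period of a pin-wheel machine is bounded by the least common multiple
(lcm) of the selected wheel lengths, so 'guidelines' that steer the user
towards wheel combinations with shared factors directly shorten the cycle.
The sketch below uses the commonly cited CX-52 wheel sizes; treat the
exact figures as illustrative:

```python
from math import lcm

# Commonly cited CX-52 pin-wheel sizes (six wheels are selected from this set).
WHEELS = [25, 26, 29, 31, 34, 37, 38, 41, 42, 43, 46, 47]

good = [25, 29, 31, 37, 41, 43]   # pairwise coprime: maximal period
bad  = [25, 26, 34, 38, 42, 46]   # five even lengths share a factor of 2

print(lcm(*good))   # 1,466,066,725 steps before the wheel pattern repeats
print(lcm(*bad))    # 101,405,850 - more than an order of magnitude shorter
```

The actual short-cycle problem of the CX-52 also involved its irregular
wheel stepping, which, in unfortunate configurations, could reduce the
effective period far below this bound.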
Key generator with predictable output
Many encryption systems, old and new alike, make use of KEY generators
– commonly pseudo-random number generators, or PRNGs – for example for
the generation of unique message keys, for generating private and
public keys, and for generating the key stream in a stream cipher.
By manipulating the key generator, it is theoretically possible to generate
predictable keys, weak keys or predictable cycles. Examples are the mechanical
key generator of the Hagelin CX-52M,
but also the software-based random number generators (RNGs)
in modern software algorithms.
Creating this kind of weakness is neither simple nor trivial,
as the weakened key generator has to withstand a variety of existing entropy
tests, including the ones published by the
US National Institute of Standards and Technology (NIST).
Nevertheless, various (potential) backdoors based on weakened PRNGs have been
reported in the press, some of which are attributed to the NSA.
In December 2013, Reuters reported that documents released by
Edward Snowden
indicated that NSA had paid RSA Security US$ 10 million to make the
Dual Elliptic Curve Deterministic Random Bit Generator (Dual_EC_DRBG)
the default in their encryption software. It had already been shown in 2007
that the constants could be constructed in such a way as to create
a kleptographic backdoor in the NIST-recommended Dual_EC_DRBG [3].
According to the Snowden documents, it had been deliberately inserted by NSA
as part of its BULLRUN decryption program. NIST promptly withdrew
Dual_EC_DRBG from its draft guidance [4].
➤ Wikipedia: Random number generator attack
➤ Wikipedia: Dual_EC_DRBG
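The trapdoor in Dual_EC_DRBG can be mimicked with a toy generator that
uses modular exponentiation instead of elliptic-curve points, which keeps
the sketch short and self-contained. All constants below are illustrative;
the point is the structure: the design publishes two constants g and h,
and whoever knows the secret relation h = g^d (mod p) can recover the
internal state from a single output:

```python
p = 2**127 - 1                     # a Mersenne prime (illustrative modulus)
g = 3                              # first public constant
d = 1000003                        # designer's trapdoor secret (coprime to p-1)
h = pow(g, d, p)                   # second public constant: h = g^d mod p

def step(state: int):
    out = pow(h, state, p)         # the random number the application sees
    new_state = pow(g, state, p)   # kept secret inside the generator
    return out, new_state

# Victim draws two random numbers:
state = 123456789
out1, state = step(state)
out2, state = step(state)

# Attacker: out1 = (g^s)^d = next_state^d, so knowing e = d^-1 mod (p-1)
# turns one observed output into the generator's next internal state.
e = pow(d, -1, p - 1)
recovered = pow(out1, e, p)        # = the victim's current internal state
prediction, _ = step(recovered)
assert prediction == out2          # all future output is now predictable
```

In the real Dual_EC_DRBG the outputs are truncated elliptic-curve
coordinates, so state recovery takes slightly more work (a few missing
bits must be guessed), but the principle is identical.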
It is often thought by the general public that intelligence agencies have
something like a magic password, or master key, that gives them instant
access to the secure communications of a subject. Although in most cases the
backdoor mechanism is far more complex, it is technically possible.
An example of a possible master key is the so-called _NSAKEY that was
found in a Microsoft operating system in 1999. The variable contained
a 1024-bit public key, similar to the cryptographic keys that are
used for encryption and authentication.
Although Microsoft firmly denied it, it was widely speculated that the
key was there to give the NSA access to the system.
There are however a few other possible explanations for the presence of
this key — including a backup key, a key for installing NSA proprietary crypto
suites, and incompetence on the part of Microsoft, NSA or both — all of
which seem plausible. In addition, Dr. Nicko van Someren found a third
– far more obscure – key in Windows 2000, which he doubted had
a legitimate purpose [5].
➤ Wikipedia: _NSAKEY
Although most backdoors are covertly built into equipment without
the user's consent, there is one category in which the backdoor is known
and approved by the user: KEY ESCROW. It requires the user
to surrender the cryptographic keys to a trusted third party:
the escrow agent.
A good example of KEY ESCROW is the so-called
Clipper Chip, which was
introduced by the NSA in the early 1990s in an attempt to control the
use of strong encryption by the general public.
The intention was to use this chip in all civil encryption products,
such as computers, secure telephones, etc., so that everyone would be able
to use strong encryption. By forcing people to surrender their keys to
the (US) government, law enforcement agencies would have the ability to
decrypt the communication, should that prove necessary during the course
of an investigation.
It had to be assumed that the (US) government could be trusted
under all circumstances, and that sufficient mechanisms were in place to
avoid unwarranted tapping and other abuse – something that was heavily
disputed by the Electronic Frontier Foundation (EFF) and other privacy
organisations.
The device – which used the
Skipjack algorithm
– was not embraced by the public.
In addition, it contained a serious flaw. In 1994, shortly after its
introduction, (then) AT&T researcher Matt Blaze discovered that the
device could be tampered with in such a way that it offered strong
encryption whilst disabling the escrow capability. And that was not what
the US Government had in mind.
➤ More about the Clipper chip
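Blaze's attack exploited the fact that the LEAF (Law Enforcement Access
Field) was protected only by a 16-bit checksum. The toy model below
replaces the classified internals with a hash, which is enough to show
the point: a 16-bit check can be satisfied by brute force after about
2¹⁶ attempts, yielding a bogus LEAF that passes verification but is
useless to the escrow agent:

```python
import hashlib, itertools, os

def checksum16(leaf_body: bytes, iv: bytes) -> int:
    """Stand-in for the classified 16-bit LEAF checksum."""
    return int.from_bytes(hashlib.sha256(leaf_body + iv).digest()[:2], 'big')

iv = os.urandom(8)
target = checksum16(os.urandom(16), iv)   # check value the receiver expects

# Forge: try random LEAF bodies until one happens to match the checksum.
for attempt in itertools.count(1):
    bogus = os.urandom(16)
    if checksum16(bogus, iv) == target:
        print(f"bogus LEAF accepted after {attempt:,} attempts")  # ~65,536
        break
```

With only 65,536 possibilities, the search completes in well under a
second on any modern machine – and would have been feasible even on 1994
hardware.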
Encryption systems are often attacked by adversaries by exploiting information
that leaks through so-called side channels. This is known as a
side channel attack. In most cases, side channels are unintended,
but they may also have been inserted deliberately to give an eavesdropper a way in.
Side channels are often unwanted emanations – such as radio
frequency (RF) signals that are emitted by the equipment, or sound
generated by a printer or a keyboard –
but may also take the form of variations in power consumption (current)
that occur when the device is in use (power analysis).
In military jargon, unwanted emanations are commonly known as TEMPEST.
An early example of a cryptographic device that exhibited exploitable
TEMPEST problems is the
Philips Ecolex IV mixer,
which was approved for use by
NATO.
As it was based on the One-Time Tape (OTT)
principle, it was theoretically
safe. However, in the mid-1960s, the Dutch national physics laboratory TNO
proved that minute glitches in the electric signals on the teleprinter
data line could be exploited to reconstruct the original plaintext.
The problem was eventually solved by adding filters between the
device and the teleprinter line.
➤ Wikipedia: Side-channel attack
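In software, a side channel can be as simple as a comparison routine whose
running time depends on the data. A minimal sketch (the 'secret' and the
per-byte delay are artificial, chosen to make the effect visible):

```python
import hmac, time

SECRET = b"s3cr3t!!"

def leaky_compare(secret: bytes, guess: bytes) -> bool:
    """Byte-by-byte comparison that returns at the first mismatch."""
    if len(secret) != len(guess):
        return False
    for a, b in zip(secret, guess):
        if a != b:
            return False         # early exit: time reveals the match length
        time.sleep(0.001)        # exaggerate the per-byte cost for the demo
    return True

def timed(guess: bytes) -> float:
    t0 = time.perf_counter()
    leaky_compare(SECRET, guess)
    return time.perf_counter() - t0

print(timed(b"xxxxxxxx"))        # fast: first byte already wrong
print(timed(b"s3cxxxxx"))        # slower: three correct leading bytes
# The fix is a constant-time comparison, e.g. hmac.compare_digest():
print(hmac.compare_digest(SECRET, b"s3cr3t!!"))
```

By measuring many such timings, an attacker can recover the secret one
byte at a time; in practice, averaging over many samples is needed to
filter out measurement noise.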
Backdoors can also be based on unintentional weaknesses in the design of
an encryption device. For example, the
Enigma machine
– used during WWII by the German Army – cannot encipher a letter into
itself: the letter 'A' in the plaintext will never yield
the letter 'A' in the ciphertext.
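This property was exploited directly when positioning a crib (a suspected
piece of plaintext): any alignment in which a crib letter coincides with
the identical ciphertext letter is impossible and can be discarded. A
minimal sketch, with illustrative texts:

```python
# Slide a suspected plaintext (crib) along the ciphertext and discard
# every alignment excluded by Enigma's "no letter maps to itself" rule.
ciphertext = "QFZWRWIVTYRESXBFOGKUHQBAISE"
crib       = "WETTERVORHERSAGE"           # German for "weather forecast"

def possible_positions(ciphertext: str, crib: str) -> list:
    """Alignments not excluded by the self-encryption rule."""
    positions = []
    for i in range(len(ciphertext) - len(crib) + 1):
        window = ciphertext[i:i + len(crib)]
        if all(c != p for c, p in zip(window, crib)):
            positions.append(i)           # no letter coincides: still possible
    return positions

print(possible_positions(ciphertext, crib))   # remaining candidate offsets
```

Each discarded alignment narrowed the search, which is one of the reasons
the property was so valuable to the codebreakers.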
This and other weaknesses greatly helped
the codebreakers at Bletchley Park,
and allowed the cipher to be broken throughout World War II.
Unintended weaknesses were also present in the early mechanical cipher
machines of Crypto AG (Hagelin), such as the
C-36,
M-209,
C-446
and CX-52.
Although they were theoretically strong, they could accidentally be set up
in such a way that they produced a short cycle, which could be broken much
more easily. Similar properties can be found in the first generations of
electronic crypto devices that are based on shift registers.
In some cases, the safety doctrine that is intended to make the device
more secure, actually makes the cipher weaker. For example: during WWII,
the German cipher authority dictated that a particular cipher wheel should
not be used in the same position on two successive days. Whilst this may
seem like a good idea, it effectively reduces the maximum number of possible
settings.
By far the most common of the unintended weaknesses is operator error,
such as choosing a simple or easy-to-guess password, sending multiple messages
on the same key, sending the same message on two different keys, etc.
Here are some examples of unintended weaknesses:
- Weak keys
- A letter cannot encipher into itself (Enigma)
- False security measures
- Operator mistakes
- Software bugs
Another way of getting surreptitious access to a computer system,
such as a personal computer, is by covertly installing additional hardware
or software that gives an adversary direct or indirect access to the system
and its data. Spyware can be visible, but can also be completely
invisible.
An example of a hidden-in-plain-sight device is a so-called key logger
that can be installed between keyboard and computer. Two common variants
exist: one for USB and one for the old PS/2 keyboard
interface.
Items like these can easily be installed in an office – for example by the
cleaning lady – and are hardly noticed in the tangle of wires below your
desk. Such a device registers every keystroke, complete with time/date stamp,
including your passwords. If the cleaning lady removes it a few days later,
you will never find out that it was ever installed.
With a special key combination, the key logger can be turned into a USB
memory stick, from which the logged data can be recovered by a malicious
party. A more sophisticated example of covert hardware is the addition
of a (miniature) chip on the printed circuit board of an existing device.
As many companies today have outsourced the production of their electronics, there
is always a possibility that the hardware might be maliciously modified by a foreign
party. This is particularly the case with critical infrastructure like
routers, switches and telecommunications backbone equipment.
The problem is exacerbated by the increasing complexity of modern computers,
as a result of which virtually no one knows exactly how they work. A good example
is the tiny computer that is hidden inside Intel processors with AMT, and that has
been actively exploited as a spying tool [6].
Manipulated hardware can be used to eavesdrop on your data, but can also be
used as part of a Distributed Denial of Service (DDoS) attack, or to disrupt
the critical infrastructure of a company or even an entire country.
In many cases, such attacks are carried out by (foreign) state actors.
Manipulation of hardware is also possible by adding a secret chip to a
regular inconspicuous component. A good example is the
FIREWALK implant
of the US National Security Agency (NSA) that is hidden
inside a regular RJ45 Ethernet socket of a computer. It is used by the
NSA to spy behind firewalls and was disclosed in 2013 by an
unidentified party. 1
This device is particularly dangerous as it cannot be found by
visual inspection. Furthermore, it transmits the intercepted data
via radio waves and effectively bypasses all security.
Is this problem restricted to high-end (computing) devices? Certainly not.
Most modern domestic appliances, such as smart thermometers, smart meters,
home automation systems and in particular devices for the Internet of Things (IoT),
are badly built, contain badly written software and are rarely properly protected,
as a result of which they are extremely vulnerable to manipulation (hacking).
Examples of spyware:
- Adding a small chip to the board (can only be done during the production process)
- Adding a regular component with a built-in chip ➤ e.g. NSA's FIREWALK
- Tiny computer inside a regular processor ➤ e.g. Intel AMT
- External key logger (USB or PS2)
- Key logger (spy) software
- Computer viruses
- Supply chain attack
1. It was initially speculated that the documents were disclosed by
former CIA/NSA contractor Edward Snowden,
but this appears not to be the case. This means that there is at least one
more source of information leaks.