
In this chapter we discuss the limitations of the CPA security definition. In short, the CPA security definition considers only the information leaked to the adversary by *honestly generated* ciphertexts. It does not, however, consider what happens when an adversary is allowed to inject its own *maliciously crafted* ciphertexts into an honest system. If that happens, then even a CPA-secure encryption scheme can fail in spectacular ways. We begin by seeing such an example of spectacular and surprising failure, called a padding oracle attack:

**10.1: Padding Oracle Attacks**

Imagine a webserver that receives CBC-encrypted ciphertexts for processing. When receiving a ciphertext, the webserver decrypts it under the appropriate key and then checks whether the plaintext has valid X.923 padding (Construction 9.6).

Importantly, suppose that the *observable behavior of the webserver changes depending on whether the padding is valid*. You can imagine that the webserver gives a special error message in the case of invalid padding. Or, even more cleverly (but still realistic), the *difference in response time* when processing a ciphertext with invalid padding is enough to allow the attack to work.^{1} The *mechanism* for learning padding validity is not important — what is important is simply the fact that an attacker has some way to determine whether a ciphertext encodes a plaintext with valid padding. No matter how the attacker comes by this information, we say that the attacker has access to a **padding oracle**, which gives the same information as the following subroutine:

We call this a padding *oracle* because it answers only one specific kind of question about the input. In this case, the answer that it gives is always a single boolean value.
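Since the oracle's behavior is determined entirely by the padding check, we can model it with a short sketch. The following is a hypothetical Python rendering of an X.923 validity check; the names `valid_pad` and `BLEN` are our own, not part of the construction:

```python
BLEN = 16  # block length in bytes (an assumption for this sketch)

def valid_pad(p: bytes) -> bool:
    # X.923-style check: the string ends with a byte n (1 <= n <= BLEN),
    # preceded by n-1 zero bytes.
    if len(p) == 0 or len(p) % BLEN != 0:
        return False
    n = p[-1]
    return 1 <= n <= BLEN and all(b == 0 for b in p[-n:-1])
```

A padding oracle is then any observable channel that is equivalent to evaluating `valid_pad(Dec(k, c))` on a ciphertext `c` of the attacker's choice.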

It might not seem like a padding oracle leaks any useful information, so there would appear to be no cause for concern. Surprisingly, we can show that an attacker who doesn’t know the encryption key *k* can use a padding oracle alone to *decrypt any ciphertext of its choice*! This is true no matter what else the webserver does. As long as it leaks this one bit of information on ciphertexts that the attacker can choose, it might as well be leaking everything.

^{1}This is why VALIDPAD should not actually be implemented the way it is written in Construction 9.6. A *constant-time* implementation — in which every control path through the subroutine takes the same number of CPU cycles — is required.

### Malleability of CBC Encryption

Recall the definition of CBC decryption. If the ciphertext is *c* = *c*_{0} · · · *c*_{ℓ}, then the *i*th plaintext block is computed as:

*m*_{i} := *F*^{−1}(*k*, *c*_{i}) ⊕ *c*_{i−1}.

From this we can deduce two important facts:

- Two consecutive blocks (*c*_{i−1}, *c*_{i}) taken in isolation are a valid encryption of *m*_{i}. Looking ahead, this fact allows the attacker to focus on decrypting a single block at a time.
- XORing a ciphertext block with a known value (say, *x*) has the effect of XORing the corresponding plaintext block by the same value. In other words, for all *x*, the ciphertext (*c*_{i−1} ⊕ *x*, *c*_{i}) decrypts to *m*_{i} ⊕ *x*:

Dec(*k*, (*c*_{i−1} ⊕ *x*, *c*_{i})) = *F*^{−1}(*k*, *c*_{i}) ⊕ (*c*_{i−1} ⊕ *x*) = (*F*^{−1}(*k*, *c*_{i}) ⊕ *c*_{i−1}) ⊕ *x* = *m*_{i} ⊕ *x*
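This identity can be checked mechanically. The sketch below uses a toy stand-in for the block cipher (XOR with the key, which is certainly not a secure PRP) because the XOR identity holds for *any* block cipher; all names here are our own:

```python
import os

BLEN = 16

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Toy stand-in for the block cipher: XOR with the key. It is NOT a secure
# PRP, but the malleability identity below holds for ANY block cipher F.
def F(k, x):
    return xor(k, x)

F_inv = F  # XOR with the key is its own inverse

def cbc_dec_block(k, c_prev, c_i):
    # m_i := F^{-1}(k, c_i) XOR c_{i-1}
    return xor(F_inv(k, c_i), c_prev)

k = os.urandom(BLEN)
c0 = os.urandom(BLEN)            # previous ciphertext block (or the IV)
m1 = b"attack at dawn!!"         # a 16-byte plaintext block
c1 = F(k, xor(m1, c0))           # CBC encryption of this one block

x = os.urandom(BLEN)
# Decrypting (c0 XOR x, c1) yields m1 XOR x, as the derivation shows.
assert cbc_dec_block(k, xor(c0, x), c1) == xor(m1, x)
```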

If we send such a ciphertext (*c*_{i−1} ⊕ *x*, *c*_{i}) to the padding oracle, we would therefore learn whether *m*_{i} ⊕ *x* is a (single block) plaintext with valid padding. By carefully choosing different values *x* and asking questions of this form to the padding oracle, we can eventually learn *all of m*_{i}. We summarize this capability with the following subroutine:

Given a ciphertext *c* that encrypts an unknown message *m*, we can see that an adversary can generate another ciphertext whose contents are *related to m in a predictable way*. This property of an encryption scheme is called **malleability**.

### Learning the Last Byte of a Block

We now show how to use the CHECKXOR subroutine to determine the last byte of a plaintext block *m*. There are two cases to consider, depending on the contents of *m*. The attacker does not initially know which case holds:

For the first (and easier) of the two cases, suppose the second-to-last byte of *m* is nonzero. We will try every possible byte *b* and ask whether *m* ⊕ *b* has valid padding. Since *m* is a block and *b* is a single byte, when we write *m* ⊕ *b* we mean that *b* is extended on the left with zeroes. Since the second-to-last byte of *m* (and hence *m* ⊕ *b*) is nonzero, only one of these possibilities will have valid padding — the one in which *m* ⊕ *b* ends in byte 01. Therefore, if *b* is the candidate byte that succeeds (*i.e.*, *m* ⊕ *b* has valid padding) then the last byte of *m* must be *b* ⊕ 01.

#### Example:

*Using LEARNLASTBYTE to learn the last byte b ^{∗} of the plaintext block m:*

For the other case, suppose the second-to-last byte of *m* is zero. Then *m* ⊕ *b* will have valid padding for *several* candidate values of *b*:

#### Example:

*Using LEARNLASTBYTE to learn the last byte b ^{∗} of the plaintext block m:*

Whenever more than one candidate *b* value yields valid padding, we know that the second-to-last byte of *m* is zero (in fact, by counting the number of successful candidates, we can know exactly how many zeroes precede the last byte of *m*).

If the second-to-last byte of *m* is zero, then the second-to-last byte of *m* ⊕ 01 *b* is nonzero. The only way for both strings *m* ⊕ 01 *b* and *m* ⊕ *b* to have valid padding is when *m* ⊕ *b* ends in byte 01. We can re-try all of the successful candidate *b* values again, this time with an extra nonzero byte in front. There will be a unique candidate *b* that is successful in both rounds, and this will tell us that the last byte of *m* is *b* ⊕ 01.

The overall approach for learning the last byte of a plaintext block is summarized in the LEARNLASTBYTE subroutine in Figure 10.1. The set *B* contains the successful candidate bytes from the first round. There are at most 16 elements in *B* after the first round, since there are only 16 valid possibilities for the last byte of a properly padded block. In the worst case, LEARNLASTBYTE makes 256 + 16 = 272 calls to the padding oracle (via CHECKXOR).
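To make the two-round procedure concrete, here is a hedged Python sketch of LEARNLASTBYTE. We simulate CHECKXOR directly as "does *m* ⊕ *s* have valid padding?", since that is exactly the information the oracle provides (in a real attack the answer would come from the server). All names are our own:

```python
BLEN = 16

def valid_pad(p: bytes) -> bool:
    # X.923 check for a single block: last byte n in 1..BLEN, preceded by n-1 zeros
    n = p[-1]
    return 1 <= n <= BLEN and all(b == 0 for b in p[-n:-1])

def make_checkxor(m: bytes):
    # Simulates CHECKXOR: the padding oracle answers whether m XOR s has
    # valid padding. m is the secret plaintext block under attack.
    return lambda s: valid_pad(bytes(a ^ b for a, b in zip(m, s)))

def learn_last_byte(checkxor) -> int:
    # Round 1: XOR only the last byte with each candidate b.
    B = [b for b in range(256)
         if checkxor(bytes(BLEN - 1) + bytes([b]))]
    if len(B) == 1:
        return B[0] ^ 0x01        # m XOR b must have ended in byte 01
    # Round 2: also flip the second-to-last byte (making it nonzero), so
    # only the candidate that produces a final 01 byte survives.
    for b in B:
        if checkxor(bytes(BLEN - 2) + b"\x01" + bytes([b])):
            return b ^ 0x01
```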

### Learning Other Bytes of a Block

Once we have learned one of the trailing bytes of a plaintext block, it is slightly easier to learn additional ones. Suppose we know the last 3 bytes of a plaintext block, as in the example below. We would like to use the padding oracle to discover the 4th-to-last byte.

#### Example:

*Using LEARNPREVBYTE to learn the 4th-to-last byte b ^{∗} when the last 3 bytes of the block are already known.*

Since we know the last 3 bytes of *m*, we can calculate a string *x* such that *m* ⊕ *x* ends in 00 00 04. Now we can try XOR’ing the 4th-to-last byte of *m* ⊕ *x* with different candidate bytes *b*, and asking the padding oracle whether the result has valid padding. Valid padding only occurs when the result has 00 in its 4th-to-last byte, and this happens exactly when the 4th-to-last byte of *m* matches our candidate byte *b*.

The process is summarized in the LEARNPREVBYTE subroutine in Figure 10.1. In the worst case, this subroutine makes 256 queries to the padding oracle.
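A hedged sketch of LEARNPREVBYTE, using the same simulated CHECKXOR oracle as in the LEARNLASTBYTE sketch (all names are our own, and the suffix length is assumed to be at least 1):

```python
BLEN = 16

def valid_pad(p: bytes) -> bool:
    n = p[-1]
    return 1 <= n <= BLEN and all(b == 0 for b in p[-n:-1])

def make_checkxor(m: bytes):
    # Padding oracle on m XOR s, where m is the secret block.
    return lambda s: valid_pad(bytes(a ^ b for a, b in zip(m, s)))

def learn_prev_byte(checkxor, known_suffix: bytes) -> int:
    p = len(known_suffix)
    # Choose x so the known tail of m XOR x becomes p-1 zeros, then the
    # pad byte p+1 (e.g., 00 00 04 when 3 bytes are known).
    tail = bytes(a ^ b for a, b in zip(known_suffix,
                                       bytes(p - 1) + bytes([p + 1])))
    for b in range(256):
        # Padding is valid only when the unknown byte XOR b equals 00,
        # i.e., when b equals the unknown byte itself.
        if checkxor(bytes(BLEN - p - 1) + bytes([b]) + tail):
            return b
```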

**Putting it all together**. We now have all the tools required to decrypt *any ciphertext* using only the padding oracle. The process is summarized below in the LEARNALL subroutine.

In the worst case, 256 queries to the padding oracle are required to learn each byte of the plaintext.^{2} However, in practice the number can be much lower. The example in this section was inspired by a real-life padding oracle attack^{3} which includes optimizations that allow an attacker to recover each plaintext byte with only 14 oracle queries on average.

^{2}It might take more than 256 queries to learn the last byte. But whenever LEARNLASTBYTE uses more than 256 queries, the side effect is that you’ve also learned that some other bytes of the block are zero. This saves you from querying the padding oracle altogether to learn those bytes.

^{3}*How to Break XML Encryption*, Tibor Jager and Juraj Somorovsky. ACM CCS 2011.

**10.2: What Went Wrong?**

Why didn’t the CPA-security guarantee of CBC encryption save us from padding oracle attacks? How was an attacker able to completely obliterate the privacy of encryption?

- First, CBC encryption (in fact, every encryption scheme we’ve seen so far) has a property called **malleability**. Given an encryption *c* of an *unknown* plaintext *m*, it is possible to generate another ciphertext *c′* whose contents are *related to m in a predictable way*. In the case of CBC encryption, if ciphertext *c*_{0} · · · *c*_{ℓ} encrypts a plaintext *m*_{1} · · · *m*_{ℓ}, then ciphertext (*c*_{i−1} ⊕ *x*, *c*_{i}) encrypts the *related* plaintext *m*_{i} ⊕ *x*. In short, if an encryption scheme is malleable, then it allows information contained in one ciphertext to be “transferred” to another ciphertext.
- Second, you may have noticed that the CPA security definition makes no mention of the Dec algorithm. The Dec algorithm shows up in our definition for *correctness*, but it is nowhere to be found in the ℒ_{cpa-*} libraries. Apparently decryption has no impact on *security*?

**Figure 10.1**: *Summary of padding oracle attack*

But the padding oracle setting involved the Dec algorithm — in particular, the adversary was allowed to see some information about the result of Dec applied to adversarially-chosen ciphertexts. Because of that, the padding oracle setting is no longer modeled well by the CPA security definition.

The bottom line is: give an attacker a malleable encryption scheme and access to any partial information related to decryption, and he/she can get information to leak out in surprising ways. As the padding-oracle attack demonstrates, even if *only a single bit of information* (i.e., the answer to a yes/no question) is leaked about the result of decryption, this is frequently enough to extract the *entire plaintext*.

If we want security even under the padding-oracle scenario, we need a better security definition and encryption schemes that achieve it. That’s what the rest of this chapter is about.

#### Discussion

**Is this a realistic concern?** You may wonder whether this whole situation is somewhat contrived just to give cryptographers harder problems to solve. Indeed, that was probably a common attitude towards the security definitions introduced in this chapter. However, in 1998, Daniel Bleichenbacher demonstrated a devastating attack against early versions of the SSL protocol. By presenting millions of carefully crafted ciphertexts to a webserver, an attacker could eventually recover arbitrary SSL session keys.

In practice, it is hard to make the external behavior of a server not dependent on the result of decryption. In the case of padding oracle attacks, mistakes in implementation can lead to different error messages for invalid padding. In other cases, the timing of the server’s response can depend on the decrypted value, in a way that allows similar attacks.

As we will see, it *is* in fact possible to provide security in these kinds of settings, and with low additional overhead. These days there is rarely a good excuse for using encryption which is only CPA-secure.

- Padding is in the name of the attack, but padding is not the culprit. The culprit is using a (merely) CPA-secure encryption scheme while allowing some information to leak about the result of decryption. The exercises expand on this further.
- The attack seems superficially like brute force, but it is not. The attack makes 256 queries per byte of plaintext, so it costs about 256*ℓ* queries for a plaintext of *ℓ* bytes. Brute-forcing the entire plaintext would cost 2^{8ℓ}, since that’s how many *ℓ*-byte plaintexts there are. So the attack is exponentially better than brute force. The lesson is: brute-forcing small pieces at a time is much better than brute-forcing the entire thing.
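For concreteness, the gap for a single 16-byte plaintext (assuming the worst case of 256 queries per byte) can be computed directly:

```python
ell = 16                       # plaintext length in bytes (one AES-sized block)
attack_cost = 256 * ell        # worst-case oracle queries, one byte at a time
brute_force = 2 ** (8 * ell)   # number of candidate ell-byte plaintexts

assert attack_cost == 4096
assert brute_force == 2 ** 128   # astronomically larger than 4096
```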

**10.3: Defining CCA Security**

Our goal now is to develop a new security definition — one that considers an adversary that can construct malicious ciphertexts and observe the effects caused by their decryption. We will start with the basic approach of CPA security, where there are left and right libraries that differ only in which of two plaintexts they encrypt.

In a typical system, an adversary might be able to learn only some specific *partial information* about the Dec process. In the padding oracle attack, the adversary was able to learn only whether the result of decryption had valid padding.

However, we are trying to come up with a security definition that is useful *no matter how* the encryption scheme is deployed. How can we possibly anticipate every kind of partial information that might make its way to the adversary in every possible usage of the encryption scheme? The safest choice is to be as pessimistic as possible, as long as we end up with a security notion that we can actually achieve in the end. So **let’s just allow the adversary to decrypt arbitrary ciphertexts of its choice**. In other words, if we can guarantee security when the adversary has *full information* about decrypted ciphertexts, then we certainly have security when the adversary learns only *partial* information about decrypted ciphertexts (as in a typical real-world system).

But this presents a significant problem. An adversary can do *c*^{∗} := CHALLENGE(*m*_{L}, *m*_{R}) to obtain a challenge ciphertext, and then immediately ask for that ciphertext *c*^{∗} to be decrypted. This will obviously reveal to the adversary whether it is linked to the left or right library from the security definition.

So, simply providing unrestricted Dec access to the adversary cannot lead to a reasonable security definition (it is a security definition that can never be satisfied). But let’s imagine the *smallest* possible patch to prevent this immediate and obvious attack. We can allow the adversary to ask for the decryption of **any ciphertext, except those produced in response to CHALLENGE queries**. In doing so, we arrive at the final security definition: security against chosen-ciphertext attacks, or CCA security:

### Definition 10.1: CCA Security

*Let Σ be an encryption scheme. We say that Σ has security against chosen-ciphertext attacks (CCA security) if* ℒ^{Σ}_{cca-L} ≋ ℒ^{Σ}_{cca-R}, *where:*

In this definition, the set *S* keeps track of the ciphertexts that have been generated by the CHALLENGE subroutine. The DEC subroutine rejects these ciphertexts outright, but will gladly decrypt any other ciphertext of the adversary’s choice.
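As a sketch of this bookkeeping, the library might be rendered in Python as follows. The scheme used here (XOR with the key) is a toy of our own choosing and is certainly not secure; the point is only how the set *S* restricts DEC:

```python
import os

class CCALibrary:
    """Sketch of the L_cca-left / L_cca-right libraries. The toy scheme
    (XOR with the key) is NOT secure; it only illustrates how the set S
    restricts the DEC subroutine."""

    def __init__(self, left: bool):
        self.k = os.urandom(16)
        self.left = left
        self.S = set()                   # ciphertexts produced by CHALLENGE

    def _enc(self, m):                   # toy Enc = toy Dec (XOR with key)
        return bytes(a ^ b for a, b in zip(self.k, m))

    def challenge(self, mL: bytes, mR: bytes):
        c = self._enc(mL if self.left else mR)
        self.S.add(c)
        return c

    def dec(self, c: bytes):
        if c in self.S:
            return None                  # "err": refuse challenge ciphertexts
        return self._enc(c)              # any other ciphertext is decrypted
```

The only difference between the left and right libraries is which plaintext CHALLENGE encrypts; DEC is identical in both.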

**Pseudorandom ciphertexts.** We can also modify the pseudorandom-ciphertexts security definition (CPA$ security) in a similar way:

### Definition 10.2: CCA$ Security

*Let Σ be an encryption scheme. We say that Σ has pseudorandom ciphertexts in the presence of chosen-ciphertext attacks (CCA$ security) if* ℒ^{Σ}_{cca$-real} ≋ ℒ^{Σ}_{cca$-rand}, *where:*

**10.4: CCA Insecurity of Block Cipher Modes**

With the padding oracle attack, we already showed that CBC mode does not provide security in the presence of chosen-ciphertext attacks. But that attack was quite complicated, since the adversary was restricted to learning just 1 bit of information at a time about a decrypted ciphertext. An attack in the full-fledged CCA setting can be much more direct.

Consider the adversary below, attacking the CCA security of CBC mode (with block length *blen*):

It can easily be verified that this adversary achieves advantage 1 in distinguishing ℒ_{cca-L} from ℒ_{cca-R}. The attack uses the fact (also used in the padding oracle attack) that if *c*_{0}*c*_{1}*c*_{2} encrypts *m*_{1}*m*_{2}, then *c*_{0}*c*_{1} encrypts *m*_{1}. Ciphertext *c*_{0}*c*_{1} is clearly related to *c*_{0}*c*_{1}*c*_{2} in an obvious way, but it is *different* than *c*_{0}*c*_{1}*c*_{2}, so the ℒ_{cca-*} libraries happily decrypt it.
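The truncation attack can be demonstrated end to end. The sketch below uses a toy block cipher (XOR with the key); the attack relies only on CBC's structure, so it works identically with a real block cipher. All names are our own:

```python
import os

BLEN = 16

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Toy block cipher (XOR with key). Insecure, but CBC's structure, and
# hence this attack, is the same for any block cipher.
def F(k, x):
    return xor(k, x)

F_inv = F

def cbc_enc(k, blocks):
    c = [os.urandom(BLEN)]                       # c0 is the IV
    for m in blocks:
        c.append(F(k, xor(m, c[-1])))
    return c

def cbc_dec(k, c):
    return [xor(F_inv(k, c[i]), c[i - 1]) for i in range(1, len(c))]

def adversary(challenge, dec):
    # Request a 2-block challenge, then ask DEC about the truncation
    # c0 c1, which differs from c0 c1 c2 and so is not refused.
    mL, mR = b"\x00" * BLEN, b"\xff" * BLEN
    c = challenge([mL, mL], [mR, mR])
    return "left" if dec(c[:2])[0] == mL else "right"
```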

Perhaps unsurprisingly, there are many very simple ways to catastrophically attack the CCA security of CBC-mode encryption. Here are some more (where x̄ denotes the result of flipping every bit in *x*):

The first attack uses the fact that modifying *c*_{2} has no effect on the first plaintext block. The second attack uses the fact that flipping every bit in the IV flips every bit in *m*_{1}.

**10.5: A Simple CCA-Secure Scheme**

Recall the definition of a **strong** pseudorandom permutation (PRP) (Definition 7.6). A strong PRP is one that is indistinguishable from a randomly chosen permutation, even to an adversary that can make both *forward* (*i.e.*, to *F*) and *reverse* (*i.e.*, to *F*^{−1}) queries.

This concept has some similarity to the definition of CCA security, in which the adversary can make queries to both Enc and its inverse Dec. Indeed, a strong PRP can be used to construct a CCA-secure encryption scheme in a natural way:

### Construction 10.3

*Let F be a pseudorandom permutation with block length blen* = *n* + *λ*. *Define the following encryption scheme with message space* ℳ = {0,1}^{*n*}:

In this scheme, *m* is encrypted by appending a random nonce *r* to it, then applying a PRP. We can informally reason about the security of this scheme as follows:

- Imagine an adversary linked to one of the CCA libraries. As long as the random value *r* does not repeat, all inputs to the PRP are distinct. The guarantee of a pseudorandom function/permutation is that its outputs (which are the *ciphertexts* in this scheme) will therefore all look independently uniform.
- The CCA library prevents the adversary from asking for *c* to be decrypted, if *c* was itself generated by the library. For any other value *c′* that the adversary asks to be decrypted, the guarantee of a *strong* PRP is that the result will look independently random. In particular, the result will not depend on the choice of plaintexts used to generate challenge ciphertexts. Since this choice of plaintexts is the only difference between the two CCA libraries, these decryption queries (intuitively) do not help the adversary.
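The mechanics of the construction can be sketched as follows. The stand-in permutation here (XOR with the key) is our own toy choice; a real instantiation must use a strong PRP, and the byte lengths `N` and `LAM` are assumptions for illustration:

```python
import os

N, LAM = 16, 16              # message length n and nonce length lambda (bytes)
BLEN = N + LAM               # PRP block length

def F(k, x):
    # Stand-in permutation (XOR with the key). A real instantiation must
    # be a STRONG PRP; this toy only checks the mechanics.
    return bytes(a ^ b for a, b in zip(k, x))

F_inv = F

def enc(k, m):
    r = os.urandom(LAM)      # fresh random nonce
    return F(k, m + r)       # a single PRP call on m || r

def dec(k, c):
    return F_inv(k, c)[:N]   # invert the PRP, discard the nonce

k = os.urandom(BLEN)
m = b"sixteen byte msg"
assert dec(k, enc(k, m)) == m    # correctness: F is a permutation
```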

More formally, we can prove the security of this scheme as follows:

### Claim 10.4

*If F is a strong PRP (Definition 7.6) then Construction 10.3 has CCA security.*

**Exercises:**

10.1: There is nothing particularly bad about padding schemes. They are only a target because padding is a commonly used structure in plaintexts that is verified upon decryption.

A *null character* is simply the byte 00. Suppose you have access to the following oracle:

Suppose you are given *c*^{∗} := Enc(*k*, *m*^{∗}) for some unknown plaintext *m*^{∗} and unknown key *k*. Assume that the length of *m*^{∗} is a multiple of the blocklength (so no padding is used), and that *m*^{∗} contains no null characters.

- Show how to use the oracle to completely decrypt *m*^{∗}, when Enc uses CBC-mode encryption.
- Show how to use the oracle to completely decrypt *m*^{∗}, when Enc uses CTR-mode encryption.

10.2: PKCS#7 is a standard for padding that uses padding strings 01, 02 02, 03 03 03, etc. Show how to decrypt arbitrary CBC-encrypted ciphertexts using a padding oracle that checks for correct PKCS#7 padding.

10.3: Sometimes encryption is as good as decryption, to an adversary.

(a). Suppose you have access to the following **encryption** oracle, where *s* is a secret that is consistent across all calls:

Yes, this question is referring to the awful **ECB** encryption mode (Construction 9.1). Describe an attack that efficiently recovers all of *s* using access to ECBORACLE. Assume that if the length of *m*‖*s* is not a multiple of the blocklength, then ECB mode will pad it with null bytes.

*Hint:* by varying the length of *m*, you can control where the block-division boundaries are in *s*.

(b). Now suppose you have access to a CBC encryption oracle, where you can control the IV that is used:

Describe an attack that efficiently recovers all of *s* using access to CBORACLE. As above, assume that *m*‖*s* is padded to a multiple of the blocklength in some way. It is possible to carry out the attack no matter what the padding method is, as long as the padding method is known to the adversary.

10.4: Prove formally that CCA$ security implies CCA security.

10.5: Let Σ be an encryption scheme with message space {0,1}^{*n*}, and define Σ^{2} to be the following encryption scheme with message space {0,1}^{2*n*}:

(a). Prove that if Σ has CPA security, then so does Σ^{2}.

(b). Show that even if Σ has CCA security, Σ^{2} does not. Describe a successful distinguisher and compute its distinguishing advantage.

10.6: Show that the following block cipher modes do not have CCA security. For each one, describe a successful distinguisher and compute its distinguishing advantage.

10.7: Show that none of the schemes in Exercise 8.4 have CCA security. For each one, describe a successful distinguisher and compute its distinguishing advantage.

10.8: Alice believes that a CBC encryption of 0^{blen}‖*m* (where *m* is a single block) gives CCA security, when the Dec algorithm rejects ciphertexts when the first plaintext block is not all zeroes. Her reasoning is:

- The ciphertext has 3 blocks (including the IV). If an adversary tampers with the IV or the middle block of a ciphertext, then the first plaintext block will no longer be all zeroes and the ciphertext is rejected.
- If an adversary tampers with the last block of a ciphertext, then it will decrypt to 0^{blen}‖*m′*, where *m′* is unpredictable from the adversary’s point of view. Hence it will leak no information about the original *m*.

Is she right? Let CBC denote the encryption scheme obtained by using a secure PRF in CBC mode. Below we define an encryption scheme Σ*‘* with message space {0,1}^{*blen*} and ciphertext space {0,1}^{3*blen*}:

Show that Σ*‘* does **not** have CCA security. Describe a distinguisher and compute its distinguishing advantage. What part of Alice’s reasoning was not quite right?

*Hint:* Obtain a ciphertext *c* = *c*_{0}*c*_{1}*c*_{2} and another ciphertext *c*′ = *c*′_{0}*c*′_{1}*c*′_{2}, both with known plaintexts. Ask the library for the decryption of *c*_{0}*c*_{1}*c*′_{2}.

10.9: CBC and OFB modes are malleable in very different ways. For that reason, Mallory claims that encrypting a plaintext (independently) with both modes results in CCA security, when the Dec algorithm rejects ciphertexts whose OFB and CBC plaintexts don’t match. Is she right?

Let CBC denote the encryption scheme obtained by using a secure PRF in CBC mode. Let OFB denote the encryption scheme obtained by using a secure PRF in OFB mode. Below we define an encryption scheme Σ*‘*:

Show that Σ*‘* does **not** have CCA security. Describe a distinguisher and compute its distinguishing advantage.

10.10: This problem is a generalization of the previous one. Let Σ_{1} and Σ_{2} be two (possibly different) encryption schemes with the same message space ℳ. Below we define an encryption scheme Σ*‘*:

Show that Σ*‘* does **not** have CCA security, even if both Σ_{1} and Σ_{2} have CCA security. Describe a distinguisher and compute its distinguishing advantage.

10.11: Consider any padding scheme consisting of subroutines PAD (which adds valid padding to its argument) and VALIDPAD (which checks its argument for valid padding). Assume that VALIDPAD(PAD(*x*)) = 1 for all strings *x*.

Show that if an encryption scheme Σ has CCA security, then the following two libraries are indistinguishable:

That is, a CCA-secure encryption scheme hides underlying plaintexts in the presence of padding-oracle attacks.

*Note:* The distinguisher can even send a ciphertext *c* obtained from CHALLENGE as an argument to PADDINGORACLE. Your proof should somehow account for this when reducing to the CCA security of Σ.

10.12: Show that an encryption scheme Σ has CCA$ security if and only if the following two libraries are indistinguishable:

*Note:* In ℒ_{left}, the adversary can obtain the decryption of *any* ciphertext via DEC. In ℒ_{right}, the DEC subroutine is “patched” (via *D*) to give reasonable answers to ciphertexts generated in CHALLENGE.