
Until very recently, cryptography seemed doomed to be a cat-and-mouse game. Someone would come up with a way to encrypt, someone else would find a way to break the encryption, and this process would repeat again and again. Crypto-enthusiast Edgar Allan Poe wrote in 1841,

“Human ingenuity cannot concoct a cypher which human ingenuity cannot resolve.”

With the benefit of hindsight, we can now see that Poe’s sentiment is not true. Modern cryptography is full of schemes that we can **prove** are secure in a very specific sense.

If we wish to prove things about security, we must be very precise about what exactly we mean by “security.” Our study revolves around formal *security definitions*. In this chapter, we will learn how to write, understand, and interpret the meaning of a security definition; how to prove security using the technique of *hybrids*; and how to demonstrate insecurity by showing an attack violating a security definition.

**2.1: Reasoning about Information Hiding via Code Libraries**

All of the security definitions in this course are defined using a common methodology, which is based on familiar concepts from programming. The main idea is to formally define the “allowed” usage of a cryptographic scheme through a programmatic *interface*, and to define what information is hidden in terms of two *implementations* of that interface (libraries).

### Definition 2.1: Libraries:

*A **library** ℒ is a collection of subroutines and private/static variables. A library’s **interface** consists of the names, argument types, and output type of all of its subroutines. If a program 𝒜 includes calls to subroutines in the interface of ℒ, then we write 𝒜 ◊ ℒ to denote the result of **linking** 𝒜 to ℒ in the natural way (answering those subroutine calls using the implementation specified in ℒ). We write 𝒜 ◊ ℒ ⇒ z to denote the event that program 𝒜 ◊ ℒ outputs the value z.*

Some more specifics:

- If 𝒜 ◊ ℒ is a program that makes random choices, then its output is also a random variable.
- We can consider compound programs like 𝒜 ◊ ℒ_{1} ◊ ℒ_{2}. Our convention is that subroutine calls only happen from left to right across the ◊ symbol, so in this example, ℒ_{2} doesn’t call subroutines of 𝒜. We can then think of 𝒜 ◊ ℒ_{1} ◊ ℒ_{2} as (𝒜 ◊ ℒ_{1}) ◊ ℒ_{2} (a compound program linked to ℒ_{2}) or as 𝒜 ◊ (ℒ_{1} ◊ ℒ_{2}) (𝒜 linked to a compound library), whichever is convenient.
- We try to make formal security definitions less daunting by expressing them in terms of elementary CS concepts like libraries, scope, etc. But one must not forget that the ultimate goal of security definitions is to be mathematically precise enough that we can actually prove things about security.

For this reason, we need to have a handle on exactly what information the calling program can obtain from the library. We assume that the library’s *explicit interface* is the only way information gets in and out of the library. This is at odds with real-world software, where you can find *implicit* channels of information (*e.g.*, peeking into a library’s internal memory, measuring the response time of a subroutine call, etc.).^{1} In short, don’t get too carried away with the terminology of real-world software — you should think of these libraries more as *mathematical abstractions* than software.

^{1}This doesn’t mean that it’s impossible to reason about attacks where an adversary has implicit channels of information on our cryptographic implementations. It’s just that such channels must be made explicit as far as the definitions go, if one wishes to prove something about what an adversary can learn by that channel. You can’t reason about information that your definition has no way of expressing.

Imagine the interface of a sorting library: when you call a subroutine SORT(*A*), you expect back a list that contains a sorted arrangement of the contents of *A*. As you know, there might be many ways to *implement* such a subroutine SORT. If you have access to only the input/output of SORT, then you would not be able to determine whether SORT was being implemented by mergesort or quicksort, for example, because both algorithms realize the same input-output behavior (sorting a list).
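To make the sorting analogy concrete, here is a small Python sketch (mine, not from the text) of two implementations behind one SORT interface:

```python
# Two implementations of the same SORT interface. A caller that sees only
# inputs and outputs cannot tell which implementation answered its calls.

def sort_builtin(a):
    # "Implementation 1": defer to Python's built-in Timsort.
    return sorted(a)

def sort_merge(a):
    # "Implementation 2": textbook mergesort.
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    left, right = sort_merge(a[:mid]), sort_merge(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    return out + left[i:] + right[j:]
```

For every input list the two functions return identical results, so a caller restricted to the interface cannot tell them apart.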

This kind of idea (that there can be two implementations of the same input-output behavior) is the basis for all of our security definitions. Since we will consider libraries that use internal randomness, the outputs of subroutines may be probabilistic and we have to be careful about what we mean by “same input-output behavior.” A particularly convenient way to express this is to say that two libraries ℒ_{left} and ℒ_{right} have the same input-output behavior if no calling program behaves differently when linking to ℒ_{left} vs. ℒ_{right}. More formally,

### Definition 2.2: Interchangeable:

*Let ℒ_{left} and ℒ_{right} be two libraries with a common interface. We say that ℒ_{left} and ℒ_{right} are **interchangeable**, and write ℒ_{left} ≡ ℒ_{right}, if for all programs 𝒜 that output a single bit, Pr[𝒜 ◊ ℒ_{left} ⇒ 1] = Pr[𝒜 ◊ ℒ_{right} ⇒ 1].*

#### Discussion:

- There is nothing special about defining interchangeability in terms of the calling program giving output 1. Since the only possible outputs are 0 and 1, we have Pr[𝒜 ◊ ℒ_{left} ⇒ 0] = 1 − Pr[𝒜 ◊ ℒ_{left} ⇒ 1] = 1 − Pr[𝒜 ◊ ℒ_{right} ⇒ 1] = Pr[𝒜 ◊ ℒ_{right} ⇒ 0].

- It is a common pitfall to imagine the program 𝒜 being *simultaneously* linked to both libraries. But in the definition, calling program 𝒜 is only ever linked to one of the libraries at a time.
- We have restricted the calling program 𝒜 to a single bit of output, which might seem unnecessarily restrictive. However, the definition says that the two libraries have the same effect on *all* calling programs. In particular, the libraries must have the same effect on a calling program 𝒜 whose only goal is to **distinguish** between these particular libraries. A single output bit is sufficient for this distinguishing task — just interpret the output bit as a “guess” for which library 𝒜 thinks it is linked to. For this reason, we will often refer to the calling program 𝒜 as a **distinguisher**.
- Taking the previous observation even further, the definition applies against calling programs 𝒜 that “know everything” about (more formally, whose code is allowed to depend arbitrarily on) the two libraries. This is a reflection of **Kerckhoffs’ principle**, which roughly says “assume that the attacker has full knowledge of the system.”^{2}

^{2} “Il faut qu’il n’exige pas le secret, et qu’il puisse sans inconvénient tomber entre les mains de l’ennemi.” (Auguste Kerckhoffs, 1883.) Translation: [The method] must not be required to be secret, and it can fall into the enemy’s hands without causing inconvenience.

  There is, however, a subtlety that deserves careful attention. Our definitions will typically involve libraries that use internal randomness. Kerckhoffs’ principle allows the calling program to know *which libraries* are used, which in this case corresponds to *how* a library will choose randomness (i.e., from which distribution). It doesn’t mean that the adversary will know *the result* of the libraries’ choice of randomness (i.e., the values of all internal variables in the library). It’s the difference between knowing that you will choose a random card from a deck (i.e., the uniform distribution on a set of 52 items) versus reading your mind to know exactly what card you chose. This subtlety shows up in our definitions in the following way. First, we specify two libraries, *then* we consider a particular distinguisher, and *only then* do we link and execute the distinguisher with a library. The distinguisher cannot depend on the random choices made by the library, since the choice of randomness “happens after” the distinguisher is fixed. In the context of security definitions, think of Kerckhoffs’ principle as: *assume that the distinguisher knows every fact in the universe, except for: (1) which of the two libraries it is linked to, and (2) the outcomes of random choices made by the library (often assigned to privately-scoped variables within the library).*
- The definitions here have been chosen to ease gently into future concepts. We will eventually require libraries that have multiple subroutines and maintain state (via static variables) between different subroutine calls.

Defining interchangeability in terms of distinguishers may not seem entirely natural, but it allows us to later have more granularity. Instead of requiring two libraries to have identical input-output behavior, we will be content with libraries that are “similar enough”, and distinguishers provide a conceptually simple way to measure exactly how similar two libraries are.
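As a concrete (and entirely hypothetical, not from the text) illustration of Definition 2.2, one can model libraries as Python classes and estimate a distinguisher’s output probability by repeated sampling:

```python
import random

# A toy example: two libraries with a common one-subroutine interface.
# Both implement QUERY as a fair coin flip, but compute it differently.
# No distinguisher can tell them apart, so they are interchangeable.

class LibLeft:
    def query(self):
        return random.randrange(2)        # a uniform bit, sampled directly

class LibRight:
    def query(self):
        return 1 - random.randrange(2)    # a uniform bit, sampled then flipped

def estimate_pr_output_1(lib_class, distinguisher, trials=100_000):
    # Empirically estimate Pr[A linked to L outputs 1], over the
    # library's internal randomness, by independent runs.
    return sum(distinguisher(lib_class()) for _ in range(trials)) / trials

# One arbitrary single-bit distinguisher: output whatever QUERY returns.
def dist(lib):
    return lib.query()
```

Both estimates come out near 1/2 for any single-bit distinguisher plugged in, which is exactly what the definition of interchangeability demands.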

The following lemma will be useful throughout the course as we deal with libraries in security proofs:

### Lemma 2.3: Composition:

*If* ℒ_{left} ≡ ℒ_{right} *then* ℒ^{∗} ◊ ℒ_{left} ≡ ℒ^{∗} ◊ ℒ_{right} *for any library* ℒ^{∗}.

#### Proof:

Take an arbitrary calling program 𝒜 and consider the compound program 𝒜 ◊ ℒ^{∗} ◊ ℒ_{left}. We can interpret this program as a calling program 𝒜 linked to the library ℒ^{∗} ◊ ℒ_{left}, or alternatively as a calling program 𝒜 ◊ ℒ^{∗} linked to the library ℒ_{left}. After all, 𝒜 ◊ ℒ^{∗} is some program that makes calls to the interface of ℒ_{left}. Since ℒ_{left} ≡ ℒ_{right}, swapping ℒ_{left} for ℒ_{right} has no effect on the output of any calling program. In particular, it has no effect when the calling program happens to be 𝒜 ◊ ℒ^{∗}. Hence we have:

Pr[𝒜 ◊ ℒ^{∗} ◊ ℒ_{left} ⇒ 1] = Pr[(𝒜 ◊ ℒ^{∗}) ◊ ℒ_{left} ⇒ 1] = Pr[(𝒜 ◊ ℒ^{∗}) ◊ ℒ_{right} ⇒ 1] = Pr[𝒜 ◊ ℒ^{∗} ◊ ℒ_{right} ⇒ 1].

Since 𝒜 was arbitrary, we have proved the lemma.

#### What Does Any of This Have to do with Security Definitions?

Suppose two libraries ℒ_{left} and ℒ_{right} are interchangeable: two libraries that are different on the inside, but appear the same to all calling programs. Think of the internal *differences* between the two libraries as **information that is perfectly hidden** from the calling program. If the information weren’t perfectly hidden, then the calling program could get a whiff of whether it was linked to ℒ_{left} or ℒ_{right}, and use it to act differently in those two cases. But such a calling program would contradict the fact that ℒ_{left} ≡ ℒ_{right}.

This line of reasoning leads to:

**The Central Principle of Security Definitions™:** *If two libraries are interchangeable, then their common interface leaks no information about their internal differences.*

We can use this principle to define security, as we will see shortly (and throughout the entire course). It is typical in cryptography to want to hide some sensitive information. To argue that the information is really hidden, we define two libraries with a common interface, which formally specifies what an adversary is allowed to do & learn. The two libraries are typically identical except in the choice of the sensitive information. The Principle tells us that when the resulting two libraries are interchangeable, the sensitive information is indeed hidden when an adversary is allowed to do the things permitted by the libraries’ interface.

**2.2: One-Time Secrecy for Encryption**

We now have all the technical tools needed to revisit the security of one-time pad. First, we can restate Claim 1.3 in terms of libraries:

### Claim 2.4: OTP Rule:

*The following two libraries are interchangeable (*i.e., ℒ_{otp-real} ≡ ℒ_{otp-rand}*):*

Note that the two libraries ℒ_{otp-*} indeed have the same interface. Claim 2.4 states that the two libraries also have the same input-output behavior.
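The pseudocode of the two ℒ_{otp-*} libraries does not reproduce in this copy; the following Python sketch (bitstrings modeled as λ-bit integers, class names mine) conveys the idea:

```python
import secrets

LAMBDA = 16  # bit length lambda; an arbitrary choice for this sketch

class OtpReal:
    # QUERY(m): sample a fresh uniform key k and return k XOR m.
    def query(self, m):
        k = secrets.randbits(LAMBDA)
        return k ^ m

class OtpRand:
    # QUERY(m): ignore m entirely and return a fresh uniform string.
    def query(self, m):
        return secrets.randbits(LAMBDA)
```

In ℒ_{otp-real} the ciphertext is a fresh one-time pad encryption of *m*; in ℒ_{otp-rand} it is a fresh uniform string that ignores *m*. The claim is that both versions of QUERY produce a uniformly distributed result for every *m*.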

Claim 2.4 is specific to one-time pad, and in the previous chapter we have argued why it has some relevance to security. However, it’s important to also have a standard definition of security that can apply to *any* encryption scheme. With such a general-purpose security definition, we can design a system in a *modular* way, saying “my system is secure as long as the encryption scheme being used has such-and-such property.” If concerns arise about a particular choice of encryption scheme, then we can easily swap it out for a different one, thanks to the clear abstraction boundary.

In this section, we develop such a general-purpose security definition for encryption. That means it’s time to face the question,

*what does it mean for an encryption scheme to be “secure?”*

To answer that question, let’s first consider a very simplistic scenario in which an eavesdropper sees the encryption of some plaintext (using some unspecified encryption scheme Σ). We will start with the following informal idea:

*seeing a ciphertext should leak no information about the choice of plaintext.*

Our goal is to somehow formalize this property as a statement about interchangeable libraries. Working backwards from The Central Principle of Security Definitions^{TM}, suppose we had two libraries whose interface allowed the calling program to learn a ciphertext, and whose only internal difference was in the choice of plaintext that was encrypted. If those two libraries were interchangeable, then their common interface (seeing a ciphertext) would leak no information about the internal differences (the choice of plaintext).

Hence, we should consider two libraries that look something like:

Indeed, the common interface of these libraries allows the calling program to learn a ciphertext, and the libraries differ only in the choice of plaintext *m_{L}* vs. *m_{R}* (highlighted). We are getting very close! However, the libraries are not quite well-defined — it’s not clear where these plaintexts *m_{L}* and *m_{R}* come from. Should they be fixed, hard-coded constants? Should they be chosen randomly?

A good approach here is actually to let the *calling program itself* choose *m_{L}* and *m_{R}*. Think of this as giving the calling program control over precisely what the difference is between the two libraries. If the libraries are still interchangeable, then seeing a ciphertext leaks no information about the choice of plaintext, *even if you already knew some partial information* about the choice of plaintext — in particular, even if you knew that it was one of only two options, and even if you got to *choose* those two options.

Putting these ideas together, we obtain the following definition:

### Definition 2.5: One-Time Secrecy:

*Let Σ be an encryption scheme. We say that Σ is (perfectly) one-time secret if ℒ^{Σ}_{ots-L} ≡ ℒ^{Σ}_{ots-R}, where:*
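The library pseudocode is likewise missing from this copy; a Python sketch of the idea follows (the method names `keygen`/`enc` are mine, standing in for the scheme’s KeyGen and Enc algorithms):

```python
import secrets

# Sketch of the two one-time secrecy libraries for a generic encryption
# scheme Sigma, modeled here as any object with keygen() and enc(k, m).

class OtsL:
    def __init__(self, sigma):
        self.sigma = sigma
    def query(self, m_left, m_right):
        k = self.sigma.keygen()
        return self.sigma.enc(k, m_left)    # encrypt the LEFT plaintext

class OtsR:
    def __init__(self, sigma):
        self.sigma = sigma
    def query(self, m_left, m_right):
        k = self.sigma.keygen()
        return self.sigma.enc(k, m_right)   # encrypt the RIGHT plaintext

# Example instantiation: one-time pad on 16-bit strings (as integers).
class OTP:
    def keygen(self):
        return secrets.randbits(16)
    def enc(self, k, m):
        return k ^ m
```

The scheme Σ is one-time secret exactly when no single-bit calling program can tell these two libraries apart.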

This security notion is often called *perfect secrecy* in other sources.^{3} The definition is deceptively simple, and we will explore some of its subtleties through some more examples.

^{3}Personally, I think that using the term “perfect” leads to an impression that one-time pad should always be favored over any other kind of encryption scheme (presumably with only “imperfect” security). But if you want encryption, then you should almost never favor plain old one-time pad.

**2.3: Hybrid Proofs of Security**

We will now show that one-time pad satisfies the new security definition. More precisely,

### Theorem 2.6:

*Let* OTP *denote the one-time pad encryption scheme (Construction 1.2). Then* OTP *has one-time secrecy. That is,* ℒ^{OTP}_{ots-L} ≡ ℒ^{OTP}_{ots-R}.

Given what we already know about one-time pad, it’s not out of the question that we could simply “eyeball” the claim ℒ^{OTP}_{ots-L} ≡ ℒ^{OTP}_{ots-R}. Indeed, we have already shown that in both libraries the QUERY subroutine simply returns a uniformly random string. A direct proof along these lines is certainly possible.

Instead of directly relating the behavior of the two libraries, however, we will show that:

ℒ^{OTP}_{ots-L} ≡ ℒ_{hyb-1} ≡ ℒ_{hyb-2} ≡ ℒ_{hyb-3} ≡ ℒ_{hyb-4} ≡ ℒ^{OTP}_{ots-R},

where ℒ_{hyb-1}, . . . , ℒ_{hyb-4} are a sequence of what we call **hybrid** libraries. (It is not hard to see that the “≡” relation is transitive, so this proves that ℒ^{OTP}_{ots-L} ≡ ℒ^{OTP}_{ots-R}.) This style of proof is called a **hybrid proof**, and it will be the standard way to prove security throughout this course.

For the security of one-time pad, such a hybrid proof is likely overkill. But we use this opportunity to introduce the technique, since nearly every security proof we will see in this class will use the hybrid technique. Hybrid proofs have the advantage that it can be quite easy to justify that *adjacent* hybrids (e.g., ℒ_{hyb-i} and ℒ_{hyb-(i + 1)}) are interchangeable, so the method scales well even in proofs where the “endpoints” of the hybrid sequence are quite different.

#### Proof:

As described above, we will prove that

ℒ^{OTP}_{ots-L} ≡ ℒ_{hyb-1} ≡ ℒ_{hyb-2} ≡ ℒ_{hyb-3} ≡ ℒ_{hyb-4} ≡ ℒ^{OTP}_{ots-R},

for a particular sequence of ℒ_{hyb-i} libraries that we choose. For each hybrid, we highlight the differences from the previous one, and argue why adjacent hybrids are interchangeable.

As promised, the hybrid sequence begins with ℒ^{OTP}_{ots-L}. The details of one-time pad have been filled in and highlighted.

Factoring out a block of statements into a subroutine does not affect the library’s behavior. Note that the new subroutine is exactly the ℒ_{otp-real} library from Claim 2.4 (with the subroutine name changed to avoid naming conflicts). This is no accident!

ℒ_{otp-real} has been replaced with ℒ_{otp-rand}. From Claim 2.4 along with the library composition lemma (Lemma 2.3), this change has no effect on the library’s behavior.

The argument to QUERY’ has been changed from *m_{L}* to *m_{R}*. This has no effect on the library’s behavior, since QUERY’ does not actually use its argument in these hybrids.

The previous transition is the most important one in the proof, as it gives insight into how we came up with this particular sequence of hybrids. Looking at the desired endpoints of our sequence of hybrids — ℒ^{OTP}_{ots-L} and ℒ^{OTP}_{ots-R} — we see that they differ only in swapping *m_{L}* for *m_{R}*. If we are not comfortable eyeballing things, we’d like a better justification for why it is “safe” to exchange *m_{L}* for *m_{R}*. However, the one-time pad rule (Claim 2.4) shows that ℒ^{OTP}_{ots-L} in fact has the same behavior as a library ℒ_{hyb-2} that doesn’t use either of *m_{L}* or *m_{R}*. Now, in a program that doesn’t use *m_{L}* or *m_{R}*, it is clear that we can switch them.

Having made this crucial change, we can now perform the same sequence of steps, but in reverse.

ℒ_{otp-rand} has been replaced with ℒ_{otp-real}. Again, this has no effect on the library’s behavior, due to Claim 2.4. Note that the library composition lemma is being applied with respect to a different common ℒ^{∗} than before (now *m_{R}* instead of *m_{L}* is being used).

A subroutine call has been inlined, which has no effect on the library’s behavior. The result is exactly ℒ^{OTP}_{ots-R}.

Putting everything together, we showed that ℒ^{OTP}_{ots-L} ≡ ℒ_{hyb-1} ≡ · · · ≡ ℒ_{hyb-4} ≡ ℒ^{OTP}_{ots-R}. This completes the proof, and we conclude that one-time pad satisfies the definition of one-time secrecy.

#### Discussion:

We have now seen our first example of a hybrid proof. The example illustrates features that are common to all hybrid proofs used in this course:

- Proving security amounts to showing that two particular libraries, say ℒ_{left} and ℒ_{right}, are interchangeable.
- To show this, we exhibit a sequence of hybrid libraries, beginning with ℒ_{left} and ending with ℒ_{right}. The hybrid sequence corresponds to a sequence of *allowable modifications* to the library. Each modification is small enough that we can easily justify why it doesn’t affect the calling program’s output probability.
- Simple things like factoring out & inlining subroutines, changing unused variables, consistently renaming variables, removing & changing unreachable statements, or unrolling loops are always “allowable” modifications in a hybrid proof. As we progress in the course, we will add to our toolbox of allowable modifications. For instance, if we want to prove the security of something that uses a one-time-secret encryption scheme Σ as one of its components, then we are allowed to replace ℒ^{Σ}_{ots-L} with ℒ^{Σ}_{ots-R} as one of the steps in the hybrid proof.

**2.4: Demonstrating Insecurity with Attacks**

We have seen an example of proving the security of a construction. To show that a construction is *insecure*, we demonstrate an **attack**. An attack means a counterexample to the definition of security. Since we define security in terms of two interchangeable libraries, an attack is a **distinguisher** (calling program) that behaves as differently as possible when linked to the two libraries.

Below is an example of an insecure construction:

### Construction 2.7:

To encrypt a plaintext *m*, the scheme simply rearranges its bits according to the permutation *k*.

### Claim 2.8:

*Construction 2.7 does not have one-time secrecy.*

#### Proof:

Our goal is to construct a program 𝒜 so that Pr[𝒜 ◊ ℒ^{Σ}_{ots-L} ⇒ 1] and Pr[𝒜 ◊ ℒ^{Σ}_{ots-R} ⇒ 1] are different, where Σ refers to Construction 2.7. There are probably many “reasons” why this construction is insecure, each of which leads to a different distinguisher 𝒜. We need only demonstrate one such 𝒜, and it’s generally a good habit to try to find one that makes the probabilities Pr[𝒜 ◊ ℒ^{Σ}_{ots-L} ⇒ 1] and Pr[𝒜 ◊ ℒ^{Σ}_{ots-R} ⇒ 1] as different as possible.

One immediate observation about the construction is that it only rearranges bits of the plaintext, without modifying them. In particular, encryption preserves (leaks) the number of 0s and 1s in the plaintext. By counting the number of 0s and 1s in the ciphertext, we know exactly how many 0s and 1s were in the plaintext. Let’s try to leverage this observation to construct an actual distinguisher.

Any distinguisher must use the interface of the ℒ_{ots-*} libraries; in other words, we should expect the distinguisher to call the QUERY subroutine with *some* choice of *m_{L}* and *m_{R}*, and then do something based on the answer that it gets. If we are the ones writing the distinguisher, we must specify how these arguments *m_{L}* and *m_{R}* are chosen. Following the observation above, we can choose *m_{L}* and *m_{R}* to have a different number of 0s and 1s. An extreme example (and why not be extreme?) would be to choose *m_{L}* = 0^{λ} and *m_{R}* = 1^{λ}. By looking at the ciphertext, we can determine which of *m_{L}*, *m_{R}* was encrypted, and hence which of the two libraries we are currently linked with.

Putting it all together, we define the following distinguisher:
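The distinguisher’s pseudocode is omitted in this copy. As a sketch, here is the full attack in Python (bitstrings as lists of bits; class and function names mine, not the book’s notation):

```python
import random

LAM = 8  # lambda, kept small for illustration

def keygen():
    # A key of Construction 2.7 is a uniformly chosen permutation of the
    # bit positions {0, ..., lambda - 1}.
    k = list(range(LAM))
    random.shuffle(k)
    return k

def enc(k, m):
    # Rearrange the bits of plaintext m (a list of lambda bits) according
    # to the permutation k.
    return [m[k[i]] for i in range(LAM)]

class OtsL:
    def query(self, m_left, m_right):
        return enc(keygen(), m_left)     # encrypt the left plaintext

class OtsR:
    def query(self, m_left, m_right):
        return enc(keygen(), m_right)    # encrypt the right plaintext

def distinguisher(lib):
    # Call QUERY with m_L = 0^lambda and m_R = 1^lambda, then output 1
    # exactly when the ciphertext is all zeroes.
    c = lib.query([0] * LAM, [1] * LAM)
    return 1 if all(bit == 0 for bit in c) else 0
```

Against ℒ^{Σ}_{ots-L} the ciphertext is a permutation of 0^{λ}, so the distinguisher always outputs 1; against ℒ^{Σ}_{ots-R} it is a permutation of 1^{λ}, so it always outputs 0.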

Here is what it looks like when 𝒜 is linked to ℒ^{Σ}_{ots-L} (we have filled in the details of Construction 2.7 in ℒ^{Σ}_{ots-L}):

We can see that *m_{L}* takes on the value 0^{λ}, so each bit of *m_{L}* is 0, and each bit of *c* is 0. Hence, the final output of 𝒜 is 1 (true). We have:

Pr[𝒜 ◊ ℒ^{Σ}_{ots-L} ⇒ 1] = 1.

Here is what it looks like when 𝒜 is linked to ℒ^{Σ}_{ots-R}:

We can see that each bit of *m_{R}*, and hence each bit of *c*, is 1. So 𝒜 will output 0 (false), giving:

Pr[𝒜 ◊ ℒ^{Σ}_{ots-R} ⇒ 1] = 0.

The two probabilities are different, demonstrating that 𝒜 behaves differently (in fact, as differently as possible) when linked to the two libraries. We conclude that Construction 2.7 does not satisfy the definition of one-time secrecy.

**Exercises:**

2.1: Consider the following encryption scheme:

(a) Fill in the details of the Dec algorithm so that the scheme satisfies correctness.

(b) Prove that the scheme satisfies one-time secrecy.

2.2: In abstract algebra, a (finite) **group** is a finite set 𝔾 of items together with an operator ⊗ satisfying the following axioms:

- **Closure**: for all *a*, *b* ∈ 𝔾, we have *a* ⊗ *b* ∈ 𝔾.
- **Identity**: there is a special *identity element e* ∈ 𝔾 that satisfies *e* ⊗ *a* = *a* for all *a* ∈ 𝔾. We typically write “1” rather than *e* for the identity element.
- **Associativity**: for all *a*, *b*, *c* ∈ 𝔾, we have (*a* ⊗ *b*) ⊗ *c* = *a* ⊗ (*b* ⊗ *c*).
- **Inverses**: for all *a* ∈ 𝔾, there exists an *inverse* element *b* ∈ 𝔾 such that *a* ⊗ *b* = *b* ⊗ *a* is the identity element of 𝔾. We typically write “*a*^{−1}” for the inverse of *a*.

Define the following encryption scheme in terms of an arbitrary group (𝔾, ⊗):

(a) Prove that {0,1}^{λ} is a group with respect to the XOR operator. What is the identity element, and what is the inverse of a value *x* ∈ {0,1}^{λ}?

(b) Fill in the details of the Dec algorithm and prove (using the group axioms) that the scheme satisfies correctness.

(c) Prove that the scheme satisfies one-time secrecy.

2.3: Show that the following encryption scheme does **not** have one-time secrecy, by constructing a program that distinguishes the two relevant libraries from the one-time secrecy definition.

2.4: Prove that if an encryption scheme Σ has |Σ.𝒦| < |Σ.ℳ| then it cannot satisfy one-time secrecy. Show an explicit attack.

2.5: Let Σ denote an encryption scheme where Σ.𝒞 = Σ.ℳ, and define Σ^{2} to be the following **double-encryption** scheme:

Prove that if Σ satisfies one-time secrecy, then so does Σ^{2}.

2.6: Let Σ denote an encryption scheme and define Σ^{2} to be the following **encrypt-twice** scheme:

Prove that if Σ satisfies one-time secrecy, then so does Σ^{2}.

2.7: Formally define a variant of the one-time secrecy definition in which the calling program can obtain two ciphertexts (on chosen plaintexts) encrypted under the same key. Call it two-time secrecy.

(a) Suppose someone tries to prove that one-time secrecy implies two-time secrecy. Show where the proof appears to break down.

(b) Describe an attack demonstrating that one-time pad does not satisfy your definition of two-time secrecy.

2.8: Show that the following libraries are interchangeable:

Note that *x* and *y* are swapped in the first two lines, but not in the return statement.

2.9: In this problem we consider modifying one-time pad so that the key is not chosen uniformly. Let 𝒟_{λ} denote the probability distribution over {0,1}^{λ} where we choose the *i*th bit to be 0 with probability 0.4 and 1 with probability 0.6, independently for each *i*.

Let Σ denote the one-time pad encryption scheme, but with the key sampled from the distribution 𝒟_{λ} rather than uniformly from {0,1}^{λ}.

Consider the case of *λ* = 5. A calling program 𝒜 for the ℒ^{Σ}_{ots-*} libraries calls QUERY(01011, 10001) and receives the result 01101. What is the probability that this happens, assuming that 𝒜 is linked to ℒ_{ots-L}? What about when 𝒜 is linked to ℒ_{ots-R}?

2.10: During Alice’s role-playing campaign, it becomes necessary to roll an 8-sided die. Unfortunately, she has misplaced hers! Her only source of randomness is a coin. Help her out by showing that the following two libraries are interchangeable: