
9

ACHIEVING STRONG PRIVACY

9.1   Ng et al.'s Proposal: Wise Adversaries
9.2   Our Proposal: Incorporate the Blinder into the Adversary
9.3   Sampling Algorithms and the ISH Hypothesis
9.4   Knowledge Extractors and Non-Falsifiable Assumptions
9.5   Plaintext-Awareness
      9.5.1   Definitions
      9.5.2   Instances of Plaintext-Aware Encryption Schemes
      9.5.3   From PA+ to PA++ Plaintext-Awareness
9.6   Adapting Vaudenay's Definitions
      9.6.1   Limiting the Adversary's Sampling Queries
      9.6.2   Privacy
9.7   IND-CCA is not Sufficient for Strong Privacy
9.8   Strong Privacy Using Plaintext-Awareness
9.9   Security Proof
      9.9.1   Correctness
      9.9.2   Security
      9.9.3   Privacy
9.10  Perspective and Future Development

As detailed in Section . . , Vaudenay's notion of Strong privacy is impossible to achieve. This result is due to the fact that adversaries are able to send queries to which they already know the answer. For their part, blinders are unable to produce the exact same answer that the RFID system would compute, and that the adversary expects, unless they are able to deduce that information from previous queries, which in itself results in a loss of privacy. However, looking back at the proof we described in Section . . , it becomes apparent that the adversary did not break the privacy of the scheme, in the sense that she did not use protocol messages to compute her final statement. Still, this adversary has been shown to be significant. Therefore, we claim that Vaudenay's definitions do not correctly mirror the notion of privacy they aim to capture.

This chapter is devoted to discussing solutions and tweaks to the model that aim to overcome this limitation. We first sketch Ng et al.'s proposal [NSMSN ] of denying the adversary the ability to issue queries for which "they already know the answer" and show that the formalism given for this statement is not satisfactory.

We then proceed with our fix and argue that it reflects the exact notion of privacy Vaudenay aimed to capture. Our solution consists of merging the blinder and the adversary, i.e., having blinded adversaries simulate protocol messages for themselves. Concretely, this translates into giving the blinders access to all the adversary's inputs, including her random tape, which was missing from the original definition. We also introduce limitations on the adversary's sampling queries, requiring adversaries to be "aware" of their samplings (essentially, we require that adversaries can produce a plausible guess on the real identity of a drawn tag). The benefit of all these modifications is that Strong privacy becomes achievable using a challenge-response protocol built on a plaintext-aware public-key encryption scheme. To show that this notion is almost necessary, we give a counter-example to prove that the same protocol instantiated with an IND-CCA, but not plaintext-aware, cryptosystem is not Strong private.

9.1   Ng et al.'s Proposal: Wise Adversaries

Starting from the observation that the Destructive adversary in the impossibility proof of Strong privacy is already aware of the answer the genuine R interface would produce, and only uses it to distinguish in which world she is, Ng et al. [NSMSN ] proposed to fix the model by disallowing such queries. For this sake, they introduced the notion of "wise" adversaries. A wise adversary is defined as an adversary who does not issue oracle queries for which she "knows" the output. The main argument of [NSMSN ] is the following: if the adversary is wise, then she will never ask the interfaces about the outcome of a protocol session in which she was passive, active, or simulating a tag, if she knows the result of the instance. In this scenario, the universal adversary used by Vaudenay against Strong privacy becomes "unwise" and is thus discarded.


Although they claim to keep Vaudenay's framework and definitions, the question of how to prove privacy is not resolved in [NSMSN ]. Following their definition, an adversary A making q (arbitrary) oracle accesses is wise if no adversary can achieve the same or a smaller probability of success while making fewer than q oracle calls. It turns out that wisdom is a hard notion to manipulate and difficult to prove.

Another issue is whether the notion of "wise adversaries" fits realistic scenarios. One may argue that this kind of adversary seems equivalent in terms of results but, in fact, it is not clear why an adversary would deny herself such an advantage. In fact, this comes back to the definition of knowledge: what does it mean for an algorithm to know something?

9.2   Our Proposal: Incorporate the Blinder into the Adversary

The solution we propose differs from the one proposed by Ng et al. and the others described at the end of Chapter  in that we do not alter the privacy game. In fact, modifying the privacy game in those previous works provoked a loss in the privacy notion captured by the model which, unfortunately, is not quantified. For instance, we can imagine that an adversary defeats privacy by being able to determine whether one RFID tag belongs to a group of tags that had prior communication with the reader. While it is possible that such a statement is covered by their definitions, it is not clear from their work.

Our proposal is to make the blinder's simulation run inside the adversary. That is, we argue that a blinder acting for the adversary, and not for the system as Vaudenay's definitions suggest, should be executed by the former. Consequently, the blinder should be given all the adversary's knowledge, in particular her random tape, which was missing from the original definitions.

Before going into the modifications we propose to Vaudenay's definitions, we dedicate the next three sections to introducing new concepts that will later prove useful.

9.3   Sampling Algorithms and the ISH Hypothesis

Up to this point, we never fully defined sampling algorithms but merely treated them as algorithms implementing probability distributions. This section looks more deeply into the subject.

Definition . (Sampling Algorithm)
An efficient sampling algorithm for a probability distribution p is a polynomial-time probabilistic algorithm in k, denoted Samp, that, on input random coins $\rho \in \{0,1\}^{\ell(k)}$, with $\ell(\cdot)$ being a polynomial function, outputs vector elements from X of dimension d(k), with d also being a polynomial function, that satisfies

$$\forall x \in X^{d} : \; \bigl|\Pr[\mathsf{Samp}(\rho) = x] - p(x)\bigr| = \mathrm{negl}(k).$$

The definition above only considers computational closeness to the original distribution. Although we might have considered statistical or perfect distance, the reason for this restriction is that we will only be interested in sampling requests made by polynomial-time algorithms. In this context, extending to computational distance can only enlarge the set of distributions that an algorithm can submit, without affecting the security proof of the scheme.
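To make the definition concrete, the following is a minimal Python sketch of an efficient sampling algorithm in the above sense, for a Bernoulli distribution; the coin length ELL and the bias p are illustrative values and not part of the definition.

import secrets

ELL = 32  # coin length ell(k) in bits; an illustrative fixed value

def samp_bernoulli(coins: int, p: float = 0.25) -> int:
    # Interpret the coin string as an integer in [0, 2^ELL) and output 1
    # iff it falls below p * 2^ELL; the statistical gap to an exact
    # Bernoulli(p) sample is at most 2^-ELL, i.e., negligible in ELL.
    threshold = int(p * (1 << ELL))
    return 1 if coins < threshold else 0

# Usage: draw fresh coins rho uniformly from {0,1}^ELL, then sample.
rho = secrets.randbelow(1 << ELL)
x = samp_bernoulli(rho)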

 

We also note that the restriction to polynomial-time algorithms for describing the sampling algorithm is due to security considerations: it is often the case in security reductions that the whole environment, adversary plus system, has to be executed by an adversary playing a classical cryptographic game, such as IND-CCA or distinguishing a PRF from a random function. Although Vaudenay overlooked this matter, we find it necessary for the proof of security of simple and weakly-correct RFID schemes, and for the proofs of privacy of the Weak private protocol based on a PRF and the Narrow-Strong private one based on an IND-CPA public-key encryption scheme.

 

However, being able to simulate the adversary and her environment is not always sufficient for the security proof. In many settings, the simulator needs to obtain the randomness of the system. For instance, this happens in complex zero-knowledge systems in which the simulator needs the random tape of the whole system. Damgård first mentioned this limitation when he considered adaptive corruption in multi-party computation schemes [Dam ]. As a solution, he had to restrict the adversary to so-called "good-enough" distributions. A more formal treatment of the problem was subsequently presented by Canetti and Dakdouk [CD ]. Concretely, they proposed the notion of inverse-samplable algorithms, which is centered around the idea that for every possible output of an algorithm, it is possible to efficiently find, i.e., in a polynomial number of steps, a randomness that leads to the same output.

In the sequel, we will be interested in this more specific class of sampling algorithms, called inverse-sampling algorithms. An algorithm Samp is said to be inverse-samplable if there exists a polynomial-time inversion algorithm which, given a sample x from the output of Samp obtained using random coins $\rho \in \{0,1\}^{\ell(k)}$, with $\ell(\cdot)$ being a polynomial function, outputs a $\rho_S$ that is consistent with x, i.e., such that $\mathsf{Samp}(\rho_S) \to x$. Moreover, the choice of $\rho_S$ has to be such that $(\rho_S, x)$ is computationally indistinguishable from $(\rho, x)$ for a $\rho$ uniformly distributed over $\{0,1\}^{\ell(k)}$. We hereafter state the formal definition of such sampling algorithms, as given by Ishai et al. [IKOS ].

 

Definition . (Inverse-Sampling Algorithm)
Given a security parameter k, we say that an efficient sampling algorithm Samp, in k, is inverse-samplable if there exists a polynomial-time inverter algorithm $\mathsf{Samp}^{-1}$, in k, such that the following two games are indistinguishable.

Real game:                                 Fake game:
  $\rho \in_R \{0,1\}^{\ell(k)}$             $\rho \in_R \{0,1\}^{\ell(k)}$
  $x \leftarrow \mathsf{Samp}(\rho)$         $x \leftarrow \mathsf{Samp}(\rho)$
                                             $\rho_S \leftarrow \mathsf{Samp}^{-1}(x)$
  Output $(\rho, x)$                         Output $(\rho_S, x)$

That is, for every polynomial-time distinguisher D we require that

$$\bigl|\Pr[D^{\text{Real game}}(1^k) \to 1] - \Pr[D^{\text{Fake game}}(1^k) \to 1]\bigr| = \mathrm{negl}(k).$$
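For illustration, the Bernoulli sampler sketched in the previous section is inverse-samplable, with perfect rather than merely computational indistinguishability: the inverter resamples coins uniformly from the preimage set of the observed output. This is a toy instance only; one-way sampling algorithms are precisely the cases where no efficient inverter is expected to exist.

import secrets

ELL = 32
THRESHOLD = int(0.25 * (1 << ELL))  # bias p = 0.25, as in the earlier sketch

def samp(coins: int) -> int:
    # Samp: Bernoulli sampler driven by coins uniform over [0, 2^ELL).
    return 1 if coins < THRESHOLD else 0

def samp_inv(x: int) -> int:
    # Samp^{-1}: resample coins uniformly from the preimage set
    # {rho : Samp(rho) = x}. Since the output determines the preimage
    # set exactly, (rho_S, x) has the same distribution as (rho, x):
    # the real and fake games are perfectly indistinguishable here.
    if x == 1:
        return secrets.randbelow(THRESHOLD)
    return THRESHOLD + secrets.randbelow((1 << ELL) - THRESHOLD)

# Fake game: sample x from fresh coins, then forge consistent coins.
x = samp(secrets.randbelow(1 << ELL))
rho_s = samp_inv(x)
assert samp(rho_s) == x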

Definition . (Inverse-Sampling Hypothesis)
The inverse-sampling hypothesis states that for every probability distribution there exists an inverse-samplable algorithm.

This hypothesis says that for every sampling algorithm S1, including one-way sampling algorithms, there exists an inverse-sampling algorithm S2 that can be shown to be indistinguishable from S1. The analysis of ISH by Ishai et al. [IKOS ] shows that the existence of non-interactively extractable one-way function family ensembles, a generalization of knowledge assumptions, together with the existence of non-interactive zero-knowledge proof systems for NP in the common reference string model, implies that ISH does not hold. An interesting side effect of this result is that the existence of plaintext-aware encryption schemes and the validity of the ISH hypothesis are mutually exclusive. This is a direct consequence of the fact that plaintext-aware encryption schemes require knowledge extractors by definition (cf. Definition . ), and that non-interactive zero-knowledge proof systems for NP in the CRS model can be constructed from any trapdoor one-way permutation [FLS ]. As we will later make use of plaintext-aware encryption schemes, we are obliged to assume that ISH does not hold.

9.4   Knowledge Extractors and Non-Falsifiable Assumptions

The notions of knowledge and awareness for interactive Turing machines are defined in terms of computations. That is, a machine is said to know x if it is able to compute f(x) for an arbitrarily chosen function f. Formalizing this notion has proven to be one of the most difficult tasks of theoretical computer science. In the end, the agreed-upon definition is that a Turing machine knows x if there exists another Turing machine that runs in the same complexity class as the former, takes its description along with all its inputs, and outputs x. This last machine is called a knowledge extractor.

To be more concrete, we give an example with extractable one-way functions, which were introduced by Canetti and Dakdouk [CD ]. Besides complying with the classical one-wayness property, such a function has to be such that the "only" way for an algorithm to output an element that has a pre-image under this function is to pick an element from the domain of f and apply the function to it. Again, the term "only way" is formalized by requiring the existence, for every algorithm A, of a knowledge extractor that, having access to all of A's knowledge, i.e., its random tape and a reference to the function that A targeted, either outputs a preimage of A's output or fails if the latter is not in the image of the function. The reason for combining these two notions in one primitive is that it yields a natural abstraction of several knowledge extractors from the literature [Dam , BP a, PX ], in much the same way as the notion of a one-way function is an abstraction of the discrete logarithm assumption. We give the following formal definition, taken from [CD ].

 

Definition . (Extractable One-Way Function Family Ensemble)
Let $f : K \times D \to R$ be a family of one-way functions with respect to a security parameter k. We say that f is an extractable one-way function family if it is one-way and, for every PPT algorithm A that uses $\ell(k)$ random bits, there is a PPT extractor algorithm $\mathcal{A}$ such that

$$\forall k \in \mathbb{N}:\quad \Pr\left[\, y \notin \mathrm{Img}(f_\kappa) \,\lor\, f_\kappa(x) = y \;:\; \begin{array}{l} \kappa \in_R K \\ \rho \in_R \{0,1\}^{\ell(k)} \\ y \leftarrow A(\kappa; \rho) \\ x \leftarrow \mathcal{A}(\kappa; \rho) \end{array} \right] = 1 - \mathrm{negl}(k).$$

Unfortunately, as for one-way functions, the existence of extractable one-way functions and knowledge extractors can only be assumed (and even independently of the assumption that one-way functions exist).
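The following toy harness, a sketch under loose assumptions (SHA-256 standing in for a one-way function, and an adversary whose only strategy is to derive its preimage from its own random tape), illustrates the shape of the extractability experiment: the extractor receives exactly the adversary's inputs and coins and must produce a preimage whenever the adversary produces an image.

import hashlib
import secrets

def f(x: bytes) -> bytes:
    # Stand-in one-way function; SHA-256 is used purely for illustration.
    return hashlib.sha256(x).digest()

def adversary(coins: bytes) -> bytes:
    # A toy A whose only strategy is the "honest" one: derive a preimage
    # from its random tape and evaluate f on it.
    x = coins[:16]
    return f(x)

def extractor(coins: bytes) -> bytes:
    # The knowledge extractor tailored to this A: given the same random
    # tape, it recomputes the preimage A must have used. The assumption
    # is precisely that such an extractor exists for *every* PPT A.
    return coins[:16]

# The extractability experiment: whenever A outputs an image, the
# extractor must return a matching preimage.
rho = secrets.token_bytes(32)
y = adversary(rho)
x = extractor(rho)
assert f(x) == y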

The first assumption in the literature related to the existence of knowledge extractors is due to Damgård [Dam ] and is called the Diffie-Hellman Knowledge (DHK) assumption (it has also been termed the knowledge-of-exponent assumption by Bellare and Palacio [BP a]). In short, this assumption states that the only means for an adversary who is given an element W from a cyclic group in which g is a generator, and who wants to produce a valid Diffie-Hellman tuple (W, g^u, W^u), is to pick u, and that there exists an extractor that, given the adversary's input and randomness, recovers u. Although it has been used in numerous applications, it is not clear whether the assumption is true or false. Moreover, the assumption has the particularity of being as hard to prove as to disprove and has consequently been the target of much criticism [Nao ]. That is, constructing a counter-example is not sufficient to invalidate the DHK assumption, as it would be for classical computational assumptions such as the discrete logarithm or factoring. In fact, to prove that the assumption does not hold, one would need to show that there exists an adversary for which there is no extractor. Still, some variants of the DHK assumption have been shown to be false [BP a]. Defenders of the assumption argue that it is proven to hold in the generic group model [Den b]. However, much like with the random oracle model, some computational assumptions hold in the generic group model but fail as soon as the group is instantiated in any concrete representation [Den ]. That said, much as for the random oracle model, no "concrete" example of such a separation is currently known.
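A minimal sketch of the DHK experiment over a deliberately tiny toy group follows; the modulus, generator, and coin handling are illustrative choices and not part of the assumption's statement.

import secrets

# Toy group: the order-11 subgroup of Z_23^* generated by g = 2.
# Real instantiations use groups of cryptographic size; these
# parameters are purely illustrative.
P, Q, G = 23, 11, 2

def dhk_adversary(W: int, coins: int):
    # The "honest" strategy the DHK assumption claims is the only one:
    # read u off the random tape and output (g^u, W^u).
    u = coins % Q
    return pow(G, u, P), pow(W, u, P)

def dhk_extractor(W: int, coins: int) -> int:
    # Extractor for this adversary: it recovers u from the same tape.
    return coins % Q

w = secrets.randbelow(Q)
W = pow(G, w, P)                 # the challenge element W = g^w
rho = secrets.randbelow(1 << 32)
U, V = dhk_adversary(W, rho)
u = dhk_extractor(W, rho)
# (W, U, V) is a valid Diffie-Hellman tuple, and the extractor explains it.
assert U == pow(G, u, P) and V == pow(W, u, P)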

The DHK assumption was later extended to cover general subset membership problems by Birkett [Bir ]. He called this generalization the subset witness knowledge (SWK) assumption. Based on this assumption, he was able to extend (and correct some parts of) Dent's proof.
