With the limitations of theoretically secure cryptography, it was natural to try to design cryptographic primitives that rely on problems that are computationally hard to solve. A natural candidate for such a problem is an NP-Complete problem.

In this Chapter, we review the LPN problem.

3.1 The LPN Problem

3.1.1 Definition of the Problem

We consider the problem of recovering a secret k-bit vector x. For that purpose, we are given an oracle O_x which knows the vector x and, on each request, answers with a uniformly chosen k-bit vector a and a bit equal to a · x, where · denotes the scalar product over GF(2). In other words, the output distribution of the oracle is

{(a, a · x) : a ∈_R {0, 1}^k}.

This problem is simple to solve using algebraic techniques such as Gaussian elimination. All that is needed for such an algorithm to recover x is k linearly independent vectors a. The cost of an unoptimized implementation of Gaussian elimination is roughly O(k^3). As the vectors are chosen uniformly and independently by O_x, the probability that k vectors returned by the oracle are linearly independent is equal to

∏_{i=1}^{k} (1 − 1/2^i).

According to Euler's pentagonal number theorem, this probability tends to the number (1/2)_∞ ≈ 0.2887 when k tends to infinity. From this, it results that the attack consisting of querying the oracle O_x k times and solving the resulting linear system using Gaussian elimination succeeds with probability bounded below by (1/2)_∞ in time complexity O(k^3).
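To make the noiseless attack concrete, the following Python sketch (illustrative only; the oracle simulation, names, and parameters are our own choices, not part of the original text) queries a simulated O_x, recovers x by Gaussian elimination over GF(2), and evaluates the independence probability ∏_{i=1}^{k} (1 − 1/2^i):

```python
import random

def noiseless_oracle(x, rng):
    # One query to O_x: a uniform k-bit vector a and the bit a . x over GF(2)
    a = [rng.randrange(2) for _ in range(len(x))]
    b = 0
    for ai, xi in zip(a, x):
        b ^= ai & xi
    return a, b

def gaussian_elimination_gf2(samples, k):
    # Rows are [a | b]; full reduction leaves x in the last column,
    # or None if the collected vectors have rank < k
    rows = [a + [b] for a, b in samples]
    pivot = 0
    for col in range(k):
        hit = next((r for r in range(pivot, len(rows)) if rows[r][col]), None)
        if hit is None:
            return None
        rows[pivot], rows[hit] = rows[hit], rows[pivot]
        for r in range(len(rows)):
            if r != pivot and rows[r][col]:
                rows[r] = [u ^ v for u, v in zip(rows[r], rows[pivot])]
        pivot += 1
    return [rows[i][k] for i in range(k)]

rng = random.Random(1)
k = 16
x = [rng.randrange(2) for _ in range(k)]
# Query a few extra times so the system is full rank with high probability
samples = [noiseless_oracle(x, rng) for _ in range(2 * k)]
recovered = gaussian_elimination_gf2(samples, k)

# Probability that exactly k queries already yield an independent system
prob = 1.0
for i in range(1, k + 1):
    prob *= 1 - 1 / 2 ** i
```

Already for k = 16 the product is within 10^-4 of its limit ≈ 0.2887, matching the asymptotic bound stated above.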

Suppose now that the oracle O_x adds noise to the bit it outputs. That is, instead of outputting a · x, it may flip that bit and output a · x ⊕ 1. When this flipping occurs for every returned bit, the attack above can still be applied by adding an extra step: the algorithm simply flips all the bits it obtained before running Gaussian elimination. The case where the decision to flip each bit is random is more difficult to deal with. A popular variant consists in flipping the answer according to a probability that follows a Bernoulli distribution.

From now on, we will consider the following variant, conditioned by a parameter η ∈ ]0, 1/2[: on each query, O_{x,η} picks a uniformly chosen k-bit vector a and outputs a pair of the form (a, a · x ⊕ ϵ), where ϵ is a bit chosen following a Bernoulli distribution of parameter η, denoted Ber(η). In this particular case, the answers of the oracle O_{x,η} follow the probability distribution

{(a, a · x ⊕ ϵ) : a ∈_R {0, 1}^k, Pr[ϵ = 1] = η}.

This problem has proven to be very hard to solve and lies in the NP-hard complexity class, as will be detailed later. Although it has many formulations, the problem, as stated above, is commonly known as the Learning Parity with Noise problem, abbreviated LPN [Hås].
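A minimal simulation of the noisy oracle O_{x,η} can help check that the error rate behaves as described. This is only a sketch with illustrative names; the values of k, η, and the sample count are arbitrary choices of ours:

```python
import random

def lpn_oracle(x, eta, rng):
    # One sample (a, a . x XOR e) where Pr[e = 1] = eta
    a = [rng.randrange(2) for _ in range(len(x))]
    parity = 0
    for ai, xi in zip(a, x):
        parity ^= ai & xi
    e = 1 if rng.random() < eta else 0
    return a, parity ^ e

rng = random.Random(0)
k, eta = 32, 0.125
x = [rng.randrange(2) for _ in range(k)]
samples = [lpn_oracle(x, eta, rng) for _ in range(10000)]

# Fraction of samples whose bit disagrees with a . x: should be close to eta
flips = sum(
    b != sum(ai & xi for ai, xi in zip(a, x)) % 2
    for a, b in samples
)
flip_rate = flips / len(samples)
```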

Definition (The LPN problem)

Let x be a binary vector of length k and η ∈ ]0, 1/2[ a real number. We define O_{x,η} to be an oracle that outputs independent samples according to the distribution

{(a, a · x ⊕ ϵ) : a ∈_R {0, 1}^k, Pr[ϵ = 1] = η}.

We say that an algorithm A solves the LPN problem with parameters (k, η) with probability δ if

Pr[x ← A^{O_{x,η}}(1^k) | x ∈_R {0, 1}^k] ≥ δ.

Here, the probability is taken over the random choice of x and the random tape of A.

The LPN problem can be equivalently reformulated as a purely computational instance in which the goal is to find an assignment for a k-bit vector x in a system of q linear equations such that q′ ≤ q equations are satisfied. From this perspective, the problem is best known as the minimum disagreement problem, abbreviated MDP [CKS].

Definition (The MDP problem)

Let q and k be two positive integers, A a q × k binary matrix, and z a binary vector of length q. If q′ denotes a positive integer smaller than or equal to q, find a k-bit vector x satisfying at least q′ equations of the system A · x = z.
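The MDP definition can be illustrated with a toy instance (the matrix, target vector, and brute-force approach below are purely our own illustration, not part of the original text): enumerate all 2^k candidates and keep the assignment satisfying the most equations.

```python
def satisfied_count(A, z, x):
    # Number of equations of the GF(2) system A x = z that x satisfies
    count = 0
    for row, zi in zip(A, z):
        parity = 0
        for aij, xj in zip(row, x):
            parity ^= aij & xj
        if parity == zi:
            count += 1
    return count

# Toy instance: q = 4 equations, k = 2 unknowns
A = [[1, 0],
     [0, 1],
     [1, 1],
     [1, 1]]
z = [1, 0, 1, 0]

# Brute force over all 2^k candidate vectors
candidates = [[b0, b1] for b0 in (0, 1) for b1 in (0, 1)]
best = max(candidates, key=lambda x: satisfied_count(A, z, x))
```

Here the last two equations are contradictory, so no x satisfies all q = 4; the best candidate satisfies q′ = 3 of them, which is exactly the disagreement-minimizing flavor of the definition.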

3.1.2 The Average Hardness of the LPN Problem

As a special case of the general decoding problem for linear codes, the NP-hardness of the LPN problem follows from the work of Berlekamp, McEliece, and van Tilborg [BMT]. In short, they reduced the general decoding problem for linear codes to the three-dimensional matching problem, an NP-Complete problem in Karp's list [Kar]. A stronger result was found by Håstad [Hås]: he proved that it is NP-hard to find an algorithm that succeeds in finding solutions to the general decoding problem for linear codes better than the trivial algorithm that tests random values.

 

 

Table 3.1: Complexity of solving the LPN problem for different values of η and k. Values taken from Levieil's thesis [Lev].

    η \ k     128     256     512     768
    0.1       2^19    2^38    2^72    2^97
    0.125     2^24    2^43    2^73    2^105
    0.25      2^32    2^51    2^85    2^121
    0.4       2^40    2^62    2^101   2^143

However, NP-Completeness only considers the worst-case hardness of solving decisional problems and does not guarantee that a randomly chosen instance of the problem cannot be solved by a polynomial-time algorithm. Unfortunately, it is the latter property that is required in cryptography. Therefore, we need to consider the average-case complexity of solving the LPN problem. Of course, no proof regarding the average hardness of the LPN problem has been found. (Such a proof would establish the existence of one-way functions!) Still, some arguments act in favor of its hardness. Among them, we mention Regev's result concerning the self-reducibility of the problem with respect to x [Reg]: the complexity of solving the LPN problem is independent of the choice of the secret vector x. (This property is shared with the discrete logarithm problem.) Another result, due to Kearns [Kea], relates the LPN problem to a learning problem in which the solver is restricted to "statistical queries". In that paper, Kearns demonstrated that the class of parity functions cannot be efficiently learned by statistical queries. As learning algorithms that comply with the restriction of "statistical queries" form the majority of learning algorithms, this result rules out a large class of learning algorithms that could be used to attack the LPN problem. We finally mention a surprising result concerning the hardness of solving the LPN problem when the adversary does not know η: Laird [P] showed a technique that allows reverting in polynomial time to the case where the adversary is given η. Therefore, from a complexity-classification point of view, both variants are equivalent.

Algorithms to Solve the LPN Problem To date, the best method for solving the LPN problem is the BKW algorithm, named after its authors Blum, Kalai, and Wasserman [BKW]. From a high-level point of view, this algorithm implements the following idea: by carefully picking a few well-chosen vectors in a quite large set of samples and computing the xor of these vectors, we can find basis vectors, i.e., vectors of Hamming weight equal to 1. The advantage of finding such a vector is that it readily yields a bit of the LPN secret vector whenever the number of errors introduced in the corresponding answers is even. Therefore, the algorithm relies on finding enough independent combinations of vectors equal to the same basis vector; a majority vote then enables recovery of the correct value of the bit at the corresponding position of the secret vector x. Note that this vote can only be efficient when the number of vectors to sum is small, as the bias of the combined error decreases exponentially with the number of samples xored together.
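The majority-vote step can be illustrated numerically. The sketch below is a simplified simulation of that single step, not the full BKW algorithm, and all parameter values are arbitrary choices of ours. Xoring t independent Ber(η) error bits produces a combined error bit equal to 1 with probability (1 − (1 − 2η)^t)/2, which a majority vote over many independent combinations can overcome:

```python
import random

rng = random.Random(2)
eta, t, n_votes = 0.125, 4, 301

# Error probability of the XOR of t independent Ber(eta) error bits:
# it approaches 1/2 exponentially fast as t grows
p_err = (1 - (1 - 2 * eta) ** t) / 2

# Simulate n_votes independent combinations, each voting for one secret bit
secret_bit = 1
votes = [secret_bit ^ (1 if rng.random() < p_err else 0)
         for _ in range(n_votes)]
recovered_bit = 1 if 2 * sum(votes) > n_votes else 0
```

With η = 0.125 and t = 4, each vote is already wrong with probability ≈ 0.34; doubling t would push this to ≈ 0.45 and require far more votes, which is the quantitative reason the number of vectors summed must stay small.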