

in hand a precomputed, stored set of difference multiples [∆]Q = [X_∆ : Z_∆], where ∆ has run over some relatively small finite set {2, 4, 6, . . .}; then a prime s near to but larger than r can be checked as the outlying prime, by noting that a "successful strike"

[s]Q = [r + ∆]Q = O

can be tested by checking whether the cross product

X_r Z_∆ − X_∆ Z_r

has a nontrivial gcd with n. Thus, armed with enough multiples [∆]Q, and a few occasional points [r]Q, we can check outlying prime candidates with 3 multiplies (mod n) per candidate. Indeed, beyond the 2 multiplies for the cross product, we need to accumulate the product (X_r Z_∆ − X_∆ Z_r) in expectation of a final gcd of such a product with n. But one can reduce the work still further, by observing that

X_r Z_∆ − X_∆ Z_r = (X_r − X_∆)(Z_r + Z_∆) + X_∆ Z_∆ − X_r Z_r.

Thus, one can store precomputed values X_∆, Z_∆, X_∆Z_∆, and use isolated values of X_r, Z_r, X_rZ_r for well-separated primes r, to bring the cost of stage two asymptotically down to 2 multiplies (mod n) per outlying prime candidate, one for the right-hand side of the identity above and one for accumulation.
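As an illustration of this bookkeeping, here is a small Python sketch (the function and variable names are ours, purely illustrative) that accumulates cross products by way of the identity just displayed; the table holds the precomputed X_∆, Z_∆, X_∆Z_∆ triples, and the assert merely confirms that the rewritten form agrees with the direct cross product:

    def stage2_accumulate(n, Xr, Zr, XrZr, table):
        # table maps a difference Delta to the stored triple
        # (X_D, Z_D, X_D*Z_D mod n); Xr, Zr, XrZr belong to the pivot [r]Q.
        g = 1
        for XD, ZD, XDZD in table.values():
            # right-hand side of the identity; equals X_r*Z_D - X_D*Z_r (mod n)
            cross = ((Xr - XD) * (Zr + ZD) + XDZD - XrZr) % n
            assert cross == (Xr * ZD - XD * Zr) % n
            g = (g * cross) % n          # accumulate; caller takes gcd(g, n) once
        return g

    # toy usage with made-up residues
    n = 91
    table = {2: (5, 7, 5 * 7 % n), 4: (11, 3, 11 * 3 % n)}
    print(stage2_accumulate(n, Xr=17, Zr=19, XrZr=17 * 19 % n, table=table))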

As exemplified in [Brent et al. 2000], there are even more tricks for such reduction of stage-two ECM work. One of these is also pertinent to enhancement (3) above, and amounts to mixing into various identities the notion of transform-based multiplication (see Section 9.5.3). These methods are most relevant when n is sufficiently large, in other words, when n is in the region where transform-based multiply is superior to "grammar-school" multiply. In the aforementioned identity for cross products, one can actually store transforms (for example DFTs) of X_r, Z_r, in which case the product (X_r − X_∆)(Z_r + Z_∆) now takes only 1/3 of a (transform-based) multiply. This dramatic reduction is possible because the single product indicated is to be done in spectral space, and so is asymptotically free, the inverse transform alone accounting for the 1/3. Similar considerations apply to the accumulation of products; in this way one can get down to about 1 multiply per outlying prime candidate. Along the same lines, the very elliptic arithmetic itself admits of transform enhancement. Under the Montgomery parameterization in question, the relevant functions for curve arithmetic degenerate nicely and are given by equations (7.6) and (7.7); and again, transform-based multiplication can bring the 6 multiplies required for addh() down to 4 transform-based multiplies, with similar reduction possible for doubleh() (see remarks following Algorithm 7.4.4).
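The 1/3 bookkeeping can be seen in miniature with floating-point FFTs; the following Python/NumPy fragment (an illustration only, not a big-integer multiply) caches the forward transforms of two coefficient vectors, so that each subsequent product costs one cheap pointwise multiplication plus a single inverse transform:

    import numpy as np

    N = 16                                   # transform length (illustrative)
    a = np.random.randint(0, 10, N // 2)     # "digits" of one operand
    b = np.random.randint(0, 10, N // 2)

    A = np.fft.rfft(a, N)                    # forward transforms, stored once
    B = np.fft.rfft(b, N)

    c = np.rint(np.fft.irfft(A * B, N)).astype(int)   # one inverse FFT per product
    assert np.array_equal(c[:N - 1], np.convolve(a, b))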


As for enhancement (4) above, Montgomery's polynomial-evaluation scheme (sometimes called an "FFT extension" because of the details of how one evaluates large polynomials via FFT) for stage two is basically to calculate two sets of points

S = {[m_i]P : i = 1, . . . , d_1}, T = {[n_j]P : j = 1, . . . , d_2},

where P is the point surviving stage one of ECM, d_1 | d_2, and the integers m_i, n_j are carefully chosen so that some combination m_i ± n_j is hopefully divisible by the (single) outlying prime q. This happy circumstance is in turn detected by the fact of some x-coordinate of the S list matching with some x-coordinate of the T list, in the sense that the difference of said coordinates has a nontrivial gcd with n. We will see this matching problem in another guise—in preparation for Algorithm 7.5.1. Because Algorithm 7.5.1 may possibly involve too much machine memory, for sorting and so on, one may proceed to define a degree-d_1 polynomial

f(x) = ∏_{s ∈ S} (x − X(s)) mod n,

where the X(·) function returns the affine x-coordinate of a point. Then one may evaluate this polynomial at the d_2 points x ∈ {X(t) : t ∈ T}. Alternatively, one may take the polynomial gcd of this f(x) and a g(x) = ∏_{t ∈ T} (x − X(t)). In any case, one can seek matches between the S, T point sets in O(d_2^{1+ε}) ring operations, which is lucrative in view of the alternative of actually doing d_1 d_2 comparisons. Incidentally, Montgomery's idea is predated by an approach of [Montgomery and Silverman 1990] for extensions to the Pollard (p − 1) method.
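A minimal Python sketch of this matching strategy (naive polynomial arithmetic, illustrative names; a serious implementation along Montgomery's lines would use transform-based polynomial multiplication and a product tree) follows; a nontrivial gcd of the accumulated evaluations with n signals an x-coordinate match modulo a hidden factor:

    from math import gcd

    def poly_mul(f, g, n):                   # coefficient lists, leading term first
        h = [0] * (len(f) + len(g) - 1)
        for i, fi in enumerate(f):
            for j, gj in enumerate(g):
                h[i + j] = (h[i + j] + fi * gj) % n
        return h

    def poly_eval(f, x, n):                  # Horner's rule
        acc = 0
        for c in f:
            acc = (acc * x + c) % n
        return acc

    def match_lists(S_x, T_x, n):
        f = [1]
        for xs in S_x:                       # f(x) = prod over s in S of (x - X(s)) mod n
            f = poly_mul(f, [1, -xs % n], n)
        g = 1
        for xt in T_x:                       # evaluate at the X(t) values and accumulate
            g = (g * poly_eval(f, xt, n)) % n
        return gcd(g, n)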

When we invoke some such means of highly efficient stage-two calculations, a rule of thumb is that one should spend only a certain fraction (say 1/4 to 1/2, depending on many details) of one's total time in stage two. This rule has arisen within the culture of modern users of ECM, and the rule's validity can be traced to the machine-dependent complexities of the various per-stage operations. In practice, this all means that the stage-two limit should be roughly two orders of magnitude over the stage-one limit, or

B2 ≈ 100 B1.

This is a good practical rule, effectively reducing the degrees of freedom associated with ECM in general. Now, the time to resolve one curve—with both stages in place—is a function only of B1. What is more, there are various tabulations of what good B1 values might be, in terms of "suspected" sizes of hidden factors of n [Silverman and Wagstaff 1993], [Zimmermann 2000].

We now exhibit a specific form of enhanced ECM, a form that has achieved certain factoring milestones and that currently enjoys wide use. While not every possible enhancement is presented here, we have endeavored to provide many of the aforementioned manipulations; certainly enough to forge a practical implementation. The following ECM variant incorporates various enhancements of Brent, Crandall, Montgomery, Woltman, and Zimmermann:


Algorithm 7.4.4 (Inversionless ECM). Given a composite number n to be factored, with gcd(n, 6) = 1, this algorithm attempts to uncover a nontrivial factor of n. This algorithm is inversion-free, needing only large-integer multiply-mod (but see text following).

1. [Choose criteria]
   B1 = 10000;                                // Stage-one limit (must be even).
   B2 = 100 B1;                               // Stage-two limit (must be even).
   D = 100;                                   // Total memory is about 3D size-n integers.
2. [Choose random curve Eσ]
   Choose random σ ∈ [6, n − 1];              // Via Theorem 7.4.3.
   u = (σ² − 5) mod n;
   v = 4σ mod n;
   C = ((v − u)³(3u + v)/(4u³v) − 2) mod n;
                                              // Note: C determines curve y² = x³ + Cx² + x,
                                              // yet, C can be kept in the form num/den.
   Q = [u³ mod n : v³ mod n];                 // Initial point is represented [X : Z].
3. [Perform stage one]
   for(1 ≤ i ≤ π(B1)) {                       // Loop over primes p_i.
      Find largest integer a such that p_i^a ≤ B1;
      Q = [p_i^a]Q;                           // Via Algorithm 7.2.7, and perhaps use FFT
                                              // enhancements (see text following).
   }
   g = gcd(Z(Q), n);                          // Point has form Q = [X(Q) : Z(Q)].
   if(1 < g < n) return g;                    // Return a nontrivial factor of n.
4. [Enter stage two]                          // Inversion-free stage two.
   S_1 = doubleh(Q);
   S_2 = doubleh(S_1);
   for(d ∈ [1, D]) {                          // This loop computes S_d = [2d]Q.
      if(d > 2) S_d = addh(S_{d−1}, S_1, S_{d−2});
      β_d = X(S_d) Z(S_d) mod n;              // Store the XZ products also.
   }
   g = 1;
   B = B1 − 1;                                // B is odd.
   T = [B − 2D]Q;                             // Via Algorithm 7.2.7.
   R = [B]Q;                                  // Via Algorithm 7.2.7.
   for(r = B; r < B2; r = r + 2D) {
      α = X(R) Z(R) mod n;
      for(prime q ∈ [r + 2, r + 2D]) {        // Loop over primes.
         δ = (q − r)/2;                       // Distance to next prime.
         // Note the next step admits of transform enhancement.
         g = g((X(R) − X(S_δ))(Z(R) + Z(S_δ)) − α + β_δ) mod n;
      }
      (R, T) = (addh(R, S_D, T), R);
   }
   g = gcd(g, n);
   if(1 < g < n) return g;                    // Return a nontrivial factor of n.
5. [Failure]
   goto [Choose random curve . . .];          // Or increase B1, B2 limits, etc.
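For implementers, here is a small Python rendering (hypothetical helper name choose_curve) of Step [Choose random curve Eσ] above; the curve constant C is returned as a numerator/denominator pair, exactly as the note in the pseudocode suggests, so no inversion modulo n is performed:

    import random

    def choose_curve(n, sigma=None):
        if sigma is None:
            sigma = random.randint(6, n - 1)
        u = (sigma * sigma - 5) % n
        v = 4 * sigma % n
        den = (4 * pow(u, 3, n) * v) % n                    # 4*u^3*v
        num = (pow(v - u, 3, n) * (3 * u + v) - 2 * den) % n
        # C = num/den (mod n) determines y^2 = x^3 + C*x^2 + x
        Q = (pow(u, 3, n), pow(v, 3, n))                    # initial point [X : Z]
        return sigma, num, den, Q

    # example: the seed quoted below for the M677 factorization
    sigma, num, den, Q = choose_curve(2**677 - 1, sigma=8689346476060549)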

The particular stage-two implementation suggested here involves D difference multiples [2d]Q, and a stored XZ product for each such multiple, for a total of 3D stored integers of size n. The stage-two scheme as presented is asymptotically (for large n and large memory parameter D, say) two multiplications modulo n per outlying prime candidate, which can be brought down further if one is willing to perform large-integer inversions—of which the algorithm as presented is entirely devoid—during stage two. Also, it is perhaps wasteful to recompute the outlying primes over and over for each choice of elliptic curve. If space is available, these primes might all be precomputed via a sieve in Step [Choose criteria]. Another enhancement we did not spell out in the algorithm is the notion that, when we check whether a cross product XZ' − X'Z has nontrivial gcd with n, we are actually checking two-point combinations P ± P', since the x-coordinates of plus or minus any point are the same. This means that if two primes are equidistant from a "pivot value" r, say q', r, q form an arithmetic progression, then checking one cross product actually resolves both primes.
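The equidistant-primes observation can be realized by indexing stage-two work by the half-distance δ rather than by the prime q itself; the short sketch below (illustrative names, simple trial-division primality test) collects, for a pivot r and table size D, the set of δ ≤ D for which r + 2δ or r − 2δ is prime, so that one stored multiple [2δ]Q serves both members of such a pair:

    def is_prime(m):
        if m < 2: return False
        if m % 2 == 0: return m == 2
        d = 3
        while d * d <= m:
            if m % d == 0: return False
            d += 2
        return True

    def paired_offsets(r, D):
        # one cross product per delta resolves both r + 2*delta and r - 2*delta
        return [delta for delta in range(1, D + 1)
                if is_prime(r + 2 * delta) or is_prime(r - 2 * delta)]

    print(paired_offsets(1001, 10))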

To provide a practical ECM variant in the form of Algorithm 7.4.4, we had to stop somewhere, deciding what detailed and sophisticated optimizations to drop from the above presentation. Yet more optimizations beyond the algorithm have been effected in [Montgomery 1987, 1992a], [Zimmermann 2000], and [Woltman 2000] to considerable advantage. Various of Zimmermann's enhancements resulted in his discovery in 1998 of a 49-digit factor of M2071 = 2^2071 − 1. Woltman has implemented (specifically for cases n = 2^m ± 1) variants of the discrete weighted transform (DWT) Algorithms 9.5.17, 9.5.19, ideas for elliptic multiplication using Lucas-sequence addition chains as in Algorithm 3.6.7, and also the FFT-intervention technique in [Crandall and Fagin 1994], [Crandall 1999b], with which one carries out the elliptic algebra itself in spectral space. Along lines previously discussed, one can perform either of the relevant doubling or adding operations (respectively, doubleh(), addh() in Algorithm 7.2.7) in the equivalent of 4 multiplies. In other words, by virtue of stored transforms, each of said operations requires only 12 FFTs, of which 3 such are equivalent to one integer multiply as in Algorithm 7.2.7, and thus we infer the 4-multiplies equivalence. A specific achievement along these lines is the discovery by C. Curry and G. Woltman of a 53-digit factor of M677 = 2^677 − 1. Because the data have considerable value for anyone who wishes to test an ECM algorithm, we give the explicit parameters as follows. Curry used the seed

σ = 8689346476060549,

and the stage limits

B1 = 11000000, B2 = 100B1,


to obtain the factorization of 2^677 − 1 as

1943118631 · 531132717139346021081 · 978146583988637765536217 ·

53625112691923843508117942311516428173021903300344567 · P,

where the final factor P is a proven prime. This beautiful example of serious ECM effort—which as of this writing involves one of the largest ECM factors yet found—looms even more beautiful when one looks at the group order

#E(Fp) for the 53-digit p above (and for the given seed σ), which is

2^4 · 3^9 · 3079 · 152077 · 172259 · 1067063 · 3682177 · 3815423 · 8867563 · 15880351.

Indeed, the largest prime factor here in #E is greater than B1, and sure enough, as Curry and Woltman reported, the 53-digit factor of M677 was found in stage two. Note that even though those investigators used detailed enhancements and algorithms, one should be able—using the hindsight embodied in the above parameters—to find this particular factor of M677 with the explicit Algorithm 7.4.4. Another success is the 54-digit factor of n = b^4 − b^2 + 1, where b = 6^43 − 1, found in January 2000 by N. Lygeros and M. Mizony. Such a factorization can be given the same "tour" of group order and so on that we did above for the 53-digit discovery [Zimmermann 2000]. (See Chapter 1 for more recent ECM successes.)
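The quoted data invite a quick machine check; the following few lines of Python verify from the factorization just displayed that the group order is B1-smooth apart from its largest prime factor, which falls between the stage limits B1 and B2 used by Curry, and that the order has the expected 53 digits:

    factors = [2] * 4 + [3] * 9 + [3079, 152077, 172259, 1067063,
               3682177, 3815423, 8867563, 15880351]
    B1 = 11000000
    B2 = 100 * B1
    order = 1
    for q in factors:
        order *= q
    assert len(str(order)) == 53                       # same size as the 53-digit p
    assert B1 < max(factors) < B2                      # outlying prime: stage two's job
    assert all(q <= B1 for q in factors if q != max(factors))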

Other successes have accrued from the polynomial-evaluation method pioneered by Montgomery and touched upon previously. His method was used to discover a 47-digit factor of 5 · 2^256 + 1, and for a time this stood as an ECM record of sorts. Although requiring considerable memory, the polynomial-evaluation approach can radically speed up stage two, as we have explained.

In case the reader wishes to embark on an ECM implementation—a practice that can be quite a satisfying one—we provide here some results consistent with the notation in Algorithm 7.4.4. The 33-decimal-digit Fermat factor listed in Section 1.3.2, namely

188981757975021318420037633 | F15,

was found in 1997 by Crandall and C. van Halewyn, with the following parameters: B1 = 10^7 for the stage-one limit, and the choice B2 = 50B1 for the stage-two limit, with the lucky choice σ = 253301772 determining the successful elliptic curve Eσ. After the 33-digit prime factor p was uncovered, Brent resolved the group order of Eσ(Fp) as

#Eσ(Fp) = (2^5 · 3 · 1889 · 5701 · 9883 · 11777 · 5909317) · 91704181,

where we have intentionally shown the "smooth" part of the order in parentheses, with outlying prime 91704181. It is clear that B1 "could have been" taken to be about 6 million, while B2 could have been about 100 million; but of course—in the words of C. Siegel—"one cannot guess the real difficulties of a problem before having solved it." The paper [Brent et al. 2000] indicates other test values for recent factors of other Fermat numbers. Such data are extremely useful for algorithm debugging. In fact, one can effect a very rapid program check by taking the explicit factorization of a known curve order, starting with a point P, and just multiplying in the handful of primes, expecting a successful factor to indicate that the program is good.

As we have discussed, ECM is especially suitable when the hidden prime factor is not too large, even if n itself is very large. In practice, factors discovered via ECM are fairly rare in the 30-decimal-digit region, yet more rare in the 40-digit region, and so far have a vanishing population at say 60 digits.

7.5 Counting points on elliptic curves

We have seen in Section 7.3 that the number of points on an elliptic curve defined over a prime finite field Fp is an integer in the interval [(√p − 1)², (√p + 1)²]. In this section we shall discuss how one may go about actually finding this integer.

7.5.1 Shanks–Mestre method

For small primes p, less than 1000, say, one can simply carry out the explicit sum (7.8) for #Ea,b(Fp). But this involves, without any special enhancements (such as fast algorithms for computing successive polynomial evaluations), O(p ln p) field operations for the O(p) instances of (p − 1)/2-th powers. One can do asymptotically better by choosing a point P on E, and finding all multiples [n]P for n ∈ (p + 1 − 2√p, p + 1 + 2√p), looking for an occurrence [n]P = O. (Note that this finds only a multiple of the order of P—it is the actual order if it occurs that the order of P has a unique multiple in the interval (p + 1 − 2√p, p + 1 + 2√p), an event that is not unlikely.) But this approach involves O(√p ln p) field operations (with a fairly large implied big-O constant due to the elliptic arithmetic), and for large p, say greater than 10^10, this becomes a cumbersome method. There are faster O(√p ln^k p) algorithms that do not involve explicit elliptic algebra (see Exercise 7.26), but these, too, are currently useless for primes of modern interest in the present context, say p ≈ 10^50 and beyond, this rough threshold being driven in large part by practical cryptography. All is not lost, however, for there are sophisticated modern algorithms, and enhancements to same, that press the limit on point counting to more acceptable heights.

There is an elegant, often useful, O(p^{1/4+ε}) algorithm for assessing curve order. We have already visited the basic idea in Algorithm 5.3.1, the baby-steps, giant-steps method of Shanks (for discrete logarithms). In essence this algorithm exploits a marvelous answer to the following question: If we have two length-N lists of numbers, say A = {A_0, . . . , A_{N−1}} and B = {B_0, . . . , B_{N−1}}, how many operations (comparisons) are required to determine whether A ∩ B


is empty? And if nonempty, what is the precise intersection A ∩ B? A naive method is simply to check A_1 against every B_i, then check A_2 against every B_i, and so on. This inefficient procedure gives, of course, an O(N²) complexity. Much better is the following procedure:

(1) Sort each list A, B, say into nondecreasing order;

(2) Track through the sorted lists, logging any matches.

As is well known, the sorting step (1) requires O(N ln N ) operations (comparisons), while the tracking step (2) can be done in only O(N ) operations. Though the concepts are fairly transparent, we think it valuable to lay out an explicit and general list-intersection algorithm. In the following exposition the input sets A, B are multisets, that is, repetitions are allowed, yet the final output A ∩ B is a set devoid of repetitions. We shall

assume a function sort() that returns a sorted version of a list, having the same elements, but arranged in nondecreasing order; for example, sort({3, 1, 2, 1}) = {1, 1, 2, 3}.

 

Algorithm 7.5.1 (Finding the intersection of two lists). Given two finite lists of numbers A = {a0, . . . , am−1} and B = {b0, . . . , bn−1}, this algorithm returns the intersection set A ∩ B, written in strictly increasing order. Note that duplicates are properly removed; for example, if A = {3, 2, 4, 2}, B = {1, 0, 8, 3, 3, 2}, then A ∩ B is returned as {2, 3}.

1. [Initialize]
   A = sort(A);                               // Sort into nondecreasing order.
   B = sort(B);
   i = j = 0;
   S = { };                                   // Intersection set initialized empty.
2. [Tracking stage]
   while((i < #A) and (j < #B)) {
      if(a_i ≤ b_j) {
         if(a_i == b_j) S = S ∪ {a_i};        // Append the match to S.
         i = i + 1;
         while((i < (#A) − 1) and (a_i == a_{i−1})) i = i + 1;
      } else {
         j = j + 1;
         while((j < (#B) − 1) and (b_j == b_{j−1})) j = j + 1;
      }
   }
   return S;                                  // Return intersection A ∩ B.

Note that we have laid out the algorithm for general cardinalities; it is not required that #A = #B. Because of the aforementioned complexity of sorting, the whole algorithm has complexity O(Q ln Q) operations, where Q = max{#A, #B}. Incidentally, there are other compelling ways to effect a list intersection (see Exercise 7.13).
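For reference, an equivalent (slightly streamlined) Python rendering of Algorithm 7.5.1, using the built-in sort and matching the worked example in the algorithm's statement, is:

    def list_intersection(A, B):
        A, B = sorted(A), sorted(B)
        i = j = 0
        S = []
        while i < len(A) and j < len(B):
            if A[i] == B[j]:
                S.append(A[i])                      # log the match
                i += 1
                j += 1
                while i < len(A) and A[i] == A[i - 1]: i += 1
                while j < len(B) and B[j] == B[j - 1]: j += 1
            elif A[i] < B[j]:
                i += 1
            else:
                j += 1
        return S

    assert list_intersection([3, 2, 4, 2], [1, 0, 8, 3, 3, 2]) == [2, 3]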


Now to Shanks’s application of the list intersection notion to the problem of curve order. Imagine we can find a relation for a point P E, say

[p + 1 + u]P = ±[v]P,

or, what amounts to the same thing because −(x, y) = (x, −y) always, we find a match between the x-coordinates of [p + 1 + u]P and [v]P. Such a match implies that

[p + 1 + u ∓ v]P = O.

This would be a tantalizing match, because the multiplier here on the left must now be a multiple of the order of the point P, and might be the curve order itself. Define an integer W = ⌈√2 p^{1/4}⌉. We can represent integers k with |k| < 2√p as k = β ± γW, where β ranges over [0, W − 1] and γ ranges over [0, W]. (We use the letters β, γ to remind us of Shanks's baby-steps and giant-steps, respectively.) Thus, we can form a list of x-coordinates of the points

{[p + 1 + β]P : β ∈ [0, . . . , W − 1]},

calling that list A (with #A = W), and form a separate list of x-coordinates of the points

{[γW]P : γ ∈ [0, . . . , W]},

calling this list B (with #B = W + 1). When we find a match, we can test directly to see which multiple [p + 1 + β ± γW]P (or both) is the point at infinity. We see that the generation of baby-step and giant-step points requires O(p^{1/4}) elliptic operations, and the intersection algorithm has O(p^{1/4} ln p) steps, for a total complexity of O(p^{1/4+ε}).

Unfortunately, finding a vanishing point multiple is not the complete task; it can happen that more than one vanishing multiple is found (and this is why we have phrased Algorithm 7.5.1 to return all elements of an intersection). However, whenever the point chosen has order greater than 4√p, the algorithm will find the unique multiple of the order in the target interval, and this will be the actual curve order. It occasionally may occur that the group has low exponent (that is, all points have low order), and the Shanks method will never find the true group order using just one point. There are two ways around this impasse. One is to iterate the Shanks method with subsequent choices of points, building up larger subgroups that are not necessarily cyclic. If the subgroup order has a unique multiple in the Hasse interval, this multiple is the curve order. The second idea is much simpler to implement and is based on the following result of J. Mestre; see [Cohen 2000], [Schoof 1995]:

Theorem 7.5.2 (Mestre). For an elliptic curve E(Fp) and its twist E'(Fp) by a quadratic nonresidue mod p, we have

#E + #E' = 2p + 2.

When p > 457, there exists a point of order greater than 4√p on at least one of the two elliptic curves E, E'. Furthermore, if p > 229, at least one of the two curves possesses a point P with the property that the only integer m ∈ (p + 1 − 2√p, p + 1 + 2√p) having [m]P = O is the actual curve order.
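Before proceeding, the reader may enjoy a quick numeric confirmation of the relation #E + #E' = 2p + 2 for a small prime; the Python sketch below (helper names ours) counts points naively via equation (7.8) and forms the twist as y² = x³ + g²ax + g³b for a quadratic nonresidue g, the same (c, d) assignment that appears in Algorithm 7.5.3 below:

    def chi(c, p):                            # Legendre symbol (c|p) as -1, 0, 1
        if c % p == 0:
            return 0
        return 1 if pow(c, (p - 1) // 2, p) == 1 else -1

    def order(a, b, p):                       # equation (7.8), naive O(p ln p) count
        return p + 1 + sum(chi(x**3 + a * x + b, p) for x in range(p))

    p, a, b = 1019, 2, 3
    g = next(c for c in range(2, p) if chi(c, p) == -1)
    assert order(a, b, p) + order(g * g * a % p, g**3 * b % p, p) == 2 * p + 2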

Note that the relation #E + #E' = 2p + 2 is an easy result (see Exercise 7.16) and that the real content of the theorem lies in the statement concerning a singleton m in the stated Hasse range of orders. It is a further easy argument to get that there is a positive constant c (which is independent of p and the elliptic curve) such that the number of points P satisfying the theorem exceeds cp/ln ln p—see Exercise 7.17—so that points satisfying the theorem are fairly common. The idea now is to use the Shanks method on E, and if this fails (because the point order has more than one multiple in the Hasse interval), to use it on E', and if this fails, to use it on E, and so on. According to the theorem, if we try this long enough, it should eventually work. This leads to an efficient point-counting algorithm for curves E(Fp) when p is up to, roughly speaking, 10^30. In the algorithm following, we denote by x(P) the x-coordinate of a point P. In the convenient scenario where all x-coordinates are given by X/Z ratios, the fact of denominator Z = 0 signifies as usual the point at infinity:

Algorithm 7.5.3 (Shanks–Mestre assessment of curve order).

Given an elliptic curve E = Ea,b(Fp), this algorithm returns the order #E. For list S = {s_1, s_2, . . .} and entry s ∈ S, we assume an index function ind(S, s) to return some index i such that s_i = s. Also, list-returning function shanks() is defined at the end of the algorithm; this function modifies two global lists A, B of coordinates.

1. [Check magnitude of p]
   if(p ≤ 229) return p + 1 + Σ_{x=0}^{p−1} ((x³ + ax + b)/p);   // Equation (7.8), a sum of Legendre symbols.
2. [Initialize Shanks search]
   Find a quadratic nonresidue g (mod p);
   W = ⌈√2 p^{1/4}⌉;                          // Giant-step parameter.
   (c, d) = (g²a, g³b);                       // Twist parameters.
3. [Mestre loop]                              // We shall find a P of Theorem 7.5.2.
   Choose random x ∈ [0, p − 1];
   σ = ((x³ + ax + b)/p);                     // Legendre symbol.
   if(σ == 0) goto [Mestre loop];
   // Henceforth we have a definite curve signature σ = ±1.
   if(σ == 1) E = Ea,b;                       // Set original curve.
   else {
      E = Ec,d;
      x = gx;                                 // Set twist curve and valid x.
   }
   Define an initial point P ∈ E to have x(P) = x;
   S = shanks(P, E);                          // Search for Shanks intersection.
   if(#S ≠ 1) goto [Mestre loop];             // Exactly one match is sought.
   Set s as the (unique) element of S;
   β = ind(A, s); γ = ind(B, s);              // Find indices of unique match.
   Choose sign in t = β ± γW such that [p + 1 + t]P == O on E;
   return p + 1 + σt;                         // Desired order of original curve Ea,b.
4. [Function shanks()]
   shanks(P, E) {                             // P is assumed on given curve E.
      A = {x([p + 1 + β]P) : β ∈ [0, W − 1]}; // Baby steps.
      B = {x([γW]P) : γ ∈ [0, W]};            // Giant steps.
      return A ∩ B;                           // Via Algorithm 7.5.1.
   }

 

 

Note that assignment of point P based on random x can be done either as P = (x, y, 1), where y is a square root of the cubic form, or as P = [x : 1] in case Montgomery parameterization—and thus avoidance of y-coordinates—is desired. (In this latter parameterization, the algorithm should be modified slightly, to use notation consistent with Theorem 7.2.6.) Likewise, in the shanks() function, one may use Algorithm 7.2.7 (or more efficient, detailed application of the addh(), doubleh() functions) to get the desired point multiples in [X : Z] form, then construct the A, B lists from numbers XZ^{−1}. One can even imagine rendering the entire procedure inversionless, by working out an analogue of baby-steps, giant-steps for lists of (x, z) pairs, seeking matches not of the form x = x', rather of the form xz' = zx'.
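A stripped-down Python model of the shanks() search (affine coordinates, small p, a dictionary in place of the sort-and-track intersection; the Mestre twist loop and uniqueness bookkeeping of the full algorithm are omitted, and all names here are ours) may help in testing an implementation:

    from math import isqrt

    def ec_add(P, Q, a, p):
        # affine addition on y^2 = x^3 + a*x + b over F_p; None is the point at infinity
        if P is None: return Q
        if Q is None: return P
        (x1, y1), (x2, y2) = P, Q
        if x1 == x2 and (y1 + y2) % p == 0:
            return None
        if P == Q:
            lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
        else:
            lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
        x3 = (lam * lam - x1 - x2) % p
        return (x3, (lam * (x1 - x3) - y1) % p)

    def ec_mul(k, P, a, p):                      # [k]P by the binary method, k >= 0
        R = None
        while k:
            if k & 1: R = ec_add(R, P, a, p)
            P = ec_add(P, P, a, p)
            k >>= 1
        return R

    def shanks_candidates(P, a, p):
        # all m = p + 1 + (beta +/- gamma*W) with [m]P = O; if P has order exceeding
        # 4*sqrt(p), the curve order is the unique such m in the Hasse interval
        W = isqrt(2 * isqrt(p)) + 2              # a safe ceiling for sqrt(2)*p^(1/4)
        baby = {}                                # x([p+1+beta]P) -> list of beta
        Pb = ec_mul(p + 1, P, a, p)
        for beta in range(W):
            key = 'O' if Pb is None else Pb[0]
            baby.setdefault(key, []).append(beta)
            Pb = ec_add(Pb, P, a, p)
        G = ec_mul(W, P, a, p)
        Pg, found = None, set()
        for gamma in range(W + 1):               # giant steps [gamma*W]P
            key = 'O' if Pg is None else Pg[0]
            for beta in baby.get(key, []):
                for m in (p + 1 + beta - gamma * W, p + 1 + beta + gamma * W):
                    if m > 0 and ec_mul(m, P, a, p) is None:
                        found.add(m)
            Pg = ec_add(Pg, G, a, p)
        return found

    p, a, b = 1019, 3, 7
    P = next((x, y) for x in range(p) for y in range(p)
             if (y * y - (x**3 + a * x + b)) % p == 0)
    print(shanks_candidates(P, a, p))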

The condition p > 229 for applicability of the Shanks–Mestre approach is not artificial: There is a scenario for p = 229 in which the existence of a singleton set s of matches is not guaranteed (see Exercise 7.18).

7.5.2 Schoof method

Having seen point-counting schemes of complexities ranging from O(p^{1+ε}) to O(p^{1/2+ε}) and O(p^{1/4+ε}), we next turn to an elegant point-counting algorithm due to Schoof, which algorithm has polynomial-time complexity O(ln^k p) for fixed k. The basic notion of Schoof is to resolve the order #E (mod l) for sufficiently many small primes l, so as to reconstruct the desired order using the CRT. Let us first look at the comparatively trivial case of #E (mod 2). Now, the order of a group is even if and only if there is an element of order 2. Since a point P ≠ O has [2]P = O if and only if the calculated slope (from Definition 7.1.2) involves a vanishing y-coordinate, we know that points of order 2 are those of the form P = (x, 0). Therefore, the curve order is even if and only if the governing cubic x³ + ax + b has roots in Fp. This, in turn, can be checked via a polynomial gcd as in Algorithm 2.3.10.
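A compact Python sketch of this parity test (hand-rolled polynomial arithmetic with coefficient lists written low-degree first; all helper names are ours) computes x^p modulo the cubic by repeated squaring and then takes the polynomial gcd, cross-checking against a brute-force root search:

    def pstrip(f):
        while f and f[-1] == 0:
            f = f[:-1]
        return f

    def pmulmod(f, g, m, p):                  # f*g mod m over F_p, m monic
        h = [0] * (len(f) + len(g) - 1)
        for i, fi in enumerate(f):
            for j, gj in enumerate(g):
                h[i + j] = (h[i + j] + fi * gj) % p
        for i in range(len(h) - 1, len(m) - 2, -1):
            c = h[i]
            for j, mj in enumerate(m):
                h[i - len(m) + 1 + j] = (h[i - len(m) + 1 + j] - c * mj) % p
        return pstrip(h[:len(m) - 1])

    def pgcd(f, g, p):                        # monic gcd in F_p[x]
        f, g = pstrip([c % p for c in f]), pstrip([c % p for c in g])
        while g:
            while len(f) >= len(g):
                c = f[-1] * pow(g[-1], -1, p) % p
                shift = len(f) - len(g)
                for j, gj in enumerate(g):
                    f[shift + j] = (f[shift + j] - c * gj) % p
                f = pstrip(f)
                if not f:
                    break
            f, g = g, f
        inv = pow(f[-1], -1, p)
        return [c * inv % p for c in f]

    def curve_order_is_even(a, b, p):
        m = [b % p, a % p, 0, 1]              # x^3 + a*x + b
        r, base, e = [1], [0, 1], p           # compute x^p mod m
        while e:
            if e & 1:
                r = pmulmod(r, base, m, p)
            base = pmulmod(base, base, m, p)
            e >>= 1
        r = r + [0] * (3 - len(r))
        r[1] = (r[1] - 1) % p                 # now r = x^p - x mod m
        return len(pgcd(m, pstrip(r), p)) > 1 # nontrivial gcd <=> root <=> even order

    p, a, b = 1009, 5, 7
    assert curve_order_is_even(a, b, p) == any((x**3 + a*x + b) % p == 0 for x in range(p))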

To consider #E (mod l) for small primes l > 2, we introduce a few more tools for elliptic curves over finite fields. Suppose we have an elliptic curve E(Fp), but now we consider points on the curve where the coordinates are in the algebraic closure F̄p of Fp. Raising to the p-th power is a field automorphism of F̄p that fixes elements of Fp, so this automorphism, applied

Соседние файлы в предмете [НЕСОРТИРОВАННОЕ]