6.10. Cryptography


that φ(φ(N)) = φ(120) = 32. Thus, s = k^(φ(φ(N))−1) mod 120, which is 103. This is the secret key.

We will now describe how to use the public keys N and k to code a message, and how to use the secret key s to decode a message.

Coding

We shall assume the plaintext message to be given by a string of numbers t. Using the public keys k and N, each number t of the string is changed into a coded number c by the formula:

c = t^k mod N .

For example, if the message ‘Send me 1000 dollars’ is given by the string of numbers 59 15 24 14 99 23 15 99 01 00 00 00 99 14 25 22 22 11 28 29 discussed before, then the coded string of numbers using k = 7 and N = 143 is

t  59  15  24  14  99  23  15  99  01  00  00  00  99  14  25  22  22  11  28  29
c  71 115 106  53  44  23 115  44  01  00  00  00  44  53  64  22  22 132  63  94

Decoding

Using the secret key s, the coded number c can be decoded into the original number t using the formula:

t = c^s mod N .

It is straightforward using this formula to check from the table above that this does recover the plaintext t from the coded text c. For example, if c = 44, using s = 103 and N = 143, we get for c = 44 a value of t equal to 44^103 mod 143 = 99.
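The whole cycle with the chapter's numbers can be checked using Python's three-argument pow (modular exponentiation); this is an illustrative sketch, not part of the original text:

```python
# Toy RSA with the chapter's numbers: N = 143 = 11 * 13, phi(N) = 120, k = 7.
# The secret key is s = k^(phi(phi(N)) - 1) mod phi(N) = 7^31 mod 120 = 103,
# i.e., s is the modular inverse of k mod phi(N): k*s = 1 mod phi(N).

N, k = 143, 7
phi_N = (11 - 1) * (13 - 1)        # 120
phi_phi_N = 32                     # phi(120), as stated in the text
s = pow(k, phi_phi_N - 1, phi_N)   # 103

plaintext = [59, 15, 24, 14, 99, 23, 15, 99, 1, 0, 0, 0,
             99, 14, 25, 22, 22, 11, 28, 29]   # 'Send me 1000 dollars'

# Coding: c = t^k mod N
cipher = [pow(t, k, N) for t in plaintext]

# Decoding: t = c^s mod N
decoded = [pow(c, s, N) for c in cipher]

print(s)                     # 103
print(pow(44, s, N))         # 99
print(decoded == plaintext)  # True
```

Note that pow(t, k, N) reduces mod N at every step, so coding and decoding stay cheap even when N is realistically large.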

We shall now show why this decoding works. The reader who is not interested may skip this.

The decoding formula works because of Euler's theorem in number theory, which states that x^φ(n) = 1 mod n whenever x and n have no common factor. We may apply this theorem to n = φ(N) and x = k, because k is chosen to have no common factor with φ(N) = (p − 1)(q − 1). Then we get k^φ(φ(N)) = 1 mod φ(N). Since s is calculated using the formula s = k^(φ(φ(N))−1) mod φ(N), we conclude that ks = k^φ(φ(N)) mod φ(N) = 1. Since the coded number c is equal to t^k mod N, we have c^s mod N = t^(ks) mod N. But we know that ks = 1 mod φ(N), or equivalently ks = 1 + aφ(N) for some integer a, so it follows that c^s mod N = t · [t^(aφ(N)) mod N]. Using Euler's theorem again on x = t^a and n = N, the number in the square brackets is 1, and hence c^s mod N = t, as claimed.

Quantum Breaking

RSA cryptography relies on the fact that it is hard to factorize the public key N into its prime factors p and q, because once we know p and q we can compute the secret key and break the code. As mentioned previously, once a quantum computer is turned on and Shor's algorithm for factorization is used, we will gain an exponential advantage in factorization, so it is no longer difficult to factorize N, unless N is so huge that it becomes impractical to use the public and private keys generated from it to code and decode.

6.10.3 Summary

We considered two encryption systems. The difficulty with the private key system is the secure transmission of the private key. If coherence can be maintained over a long distance, this problem can be solved by using the Bennett–Brassard scheme.

The security of the RSA public key system relies on the difficulty of factorizing a large number into its two prime components. With a quantum computer and Shor's algorithm, this task is rendered much easier. As a result the security of the RSA public key system will become woefully inadequate.

6.11 Quantum Teleportation

In this last section, we consider briefly a scheme to transmit an image of a 1-qubit quantum state |ψ⟩ = α|0⟩ + β|1⟩ from A to B, without physically carrying the state across. This is known as quantum teleportation. We assume that although A is in possession of the state |ψ⟩, she does not know what the coefficients α and β are, for otherwise she could simply send over those two coefficients.

The scheme invented by Bennett, Brassard, Crepeau, Jozsa, Peres, and Wootters (BBCJPW) makes use of the correlation of an auxiliary entangled state to send the image across. From now on we use subscripts to distinguish the different particles in the scheme. In addition to particle 1, which is in the state |ψ⟩ ≡ |ψ⟩_1, we need the service of two more particles, 2 and 3. These two are prepared in an entangled

state, e.g., |I⟩_23 = (|0⟩_2|1⟩_3 − |1⟩_2|0⟩_3)/√2. Particle 2 is given to A and particle 3 is given to B, but with the coherence of the state |I⟩_23 maintained even when A and B are separated by a long distance. In practice this is the most difficult part to achieve, but by using low-loss optical fibers, this can be done over a distance of tens of kilometers.


Now A is in possession of particles 1 and 2, whose wave functions are linear combinations of the four basis states |0⟩_1|0⟩_2, |0⟩_1|1⟩_2, |1⟩_1|0⟩_2, |1⟩_1|1⟩_2. For the present scheme we should express the wave function as a linear combination of four orthonormal entangled states,

|I⟩_12 = (|0⟩_1|1⟩_2 − |1⟩_1|0⟩_2)/√2 ,

|II⟩_12 = (|0⟩_1|1⟩_2 + |1⟩_1|0⟩_2)/√2 ,

|III⟩_12 = (|0⟩_1|0⟩_2 − |1⟩_1|1⟩_2)/√2 ,

|IV⟩_12 = (|0⟩_1|0⟩_2 + |1⟩_1|1⟩_2)/√2 .
Now the three-particle system is in the quantum state |ψ⟩_1 |I⟩_23, and it is just a matter of simple algebra to rewrite this as

|ψ⟩_1 |I⟩_23 = (1/2)[ |I⟩_12 (−α|0⟩_3 − β|1⟩_3) + |II⟩_12 (−α|0⟩_3 + β|1⟩_3)

+ |III⟩_12 (+β|0⟩_3 + α|1⟩_3) + |IV⟩_12 (−β|0⟩_3 + α|1⟩_3) ] .

Now suppose A can carry out a measurement to tell in which of the entangled states |I⟩_12, |II⟩_12, |III⟩_12, |IV⟩_12 particles 1 and 2 are. Once that is measured she picks up a telephone or sends an email to tell B about it. Once he knows the outcome, he can figure out what the state |ψ⟩_1 is, thus succeeding in having the desired information transmitted from A to B.

For example, if her two-particle state is |II⟩_12, then B immediately knows that his state is −α|0⟩_3 + β|1⟩_3, so all he has to do is pass his state through a simple 1-bit gate that reverses the sign of |0⟩_3 to obtain an image of the state A possesses, namely +α|0⟩_3 + β|1⟩_3. If A reports getting another state, say |IV⟩_12, then B holds −β|0⟩_3 + α|1⟩_3, and he must pass his state through a 1-bit gate that interchanges |0⟩_3 and |1⟩_3, and then through one that reverses the sign of |1⟩_3 instead.
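The bookkeeping above can be checked with a short state-vector sketch in Python. This is my own construction (the dictionary layout and function names are assumptions, not from the book), using the singlet (|0⟩_2|1⟩_3 − |1⟩_2|0⟩_3)/√2 as the shared resource:

```python
from math import sqrt

r2 = 1 / sqrt(2)
alpha, beta = 0.6, 0.8   # the amplitudes A holds (|alpha|^2 + |beta|^2 = 1)

# Three-qubit amplitudes psi[(b1, b2, b3)] for |psi>_1 (|0>_2|1>_3 - |1>_2|0>_3)/sqrt(2).
psi = {}
for b1, amp in ((0, alpha), (1, beta)):
    psi[(b1, 0, 1)] = amp * r2
    psi[(b1, 1, 0)] = -amp * r2

# The four entangled (Bell) states of particles 1 and 2.
bell = {
    "I":   {(0, 1): r2, (1, 0): -r2},
    "II":  {(0, 1): r2, (1, 0):  r2},
    "III": {(0, 0): r2, (1, 1): -r2},
    "IV":  {(0, 0): r2, (1, 1):  r2},
}

def project(outcome):
    """Normalized amplitudes (a0, a1) of particle 3 after A finds `outcome`."""
    v = [0.0, 0.0]
    for (b1, b2, b3), amp in psi.items():
        # all Bell amplitudes here are real, so conjugation is omitted
        v[b3] += bell[outcome].get((b1, b2), 0.0) * amp
    norm = sqrt(v[0] ** 2 + v[1] ** 2)   # each outcome has probability 1/4
    return [v[0] / norm, v[1] / norm]

def correct(outcome, v):
    """B's 1-bit correction for each reported measurement outcome."""
    a0, a1 = v
    if outcome == "I":
        return [-a0, -a1]    # overall sign only
    if outcome == "II":
        return [-a0, a1]     # reverse the sign of |0>
    if outcome == "III":
        return [a1, a0]      # interchange |0> and |1>
    return [a1, -a0]         # IV: interchange, then reverse the sign of |1>

for outcome in bell:
    print(outcome, correct(outcome, project(outcome)))
# every outcome ends, up to floating-point rounding, in the same state
# [0.6, 0.8], i.e., alpha|0> + beta|1>: the image of A's state
```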

6.11.1 Summary

The BBCJPW scheme for transmitting the quantum state of particle 1 requires the help of an extra pair of particles in an entangled state. Particle 2 of the entangled state is sent to A and particle 3 of the entangled state is sent to B. A then goes ahead to determine which of the four possible entangled states particles 1 and 2 are in. Once this is communicated to B, he can use this knowledge to produce an image of the original quantum state of particle 1.


6.12 Further Reading

Books

R. P. Feynman, Feynman Lectures on Computation, edited by A. J. G. Hey and R. W. Allen (Addison-Wesley, 1996).

D. Deutsch, The Fabric of Reality (Viking Penguin Publishers, London, 1997).

H.-K. Lo, S. Popescu and T. Spiller, Introduction to Quantum Computation and Information (World Scientific, 1998).

A. J. G. Hey, Feynman and Computation: Exploring the Limits of Computers (Perseus Books, 1999).

D. Bouwmeester, A. Ekert and A. Zeilinger, The Physics of Quantum Information (Springer, 2000).

M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information (Cambridge University Press, 2000).

S. Braunstein and H.-K. Lo (eds.), Scalable Quantum Computers (Wiley-VCH, 2000).

Review articles

C. Bennett, Quantum Information and Computation, Physics Today, October, 1995, p. 24.

A. Ekert and R. Jozsa, Quantum Computation and Shor's Factoring Algorithm, Reviews of Modern Physics 68 (1996) 733.

J. Preskill, Battling Decoherence: The Fault-tolerant Quantum Computer, Physics Today, June, 1999, p. 24.

D. Gottesman and H.-K. Lo, From Quantum Cheating to Quantum Security, Physics Today, November, 2000, p. 22.

A. Ekert, P. Hayden, H. Inamori and D. K. L. Oi, What is Quantum Computation?, International Journal of Modern Physics A16 (2001) 3335.

Web-sites

http://lanl.arxiv.org/archive/quant-ph

http://qso.lanl.gov/qc/

http://www.ccmr.cornell.edu/~mermin/qcomp/CS483.html

http://www.iqi.caltech.edu/index.html

http://www.qubit.org

http://www.rintonpress.com/journals/qic/index.html


6.13 Problems

6.1 Two theorems and one game:

1. Convert the decimal numbers 128 and 76 into binary numbers.

2. Suppose we have k binary numbers, s1, s2, . . . , sk, the largest of which has n bits. Add up these k numbers, bit by bit, without carry, and express the sum of each bit as a decimal number. In this way we get n decimal numbers d1, d2, . . . , dn.

(a) Prove that if all di (i = 1, 2, . . . , n) are even, then it is impossible to decrease one and only one number sj so that every bit-sum of the new set is still even.

(b) Prove that if at least one di (i = 1, 2, . . . , n) is odd, then it is possible to decrease one sj so that every bit-sum of the new set is even.

There is an interesting game, NIM, whose winning strategy is based on these two theorems. It is a two-person game, played with k piles of coins, with si coins in the ith pile. The two players take turns to remove any number of coins from any one pile of his/her choice. The person who succeeds in removing the last coin wins.

Suppose we express si in binary numbers. The winning strategy is to remove coins so that every bit-sum after the removal is even. According to (a), his/her opponent must then leave at least one bit-sum odd after his/her removal. Since the winning configuration, with zero coins in every pile, has every bit-sum even, his/her opponent can never win.

For example, suppose there are three piles, with 3, 4, and 5 coins. Then s1 = 11, s2 = 100, s3 = 101, so that d1 = 2, d2 = 1, d3 = 2. The person who starts the game is guaranteed to win by reducing the first pile of 3 (s1 = 11) coins to 1 coin (01). This changes the bit-sums to d1 = 2, d2 = 0, d3 = 2, so according to (a), his/her opponent can never win if he/she keeps following the same strategy to the very end.
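In modern terms the bit-sums mod 2 are just the bitwise XOR (the "nim-sum") of the pile sizes, so the winning move can be computed in a few lines of Python (the function name is my own):

```python
from functools import reduce
from operator import xor

def nim_move(piles):
    """Return (pile_index, new_size) leaving every bit-sum even,
    or None if every bit-sum is already even (a lost position
    against best play)."""
    x = reduce(xor, piles)      # XOR of all piles = bit-sums mod 2
    if x == 0:
        return None
    for i, s in enumerate(piles):
        # theorem (b) guarantees some pile with s ^ x < s;
        # reducing it to s ^ x makes the overall XOR zero
        if s ^ x < s:
            return i, s ^ x

print(nim_move([3, 4, 5]))   # (0, 1): reduce the 3-coin pile to 1 coin
```

This reproduces the move in the worked example: with piles 3, 4, 5 the first player reduces the first pile to 1.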

6.2 Show how to obtain the nand gate from the and and xor gates.

6.3 Suppose the input register of a cn gate contains the number |cd⟩, where c is the control bit and d is the data bit. Show that the output of the gate is |c, d ⊕ c⟩.

6.4 Compute 1000 mod 21.

6.5 These are exercises in relating the different gates:

1. Construct the not gate from the cn gate.

2. Construct the and gate from the ccn gate.

3. Construct the xor gate from the cn gate.

4. Construct the or gate from the and, xor, and fanout gates.


6.6 Let θ be a real number, and let rX be a gate that changes |0⟩ to cos θ|0⟩ − i sin θ|1⟩ and |1⟩ to cos θ|1⟩ − i sin θ|0⟩. Let rZ be a gate that changes |0⟩ to e^(−iθ)|0⟩ and |1⟩ to e^(iθ)|1⟩. Show that rZ = h rX h, where h is the hadamard gate.

6.7 Let u be a 1-bit gate which leaves |0⟩ unaltered, but changes |1⟩ to −|1⟩. Verify that the controlled-not gate cn can be obtained from the hadamard gate h and the cu gate as follows: cn = h2 (cu) h2, where h2 represents the h gate operating on the second (data) bit.

6.8 This is an exercise on Deutsch's algorithm. Suppose x varies from 0 to 15, and suppose f(x) = 0 when 0 ≤ x ≤ 7 and f(x) = 1 when 8 ≤ x ≤ 15. Find out what the state |a⟩ is in Eq. (6.4).

6.9 The greatest common divisor of two numbers R0 and R1 can be obtained in the following way. Let R0 be the larger of the two numbers. First, divide R0 by R1 to get the remainder R2. Then, divide R1 by R2 to get the remainder R3, and so on down the line. At the nth step, Rn−1 is divided by Rn to get the remainder Rn+1. Continue this way until the remainder Rm+1 = 0. Then Rm is the greatest common divisor of R0 and R1.

Use this method to find the greatest common divisor between

1. R0 = 124 and R1 = 21.

2. R0 = 126 and R1 = 21.

3. R0 = 21 and R1 = 7.

4. R0 = 21 and R1 = 9.
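The remainder recipe above is Euclid's algorithm, and it translates directly into a few lines of Python; a sketch for checking the four cases:

```python
def gcd(r0, r1):
    """Greatest common divisor by repeated remainders (Euclid's algorithm):
    at each step replace (R0, R1) by (R1, R0 mod R1) until the remainder
    vanishes; the last nonzero remainder is the answer."""
    while r1 != 0:
        r0, r1 = r1, r0 % r1
    return r0

print(gcd(124, 21), gcd(126, 21), gcd(21, 7), gcd(21, 9))   # 1 21 7 3
```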

6.10 This is an exercise in factorizing N by finding an even number r such that a^r = 1 mod N, where a is any number which has no common factor with N (see Sec. 6.7). Unless b = a^(r/2) − 1 or c = a^(r/2) + 1 divides N, the prime factors p and q of N can be obtained by finding the greatest common divisor between b and N, and between c and N.

Suppose N = 21.

1. Find out what r, b, c are when a = 5, then find the greatest common divisor between b and N, and between c and N.

2. Find out what r, b, c are when a = 2, then find the greatest common divisor between b and N, and between c and N.
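For N as small as 21 the period r can be found classically by brute force (a quantum computer is only needed for large N), so the whole procedure of this exercise can be sketched in Python (the function name is my own):

```python
from math import gcd

def find_r(a, N):
    """Smallest positive r with a^r = 1 mod N, by brute-force search."""
    r, power = 1, a % N
    while power != 1:
        power = power * a % N
        r += 1
    return r

N = 21
for a in (5, 2):
    r = find_r(a, N)
    b, c = a ** (r // 2) - 1, a ** (r // 2) + 1
    print(a, r, b, c, gcd(b, N), gcd(c, N))
# a = 5: r = 6, b = 124, c = 126, gcd's 1 and 21 (only trivial divisors,
#        since 5^3 = -1 mod 21)
# a = 2: r = 6, b = 7, c = 9, gcd's 7 and 3 (the prime factors of 21)
```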

7 Chaos: Chance Out of Necessity

7.1 Introduction: Chaos Limits Prediction

Physics is ultimately the study of change — of becoming. Changes are determined by the laws of physics that be. For example, we have Newton's three laws of motion. The laws themselves are, of course, believed to be changeless. The necessary connection of events implied by the deterministic nature of the physical laws leaves us no chance of freedom except for that of the choice of initial conditions, that is, the initial positions and velocities of all the elementary subunits that make up our system. Once this set of initial data is entered, the future course of events, or the process, is uniquely determined in detail and, therefore, predictable in principle for all times, indeed just as much as the known past is. Thus, for instance, the trajectories of all the 10^19 molecules that belong in each cubic centimetre of the air in your room, suffering some 10^27 collisions each passing second, are in principle no less predictable than those of an oscillating pendulum or an orbiting planet — only much more complex.

You may recall that in order to specify the initial conditions for a single particle, taken to be a point-like object, we need to enter a set of three numbers, its Cartesian coordinates x, y, z, say, to fix its position. We say that the particle has three dynamical degrees of freedom. Another set of three numbers is required in order to fix the corresponding components of its velocity (momentum). For N particles these add up to 6N independent data entries. The state of the system can then be conveniently represented as a point in a 6N-dimensional abstract space called phase space. The motion of the whole system then corresponds to the trajectory of this single representative phase point in this phase space. (The nature of phase space, or more generally speaking state space, of course, depends on the context. Thus, in the case of a chemical reaction we may be concerned with the concentrations x, y, z, say, of three reacting chemical species whose rate of change (kinetics) depends only on these concentrations. In that case the state space will be a three-dimensional one.) Deterministic dynamics or kinetics implies that there is a unique trajectory through any given phase or state point, and it is calculable in principle. In our example of


the air in the room, N = 10^19! Behold the tyranny of large numbers! The complexity here is due to our having to store large amounts of input information and to solve as many equations of motion — the computational complexity of information processing. We will do well to remember here that a molecular dynamicist of today with free access to the fastest supercomputer available, capable of performing a billion floating point operations per second, can barely simulate the digitized motion of some 10^4 particles, and then only approximately. But these practical limitations are beside the point. In principle the motion is calculable exactly and hence predictable, as claimed.

It is true that we speak of chance and probability in physics, in statistical physics to wit, where we have the Maxwell–Boltzmann distribution of velocities of molecules of a gas, the Gaussian distribution of errors in a measurement, or the random walk of a Brownian particle (a speck of pollen or a colloidal particle floating in water, for example) and various averages of sorts. But these merely reflect an incompleteness of our knowledge of the details. The apparent randomness of the Brownian motion is due to its myriads of collisions, about 10^21 per second, with the water molecules that remain hidden from our sphere of reckoning. In point of fact, even if we could calculate everything, we wouldn't know what to do with this fine-grained information. After all, our sensors respond only to some coarse-grained averages such as pressure or density that require a highly reduced information set. It is from our incompleteness of detailed information as also, and not a little, from our lack of interest in such fine details, that there emerges the convenient intermediate concept of chance, and of probability. But, strictly speaking, everything can in principle be accounted for. There is truly no game of chance: the roll of the dice, the toss of the coin or the fall of the roulette ball can all be predicted exactly but for the complexity of computation and our ignorance of the initial conditions. This absolute determinism was expressed most forcefully by the 19th century French mathematician Pierre Simon de Laplace. Even the whole everchanging universe can be reduced to a mere unfolding of some initial conditions, unknown as they may be to us, under the constant aspect of the deterministic laws of physics — sub specie aeternitatis.

But, in a truly operational sense, this turns out not to be the case. Laplacian determinism, with its perverse reductionism, is now known to be seriously in error for two very different reasons. The first, which we mention only for the sake of completeness, has to do with the fact that the correct framework theory for the physical happenings is not classical (Newtonian) mechanics but quantum mechanics (see Appendix B). There is an Uncertainty Principle here that limits the accuracy with which we may determine the position and the velocity (momentum) of a particle simultaneously. Try to determine one with greater precision and the other gets fuzzier. This reciprocal latitude of fixation of the position and the velocity (momentum to be precise) allows only probabilistic forecast for the future, even in principle. Quantum uncertainty, however, dominates only in the domain of the very small, i.e., on


the microscopic scale of atoms and molecules. On the larger scales of the ‘world of middle dimensions’ of common experience and interest, deterministic classical mechanics is valid for all practical purposes. We will from now on ignore quantum uncertainty. We should be cautioned though that the possibility of a fantastic amplification of these quantum uncertainties to a macroscopic scale cannot be ruled out.

Macroscopic uncertainty, or rather the unpredictability, that we are going to talk about now emerges in an entirely different and rather subtle manner out of the very deterministic nature of the classical laws. When this happens, we say that we have chaos, or rather deterministic chaos to distinguish it from the thermal disorder, or the molecular chaos of stochastic Brownian motion.

7.1.1 The Butterfly Effect

But how can a system be deterministic and yet have chaos in it? Isn't there a contradiction in terms here? Well, the answer is no. The clue to a proper understanding of deterministic chaos lies in the idea of Sensitive Dependence on Initial Conditions. Let us understand this first. As we have repeatedly said before, the deterministic laws simply demand that a given set of initial conditions lead to a unique and, in principle, calculable state of the system at any future instant of time. It is implicitly understood here, however, that the initial conditions are to be given to infinite precision, i.e., to an infinite number of decimal places if you like. But this is an ideal that is frankly unattainable. Errors are ubiquitous. What if the initial conditions are known only approximately? Well, it is again implicitly assumed that the approximately known initial conditions should enable us to make approximate predictions for all times — approximate in the same proportion. That is to say that while the errors do propagate as the system evolves, they do not grow inordinately with the passage of time. Thus, as we progressively refine our initial data to higher degrees of accuracy, we should get more and more refined final-state predictions too. We then say that our deterministic system has a predictable behavior. Indeed, operationally this is the only sense in which prediction acquires a well-defined meaning. In terms of our state-space picture, this means that if we started off our system from two neighboring state points, the trajectories shall stay close by for all future times. Such a system is said to be well behaved, or regular. Now, the point is that deterministic laws do not guarantee this regularity. What then if the initial errors actually grow with time — that too exponentially?
In our phase space picture then, any two trajectories that started off at some neighboring points initially will begin to diverge so much that the line joining them will get stretched exponentially (as e^(λt), say) with the passage of time. Here, λ measures the rapidity of divergence (or convergence) according to whether it is positive (or negative). It is called the Lyapunov exponent. The initial instant of time can, of course, be taken to be any time along the trajectory. The condition λ > 0 is precisely what we
mean by the sensitive dependence on initial conditions. It makes the flow in phase space complex, almost random, since the approximately known initial conditions do not give the distant future states with comparable approximation. The system will lack error tolerance, making long-time prediction impossible. This is often referred to picturesquely as the Butterfly Effect: the flap of a butterfly's wings in Brazil may set off a tornado in Texas. Some sensitivity! When this happens, we say that the dynamical system has developed chaos even though the governing law remains strictly deterministic. We might aptly say that chaos obeys the letter of the law, but not the spirit of it.
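The effect is easy to see numerically. As an illustration (not from the text), take the logistic map x → 4x(1 − x), a standard chaotic system with Lyapunov exponent λ = ln 2: two trajectories started 10^(−10) apart separate to order unity within a few dozen steps.

```python
# Sensitive dependence on initial conditions in the logistic map
# x -> 4x(1 - x): two trajectories whose starting points differ by 1e-10
# diverge roughly like e^(lambda*t) with lambda = ln 2, so the separation
# reaches order unity after about ln(1e10)/ln(2) ~ 33 steps.
def trajectory(x, steps):
    out = [x]
    for _ in range(steps):
        x = 4 * x * (1 - x)
        out.append(x)
    return out

a = trajectory(0.3, 50)
b = trajectory(0.3 + 1e-10, 50)
for t in (0, 10, 20, 30, 40):
    print(t, abs(a[t] - b[t]))   # the separation grows by ~2^10 per 10 steps
```

Refining the initial data by one more decimal place buys only about three extra steps of predictability, which is the practical meaning of λ > 0.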

There is yet another way of expressing the sensitive dependence on initial conditions, without comparing neighboring trajectories. After all, a given dynamical evolution is a one-shot affair and it should be possible to express this characteristic sensitivity in terms of that single one-shot trajectory per se. It is just this: one cannot write down the solution of the dynamical equations in a closed, smooth (analytic) form valid for all times, since this would mean that the state variable at any time must be a smooth function of the initial conditions, which negates the sensitive dependence. This means that the evolution equation (i.e., the algorithm for change) must be solved step by step all the way to the final time and the calculated values of the state variable catalogued. There is no short cut. There is a kind of computational complexity (or irreducibility) in this, despite the simplicity of the algorithm that generated the change.

Now, what causes this sensitive dependence on initial conditions? Does chaos require a fine tuning of several parameters, or does it persist over a whole range of the parameter values? Is chaos robust? How common is chaos? Is every chaotic system chaotic in its own way, or is there universality — within a class maybe? How do we characterise chaos? Can a simple system with a small number of degrees of freedom have chaos, or do we need a complex system with a large, almost infinite number of degrees of freedom? Is the claim of distinction between a chaotic and a statistically random system mere nitpicking, or one of physical consequence? These are some of the questions that we will address in the following sections in a somewhat intuitive fashion. Collectively, these problems are studied under the forbidding heading of 'Dynamical Systems.' Deep results have been obtained in this field over the last three decades or so. Some questions still remain unanswered or only partially answered. But a physical picture of chaos has emerged, which is already fairly complete. It is likely to improve greatly with our acquaintance with some examples of chaos.

7.1.2 Chaos is Common

One may have the impression that the sensitive dependence on initial conditions necessary for chaos must require a fine tuning of control parameters, which can happen only accidentally in nature. This would make chaos an oddity that can