
Table 2 Atomic conditions on car properties

    Label   Condition
    YC1     year of construction = 2016
    YC2     year of construction = 2017
    FT1     fuel tank ≈ 35
    FT2     fuel tank is very large
    K1      kilometre ≈ 15.000
    K2      kilometre is very small
    NC      number of cylinders = 4
    CA1     cylinder arrangement = Row
    CA2     cylinder arrangement = Boxer

When we look at condition FT2 we make the following observation: testing FT2 against the state of a car object cannot adequately be answered with yes or no. Instead, we expect to receive a grade of compliance from the interval [0, 1]. A high value signals strong compliance and vice versa. Later on, we will show how the statistics of quantum measurements provides us with a means to compute the required gradual values.

First, we discuss how to model elementary data types by using the mathematics behind quantum mechanics. Here we focus on finite-dimensional, real inner product spaces. Later on, we will explain how to construct complex data types and how to map them into the quantum world.

3 Modelling Elementary Data Types

An elementary data type defines a data structure and operations to deal with its values. A data type is elementary if its values cannot be meaningfully decomposed into smaller semantic values. In our example the property year of construction is elementary. Its domain covers all possible year values of car construction. A useful operation could be the computation of the difference between two year values. We define the function dom which assigns to a data type a set of valid values. That set is often called the domain of the data type.

We distinguish between two kinds of elementary data types:

– orthogonal data type: The values of such a data type are independent of each other. There is no meaningful similarity between them; two values are either identical or not. In our example, the property cylinder arrangement is orthogonal.

– non-orthogonal data type: Besides the test on identity, gradual similarity values between two values can be required. In our example the property fuel tank is non-orthogonal: a required volume of 35 L is more similar to a given value of 40 L than to one of 45 L.

The distinction between orthogonal and non-orthogonal often depends on the intended application semantics. In some applications it may be important to demand an exact value of 35 L for a fuel tank, and every deviation is seen as wrong. In that case, fuel tank would be modelled as an orthogonal data type. For simplicity, in the following we assume that every property is categorized either as orthogonal or non-orthogonal.

In the next subsections we show how to map an elementary data type dt with a finite domain

$$\mathrm{Dom}(dt) := \{V_1, \ldots, V_k\}$$

to a family of ket vectors of an inner product space. The mapping of a value to a ket vector is denoted by the symbol $\mapsto$. The function QDom assigns to a data type the set of ket vectors which appear as possible outcomes of this mapping.

3.1 Orthogonal Data Types

The values of an orthogonal data type dt are bijectively mapped to ket vectors forming an orthonormal basis of an inner product space:

$$\mathrm{QDom}(dt) = \{|V_1\rangle, \ldots, |V_k\rangle\}$$
$$\mathrm{Dom}(dt) \to \mathrm{QDom}(dt)$$
$$\forall i \in [1, k] : V_i \mapsto |V_i\rangle.$$

The corresponding ket vectors are taken to be mutually orthogonal; they span a k-dimensional inner product space.
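As a minimal illustration of this mapping, consider the orthogonal property cylinder arrangement from Table 2 in Python/NumPy (our own sketch; the value "V" is hypothetical and only added for illustration):

    import numpy as np

    # Domain of the orthogonal property "cylinder arrangement";
    # "V" is a hypothetical extra value for illustration.
    values = ["Row", "Boxer", "V"]

    # Bijective mapping onto the canonical orthonormal basis of R^3.
    kets = {v: np.eye(len(values))[i] for i, v in enumerate(values)}

    print(kets["Row"])                  # [1. 0. 0.]
    print(kets["Row"] @ kets["Boxer"])  # 0.0: distinct values are orthogonal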

Let us take a basis ket vector $|V_x\rangle$ for a value of an orthogonal property. If we want to test the value $V_i$ for identity with $V_x$, we proceed in a way reflecting quantum measurement. We construct the projector $P = |V_i\rangle\langle V_i|$ and obtain:

$$\langle V_x|P|V_x\rangle = \langle V_x|V_i\rangle \langle V_i|V_x\rangle = |\langle V_i|V_x\rangle|^2 = \begin{cases} 1 & \text{if } i = x \\ 0 & \text{otherwise.} \end{cases}$$
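This identity test is directly computable; a minimal NumPy sketch (kets are real here, so bras are plain transposes; the function name is ours):

    import numpy as np

    basis = np.eye(3)  # rows are the orthonormal kets |V_1>, |V_2>, |V_3>

    def identity_test(i, x):
        """<V_x| P |V_x> for the projector P = |V_i><V_i| (0-based indices)."""
        P = np.outer(basis[i], basis[i])
        return basis[x] @ P @ basis[x]

    print(identity_test(1, 1))  # 1.0: i == x
    print(identity_test(1, 2))  # 0.0: i != x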

For testing whether a value $x$ is contained in a value set $S$ we use the projector $P = \sum_{s \in S} |V_s\rangle\langle V_s|$:

$$\langle V_x|P|V_x\rangle = \langle V_x| \left( \sum_{s \in S} |V_s\rangle\langle V_s| \right) |V_x\rangle = \sum_{s \in S} |\langle V_s|V_x\rangle|^2 = \begin{cases} 1 & \text{if } x \in S \\ 0 & \text{otherwise.} \end{cases}$$
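The set test is the same computation with a summed projector; again a sketch with hypothetical index sets:

    import numpy as np

    basis = np.eye(4)  # four orthonormal basis kets

    def membership_test(S, x):
        """<V_x| P |V_x> for P = sum over s in S of |V_s><V_s| (0-based indices)."""
        P = sum(np.outer(basis[s], basis[s]) for s in S)
        return basis[x] @ P @ basis[x]

    print(membership_test({0, 2}, 2))  # 1.0: x in S
    print(membership_test({0, 2}, 3))  # 0.0: x not in S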


3.2 Non-orthogonal Data Types

Fig. 2 Value mapping into a real one-qubit system:

$$|V_1\rangle = 0.9 \cdot |0\rangle + 0.435 \cdot |1\rangle$$
$$|V_2\rangle = 0.7 \cdot |0\rangle + 0.714 \cdot |1\rangle$$
$$|V_3\rangle = 0.3 \cdot |0\rangle + 0.954 \cdot |1\rangle$$

Between values of a non-orthogonal data type dt a gradual similarity is required. Therefore we choose non-orthogonal ket vectors for modelling. As target of the mapping we take a real inner product space of dimension $n \le k$. As an extreme case we can map all values to the two-dimensional inner product space of a real one-qubit system; see, for example, the mapping of three values in Fig. 2.
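A quick verification of the kets from Fig. 2 (our own check code) shows that the squared inner products already behave as graded similarity values:

    import numpy as np

    V = np.array([[0.9, 0.435],   # |V_1>
                  [0.7, 0.714],   # |V_2>
                  [0.3, 0.954]])  # |V_3>

    G = (V @ V.T) ** 2  # squared pairwise inner products
    print(np.round(G, 3))
    # The overlap of |V_1> with |V_2> is larger than with the more distant |V_3>.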

An intuitive question arises: where do we get the right ket vectors from? The starting point is a $k \times k$ similarity matrix $S = \{s_{ij}\}$ expressing the required gradual similarity values between all value pairs. For the construction of the ket vectors, the similarity matrix must meet the following properties:

– Unit interval: All values of the matrix are elements of [0, 1].

– Diagonal values: All diagonal values refer to the similarity of values to themselves and are therefore 1.

– Symmetry: The matrix is symmetric since similarity is usually required to be symmetric.

– Square-rooted positive semi-definiteness: For reasons explained in the sequel, we require the matrix of square roots $S^{\frac{1}{2}} := \{\sqrt{s_{ij}}\}$ to be positive semi-definite, that is, all its eigenvalues must be non-negative.

Table 3 (left) shows an example of a similarity matrix.
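All four properties can be checked mechanically; a minimal sketch, where the function name is ours:

    import numpy as np

    def is_valid_similarity_matrix(S, tol=1e-9):
        """Check the four conditions required for ket construction."""
        in_unit   = np.all((S >= 0) & (S <= 1))        # unit interval
        diag_one  = np.allclose(np.diag(S), 1)         # diagonal values
        symmetric = np.allclose(S, S.T)                # symmetry
        # square-rooted positive semi-definiteness
        psd_sqrt  = np.all(np.linalg.eigvalsh(np.sqrt(S)) >= -tol)
        return in_unit and diag_one and symmetric and psd_sqrt

    S = np.array([[1, 0.5, 0], [0.5, 1, 0.5], [0, 0.5, 1]])  # Table 3 (left)
    print(is_valid_similarity_matrix(S))  # True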

Based on a similarity matrix S we can construct the ket vectors. First, we replace all matrix elements by their square roots, yielding $S^{\frac{1}{2}}$. The motivation for this is that the projection probability given by quantum measurement corresponds to a squared inner product. Second, we perform a spectral decomposition of $S^{\frac{1}{2}}$ and obtain the matrix $V$ containing orthonormal eigenvectors as rows and a diagonal matrix $L$ with the corresponding non-negative eigenvalues:

$$S^{\frac{1}{2}} = V^{\top} \cdot L \cdot V.$$


Table 3 Similarity values (left) and their element-wise square roots (right)

    S     V1    V2    V3            S^{1/2}   V1     V2     V3
    V1    1     0.5   0             V1        1      1/√2   0
    V2    0.5   1     0.5           V2        1/√2   1      1/√2
    V3    0     0.5   1             V3        0      1/√2   1

Since $L$ is a diagonal matrix with non-negative values we can write it as a product of its square roots, $L = L^{\frac{1}{2}} \cdot L^{\frac{1}{2}}$, and obtain:

$$S^{\frac{1}{2}} = V^{\top} \cdot L^{\frac{1}{2}} \cdot L^{\frac{1}{2}} \cdot V = \left( L^{\frac{1}{2}} \cdot V \right)^{\top} \cdot \left( L^{\frac{1}{2}} \cdot V \right) = K^{\top} \cdot K,$$

with $K = \{k_{ij}\} = L^{\frac{1}{2}} \cdot V$. The columns of matrix $K$ correspond to the required ket vectors. However, they are vectors of $k$ dimensions. The number of dimensions is usually higher than necessary. Let us inspect the diagonal matrix $L$ containing the eigenvalues. Very often, some of the eigenvalues are zero. The corresponding dimensions can therefore be removed, and we end up with ket vectors of an inner product space of a dimension $n$ less than $k$.¹ The mapping is given by:

$$\mathrm{QDom}(dt) = \{|V_1\rangle, \ldots, |V_k\rangle\}$$
$$\mathrm{Dom}(dt) \to \mathrm{QDom}(dt)$$
$$\forall j \in [1, k] : V_j \mapsto |V_j\rangle = \sum_{i=1}^{n} k_{ij}\,|i\rangle \in \mathrm{span}\{|1\rangle, \ldots, |n\rangle\} = \mathbb{R}^n,$$

where $|i\rangle$ denotes the $i$-th canonical unit vector of $\mathbb{R}^n$.
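The complete construction fits into a few lines of NumPy; a sketch under the assumption that eigenvalues below a small tolerance are treated as exact zeros (the function name is ours):

    import numpy as np

    def kets_from_similarity(S, tol=1e-9):
        """Columns of the returned matrix are the ket vectors derived from S."""
        S_sqrt = np.sqrt(S)                        # element-wise square roots: S^(1/2)
        eigvals, eigvecs = np.linalg.eigh(S_sqrt)  # spectral decomposition
        keep = eigvals > tol                       # drop dimensions with zero eigenvalue
        # K = L^(1/2) . V with the eigenvectors as rows of V
        return np.sqrt(eigvals[keep])[:, None] * eigvecs.T[keep]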

We will demonstrate the derivation of ket vectors from a similarity matrix using the example given in Table 3. The similarity matrix is given on the left and its element-wise square root on the right. The Cholesky decomposition yields the square matrix given in Table 4. The matrix can be reduced by its last row since the corresponding eigenvalue is zero. Thus, we obtain three two-dimensional ket vectors from the resulting columns. They are illustrated in Fig. 3.
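Continuing the sketch from above for the similarity matrix of Table 3 reproduces this result up to signs and row order, which an eigendecomposition does not fix (we take the spectral route because NumPy's np.linalg.cholesky requires a strictly positive definite input, which the singular matrix $S^{\frac{1}{2}}$ of this example is not):

    S = np.array([[1.0, 0.5, 0.0],
                  [0.5, 1.0, 0.5],
                  [0.0, 0.5, 1.0]])       # Table 3 (left)

    K = kets_from_similarity(S)           # 2 x 3 matrix: one two-dimensional ket per column
    print(np.round(K, 3))
    print(np.round((K.T @ K) ** 2, 3))    # |<V_i|V_j>|^2 reproduces S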

¹ A more efficient method to derive the ket vectors is to apply the Cholesky decomposition to $S^{\frac{1}{2}}$ [5].