Correlation functions are useful whenever we want to focus on n < N particles and "average out" the remaining particles. Indeed, suppose F = F(x_1, dots, x_n) is a symmetric function of x_1, dots, x_n. Then

frac{1}{n!} int F(x_1, dots, x_n) R_n(x_1, dots, x_n) dx_1 dots dx_n

= frac{N!}{(N-n)!n!} int F(x_1, dots, x_n) P_N(x_1, dots, x_n, x_{n+1}, dots, x_N) dx_1 dots dx_n dx_{n+1} dots dx_N

= int sum_{1 le i_1 < dots < i_n le N} F(x_{i_1}, dots, x_{i_n}) P_N(x_1, dots, x_N) dx_1 dots dx_N

Thus

(93.1)

mathbb{Exp} hat{F} = frac{1}{n!} int F(x_1, dots, x_n) R_n(x_1, dots, x_n) dx_1 dots dx_n

where

(93.2)

hat{F}(x_1, dots, x_N) = sum_{1 le i_1 < dots < i_n le N} F(x_{i_1}, dots, x_{i_n})

is the symmetric extension of F(x_1, dots, x_n) to N variables.

Suppose x_1^0 < x_2^0 < dots < x_n^0 and let delta > 0 be small. Let chi_j^0 be the characteristic function of the (disjoint) interval (x_j^0 - frac{delta}{2}, x_j^0 + frac{delta}{2}). Let

F(x_1, dots, x_n) = sum_{sigma in S_n} prod_{j=1}^n chi_j^0(x_{sigma_j}),

clearly F is symmetric.

Then we have, from (93.1) and (93.2),

delta^n R_n(x_1^0, dots, x_n^0) sim frac{1}{n!} int F(x_1, dots, x_n) R_n(x_1, dots, x_n) dx_1 dots dx_n

= int hat{F}(x_1, dots, x_N) P_N(x_1, dots, x_N) dx_1 dots dx_N

= int_{x_1 < dots < x_N} hat{F}(x_1, dots, x_N) hat{P}_N(x_1, dots, x_N) dx_1 dots dx_N

Remark 94+ → (where hat{P}_N = N! P_N)

= int_{x_1 < dots < x_N} sum_{1 le i_1 < dots < i_n le N} F(x_{i_1}, dots, x_{i_n}) hat{P}_N(x_1, dots, x_N) dx_1 dots dx_N

Now, as x_1 < dots < x_N and the intervals (x_j^0 - delta/2, x_j^0 + delta/2) are disjoint and ordered, F(x_{i_1}, dots, x_{i_n}) = prod_{j=1}^n chi_j^0(x_{i_j}). Hence

delta^n R_n(x_1^0, dots, x_n^0) simeq int_{x_1 < dots < x_N} [sum_{1 le i_1 < dots < i_n le N} prod_{j=1}^n chi_j^0(x_{i_j})] hat{P}_N(x_1, dots, x_N) dx_1 dots dx_N

= int_{x_1 < dots < x_N} [chi_1^0(x_1) dots chi_n^0(x_n) + dots + chi_1^0(x_{N-n+1}) dots chi_n^0(x_N)] hat{P}_N(x_1, dots, x_N) dx_1 dots dx_N

simeq Prob (exactly one eigenvalue in each of the intervals (x_j^0 - delta/2, x_j^0 + delta/2), 1 le j le n) (why?)


Remark 94+

Note: In the case of RMT, even though P_N(lambda_1, dots, lambda_N) is symmetric in the lambda_j's, the quantity P_N(lambda_1, dots, lambda_N) dlambda_1 dots dlambda_N has physical meaning only when lambda_1 < dots < lambda_N. Indeed, remember that the map M mapsto (Lambda(M), O(M)) always specifies the eigenvalues in some order, in particular lambda_1(M) < dots < lambda_N(M).

When we compute the expectation mathbb{Exp} f for some quantity f(lambda_1, dots, lambda_N) which is symmetric in lambda_1, dots, lambda_N, we have

(94+.1)

mathbb{Exp} f = idotsint_{lambda_1 < dots < lambda_N} f(lambda_1, dots, lambda_N) P_N(lambda_1, dots, lambda_N) dlambda_1 dots dlambda_N.

However as a computational convenience, we observe that

(94+.2)

mathbb{Exp} f = frac{1}{N!}idotsint_{mathbb{R}^N} f(lambda_1, dots, lambda_N) P_N(lambda_1, dots, lambda_N) dlambda_1 dots dlambda_N.

Although (94+.2) is easier to manipulate, when we want to understand the meaning of the statistic mathbb{Exp} f, we must refer to (94+.1).


Thus

R_n(x_1^0, dots, x_n^0) is the density of the probability that there is one eigenvalue at each of the points x_1^0, dots, x_n^0, where x_1^0 < dots < x_n^0.

Note the following:

If F = F(x_1) = chi_Omega(x_1), the characteristic function of Omega subset mathbb{R}

hat{F}(x_1, dots, x_N) = sum_{i=1}^N F(x_i) = sum_{i=1}^N chi_Omega(x_i) = sharp{ i: x_i in Omega }

Thus by (93.1)

(95.1)

mathbb{Exp} (sharp { i: x_i in Omega }) = int_Omega R_1(x) dx

Bearing (94+) in mind, we also have for random matrix ensembles,

(95.2)

mathbb{Exp} (sharp { i: lambda_i in Omega }) = int_Omega R_1(x) dx.
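Identity (95.1) can be sanity-checked numerically in the simplest setting of i.i.d. points (an illustrative stand-in for the eigenvalues: for N i.i.d. points with common density p one has R_1(x) = N p(x)). A minimal Monte Carlo sketch, not part of the notes; the interval Omega and all parameters are arbitrary choices:

```python
import numpy as np

# Sketch (illustrative, i.i.d. case): for N i.i.d. points x_1,...,x_N with
# common density p, the one-point correlation function is R_1(x) = N p(x).
# (95.1) then predicts Exp #{i : x_i in Omega} = int_Omega R_1 = N |Omega|
# for the uniform density on [0,1].  We check this by Monte Carlo.
rng = np.random.default_rng(0)
N, trials = 10, 200_000
a, b = 0.2, 0.5                      # Omega = (0.2, 0.5)
x = rng.random((trials, N))          # each row: one configuration of N points
counts = ((x > a) & (x < b)).sum(axis=1)
mc_mean = counts.mean()
predicted = N * (b - a)              # int_Omega R_1(x) dx = N |Omega|
```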

Also if Omega_1, Omega_2 are two disjoint sets in mathbb{R} and

F(x_1, x_2) = chi_{Omega_1}(x_1) chi_{Omega_2}(x_2) + chi_{Omega_1}(x_2) chi_{Omega_2}(x_1),

then

hat{F}(x_1, dots, x_N) = sum_{1 le i_1 < i_2 le N} [chi_{Omega_1}(x_{i_1}) chi_{Omega_2}(x_{i_2}) + chi_{Omega_1}(x_{i_2}) chi_{Omega_2}(x_{i_1})]

= sharp { (i_1, i_2): i_1 < i_2, (x_{i_1}, x_{i_2}) in (Omega_1 times Omega_2) cup (Omega_2 times Omega_1) }

Thus

(96.1)

mathbb{Exp}(# {pairs ( i_1, i_2 ), i_1 < i_2: either x_{i_1} in Omega_1 and x_{i_2} in Omega_2 or x_{i_2} in Omega_1 and x_{i_1} in Omega_2})

= frac{1}{2!} int [chi_{Omega_1}(x_1)chi_{Omega_2}(x_2) + chi_{Omega_1}(x_2)chi_{Omega_2}(x_1)] R_2(x_1, x_2) dx_1 dx_2

= int_{Omega_1 times Omega_2} R_2(x_1, x_2) dx_1 dx_2
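Identity (96.1) can likewise be checked in the i.i.d. case, where R_2(x_1, x_2) = N(N-1) p(x_1) p(x_2). A hedged numerical sketch (the intervals and parameters below are arbitrary illustrative choices):

```python
import numpy as np

# Sketch (illustrative, i.i.d. case): for N i.i.d. uniform points on [0,1],
# R_2(x1, x2) = N(N-1), so (96.1) predicts that the expected number of pairs
# i_1 < i_2 with one point in Omega_1 and the other in Omega_2 is
# N(N-1) |Omega_1| |Omega_2|.
rng = np.random.default_rng(1)
N, trials = 8, 200_000
O1, O2 = (0.0, 0.25), (0.5, 0.75)       # disjoint intervals
x = rng.random((trials, N))
n1 = ((x > O1[0]) & (x < O1[1])).sum(axis=1)
n2 = ((x > O2[0]) & (x < O2[1])).sum(axis=1)
mc_pairs = (n1 * n2).mean()             # each qualifying pair counted once
predicted = N * (N - 1) * 0.25 * 0.25   # N(N-1) |Omega_1| |Omega_2|
```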

Remark 96+ →

Exercise: Show how (96.1) changes if Omega_1 = Omega_2.

We now show how to compute R_n(x_1, dots, x_n) using

(89.1)

<f> = det (mathbf{1}_{L^2(mathbb{R})} + Kchi_g)

where f(M) = det(I + g(M)) and K is given in (88.2).


Remark 96+

Again bearing (94+) in mind, we have for random matrix ensembles

int_{x_1 < dots < x_N} hat{F}(x_1, dots, x_N) hat{P}_N(x) d^N x

= int_{mathbb{R}^N} hat{F}(x_1, dots, x_N) P_N(x) d^N x, P_N = frac{1}{N!} hat{P}_N

underset{(96.1)}{=} int_{Omega_1 times Omega_2} R_2(x_1, x_2) dx_1 dx_2

Now suppose for definiteness that Omega_1 lies to the left of Omega_2


then for x_1 < dots < x_N

hat{F}(x_1, dots, x_N) = sum_{1 le i_1 < i_2 le N} chi_{Omega_1}(x_{i_1}) chi_{Omega_2}(x_{i_2})

= # { ordered pairs of eigenvalues, (x_{i_1}, x_{i_2}), x_{i_1} < x_{i_2}, such that (x_{i_1}, x_{i_2}) in Omega_1 times Omega_2 }.

Hence

(96+.1)

mathbb{Exp} { # { ordered pairs of eigenvalues, (x_{i_1}, x_{i_2}), x_{i_1} < x_{i_2}, such that (x_{i_1}, x_{i_2}) in Omega_1 times Omega_2 }}

= int_{Omega_1 times Omega_2} R_2(x_1, x_2) dx_1 dx_2.


More explicitly, for any g in L^infty (mathbb{R}),

(97.0)

int prod_{i=1}^N (1 + g(x_i)) P_N(x_1, dots, x_N) d^N x = det (mathbf{1} + K chi_g)_{L^2 (mathbb{R})}

where

P_N(x) d^Nx = frac{(prod_{i=1}^N omega(x_i)) |V(x)|^2 d^Nx}{int (prod_{i=1}^N omega(y_i)) |V(y)|^2 d^Ny}

Choose g such that

1 + g = gamma_0 chi_0 + dots + gamma_k chi_k for some k, where chi_i are the characteristic functions of disjoint Borel sets Omega_i, 0 le i le k, in mathbb{R}, such that mathbb{R} = cup_{i=0}^k Omega_i.

Here gamma_i in mathbb{R}, i=0, dots, k.

Clearly,

(97.1)

g = (gamma_0 - 1) chi_0 + dots + (gamma_k - 1) chi_k

= sum_{i=0}^k eta_i chi_i, eta_i = gamma_i - 1, 0 le i le k.

For any 1 le j le N, let

(97.2)

sigma_j(xi_1, dots, xi_N) = sum_{1 le i_1 < i_2 < dots < i_j le N} xi_{i_1} dots xi_{i_j}

denote the j^{th} elementary symmetric function and set sigma_0 = 1.

We have

(98.1)

prod_{i=1}^N (1 + xi_i) = sum_{j=0}^N sigma_j(xi_1, dots, xi_N)

thus

int prod_{i=1}^N (1 + g(x_i)) P_N(x) d^N(x)

= sum_{j=0}^N sum_{1 le i_1 < dots < i_j le N} int g(x_{i_1}) dots g(x_{i_j}) P_N(x) d^Nx

= sum_{j=0}^N binom{N}{j} int g(x_1) dots g(x_j) P_N(x) d^Nx

(by symmetry)

= sum_{j=0}^N binom{N}{j} frac{(N-j)!}{N!} int g(x_1) dots g(x_j) R_j(x_1, dots, x_j) dx_1 dots dx_j
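As a quick sanity check of (98.1), the sigma_j can be computed directly as sums over j-element subsets; this short sketch (not part of the notes, with an arbitrary test vector xi) verifies the identity numerically:

```python
import numpy as np
from itertools import combinations

# Check of (98.1): prod_i (1 + xi_i) = sum_{j=0}^N sigma_j(xi), where
# sigma_j is the j-th elementary symmetric function and sigma_0 = 1.
xi = np.array([0.3, -1.2, 0.7, 2.0, -0.5])   # arbitrary test values
N = len(xi)

def sigma(j, xi):
    # j-th elementary symmetric function: sum over j-element subsets
    return sum(np.prod(c) for c in combinations(xi, j)) if j else 1.0

lhs = np.prod(1.0 + xi)
rhs = sum(sigma(j, xi) for j in range(N + 1))
```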

Remarks 98+, 98++ →

Substituting (98++.2) for prod_{i=1}^j g(x_i) we find (exercise: see ref (3) p. 87)

(98.2)

int prod_{i=1}^N (1 + g(x_i)) P_N(x) d^Nx

= underset{0 le |n| le N}{sum_{n_0, n_1, dots, n_k ge 0}} frac{eta_0^{n_0} eta_1^{n_1} dots eta_k^{n_k}}{|n|!} int_{mathbb{R}^{|n|}} R_{|n|}(x_1, dots, x_{|n|}) times

chi_{{ n_0, n_1, dots, n_k of { x_1, dots, x_{|n|} } lie in Omega_0, Omega_1, dots, Omega_k respectively }} d^{|n|}x


Remark 98+

Now

(98+.1)

prod_{i=1}^j g(x_i) = prod_{i=1}^j (eta_0 chi_0(x_i) + dots + eta_k chi_k(x_i))

= sum_{i_1, dots, i_j = 0}^k eta_{i_1} dots eta_{i_j} chi_{i_1}(x_1) dots chi_{i_j}(x_j)

= underset{sum_{i=0}^k n_i = j}{sum_{n_0, n_1, dots, n_k ge 0}} underset{sharp{ q: i_q = 0 } = n_0, dots, sharp{ q: i_q = k } = n_k}{sum_{0 le i_1, dots, i_j le k}} eta_{i_1} dots eta_{i_j} chi_{i_1}(x_1) dots chi_{i_j}(x_j)

= underset{sum_{i=0}^k n_i = j}{sum_{n_0, n_1, dots, n_k ge 0}} eta_0^{n_0} eta_1^{n_1} dots eta_k^{n_k} E(n_0, dots, n_k; x)

where

(98+.2)

E(n_0, dots, n_k; x) = underset{sharp{ q: i_q = 0 } = n_0, dots, sharp{ q: i_q = k } = n_k}{sum_{0 le i_1, dots, i_j le k}} chi_{i_1}(x_1) dots chi_{i_j}(x_j)

Consider, for example, the case where k=5 and j=6 and n_0 = 1, n_1 = 0, n_2 = 2, n_3 = 1, n_4 = 0, n_5 = 2, so that sum_{q=0}^k n_q = j = 6, with { x_1, dots, x_6 } arranged so that x_1, x_4 in Omega_2, x_2 in Omega_3, x_3, x_6 in Omega_5, and x_5 in Omega_0.

Now clearly chi_{i_1}(x_1) chi_{i_2}(x_2) dots chi_{i_6}(x_6) = 1 if and only if i_1 = 2, i_2 = 3, i_3 = 5, i_4 = 2, i_5 = 0, i_6 = 5.


Remark 98++

In particular only one term in (98+.2) contributes. We conclude that

(98++.1)

E(n_0, dots, n_k; x) = chi_{{ x = (x_1, dots, x_j): n_q of the x_i's lie in Omega_q, 0 le q le k }}(x)

Thus

(98++.2)

prod_{i=1}^j g(x_i) = underset{sum_{i=0}^k n_i = j}{sum_{n_0, n_1, dots, n_k ge 0}} eta_0^{n_0} eta_1^{n_1} dots eta_k^{n_k} chi_{{ x = (x_1, dots, x_j): n_q of the x_i's lie in Omega_q, 0 le q le k }}(x)


where |n| = n_0 + n_1 + dots + n_k.
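The content of (98++.1)-(98++.2) — that prod_i g(x_i) depends on the configuration only through the occupancy numbers n_q — is easy to test numerically. A sketch; the partition of [0,1) and the values eta_q below are arbitrary illustrative choices:

```python
import numpy as np

# Check of (98++.2): with g = sum_q eta_q chi_q for a partition
# Omega_0, Omega_1, Omega_2 of [0,1), the product prod_i g(x_i) equals
# eta_0^{n_0} eta_1^{n_1} eta_2^{n_2}, where n_q = #{i : x_i in Omega_q}.
rng = np.random.default_rng(2)
edges = np.array([0.0, 0.3, 0.6, 1.0])    # cell boundaries of the partition
eta = np.array([0.5, -1.0, 2.0])
x = rng.random(7)                          # j = 7 points
which = np.searchsorted(edges, x, side='right') - 1   # q with x_i in Omega_q
lhs = np.prod(eta[which])                  # prod_i g(x_i) computed directly
n = np.bincount(which, minlength=3)        # occupancy numbers n_0, n_1, n_2
rhs = np.prod(eta ** n)                    # eta_0^{n_0} eta_1^{n_1} eta_2^{n_2}
```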

On the other hand, by the Fredholm expansion of a determinant,

det (mathbf{1} + K chi_g) = sum_{j=0}^infty frac{1}{j!} int_{mathbb{R}^j} det begin{pmatrix} K(x_1, x_1) & dots & K(x_1, x_j) \ vdots & ddots & vdots \ K(x_j, x_1) & dots & K(x_j, x_j) end{pmatrix} prod_{i=1}^j g(x_i) d^j x

= sum_{j=0}^infty frac{1}{j!} int_{mathbb{R}^j} det (K(x_i, x_k))_{i,k=1}^j prod_{i=1}^j g(x_i) d^j x .

Here we have used the fact that det (K(x_i, x_k))_{i, k=1}^j = 0 if j > N (why?).
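The finite rank of K also means the Fredholm determinant collapses to an N x N determinant: det(1 + K chi_g)_{L^2} = det(I_N + A) with A_{jl} = int phi_j g phi_l dx, by Sylvester's determinant identity. A numerical sketch, using normalized Legendre polynomials on [-1,1] as an illustrative orthonormal family (not the weighted OPs of the notes):

```python
import numpy as np

# For the finite-rank kernel K(x,y) = sum_{j<N} phi_j(x) phi_j(y) built from
# orthonormal functions phi_j, compare a Nystrom discretization of
# det(1 + K g) on L^2(-1,1) with the N x N finite-rank formula
# det(I_N + A), A_{jl} = int phi_j g phi_l dx.
Npoly, Q = 4, 60
nodes, w = np.polynomial.legendre.leggauss(Q)       # quadrature on [-1, 1]
phi = np.stack([np.sqrt((2*j + 1) / 2)
                * np.polynomial.legendre.Legendre.basis(j)(nodes)
                for j in range(Npoly)])             # rows phi_j at the nodes
g = 0.7 * ((nodes > -0.5) & (nodes < 0.5))          # g = eta * chi_Omega
K = phi.T @ phi                                     # K(x_i, x_j) on the grid
# Nystrom discretization of det(1 + K g):
fredholm = np.linalg.det(np.eye(Q) + K * (w * g)[None, :])
# Finite-rank formula (Sylvester): det(I_N + A), A = phi diag(w g) phi^T
A = phi @ np.diag(w * g) @ phi.T
finite_rank = np.linalg.det(np.eye(Npoly) + A)
```

For finite matrices the two determinants agree exactly (Sylvester's identity det(I + UV) = det(I + VU)), so the match is independent of the quadrature accuracy.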

Again expanding out g(x) using (98++.2), we find as above

(99.1)

det (mathbf{1} + K chi_g) = underset{0 le |n| le N}{sum_{n_0, n_1, dots, n_k ge 0}} frac{eta_0^{n_0} dots eta_k^{n_k}}{|n|!} int_{mathbb{R}^{|n|}} det (K(x_i, x_k))_{i, k =1}^{|n|}

times chi_{{ n_0, n_1, dots, n_k of { x_1, dots, x_{|n|} } lie in Omega_0, Omega_1, dots, Omega_k respectively }} d^{|n|}x.

Equating (98.2) and (99.1), and comparing coefficients, we find in particular for k le N, n_0 = 0, n_1 = dots = n_k = 1

(100.1)

0 = int_{mathbb{R}^k} Delta_k(x_1, dots, x_k) chi_{{ each of Omega_1, dots, Omega_k contains exactly underline{one} of { x_1, dots, x_k }, and Omega_0 contains none }} d^k x

where

Delta_k(x_1, dots, x_k) = R_k(x_1, dots, x_k) - det (K(x_i, x_j))_{i, j = 1}^k

As Delta_k(x_1, dots, x_k) is symmetric, (100.1) Rightarrow

(100.2)

underset{sharp{ i: x_i in Omega_j } = 1, ; 1 le j le k}{int_{x_1 le dots le x_k}} Delta_k(x_1, dots, x_k) d^k x = 0.

Let Omega_j = (a_j, b_j), 1 le j le k, be disjoint intervals ordered from the left, i.e.,

a_1 < b_1 < a_2 < b_2 < dots < a_k < b_k

and inserting these Omega_j's into (100.2), dividing by prod_{j=1}^k (b_j - a_j), and letting b_j downarrow a_j, we obtain

Delta_k(a_1, dots, a_k) = 0

for all a_1 < dots < a_k and hence for all a_1, dots, a_k by symmetry. We conclude that for k ge 1,

(100.3)

R_k(x_1, dots, x_k) = det (K(x_i, x_j))_{1 le i, j le k}

Exercise: Use the above calculations to rederive (95.1).

Remark: For other proofs of this result see ref 3 p. 96-98 and also ref 2 p. 103-108 (this calculation is taken from [Meh]).
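For instance, for the 2x2 GUE with weight omega(x) = e^{-x^2} (joint eigenvalue density proportional to e^{-lambda_1^2 - lambda_2^2} (lambda_1 - lambda_2)^2), (100.3) gives R_1(x) = K(x, x) = (1 + 2x^2) e^{-x^2}/sqrt(pi) from the first two Hermite functions, and this can be tested against direct sampling via (95.2). A Monte Carlo sketch, not part of the notes:

```python
import numpy as np

# Check of (100.3)/(95.2) for the 2x2 GUE: sample M with density ~ e^{-tr M^2}
# (diagonal entries N(0, 1/2); Re and Im of M_12 each N(0, 1/4)), whose joint
# eigenvalue density is ~ e^{-l1^2 - l2^2} (l1 - l2)^2.  Then
# Exp #{eigenvalues in Omega} should equal int_Omega K(x, x) dx with
# K(x, x) = (1 + 2 x^2) e^{-x^2} / sqrt(pi).
rng = np.random.default_rng(3)
trials = 200_000
d = rng.normal(0.0, np.sqrt(0.5), (trials, 2))
off = rng.normal(0.0, 0.5, (trials, 2))
M = np.zeros((trials, 2, 2), dtype=complex)
M[:, 0, 0], M[:, 1, 1] = d[:, 0], d[:, 1]
M[:, 0, 1] = off[:, 0] + 1j * off[:, 1]
M[:, 1, 0] = off[:, 0] - 1j * off[:, 1]
lam = np.linalg.eigvalsh(M)
a, b = 0.0, 1.0                                   # Omega = (0, 1)
mc = ((lam > a) & (lam < b)).sum(axis=1).mean()
# midpoint rule for int_Omega K(x, x) dx:
grid = np.linspace(a, b, 2001)
mid = 0.5 * (grid[:-1] + grid[1:])
dx = grid[1] - grid[0]
predicted = ((1 + 2 * mid**2) * np.exp(-mid**2)).sum() * dx / np.sqrt(np.pi)
```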

Remark 101+ →

The above calculations show that in order to evaluate key eigenvalue statistics for Unitary Ensembles we must understand the asymptotic behavior of the correlation kernel

K(x, y) = sum_{j=0}^{N-1} phi_j(x) phi_j(y)

where phi_j(x) = p_j(x) (omega(x))^{frac{1}{2}}, and the p_j's are orthonormal w.r.t. the weight omega(x),

int_{mathbb{R}} p_i(x) p_j(x) omega(x) dx = delta_{ij}, quad i, j ge 0.

Thus the problem of the asymptotics of eigenvalue statistics reduces, for Unitary Ensembles, to the classical problem of the asymptotics of orthogonal polynomials (OPs).
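Concretely, for the weight omega(x) = e^{-x^2} the orthonormal p_j are the normalized (physicists') Hermite polynomials, p_j = H_j / (2^j j! sqrt(pi))^{1/2}. A short numerical verification of the orthonormality relation above:

```python
import math
import numpy as np

# Verify int p_i(x) p_j(x) e^{-x^2} dx = delta_{ij} for the normalized
# Hermite polynomials, using Gauss-Hermite quadrature (which has the weight
# e^{-x^2} built in and is exact for polynomials of degree < 2Q).
n, Q = 5, 40
nodes, w = np.polynomial.hermite.hermgauss(Q)
P = np.stack([np.polynomial.hermite.Hermite.basis(j)(nodes)
              / math.sqrt(2.0**j * math.factorial(j) * math.sqrt(math.pi))
              for j in range(n)])
G = P @ np.diag(w) @ P.T          # Gram matrix: int p_i p_j e^{-x^2} dx
```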


Remark 101+:

We now compute

Prob { n_1 eigenvalues in Omega_1, ..., n_k eigenvalues in Omega_k }

where again the Omega_i's are disjoint and sum_{j=1}^k n_j le N.

Set n_0 = N - sum_{j=1}^k n_j and set Omega_0 = mathbb{R} backslash cup_{j=1}^k Omega_j.

Again letting chi_j be the characteristic function of Omega_j, 0 le j le k, we have, using (98++.1)

Prob { n_1 eigenvalues in Omega_1, ..., n_k eigenvalues in Omega_k }

= frac{int_{x_1 le x_2 le dots le x_N} E(n_0, n_1, dots, n_k; x) prod_{i=1}^N omega(x_i) |V(x)|^2 d^N x}{int_{x_1 le dots le x_N} prod_{i=1}^N omega(x_i) |V(x)|^2 d^N x}

= frac{int_{mathbb{R}^N} E(n_0, n_1, dots, n_k; x) prod_{i=1}^N omega(x_i) |V(x)|^2 d^N x}{int_{mathbb{R}^N} prod_{i=1}^N omega(x_i) |V(x)|^2 d^N x}

now for

F(x, gamma_0, dots, gamma_k) = prod_{j=1}^N (gamma_0 chi_0 + dots + gamma_k chi_k)(x_j)

we have for sum_{i=1}^k n_i le N

E(n_0, dots, n_k; x) = frac{1}{n_1! dots n_k!} frac{partial^{n_1 + dots + n_k}}{partial gamma_1^{n_1} dots partial gamma_k^{n_k}} |_{underset{gamma_1 = dots = gamma_k = 0}{gamma_0 = 1}} F(x; gamma_0, dots, gamma_k)

where we have used (98+.1) with j = N

F = prod_{j=1}^N (gamma_0 chi_0 + dots + gamma_k chi_k)(x_j)

= underset{sum_{i=0}^k n_i = N}{sum_{n_0, n_1, dots, n_k ge 0}} gamma_0^{n_0} dots gamma_k^{n_k} E(n_0, dots, n_k; x).

Thus

Prob { n_1 eigenvalues in Omega_1, ..., n_k eigenvalues in Omega_k }

= frac{1}{n_1! dots n_k!} frac{partial^{n_1 + dots + n_k}}{partial gamma_1^{n_1} dots partial gamma_k^{n_k}} |_{underset{gamma_1 = dots = gamma_k = 0}{gamma_0 = 1}} frac{int_{mathbb{R}^N} prod_{j=1}^N (gamma_0 chi_0 + dots + gamma_k chi_k)(x_j) prod_{i=1}^N omega(x_i) |V(x)|^2 d^N x}{int_{mathbb{R}^N} prod_{i=1}^N omega(x_i) |V(x)|^2 d^N x}

Set 1 + g = gamma_0 chi_0 + dots + gamma_k chi_k; then at gamma_0 = 1,

g = eta_1 chi_1 + dots + eta_k chi_k, where eta_j = gamma_j - 1, 1 le j le k.

It now follows from (89.1) that

<f> = <1 + g> = det (mathbf{1} + K chi_g)

where K is the correlation kernel in (88.2)

K(x, y) = sum_{j=0}^{N-1} phi_j(x) phi_j(y).

Thus, finally,

(101+++.1)

Prob { n_1 eigenvalues in Omega_1, ..., n_k eigenvalues in Omega_k }

= frac{1}{n_1! dots n_k!} frac{partial^{n_1 + dots + n_k}}{partial gamma_1^{n_1} dots partial gamma_k^{n_k}} |_{eta_1 = dots = eta_k = -1} det (mathbf{1} + K sum_{j=1}^k eta_j chi_j).


(Ref: Szegő, "Orthogonal Polynomials").

For the next couple of lectures we will consider this problem. A key object that controls the asymptotics of OPs is the so-called equilibrium measure (see Ref 2, Chap 6; see also Saff and Totik "Logarithmic potentials with external fields" for the general theory).

We will see eventually that this quantity is intimately related to the density of states for RMT and also to the one-point correlation function R_1(x). The calculations below are taken from ref (2), which in turn are based on work of K. Johansson (see ref (2)).

In the calculations that follow we will always assume that the probability density P_N(M) dM varies with N in the following way

(102.1)

P_N(M) dM = frac{1}{Z_N} e^{-N tr V(M)} dM.

As all our calculations so far have assumed that N is given and fixed, they all remain valid: we must just set

omega(x) = omega_N(x) = e^{-N V(x)}.

After integrating out the eigenvectors we obtain as before a probability measure on the eigenvalues

hat{P}_N(lambda) d^N lambda = frac{1}{hat{Z}_N} e^{- N sum V(lambda_i)} prod_{i < j} (lambda_i - lambda_j)^2 d^N lambda

where (after symmetrizing)

(103.1)

hat{Z}_N = int_{mathbb{R}^N} e^{-N sum V(lambda_i)} prod_{i < j} |lambda_i - lambda_j|^2 d^N lambda

= int_{mathbb{R}^N} e^{-N^2 H(lambda)} d^Nlambda

where H(lambda) = frac{1}{N^2} sum_{i ne j}^N log |lambda_i - lambda_j|^{-1} + frac{1}{N} sum_{i=1}^N V(lambda_i).

Let

(103.2)

mu_lambda = frac{1}{N} sum_{i=1}^N delta_{lambda_i}

be the normalized counting measure for the eigenvalues, and note that H(lambda) can be expressed as follows:

(104.1)

H(mu_lambda) = iint_{t ne s} log|t - s|^{-1} d mu_lambda(t) d mu_lambda(s) + int V(t) d mu_lambda(t)

Note that the scaling in the potential

e^{-tr V} → e^{-N tr V}

is chosen so that the terms in (104.1) are balanced.

Intuitively, the leading contribution to the partition function (103.1) as N → infty comes from those lambda_i's for which H(mu_lambda) is a minimum. Thus we are led to consider the following energy minimization problem

(104.2)

E^V = inf_{mu in M_1(mathbb{R})} H(mu)

where H(mu) = iint log|t - s|^{-1} d mu(t) d mu(s) + int V(t) d mu(t) and M_1(mathbb{R}) = { mu is a Borel measure on mathbb{R}: int dmu = 1 }.

We will show eventually that (104.2) has a unique minimizer mu = mu^{eq}, the equilibrium measure mentioned above. From the definition of H(mu), mu^{eq} has an electrostatic interpretation: it is the equilibrium configuration for electrons with logarithmic electrostatic repulsion

iint log|t - s|^{-1} d mu^{eq}(t) d mu^{eq}(s)

in an external field int V(t) d mu^{eq}(t). As already indicated, d mu^{eq} is intimately related to a variety of problems in RMT, and also in analysis. The existence and uniqueness of the solution of the variational problem (104.2) relies ultimately on the fact that we are dealing with a (constrained) convex minimization problem.
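For V(x) = x^2 the solution of (104.2) is the classical semicircle law, d mu^{eq} = (1/pi) sqrt(2 - x^2) dx on [-sqrt(2), sqrt(2)] (see ref (2), Chap 6). A hedged numerical sketch, comparing this prediction with eigenvalues sampled from P_N(M) dM ~ e^{-N tr M^2} dM (all sampling parameters below are illustrative choices):

```python
import numpy as np

# Sample GUE matrices with density ~ e^{-N tr M^2}: diagonal entries
# N(0, 1/(2N)); Re and Im of each off-diagonal entry N(0, 1/(4N)).
# The empirical eigenvalue distribution should approach the equilibrium
# measure for V(x) = x^2: density sqrt(2 - x^2)/pi on [-sqrt(2), sqrt(2)].
rng = np.random.default_rng(4)
N, reps = 200, 50
lams = []
for _ in range(reps):
    d = rng.normal(0, np.sqrt(1 / (2 * N)), N)
    re = rng.normal(0, np.sqrt(1 / (4 * N)), (N, N))
    im = rng.normal(0, np.sqrt(1 / (4 * N)), (N, N))
    U = np.triu(re + 1j * im, 1)
    M = U + U.conj().T + np.diag(d)          # Hermitian, density ~ e^{-N tr M^2}
    lams.append(np.linalg.eigvalsh(M))
lam = np.concatenate(lams)
frac = np.mean(np.abs(lam) < 0.1)            # empirical mass of (-0.1, 0.1)
pred = 0.2 * np.sqrt(2) / np.pi              # mu^eq((-0.1, 0.1)), to O(delta^2)
edge = np.abs(lam).max()                     # should sit near sqrt(2)
```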

