Eigenvalues and eigenvectors of a matrix
An example of an eigenvalue equation where the transformation $T$ is represented in terms of a differential operator is the time-independent Schrödinger equation in quantum mechanics:

$H\psi_E = E\psi_E$

where $H$, the Hamiltonian, is a second-order differential operator and $\psi_E$, the wavefunction, is one of its eigenfunctions corresponding to the eigenvalue $E$, interpreted as its energy.

However, in the case where one is interested only in the bound state solutions of the Schrödinger equation, one looks for $\psi_E$ within the space of square integrable functions. Since this space is a Hilbert space with a well-defined scalar product, one can introduce a basis set in which $\psi_E$ and $H$ can be represented as a one-dimensional array and a matrix, respectively. This allows one to represent the Schrödinger equation in a matrix form.

Bra-ket notation is often used in this context. A vector, which represents a state of the system, in the Hilbert space of square integrable functions is represented by $|\Psi_E\rangle$. In this notation, the Schrödinger equation is:

$H|\Psi_E\rangle = E|\Psi_E\rangle$

where $|\Psi_E\rangle$ is an eigenstate of $H$ and $E$ stands for the eigenvalue. $H$ is a self-adjoint operator, the infinite-dimensional analog of Hermitian matrices (see Observable). As in the matrix case, in the equation above $H|\Psi_E\rangle$ is understood to be the vector obtained by application of the transformation $H$ to $|\Psi_E\rangle$.
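A minimal sketch of this matrix form (not from the original text; the grid size, box length, and zero potential are assumptions): discretizing a one-dimensional Hamiltonian on a finite grid turns the time-independent Schrödinger equation into an ordinary symmetric matrix eigenvalue problem.

```python
# Hypothetical discretization of H = -d^2/dx^2 (units with hbar^2/2m = 1,
# zero potential) on a uniform grid, wavefunction pinned to 0 at the walls.
import numpy as np

n = 200               # number of interior grid points (assumed)
L = 1.0               # box length (assumed)
h = L / (n + 1)       # grid spacing

# Finite-difference matrix for -d^2/dx^2: 2 on the diagonal,
# -1 on the first off-diagonals, all divided by h^2.
H = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

# eigh exploits the symmetry (self-adjointness) of H; the eigenvalues E
# are the energies and the eigenvector columns are the wavefunctions.
E, psi = np.linalg.eigh(H)
print(E[:3])          # lowest energies, close to (k*pi/L)^2 for k = 1, 2, 3
```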
Molecular orbitals
In quantum mechanics, and in particular in atomic and molecular physics, within the Hartree–Fock theory, the atomic and molecular orbitals can be defined by the eigenvectors of the Fock operator. The corresponding eigenvalues are interpreted as ionization potentials via Koopmans' theorem. In this case, the term eigenvector is used in a somewhat more general meaning, since the Fock operator is explicitly dependent on the orbitals and their eigenvalues. If one wants to underline this aspect, one speaks of a nonlinear eigenvalue problem. Such equations are usually solved by an iteration procedure, called in this case the self-consistent field method. In quantum chemistry, one often represents the Hartree–Fock equation in a non-orthogonal basis set. This particular representation is a generalized eigenvalue problem called the Roothaan equations.
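To make the nonlinear, self-consistent character of such problems concrete, here is a toy iteration loop. The matrix F0 and the coupling through the density matrix are invented for illustration; a real Fock operator is far more involved.

```python
# Toy self-consistent field loop: the "Fock" matrix depends on its own
# eigenvectors, so we diagonalize, rebuild, and repeat until converged.
import numpy as np

rng = np.random.default_rng(0)
F0 = rng.random((4, 4))
F0 = (F0 + F0.T) / 2                 # symmetric one-electron part (invented)

C = np.eye(4)                        # initial orbital guess
for _ in range(100):
    D = C[:, :2] @ C[:, :2].T        # density from 2 occupied orbitals
    F = F0 + 0.1 * D                 # operator depends on its own eigenvectors
    eps, C_new = np.linalg.eigh(F)   # orbital energies and orbitals
    if np.allclose(C_new[:, :2] @ C_new[:, :2].T, D, atol=1e-10):
        break                        # density stopped changing: converged
    C = C_new
print(eps)                           # converged "orbital energies"
```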
Geology and glaciology
In geology, especially in the study of glacial till, eigenvectors and eigenvalues are used as a method by which a mass of information about the orientation and dip of a clast fabric's constituents can be summarized in 3-D space by six numbers. In the field, a geologist may collect such data for hundreds or thousands of clasts in a soil sample, which can only be compared graphically, such as in a Tri-Plot (Sneed and Folk) diagram or as a stereonet on a Wulff net. The output for the orientation tensor is in the three orthogonal (perpendicular) axes of space. Eigenvectors output from programs such as Stereo32 are in the order E1 ≥ E2 ≥ E3, with E1 being the primary orientation of clast orientation/dip, E2 the secondary, and E3 the tertiary, in terms of strength. The clast orientation is defined as the eigenvector, on a compass rose of 360°. Dip is measured as the eigenvalue, the modulus of the tensor: it is valued from 0° (no dip) to 90° (vertical). The relative values of E1, E2, and E3 are dictated by the nature of the sediment's fabric: if E1 = E2 = E3, the fabric is said to be isotropic; if E1 = E2 > E3, the fabric is planar; if E1 > E2 > E3, the fabric is linear. See 'A Practical Guide to the Study of Glacial Sediments' by Benn & Evans, 2004.
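A minimal sketch of the orientation tensor computation, assuming each clast's long axis is recorded as trend and plunge angles in degrees (the sample values are invented):

```python
# Build unit direction vectors from trend/plunge, form the orientation
# tensor as a normalized sum of outer products, and read off E1 >= E2 >= E3.
import numpy as np

trend = np.radians([10, 15, 12, 200, 185])    # invented field data
plunge = np.radians([5, 8, 3, 6, 4])

# Direction cosines of each clast's long axis.
v = np.column_stack([np.cos(plunge) * np.cos(trend),
                     np.cos(plunge) * np.sin(trend),
                     np.sin(plunge)])

T = v.T @ v / len(v)           # symmetric 3x3 orientation tensor
evals, evecs = np.linalg.eigh(T)
E3, E2, E1 = evals             # eigh returns eigenvalues in ascending order
print(E1, E2, E3)              # here E1 >> E2, E3: a strongly linear fabric
```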
Principal components analysis
The eigendecomposition of a symmetric positive semidefinite (PSD) matrix yields an orthogonal basis of eigenvectors, each of which has a nonnegative eigenvalue. The orthogonal decomposition of a PSD matrix is used in multivariate analysis, where the sample covariance matrices are PSD. This orthogonal decomposition is called principal components analysis (PCA) in statistics. PCA studies linear relations among variables. PCA is performed on the covariance matrix or the correlation matrix (in which each variable is scaled to have its sample variance equal to one). For the covariance or correlation matrix, the eigenvectors correspond to principal components and the eigenvalues to the variance explained by the principal components. Principal component analysis of the correlation matrix provides an orthonormal eigen-basis for the space of the observed data: in this basis, the largest eigenvalues correspond to the principal components that are associated with most of the covariability among the observed data.
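A minimal sketch of PCA by eigendecomposition, using a synthetic two-variable data set:

```python
# The eigenvectors of the sample covariance matrix are the principal
# components; the eigenvalues give the variance each component explains.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2)) @ np.array([[3.0, 0.0],
                                          [1.0, 0.5]])   # correlated data

Xc = X - X.mean(axis=0)            # center each variable
C = np.cov(Xc, rowvar=False)       # sample covariance matrix (PSD)

evals, evecs = np.linalg.eigh(C)   # ascending eigenvalues
order = np.argsort(evals)[::-1]    # reorder: largest variance first
evals, evecs = evals[order], evecs[:, order]

scores = Xc @ evecs                # the data expressed in the eigenbasis
print(evals / evals.sum())         # fraction of variance explained
```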
Principal component analysis is used to study large data sets, such as those encountered in data mining, chemical research, psychology, and in marketing. PCA is popular especially in psychology, in the field of psychometrics. In Q-methodology, the eigenvalues of the correlation matrix determine the Q-methodologist's judgment of practical significance (which differs from the statistical significance of hypothesis testing): The factors with eigenvalues greater than 1.00 are considered practically significant, that is, as explaining an important amount of the variability in the data, while eigenvalues less than 1.00 are considered practically insignificant, as explaining only a negligible portion of the data variability. More generally, principal component analysis can be used as a method of factor analysis in structural equation modeling.
Vibration analysis
Eigenvalue problems occur naturally in the vibration analysis of mechanical structures with many degrees of freedom. The eigenvalues are used to determine the natural frequencies (or eigenfrequencies) of vibration, and the eigenvectors determine the shapes of these vibrational modes. In particular, undamped vibration is governed by

$m\ddot{x} + kx = 0$

or

$m\ddot{x} = -kx$

that is, acceleration is proportional to position (i.e., we expect $x$ to be sinusoidal in time). In $n$ dimensions, $m$ becomes a mass matrix and $k$ a stiffness matrix. Admissible solutions are then a linear combination of solutions to the generalized eigenvalue problem

$kx = \omega^2 mx$

where $\omega^2$ is the eigenvalue and $\omega$ is the angular frequency. Note that the principal vibration modes are different from the principal compliance modes, which are the eigenvectors of $k$ alone. Furthermore, damped vibration, governed by

$m\ddot{x} + c\dot{x} + kx = 0$

leads to a so-called quadratic eigenvalue problem,

$(\lambda^2 m + \lambda c + k)x = 0.$

This can be reduced to a generalized eigenvalue problem by clever algebra at the cost of solving a larger system. The orthogonality properties of the eigenvectors allow the differential equations to be decoupled, so that the system can be represented as a linear combination of the eigenvectors. The eigenvalue problem of complex structures is often solved using finite element analysis, which neatly generalizes the solution to scalar-valued vibration problems.
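A minimal sketch of the generalized eigenvalue problem $kx = \omega^2 mx$ for a two-degree-of-freedom system; the mass and stiffness values are invented for illustration:

```python
# scipy.linalg.eigh solves K x = lambda M x directly when both matrices
# are symmetric and M is positive definite; lambda = omega^2 here.
import numpy as np
from scipy.linalg import eigh

M = np.diag([2.0, 1.0])              # mass matrix (invented values)
K = np.array([[ 3.0, -1.0],
              [-1.0,  1.0]])         # stiffness matrix (invented values)

lam, modes = eigh(K, M)              # generalized eigenvalue problem
omega = np.sqrt(lam)                 # natural (angular) frequencies
print(omega)                         # eigenfrequencies
print(modes)                         # columns are the vibration mode shapes
```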
Eigenfaces
In image processing, processed images of faces can be seen as vectors whose components are the brightnesses of each pixel. The dimension of this vector space is the number of pixels. The eigenvectors of the covariance matrix associated with a large set of normalized pictures of faces are called eigenfaces; this is an example of principal components analysis. They are very useful for expressing any face image as a linear combination of some of them. In the facial recognition branch of biometrics, eigenfaces provide a means of applying data compression to faces for identification purposes. Research has also been done on eigen vision systems that determine hand gestures.
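A hedged sketch of the eigenface computation; random arrays stand in for normalized face images here, so only the linear algebra is illustrated, not a real recognition pipeline:

```python
# The eigenvectors of the covariance matrix of centered image vectors are
# the eigenfaces; the SVD of the data matrix yields them directly.
import numpy as np

rng = np.random.default_rng(1)
faces = rng.random((100, 32 * 32))   # 100 stand-in "images" of 32x32 pixels

mean_face = faces.mean(axis=0)
A = faces - mean_face                # centered image vectors, one per row

# Rows of Vt are eigenvectors of A.T @ A, i.e. the eigenfaces, obtained
# without forming the (pixels x pixels) covariance matrix explicitly.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
eigenfaces = Vt[:10]                 # keep the 10 leading components

coeffs = (faces[0] - mean_face) @ eigenfaces.T   # compress: 1024 -> 10 numbers
approx = mean_face + coeffs @ eigenfaces         # reconstruct from coefficients
```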
Similar to this concept, eigenvoices represent the general direction of variability in human pronunciations of a particular utterance, such as a word in a language. Based on a linear combination of such eigenvoices, a new voice pronunciation of the word can be constructed. These concepts have been found useful in automatic speech recognition systems, for speaker adaptation.
Tensor of moment of inertia
In mechanics, the eigenvectors of the moment of inertia tensor define the principal axes of a rigid body. The tensor of moment of inertia is a key quantity required to determine the rotation of a rigid body around its center of mass.
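A minimal sketch, idealizing the rigid body as three point masses (all values invented): one symmetric eigendecomposition yields both the principal moments and the principal axes.

```python
# Inertia tensor of point masses: I = sum_i m_i (|r_i|^2 Id - r_i r_i^T),
# with positions taken relative to the center of mass.
import numpy as np

masses = np.array([1.0, 2.0, 1.5])              # invented masses
pos = np.array([[1.0, 0.0, 0.0],
                [0.0, 2.0, 0.0],
                [0.0, 1.0, 1.0]])               # invented positions

pos = pos - masses @ pos / masses.sum()         # shift to the center of mass

I = sum(m * (r @ r * np.eye(3) - np.outer(r, r))
        for m, r in zip(masses, pos))

moments, axes = np.linalg.eigh(I)   # principal moments (eigenvalues)
print(moments)
print(axes)                         # columns are the principal axes
```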
Stress tensor
In solid mechanics, the stress tensor is symmetric and so can be decomposed into a diagonal tensor with the eigenvalues on the diagonal and eigenvectors as a basis. Because it is diagonal, in this orientation, the stress tensor has no shear components; the components it does have are the principal components.
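A minimal sketch with an invented stress state, checking that the tensor really has no shear components once expressed in its eigenvector basis:

```python
# Diagonalizing a symmetric stress tensor: the eigenvalues are the
# principal stresses and the eigenvectors the principal directions.
import numpy as np

sigma = np.array([[50.0,  30.0,  0.0],
                  [30.0, -20.0, 10.0],
                  [ 0.0,  10.0, 40.0]])   # invented stress tensor (e.g. MPa)

principal, Q = np.linalg.eigh(sigma)      # principal stresses and directions
rotated = Q.T @ sigma @ Q                 # the tensor in the principal basis
print(principal)                          # eigenvalues appear on the diagonal
print(np.round(rotated, 10))              # off-diagonal (shear) terms vanish
```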
Eigenvalues of a graph
In spectral graph theory, an eigenvalue of a graph is defined as an eigenvalue of the graph's adjacency matrix $A$, or (increasingly) of the graph's Laplacian matrix (see also Discrete Laplace operator), which is either $T - A$ (sometimes called the combinatorial Laplacian) or $I - T^{-1/2}AT^{-1/2}$ (sometimes called the normalized Laplacian), where $T$ is a diagonal matrix with $T_{v,v}$ equal to the degree of vertex $v$, and in $T^{-1/2}$, the $v$th diagonal entry is $1/\sqrt{\deg(v)}$. The $k$th principal eigenvector of a graph is defined as either the eigenvector corresponding to the $k$th largest or the $k$th smallest eigenvalue of the Laplacian. The first principal eigenvector of the graph is also referred to merely as the principal eigenvector.
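A minimal sketch on an invented five-vertex graph, building the combinatorial Laplacian $T - A$ and its spectrum; the eigenvector belonging to the second smallest eigenvalue (the Fiedler vector) drives the spectral clustering mentioned below.

```python
# The Laplacian of an undirected graph is symmetric, so numpy's eigh
# applies; its smallest eigenvalue is 0 for a connected graph.
import numpy as np

A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)   # invented adjacency matrix

T = np.diag(A.sum(axis=1))          # degree matrix
Lap = T - A                         # combinatorial Laplacian

evals, evecs = np.linalg.eigh(Lap)  # ascending eigenvalues, evals[0] ~ 0
fiedler = evecs[:, 1]               # eigenvector of the 2nd smallest eigenvalue
print(fiedler >= 0)                 # its sign pattern splits the vertices
```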
The principal eigenvector is used to measure the centrality of its vertices. An example is Google's PageRank algorithm. The principal eigenvector of a modified adjacency matrix of the World Wide Web graph gives the page ranks as its components. This vector corresponds to the stationary distribution of the Markov chain represented by the row-normalized adjacency matrix; however, the adjacency matrix must first be modified to ensure a stationary distribution exists. The eigenvector corresponding to the second smallest eigenvalue of the Laplacian can be used to partition the graph into clusters, via spectral clustering. Other methods are also available for clustering.
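A minimal sketch of the PageRank idea on an invented four-page link graph; the damping shown is one standard way to modify the matrix so that a stationary distribution is guaranteed to exist.

```python
# Power iteration converges to the principal eigenvector of the damped,
# row-normalized adjacency matrix, read as a stationary distribution.
import numpy as np

A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # invented link graph

P = A / A.sum(axis=1, keepdims=True)   # row-normalized adjacency matrix
d = 0.85                               # damping factor (Google's usual choice)
n = len(A)
G = d * P + (1 - d) / n                # modified matrix: all entries positive

rank = np.full(n, 1.0 / n)             # start from the uniform distribution
for _ in range(100):                   # power iteration
    rank = rank @ G                    # one step of the Markov chain
print(rank)                            # page ranks, summing to 1
```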
Basic reproduction number
The basic reproduction number ($R_0$) is a fundamental number in the study of how infectious diseases spread. If one infectious person is put into a population of completely susceptible people, then $R_0$ is the average number of people that one infectious person will infect. The generation time of an infection is the time, $t_G$, from one person becoming infected to the next person becoming infected. In a heterogeneous population, the next generation matrix defines how many people in the population will become infected after time $t_G$ has passed. $R_0$ is then the largest eigenvalue of the next generation matrix.
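A hedged sketch with an invented 2×2 next generation matrix; in practice its entries are estimated from epidemiological data.

```python
# Entry K[i, j]: expected number of new type-i infections caused by one
# infected individual of type j. R0 is the largest eigenvalue of K.
import numpy as np

K = np.array([[1.2, 0.4],
              [0.3, 0.8]])             # invented next generation matrix

R0 = max(abs(np.linalg.eigvals(K)))    # spectral radius
print(R0)                              # R0 > 1: the infection can spread
```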
See also
- Nonlinear eigenproblem
- Quadratic eigenvalue problem
- Introduction to eigenstates
- Eigenplane
- Jordan normal form
- List of numerical analysis software
- Antieigenvalue theory
References
- Pigolkina, T. S. and Shulman, V. S., Eigenvalue (in Russian), In: Vinogradov, I. M. (Ed.), Mathematical Encyclopedia, Vol. 5, Soviet Encyclopedia, Moscow, 1977.
- Curtis, Charles W., Linear Algebra: An Introductory Approach, 347 p., Springer; 4th ed. 1984. Corr. 7th printing edition (August 19, 1999), ISBN 0-387-90992-3.
External links
- What are Eigen Values? — non-technical introduction from PhysLink.com's "Ask the Experts"
- Eigen Values and Eigen Vectors Numerical Examples – Tutorial and Interactive Program from Revoledu.
- Introduction to Eigen Vectors and Eigen Values – lecture from Khan Academy
Theory
- Eigenvector — Wolfram MathWorld
- Eigen Vector Examination working applet
- Same Eigen Vector Examination as above in a Flash demo with sound
- Computation of Eigenvalues
- Numerical solution of eigenvalue problems Edited by Zhaojun Bai, James Demmel, Jack Dongarra, Axel Ruhe, and Henk van der Vorst
- Eigenvalues and Eigenvectors on the Ask Dr. Math forums: [1], [2]