Eigenvalues and Eigenvectors in NumPy
NumPy provides powerful tools for exploring advanced linear algebra concepts such as matrix eigenvalues and eigenvectors. These are critical for understanding matrix structure, solving systems of equations, and analyzing transformations in data science, physics, and machine learning.
- Eigenvalues: Scalars that indicate how a linear transformation scales vectors along specific directions.
- Eigenvectors: Non-zero vectors that only change in magnitude (not direction) under a given linear transformation.
Eigenvalues in NumPy
In linear algebra, an eigenvalue of a square matrix is a scalar that indicates how a linear transformation scales vectors (eigenvectors) along specific directions. If A is a square matrix and v is a non-zero vector, then:
A × v = λ × v
where λ is the eigenvalue corresponding to the eigenvector v. Eigenvalues help reveal important properties of a matrix, including stability and transformations.
You can compute eigenvalues in NumPy using np.linalg.eig(), which returns both the eigenvalues and eigenvectors of a square matrix.
import numpy as np
# Define a square matrix
A = np.array([[4, -2],
              [1, 1]])
# Compute eigenvalues and eigenvectors
eigenvalues, eigenvectors = np.linalg.eig(A)
print("Eigenvalues:")
print(eigenvalues)
How It Works:
- np.linalg.eig(A) returns a tuple: the first element contains the eigenvalues.
- Eigenvalues may be real or complex numbers depending on the matrix.
- Each eigenvalue corresponds to an eigenvector that satisfies A × v = λ × v.
Output
Eigenvalues:
[3. 2.]
In this example, the matrix has two real eigenvalues: 3 and 2. These values describe how the matrix stretches or compresses vectors in specific directions.
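If you only need the eigenvalues and not the eigenvectors, NumPy also provides np.linalg.eigvals(). The snippet below is a minimal sketch using the same matrix as above.
import numpy as np
# Same matrix as in the example above
A = np.array([[4, -2],
              [1, 1]])
# eigvals() computes only the eigenvalues, skipping the eigenvectors
values = np.linalg.eigvals(A)
print(values)  # [3. 2.]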
Eigenvectors in NumPy
An eigenvector of a matrix is a non-zero vector that changes only in scale (not direction) when that matrix is applied to it. In other words, for a matrix A and a scalar eigenvalue λ, the eigenvector v satisfies:
A × v = λ × v
Eigenvectors reveal the directions in which a linear transformation acts by stretching or compressing. They are widely used in systems analysis, machine learning (e.g., PCA), and differential equations.
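As a brief sketch of the PCA use case mentioned above, the example below finds the principal direction of a small, made-up 2D point cloud by taking the eigendecomposition of its covariance matrix. It uses np.linalg.eigh(), a variant of eig() intended for symmetric matrices such as covariance matrices; the data values are purely illustrative.
import numpy as np
# Made-up 2D data points (rows are observations, columns are variables)
points = np.array([[2.5, 2.4],
                   [0.5, 0.7],
                   [2.2, 2.9],
                   [1.9, 2.2],
                   [3.1, 3.0]])
# Covariance matrix of the two variables
cov = np.cov(points, rowvar=False)
# eigh() handles symmetric matrices and returns eigenvalues in ascending order
values, vectors = np.linalg.eigh(cov)
# The eigenvector paired with the largest eigenvalue is the principal direction
principal_direction = vectors[:, np.argmax(values)]
print(principal_direction)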
You can compute both eigenvectors and eigenvalues in NumPy using np.linalg.eig(). The eigenvectors are returned as columns of the second array in the output.
import numpy as np
# Define a square matrix
A = np.array([[4, -2],
              [1, 1]])
# Compute eigenvalues and eigenvectors
eigenvalues, eigenvectors = np.linalg.eig(A)
print("Eigenvectors (as columns):")
print(eigenvectors)
How It Works:
- np.linalg.eig(A) returns eigenvalues and corresponding eigenvectors.
- Each column of eigenvectors corresponds to an eigenvalue in the same position of the eigenvalues array.
- The returned eigenvectors are normalized to unit (Euclidean) length.
- You can verify correctness by checking: A @ v ≈ λ * v.
Output
Eigenvectors (as columns):
[[0.89442719 0.70710678]
 [0.4472136  0.70710678]]
In this output, the first column is the eigenvector corresponding to the first eigenvalue (3), and the second column corresponds to the second eigenvalue (2). You can verify this numerically by applying A to the first eigenvector:
# Verify the first eigenpair
v = eigenvectors[:, 0]
lambda_val = eigenvalues[0]
# Check A @ v ≈ λ * v
print("A @ v:", A @ v)
print("λ * v:", lambda_val * v)
Output
A @ v: [2.68328157 1.34164079]
λ * v: [2.68328157 1.34164079]
This confirms that the vector is truly an eigenvector — it satisfies the equation A × v = λ × v.
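Because of floating-point rounding, the two arrays should not be compared element by element with ==; np.allclose() is the practical check. The sketch below verifies one eigenpair with a tolerance, confirms that the returned eigenvectors have unit length, and then checks all eigenpairs at once.
import numpy as np
A = np.array([[4, -2],
              [1, 1]])
eigenvalues, eigenvectors = np.linalg.eig(A)
# Verify one eigenpair with a floating-point tolerance
v = eigenvectors[:, 0]
print(np.allclose(A @ v, eigenvalues[0] * v))  # True
# Each column of the eigenvector matrix has unit Euclidean length
print(np.linalg.norm(eigenvectors, axis=0))    # [1. 1.]
# Verify all eigenpairs at once: A @ V equals V with each column scaled by its eigenvalue
print(np.allclose(A @ eigenvectors, eigenvectors * eigenvalues))  # True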
Example 1: Verifying Eigenvalue and Eigenvector Relationship
In this example, we’ll compute the eigenvalues and eigenvectors of a 2×2 matrix using NumPy. We’ll then verify the fundamental eigenvalue equation: A × v = λ × v.
import numpy as np
# Define a 2x2 matrix
A = np.array([[3, 1],
              [0, 2]])
# Compute eigenvalues and eigenvectors
eigenvalues, eigenvectors = np.linalg.eig(A)
print("Matrix A:")
print(A)
print("Eigenvalues:")
print(eigenvalues)
print("Eigenvectors (columns):")
print(eigenvectors)
# Verify the first eigenpair
v = eigenvectors[:, 0] # First eigenvector
lambda_val = eigenvalues[0] # Corresponding eigenvalue
print("A @ v:", A @ v)
print("λ * v:", lambda_val * v)
How It Works:
- We define a simple 2×2 matrix with known eigenvalues.
- np.linalg.eig() computes both the eigenvalues and the eigenvectors.
- We extract the first eigenpair and verify the relationship A @ v ≈ λ * v.
Output
Matrix A:
[[3 1]
 [0 2]]
Eigenvalues:
[3. 2.]
Eigenvectors (columns):
[[ 1.         -0.70710678]
 [ 0.          0.70710678]]
A @ v: [3. 0.]
λ * v: [3. 0.]
The outputs confirm that the first eigenvector corresponds to the first eigenvalue — their relationship satisfies A × v = λ × v.
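As a further check, the eigenpairs can be used to rebuild the original matrix. When the eigenvector matrix V is invertible, A equals V @ diag(λ) @ V⁻¹; the sketch below reconstructs the matrix from the decomposition computed above.
import numpy as np
A = np.array([[3, 1],
              [0, 2]])
eigenvalues, eigenvectors = np.linalg.eig(A)
# Rebuild A from its eigendecomposition: V @ diag(λ) @ inverse(V)
reconstructed = eigenvectors @ np.diag(eigenvalues) @ np.linalg.inv(eigenvectors)
print(np.allclose(A, reconstructed))  # True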
Frequently Asked Questions
What are eigenvalues and eigenvectors?
Eigenvalues are scalars that tell us how a matrix stretches or compresses space along certain directions. The corresponding eigenvectors are those directions that remain unchanged in direction (but not necessarily in magnitude) when the matrix is applied.
How do I compute eigenvalues and eigenvectors in NumPy?
Use np.linalg.eig(). It returns a tuple containing an array of eigenvalues and a matrix whose columns are the corresponding eigenvectors.
What does np.linalg.eig() return?
The first array contains the eigenvalues, and the second array contains the eigenvectors. Each column of the eigenvector array corresponds to an eigenvalue at the same index.
How can I verify that a vector is an eigenvector in NumPy?
Check if A @ v is approximately equal to λ * v. Use np.allclose() to compare the two arrays due to floating-point precision issues.
Can a matrix have complex eigenvalues or eigenvectors in NumPy?
Yes. A real matrix that is not symmetric can have complex eigenvalues and eigenvectors (a real symmetric matrix always has real eigenvalues). NumPy supports complex numbers and returns them when necessary.
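For example, a 90° rotation matrix leaves no real direction unchanged, so its eigenvalues are complex. A minimal sketch:
import numpy as np
# 90-degree rotation matrix: no real vector keeps its direction
R = np.array([[0, -1],
              [1, 0]])
eigenvalues, eigenvectors = np.linalg.eig(R)
print(eigenvalues)  # [0.+1.j 0.-1.j], a complex-conjugate pair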
What's Next?
Up next, we’ll explore Random Number Generation in NumPy — the foundation for creating simulations, initializing models, and performing randomized operations. You’ll learn how to generate integers, floats, and samples from various distributions using NumPy’s modern default_rng() API.