Linear Algebra with NumPy

NumPy provides a module called numpy.linalg for performing linear algebra operations. Let's discuss some of the key operations.

Matrix Multiplication

Dot Product

The dot function calculates the dot product of two arrays. If both arrays are 1-D, it computes the inner product of the vectors; for 2-D arrays, it performs matrix multiplication.

import numpy as np
 
a = np.array([[1, 2], [3, 4]])
b = np.array([[5, 6], [7, 8]])
 
dot_product = np.dot(a, b)
print(dot_product)

Output:

[[19 22]
 [43 50]]

Here's a step-by-step breakdown of how the output is calculated:

  • The first element of the resulting matrix (at row 0, column 0) is calculated as (1*5 + 2*7) = 19.
  • The second element of the first row (at row 0, column 1) is calculated as (1*6 + 2*8) = 22.
  • The first element of the second row (at row 1, column 0) is calculated as (3*5 + 4*7) = 43.
  • The last element of the resulting matrix (at row 1, column 1) is calculated as (3*6 + 4*8) = 50.
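
As a quick illustration of the 1-D case mentioned above, dot returns the scalar inner product when both inputs are vectors (the names v1 and v2 below are just illustrative):

v1 = np.array([1, 2, 3])
v2 = np.array([4, 5, 6])

print(np.dot(v1, v2))
# Output: 32  (1*4 + 2*5 + 3*6)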

Matmul

The matmul function performs matrix multiplication. It's similar to dot, but it does not accept scalar arguments, and it handles 1-D arrays by promotion: a 1-D array on the left is treated as a row vector and one on the right as a column vector, with the extra dimension dropped from the result. Unlike dot, matmul also broadcasts over stacks of matrices when the inputs have more than two dimensions.

matmul_product = np.matmul(a, b)
print(matmul_product)

Output:

[[19 22]
 [43 50]]
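
Two small follow-ups worth noting (the vector v below is just an illustrative example): the @ operator is shorthand for matmul, and a 1-D operand is promoted for the multiplication and then demoted again, so the result is 1-D.

print(a @ b)            # same as np.matmul(a, b)
# [[19 22]
#  [43 50]]

v = np.array([1, 1])
print(np.matmul(a, v))  # v is treated as a column vector here
# Output: [3 7]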

Decomposition

Eigenvalues and Eigenvectors

Eigenvalues and eigenvectors are crucial in understanding how a matrix stretches or compresses space. Eigenvectors are vectors that remain in the same direction after the transformation, and eigenvalues indicate the factor by which these vectors are scaled. The eig function computes the eigenvalues and eigenvectors of a square matrix.

a = np.array([[4, 2], [1, 3]])
eigenvalues, eigenvectors = np.linalg.eig(a)
 
print("Eigenvalues:", eigenvalues)
print("Eigenvectors:", eigenvectors)
Eigenvalues: [5. 2.]
Eigenvectors: [[ 0.89442719 -0.70710678]
 [ 0.4472136   0.70710678]]
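
As a quick sanity check (a sketch, not part of the original example), each eigenpair should satisfy a @ v == eigenvalue * v; note that eig returns the eigenvectors as the columns of the second array:

for i in range(len(eigenvalues)):
    v = eigenvectors[:, i]  # the i-th eigenvector is the i-th column
    print(np.allclose(a @ v, eigenvalues[i] * v))
# Output: True (printed once per eigenpair)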

Cholesky Decomposition

Cholesky decomposition decomposes a Hermitian, positive-definite matrix into the product of a lower triangular matrix and its conjugate transpose. It's used for efficient solutions in statistics and optimization.

a = np.array([[4, 12, -16], [12, 37, -43], [-16, -43, 98]], dtype=np.float32)
cholesky_decomposition = np.linalg.cholesky(a)
 
print(cholesky_decomposition)
[[ 2.  0.  0.]
 [ 6.  1.  0.]
 [-8.  5.  3.]]
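
If you want to confirm the factorization, a minimal check is that the lower-triangular factor reproduces the original matrix (for real input, L @ L.T; for complex input, L @ L.conj().T):

L = cholesky_decomposition
print(np.allclose(L @ L.T, a))
# Output: True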

QR Decomposition

QR decomposition factorizes a matrix into the product of an orthogonal matrix Q and an upper triangular matrix R. It's used in solving linear systems and least-squares problems.

a = np.array([[3, 2, 2], [2, 3, -2]])
q, r = np.linalg.qr(a)
 
print("Q:", q)
print("R:", r)
Q: [[-0.83205029 -0.5547002 ]
 [-0.5547002   0.83205029]]
R: [[-3.60555128 -3.32820118 -0.5547002 ]
 [ 0.          1.38675049 -2.77350098]]
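
A small verification sketch: Q should be orthogonal (its transpose times itself is the identity) and Q @ R should reproduce the original matrix:

print(np.allclose(q.T @ q, np.eye(2)))  # Q is orthogonal
print(np.allclose(q @ r, a))            # Q @ R reconstructs a
# Output: True on both lines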

Singular Value Decomposition (SVD)

The svd function computes the singular value decomposition of a matrix. SVD decomposes a matrix into three matrices: U, Σ (a diagonal matrix), and V^T. It's used for data compression, dimensionality reduction, and solving linear equations.

a = np.array([[3, 2, 2], [2, 3, -2]])
u, s, vh = np.linalg.svd(a)
 
print("U:", u)
print("S:", s)
print("Vh:", vh)

Norms and Other Numbers

Norm

The norm function quantifies the "size" of a vector or matrix. For a 1-D array it defaults to the Euclidean (2-)norm, the square root of the sum of squared elements; for a 2-D array it defaults to the Frobenius norm, computed the same way over all entries.

a = np.array([1, 2, 3, 4, 5])
norm = np.linalg.norm(a)
 
print(norm)
# Output: 7.416198487095663
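
A short sketch of the 2-D case, using the ord argument to select other norms (the matrix m is just an illustration):

m = np.array([[1, 2], [3, 4]])
print(np.linalg.norm(m))         # Frobenius norm: sqrt(1 + 4 + 9 + 16) ≈ 5.477
print(np.linalg.norm(m, ord=1))  # maximum absolute column sum: 6.0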

Rank

The matrix_rank function computes the rank of a matrix. It's useful in various applications, including determining linear independence.

a = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
rank = np.linalg.matrix_rank(a)
 
print(rank)
# Output: 2
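
The rank is 2 rather than 3 because the rows are linearly dependent (2 * [4, 5, 6] - [1, 2, 3] = [7, 8, 9]). A full-rank matrix, such as the identity, has rank equal to its size:

print(np.linalg.matrix_rank(np.eye(3)))
# Output: 3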

Determinant

The det function calculates the determinant of a matrix. The determinant measures the "volume scaling factor" of a matrix. It's crucial in linear transformations and solving systems of linear equations.

a = np.array([[1, 2], [3, 4]])
determinant = np.linalg.det(a)
 
print(determinant)
# Output: -2.0000000000000004  (floating-point result; the exact value is -2)
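
A determinant of zero indicates a singular matrix, one that cannot be inverted and whose associated system of equations has no unique solution (the matrix below, with proportional rows, is just an illustration):

singular = np.array([[1, 2], [2, 4]])
print(np.linalg.det(singular))
# Output: 0.0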

Trace

The trace function calculates the sum of the elements on the main diagonal of the matrix. It appears in many linear-algebra identities; for example, the trace of a matrix equals the sum of its eigenvalues.

a = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
trace = np.trace(a)
 
print(trace)
# Output: 15
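
The trace here is 1 + 5 + 9 = 15, which is equivalent to summing the diagonal explicitly:

print(np.diag(a).sum())
# Output: 15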

Solving Equations and Inverting Matrices

Solve

The solve function solves a linear matrix equation, or a system of linear scalar equations.

# Solving the system of equations 3 * x0 + x1 = 9 and x0 + 2 * x1 = 8
a = np.array([[3, 1], [1, 2]])
b = np.array([9, 8])
x = np.linalg.solve(a, b)
 
print(x)
# Output: [2. 3.]
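
A quick way to verify the solution is to substitute it back into the system:

print(np.allclose(a @ x, b))
# Output: True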

Inverse

The inv function computes the multiplicative inverse of a matrix.

a = np.array([[1, 2], [3, 4]])
inverse = np.linalg.inv(a)
 
print(inverse)
 
# Output:
# [[-2.   1. ]
#  [ 1.5 -0.5]]
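
A minimal sanity check is that a matrix times its inverse gives the identity (up to floating-point error). Note that for solving a linear system, np.linalg.solve is generally preferred over computing an explicit inverse:

print(np.allclose(a @ inverse, np.eye(2)))
# Output: True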