## Introduction

Recently, Quanta Magazine published an article about a new basic identity in Linear Algebra that has surfaced from applied physics. The identity shows how to obtain eigenvectors from eigenvalues. Knowing what a headache computing eigenvectors was in class, I was naturally eager to see if there was a different way of obtaining them.

As I read through the article and the corresponding paper, I was struck by how simple and foundational the result is. The fact that it isn't in any textbook I could find suggests that, if it is indeed new, it could be monumentally important in Linear Algebra and all the topics that depend on it.

Therefore it is worth trying to understand it and make it work. The result may well pervade the application fields of Linear Algebra and enable related optimisations, since it appears to be a less computationally expensive way of determining the absolute values of eigenvector components. Although I haven't grasped the proof yet, I'm going to offer a description of the result and an example in R.

This post was motivated by a post by Corey Chivers, who did the same for this result but in Python. His article is referenced below.

## The Result

Let $A$ be an $n \times n$ Hermitian matrix ($A = A^T$ for a real-valued matrix, i.e. symmetric) with eigenvalues $\lambda_i(A)$ and normed eigenvectors $\textbf{v}_i$. A normed eigenvector is an eigenvector with norm 1. In general, to normalise some vector $\textbf{x}$ we divide each of its elements by the vector's norm $||\textbf{x}||$.

Let $M_j$ be the $(n-1) \times (n-1)$ sub-matrix of $A$ that results from deleting the $j$-th row and the $j$-th column, with corresponding eigenvalues $\lambda_k(M_j)$. This submatrix is also called a minor.

The squared norms of the elements of the eigenvectors are related to the eigenvalues of $A$ and the eigenvalues of the submatrices $M_j$ via the following identity:

(1) $|\textbf{v}_{i,j}|^2 \prod_{k=1;k\neq i}^{n} (\lambda_{i}(A) - \lambda_{k}(A)) = \prod_{k = 1}^{n - 1} (\lambda_{i}(A) - \lambda_{k}(M_{j}))$

Rearranging gives a formula for directly computing the squared absolute components of the eigenvectors using only the eigenvalues of the matrix $A$ and its sub-matrices $M_{j}$.

(2) $|\textbf{v}_{i,j}|^2 = \frac{\prod_{k = 1}^{n - 1} (\lambda_{i}(A) - \lambda_{k}(M_{j}))}{\prod_{k=1;k\neq i}^{n} (\lambda_{i}(A) - \lambda_{k}(A))}$
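As a quick sanity check of identity (2), here is a hand-worked $2 \times 2$ case sketched in Python (rather than the R used later). For the symmetric matrix with rows $(2, 1)$ and $(1, 2)$ the eigenvalues are $3$ and $1$, every minor is the $1 \times 1$ matrix $(2)$, and each squared eigenvector component should come out to $1/2$ (the eigenvectors are $(1, 1)/\sqrt{2}$ and $(1, -1)/\sqrt{2}$):

```python
from math import prod, isclose

def abs_component_squared(lam_all, lam_minor, i):
    """|v_{i,j}|^2 via identity (2): lam_all are the eigenvalues of A,
    lam_minor are the eigenvalues of the minor M_j, i indexes lambda_i(A)."""
    lam = lam_all[i]
    numerator = prod(lam - mu for mu in lam_minor)
    denominator = prod(lam - lk for k, lk in enumerate(lam_all) if k != i)
    return numerator / denominator

# A = [[2, 1], [1, 2]]: eigenvalues 3 and 1 (computed by hand).
# Deleting row/column j leaves the 1x1 minor [2] with eigenvalue 2, for both j.
lam_all = [3.0, 1.0]
lam_minor = [2.0]

# Each squared component should be 1/2
for i in range(2):
    assert isclose(abs_component_squared(lam_all, lam_minor, i), 0.5)
```

For $i = 1$ the ratio is $(3 - 2)/(3 - 1) = 1/2$ and for $i = 2$ it is $(1 - 2)/(1 - 3) = 1/2$, matching the squared components of the normalised eigenvectors.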

To clarify notation, suppose a matrix $A$ has $i = 1, 2, 3$ eigenvalues with distinct corresponding eigenvectors. Each eigenvector has a vector index $i$ denoting its corresponding eigenvalue $\lambda_{i}(A)$ and an index $j$ denoting the vector element/component. For example, let the eigenvalues and eigenvectors be

$\lambda_{i}(A) = k_{i}, \quad \textbf{v}_{1} = (a_{1}, a_{2}, a_{3})^T, \quad \textbf{v}_{2} = (b_{1}, b_{2}, b_{3})^T, \quad \textbf{v}_{3} = (c_{1}, c_{2}, c_{3})^T$

where $k_{i}, a_{i}, b_{i}, c_{i}$ are constants for $i = 1,2,3$.

The term $\textbf{v}_{3,2}$ refers to the vector corresponding to the eigenvalue $\lambda_{3}(A)$ (which is $\textbf{v}_{3}$) and to the second element/component of that vector, which is the constant $c_{2}$.
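Before moving to the R implementation, the full-matrix version of the check can be sketched in Python with NumPy (assuming NumPy is available): build each minor $M_j$, apply identity (2) for every pair $(i, j)$, and compare against the squared components reported by `numpy.linalg.eigh`:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
B = rng.standard_normal((n, n))
A = (B + B.T) / 2              # force symmetry (real Hermitian)

lam = np.linalg.eigvalsh(A)    # eigenvalues of A, ascending
V = np.linalg.eigh(A)[1]       # columns are the normalised eigenvectors

rhs = np.empty((n, n))
for i in range(n):
    for j in range(n):
        # Minor M_j: delete the j-th row and column of A
        M = np.delete(np.delete(A, j, axis=0), j, axis=1)
        mu = np.linalg.eigvalsh(M)
        numerator = np.prod(lam[i] - mu)
        denominator = np.prod(lam[i] - np.delete(lam, i))
        rhs[i, j] = numerator / denominator

# Identity (2): rhs[i, j] should equal |v_{i,j}|^2
assert np.allclose(rhs, (V.T) ** 2)
```

The transpose in the final line is only a layout choice: `eigh` stores eigenvectors as columns, while `rhs` indexes eigenvectors by row.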

## Implementation in R

```r
library(tidyverse)
library(Matrix)

# Generate a symmetric matrix
## Random 5 x 5 matrix
B <- Matrix(rnorm(25), 5)

## Force it to be symmetric
A <- forceSymmetric(B)

## Is the matrix Hermitian? For a real-valued matrix this is
## equivalent to checking whether it is symmetric
isSymmetric(A)

# Compute eigenvalues and eigenvectors
eigen(A)
eigenvalues <- eigen(A)$values
eigenvectors <- eigen(A)$vectors
```

```r
# Build functions to obtain minors and norm
## Computes the Euclidean norm of a vector
norm_manual <- function(v) {
  sqrt(sum(v^2))
}

## Computes the jth minor of a matrix (deletes the jth row and column)
minor <- function(matrix, index) {
  matrix[-index, -index]
}

## Gives the absolute eigenvector component from the eigenvalues
eigenvector_component_manual <- function(matrix, vector_index, vector_component) {

  M <- minor(matrix = matrix, index = vector_component)

  # Compute numerator of ratio: product of (lambda_i(A) - lambda_k(M_j))
  tb1 <- tibble(
    eigenvalue_index = rep(eigen(matrix)$values[vector_index], times = nrow(matrix) - 1),
    eigenvalue_minor = eigen(M)$values
  )
  tb1 <- tb1 %>% mutate(eigenvalue_difference = eigenvalue_index - eigenvalue_minor)
  numerator <- tb1 %>%
    dplyr::select(eigenvalue_difference) %>%
    sapply(prod)

  # Compute denominator of ratio: product of (lambda_i(A) - lambda_k(A)), k != i
  tb2 <- tibble(
    eigenvalue_index = rep(eigen(matrix)$values[vector_index], times = nrow(matrix)),
    eigenvalue_full = eigen(matrix)$values
  )
  tb2 <- tb2 %>%
    filter(eigenvalue_index != eigenvalue_full)
  tb2 <- tb2 %>% mutate(eigenvalue_difference = eigenvalue_index - eigenvalue_full)
  denominator <- tb2 %>%
    dplyr::select(eigenvalue_difference) %>%
    sapply(prod)

  # Identity (2) gives the squared component, so take the square root
  return(sqrt(numerator / denominator))
}
```

```r
## Make a function which looks at a Hermitian matrix A and computes
## its absolute-valued eigenvectors from its eigenvalues
generate_absolute_eigenvectors <- function(matrix) {

  output <- matrix(nrow = nrow(matrix), ncol = ncol(matrix))

  for (i in 1:nrow(matrix)) {
    for (j in 1:ncol(matrix)) {
      output[i, j] <- eigenvector_component_manual(matrix = matrix,
                                                   vector_index = i,
                                                   vector_component = j)
    }
  }

  # Rows of output index eigenvectors; transpose so columns match eigen()'s layout
  t(output)
}

# See what the function generates
generate_absolute_eigenvectors(matrix = A)

# Compare with the built-in R (absolute valued) eigenvectors
near(abs(eigen(A)$vectors), generate_absolute_eigenvectors(matrix = A),
     tol = .Machine$double.eps^0.5)
```

## References

1. Original paper – Eigenvectors from Eigenvalues (P. Denton, S. Parke, T. Tao, X. Zhang, 2019): https://arxiv.org/pdf/1908.03795.pdf
2. Implementation in Python (helped to unlock the result for me): http://predictivehealthcare.pennmedicine.org/Eigenvectors%20from%20Eigenvalues.html
3. Quanta Magazine article showing how the result was discovered: https://www.quantamagazine.org/neutrinos-lead-to-unexpected-discovery-in-basic-math-20191113/