import sys
sys.path.append('..')
import pyerrors as pe
import numpy as np
import scipy
As an example we look at a symmetric 2x2 matrix which is positive semidefinite and has an error on all entries
obs11 = pe.pseudo_Obs(4.1, 0.2, 'e1')
obs22 = pe.pseudo_Obs(1, 0.01, 'e1')
obs12 = pe.pseudo_Obs(-1, 0.1, 'e1')
matrix = np.asarray([[obs11, obs12], [obs12, obs22]])
print(matrix)
We need to use np.asarray here, as it ensures that we end up with a numpy array of Obs.
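The reason is that np.asarray stores a list of Obs as an object-dtype array, so numpy dispatches all arithmetic to the overloaded operators of the entries. The same mechanism can be sketched without pyerrors, using Fraction as a stand-in for a custom scalar type like Obs:

```python
import numpy as np
from fractions import Fraction  # stand-in for a custom scalar type like Obs

m = np.asarray([[Fraction(4, 1), Fraction(-1, 1)],
                [Fraction(-1, 1), Fraction(1, 1)]])
print(m.dtype)        # object: numpy keeps the Fraction objects as-is
print((m @ m)[0, 0])  # matmul uses Fraction arithmetic entrywise: 17
```

Because the array holds Python objects, `@`, elementwise functions, and broadcasting all delegate to the entries' own operators, which is exactly how error propagation through Obs works.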
The standard matrix product can be performed with @
print(matrix @ matrix)
Multiplication with unit matrix leaves the matrix unchanged
print(matrix @ np.identity(2))
Mathematical functions work elementwise
print(np.sinh(matrix))
For a vector of Obs, we again use np.asarray to end up with the correct object.
vec1 = pe.pseudo_Obs(2, 0.4, 'e1')
vec2 = pe.pseudo_Obs(1, 0.1, 'e1')
vector = np.asarray([vec1, vec2])
for i, entry in np.ndenumerate(vector):
    entry.gamma_method()
print(vector)
The matrix times vector product can then be computed via
product = matrix @ vector
for i, entry in np.ndenumerate(product):
    entry.gamma_method()
print(product)
Matrix to scalar operations¶
If we want to apply a numpy matrix function with a scalar return value, we can use scalar_mat_op. Here we need to use the autograd-wrapped version of numpy (imported as anp) to make use of automatic differentiation.
import autograd.numpy as anp # Thinly-wrapped numpy
funcs = [anp.linalg.det, anp.trace, anp.linalg.norm]
for func in funcs:
    res = pe.linalg.scalar_mat_op(func, matrix)
    res.gamma_method()
    print(func.__name__, '\t', res)
For matrix operations which are not supported by autograd, we can fall back to numerical differentiation
funcs = [np.linalg.cond, scipy.linalg.expm_cond]
for func in funcs:
    res = pe.linalg.scalar_mat_op(func, matrix, num_grad=True)
    res.gamma_method()
    print(func.__name__, ' \t', res)
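Under the hood, num_grad=True estimates the derivative of the scalar function with respect to each matrix entry by finite differences. A minimal sketch of this idea with plain numpy (the helper name and step size are illustrative, not pyerrors internals):

```python
import numpy as np

def num_grad_entry(func, mat, i, j, eps=1e-6):
    """Central finite difference of a scalar matrix function
    with respect to entry (i, j) of mat."""
    plus, minus = mat.copy(), mat.copy()
    plus[i, j] += eps
    minus[i, j] -= eps
    return (func(plus) - func(minus)) / (2 * eps)

# Central values of the Obs matrix defined above
A = np.array([[4.1, -1.0], [-1.0, 1.0]])
# For the determinant of a 2x2 matrix, d(det)/dA[0,0] = A[1,1] = 1.0
print(num_grad_entry(np.linalg.det, A, 0, 0))
```

These entrywise derivatives are then combined with the errors of the individual Obs entries by the usual linear error propagation.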
Matrix to matrix operations¶
For matrix operations with a matrix as return value we can use another wrapper, mat_mat_op. Take as an example the Cholesky decomposition. Here we again need to use the autograd-wrapped version of numpy (imported as anp) to use automatic differentiation.
cholesky = pe.linalg.mat_mat_op(anp.linalg.cholesky, matrix)
for (i, j), entry in np.ndenumerate(cholesky):
    entry.gamma_method()
print(cholesky)
We can now check whether the decomposition was successful
check = cholesky @ cholesky.T
print(check - matrix)
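For the central values alone, the same check can be reproduced with plain numpy (this snippet does not need pyerrors):

```python
import numpy as np

# Central values of the Obs matrix defined above
A = np.array([[4.1, -1.0], [-1.0, 1.0]])
L = np.linalg.cholesky(A)  # lower-triangular factor
print(np.allclose(L @ L.T, A))  # True: L L^T reproduces A
```

With Obs entries, pyerrors additionally propagates the errors through the decomposition, which is why the difference above is an Obs matrix with zero central value.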
We can further compute the inverse of the Cholesky factor and check that the product with its inverse gives the unit matrix with zero error.
inv = pe.linalg.mat_mat_op(anp.linalg.inv, cholesky)
for (i, j), entry in np.ndenumerate(inv):
    entry.gamma_method()
print(inv)
print('Check:')
check_inv = cholesky @ inv
print(check_inv)
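The central values of this identity can again be verified with plain numpy:

```python
import numpy as np

A = np.array([[4.1, -1.0], [-1.0, 1.0]])
L = np.linalg.cholesky(A)
L_inv = np.linalg.inv(L)
print(np.allclose(L @ L_inv, np.identity(2)))  # True
```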
Matrix to matrix operations which are not supported by autograd can also be computed with numerical differentiation
funcs = [scipy.linalg.orth, scipy.linalg.expm, scipy.linalg.logm, scipy.linalg.sinhm, scipy.linalg.sqrtm]
for func in funcs:
    res = pe.linalg.mat_mat_op(func, matrix, num_grad=True)
    for (i, j), entry in np.ndenumerate(res):
        entry.gamma_method()
    print(func.__name__)
    print(res)
Eigenvalues and eigenvectors¶
We can also compute eigenvalues and eigenvectors of symmetric matrices with the special wrapper eigh
e, v = pe.linalg.eigh(matrix)
for i, entry in np.ndenumerate(e):
    entry.gamma_method()
print('Eigenvalues:')
print(e)
for (i, j), entry in np.ndenumerate(v):
    entry.gamma_method()
print('Eigenvectors:')
print(v)
We can check that we got the correct result
for i in range(2):
    print('Check eigenvector', i + 1)
    print(matrix @ v[:, i] - v[:, i] * e[i])
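The same eigen-decomposition check works on the central values with plain numpy, where np.linalg.eigh returns the eigenvalues in ascending order and the eigenvectors as the columns of v:

```python
import numpy as np

A = np.array([[4.1, -1.0], [-1.0, 1.0]])
e, v = np.linalg.eigh(A)  # ascending eigenvalues; columns of v are eigenvectors
for i in range(2):
    # A v_i - e_i v_i should vanish up to floating point noise
    print(np.allclose(A @ v[:, i], e[i] * v[:, i]))  # True
```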