pyerrors/examples/04_matrix_operations.ipynb

In [1]:
import sys
sys.path.append('..')
import pyerrors as pe
import numpy as np
import scipy

As an example we look at a symmetric 2x2 matrix which is positive semidefinite and has an error on all entries

In [2]:
obs11 = pe.pseudo_Obs(4.1, 0.2, 'e1')
obs22 = pe.pseudo_Obs(1, 0.01, 'e1')
obs12 = pe.pseudo_Obs(-1, 0.1, 'e1')
matrix = np.asarray([[obs11, obs12], [obs12, obs22]])
print(matrix)
[[Obs[4.10(20)] Obs[-1.00(10)]]
 [Obs[-1.00(10)] Obs[1.000(10)]]]

We have to use np.asarray here, as it makes sure that we end up with a numpy array of Obs.
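
We can convince ourselves of this with a quick check (a minimal sketch, reusing the matrix from above):

In [ ]:
print(matrix.dtype)        # object, the entries remain Obs instances
print(type(matrix[0, 0]))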

The standard matrix product can be performed with @

In [3]:
print(matrix @ matrix)
[[Obs[17.81] Obs[-5.1]]
 [Obs[-5.1] Obs[2.0]]]
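
Note that the uncertainties of the entries are only displayed once the gamma_method has been run on them; a minimal sketch following the pattern used later in this notebook (square is just an illustrative name):

In [ ]:
square = matrix @ matrix
for (i, j), entry in np.ndenumerate(square):
    entry.gamma_method()
print(square)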

Multiplication with the unit matrix leaves the matrix unchanged

In [4]:
print(matrix @ np.identity(2))
[[Obs[4.1] Obs[-1.0]]
 [Obs[-1.0] Obs[1.0]]]

Mathematical functions work elementwise

In [5]:
print(np.sinh(matrix))
[[Obs[30.161857460980094] Obs[-1.1752011936438014]]
 [Obs[-1.1752011936438014] Obs[1.1752011936438014]]]

For a vector of Obs, we again use np.asarray to end up with the correct object

In [6]:
vec1 = pe.pseudo_Obs(2, 0.4, 'e1')
vec2 = pe.pseudo_Obs(1, 0.1, 'e1')
vector = np.asarray([vec1, vec2])
for i, entry in np.ndenumerate(vector):
    entry.gamma_method()
print(vector)
[Obs[2.00(40)] Obs[1.00(10)]]

The matrix times vector product can then be computed via

In [7]:
product = matrix @ vector
for i, entry in np.ndenumerate(product):
    entry.gamma_method()
print(product)
[Obs[7.2(1.7)] Obs[-1.00(47)]]
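
The same syntax should also yield the scalar product of two vectors as a single Obs; a minimal sketch (assuming the installed numpy version supports @ on one-dimensional object arrays, as it does for the matrix products above):

In [ ]:
scalar = vector @ vector
scalar.gamma_method()
print(scalar)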

Matrix to scalar operations

If we want to apply a numpy matrix function with a scalar return value we can use scalar_mat_op. Here we need to use the autograd wrapped version of numpy (imported as anp) to use automatic differentiation.

In [8]:
import autograd.numpy as anp  # Thinly-wrapped numpy
funcs = [anp.linalg.det, anp.trace, anp.linalg.norm]

for i, func in enumerate(funcs):
    res = pe.linalg.scalar_mat_op(func, matrix)
    res.gamma_method()
    print(func.__name__, '\t', res)
det 	 Obs[3.10(28)]
trace 	 Obs[5.10(20)]
norm 	 Obs[4.45(19)]

For matrix operations which are not supported by autograd we can use numerical differentiation

In [9]:
funcs = [np.linalg.cond, scipy.linalg.expm_cond]

for i, func in enumerate(funcs):
    res = pe.linalg.scalar_mat_op(func, matrix, num_grad=True)
    res.gamma_method()
    print(func.__name__, '   \t', res)
cond    	 Obs[6.23(59)]
expm_cond    	 Obs[4.45(19)]
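
scalar_mat_op with num_grad=True should also work for user-defined functions which map a matrix to a scalar; a hypothetical sketch (max_entry is an illustrative name, not part of pyerrors):

In [ ]:
def max_entry(x):
    # largest matrix entry in absolute value, a scalar function of the matrix
    return np.max(np.abs(x))

res = pe.linalg.scalar_mat_op(max_entry, matrix, num_grad=True)
res.gamma_method()
print(res)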

Matrix to matrix operations

For matrix operations with a matrix as return value we can use another wrapper mat_mat_op. Take as an example the Cholesky decomposition. Here we need to use the autograd wrapped version of numpy (imported as anp) to use automatic differentiation.

In [10]:
cholesky = pe.linalg.mat_mat_op(anp.linalg.cholesky, matrix)
for (i, j), entry in np.ndenumerate(cholesky):
    entry.gamma_method()
print(cholesky)
[[Obs[2.025(49)] Obs[0.0]]
 [Obs[-0.494(50)] Obs[0.870(29)]]]

We can now check if the decomposition was successful

In [11]:
check = cholesky @ cholesky.T
print(check - matrix)
[[Obs[-8.881784197001252e-16] Obs[0.0]]
 [Obs[0.0] Obs[0.0]]]

We can now further compute the inverse of the Cholesky decomposed matrix and check that the product with its inverse gives the unit matrix with zero error.

In [12]:
inv = pe.linalg.mat_mat_op(anp.linalg.inv, cholesky)
for (i, j), entry in np.ndenumerate(inv):
    entry.gamma_method()
print(inv)
print('Check:')
check_inv = cholesky @ inv
print(check_inv)
[[Obs[0.494(12)] Obs[0.0]]
 [Obs[0.280(40)] Obs[1.150(39)]]]
Check:
[[Obs[1.0] Obs[0.0]]
 [Obs[0.0] Obs[1.0]]]
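
The inverse of the full matrix can be computed in the same way; a short sketch using the wrapper introduced above:

In [ ]:
matrix_inv = pe.linalg.mat_mat_op(anp.linalg.inv, matrix)
print(matrix @ matrix_inv)  # should reproduce the unit matrix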

Matrix to matrix operations which are not supported by autograd can also be computed with numerical differentiation

In [13]:
funcs = [scipy.linalg.orth, scipy.linalg.expm, scipy.linalg.logm, scipy.linalg.sinhm, scipy.linalg.sqrtm]

for i, func in enumerate(funcs):
    res = pe.linalg.mat_mat_op(func, matrix, num_grad=True)
    for (i, j), entry in np.ndenumerate(res):
        entry.gamma_method()
    print(func.__name__)
    print(res)
orth
[[Obs[-0.9592(75)] Obs[0.283(25)]]
 [Obs[0.283(25)] Obs[0.9592(75)]]]
expm
[[Obs[75(15)] Obs[-21.4(4.2)]]
 [Obs[-21.4(4.2)] Obs[8.3(1.4)]]]
logm
[[Obs[1.334(57)] Obs[-0.496(61)]]
 [Obs[-0.496(61)] Obs[-0.203(50)]]]
sinhm
[[Obs[37.3(7.4)] Obs[-10.8(2.1)]]
 [Obs[-10.8(2.1)] Obs[3.94(69)]]]
sqrtm
[[Obs[1.996(51)] Obs[-0.341(37)]]
 [Obs[-0.341(37)] Obs[0.940(15)]]]
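
As a consistency check, the square root obtained from sqrtm can be squared again; a minimal sketch:

In [ ]:
sqrt_mat = pe.linalg.mat_mat_op(scipy.linalg.sqrtm, matrix, num_grad=True)
print(sqrt_mat @ sqrt_mat - matrix)  # entries should vanish up to rounding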

Eigenvalues and eigenvectors

We can also compute eigenvalues and eigenvectors of symmetric matrices with a special wrapper eigh

In [14]:
e, v = pe.linalg.eigh(matrix)
for i, entry in np.ndenumerate(e):
    entry.gamma_method()
print('Eigenvalues:')
print(e)
for (i, j), entry in np.ndenumerate(v):
    entry.gamma_method()
print('Eigenvectors:')
print(v)
Eigenvalues:
[Obs[0.705(57)] Obs[4.39(19)]]
Eigenvectors:
[[Obs[-0.283(25)] Obs[-0.9592(75)]]
 [Obs[-0.9592(75)] Obs[0.283(25)]]]

We can check that we got the correct result

In [15]:
for i in range(2):
    print('Check eigenvector', i + 1)
    print(matrix @ v[:, i] - v[:, i] * e[i])
Check eigenvector 1
[Obs[-5.551115123125783e-17] Obs[0.0]]
Check eigenvector 2
[Obs[0.0] Obs[-2.220446049250313e-16]]