{ "cells": [ { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "import pyerrors as pe\n", "import numpy as np\n", "import scipy" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As an example we look at a symmetric 2x2 matrix which positive semidefinte and has an error on all entries" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[[Obs[4.10(20)] Obs[-1.00(10)]]\n", " [Obs[-1.00(10)] Obs[1.000(10)]]]\n" ] } ], "source": [ "obs11 = pe.pseudo_Obs(4.1, 0.2, 'e1')\n", "obs22 = pe.pseudo_Obs(1, 0.01, 'e1')\n", "obs12 = pe.pseudo_Obs(-1, 0.1, 'e1')\n", "matrix = np.asarray([[obs11, obs12], [obs12, obs22]])\n", "print(matrix)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We require to use `np.asarray` here as it makes sure that we end up with a numpy array of `Obs`." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The standard matrix product can be performed with @" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[[Obs[17.81] Obs[-5.1]]\n", " [Obs[-5.1] Obs[2.0]]]\n" ] } ], "source": [ "print(matrix @ matrix)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Multiplication with unit matrix leaves the matrix unchanged" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[[Obs[4.1] Obs[-1.0]]\n", " [Obs[-1.0] Obs[1.0]]]\n" ] } ], "source": [ "print(matrix @ np.identity(2))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Mathematical functions work elementwise" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[[Obs[30.161857460980094] Obs[-1.1752011936438014]]\n", " [Obs[-1.1752011936438014] Obs[1.1752011936438014]]]\n" ] } ], "source": [ "print(np.sinh(matrix))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For a vector of `Obs`, we again use np.asarray to end up with the correct object" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[Obs[2.00(40)] Obs[1.00(10)]]\n" ] } ], "source": [ "vec1 = pe.pseudo_Obs(2, 0.4, 'e1')\n", "vec2 = pe.pseudo_Obs(1, 0.1, 'e1')\n", "vector = np.asarray([vec1, vec2])\n", "for (i), entry in np.ndenumerate(vector):\n", " entry.gamma_method()\n", "print(vector)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The matrix times vector product can then be computed via" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[Obs[7.2(1.7)] Obs[-1.00(45)]]\n" ] } ], "source": [ "product = matrix @ vector\n", "for (i), entry in np.ndenumerate(product):\n", " entry.gamma_method()\n", "print(product)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Matrix to scalar operations\n", "If we want to apply a numpy matrix function with a scalar return value we can use `scalar_mat_op`. __Here we need to use the autograd wrapped version of numpy__ (imported as anp) to use automatic differentiation." 
] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "det \t Obs[3.10(28)]\n", "trace \t Obs[5.10(20)]\n", "norm \t Obs[4.45(19)]\n" ] } ], "source": [ "import autograd.numpy as anp # Thinly-wrapped numpy\n", "funcs = [anp.linalg.det, anp.trace, anp.linalg.norm]\n", "\n", "for i, func in enumerate(funcs):\n", "    res = pe.linalg.scalar_mat_op(func, matrix)\n", "    res.gamma_method()\n", "    print(func.__name__, '\\t', res)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For matrix operations which are not supported by autograd we can use numerical differentiation" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "cond \t Obs[6.23(58)]\n", "expm_cond \t Obs[4.45(19)]\n" ] } ], "source": [ "funcs = [np.linalg.cond, scipy.linalg.expm_cond]\n", "\n", "for i, func in enumerate(funcs):\n", "    res = pe.linalg.scalar_mat_op(func, matrix, num_grad=True)\n", "    res.gamma_method()\n", "    print(func.__name__, ' \\t', res)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Matrix to matrix operations\n", "For matrix operations with a matrix as return value we can use another wrapper `mat_mat_op`. Take as an example the Cholesky decomposition. __Here we need to use the autograd-wrapped version of numpy__ (imported as `anp`) to use automatic differentiation." ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[[Obs[2.025(49)] Obs[0.0]]\n", " [Obs[-0.494(51)] Obs[0.870(29)]]]\n" ] } ], "source": [ "cholesky = pe.linalg.mat_mat_op(anp.linalg.cholesky, matrix)\n", "for (i, j), entry in np.ndenumerate(cholesky):\n", "    entry.gamma_method()\n", "print(cholesky)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can now check whether the decomposition was successful" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[[Obs[-8.881784197001252e-16] Obs[0.0]]\n", " [Obs[0.0] Obs[0.0]]]\n" ] } ], "source": [ "check = cholesky @ cholesky.T\n", "print(check - matrix)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can now further compute the inverse of the Cholesky factor and check that the product with its inverse gives the identity matrix with zero error."
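, "\n", "\n", "Before doing so, a small aside (a minimal sketch added for illustration): since `matrix = cholesky @ cholesky.T`, the determinant of `matrix` equals the squared product of the diagonal entries of `cholesky`, which can be compared with the `det` obtained above via `scalar_mat_op`." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Sketch of a cross-check: for matrix = cholesky @ cholesky.T the determinant\n", "# equals the squared product of the diagonal entries of the Cholesky factor.\n", "det_from_cholesky = (cholesky[0, 0] * cholesky[1, 1]) ** 2\n", "det_from_cholesky.gamma_method()\n", "print(det_from_cholesky)"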
] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[[Obs[0.494(12)] Obs[0.0]]\n", " [Obs[0.280(40)] Obs[1.150(39)]]]\n", "Check:\n", "[[Obs[1.0] Obs[0.0]]\n", " [Obs[0.0] Obs[1.0]]]\n" ] } ], "source": [ "inv = pe.linalg.mat_mat_op(anp.linalg.inv, cholesky)\n", "for (i, j), entry in np.ndenumerate(inv):\n", "    entry.gamma_method()\n", "print(inv)\n", "print('Check:')\n", "check_inv = cholesky @ inv\n", "print(check_inv)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Matrix to matrix operations which are not supported by autograd can also be computed with numerical differentiation" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "orth\n", "[[Obs[-0.9592(76)] Obs[0.283(26)]]\n", " [Obs[0.283(26)] Obs[0.9592(76)]]]\n", "expm\n", "[[Obs[75(15)] Obs[-21.4(4.1)]]\n", " [Obs[-21.4(4.1)] Obs[8.3(1.4)]]]\n", "logm\n", "[[Obs[1.334(57)] Obs[-0.496(61)]]\n", " [Obs[-0.496(61)] Obs[-0.203(50)]]]\n", "sinhm\n", "[[Obs[37.3(7.4)] Obs[-10.8(2.1)]]\n", " [Obs[-10.8(2.1)] Obs[3.94(68)]]]\n", "sqrtm\n", "[[Obs[1.996(51)] Obs[-0.341(37)]]\n", " [Obs[-0.341(37)] Obs[0.940(14)]]]\n" ] } ], "source": [ "funcs = [scipy.linalg.orth, scipy.linalg.expm, scipy.linalg.logm, scipy.linalg.sinhm, scipy.linalg.sqrtm]\n", "\n", "for i, func in enumerate(funcs):\n", "    res = pe.linalg.mat_mat_op(func, matrix, num_grad=True)\n", "    for (i, j), entry in np.ndenumerate(res):\n", "        entry.gamma_method()\n", "    print(func.__name__)\n", "    print(res)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Eigenvalues and eigenvectors\n", "We can also compute eigenvalues and eigenvectors of symmetric matrices with a special wrapper `eigh`" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Eigenvalues:\n", "[Obs[0.705(57)] Obs[4.39(19)]]\n", "Eigenvectors:\n", "[[Obs[-0.283(26)] Obs[-0.9592(76)]]\n", " [Obs[-0.9592(76)] Obs[0.283(26)]]]\n" ] } ], "source": [ "e, v = pe.linalg.eigh(matrix)\n", "for (i), entry in np.ndenumerate(e):\n", "    entry.gamma_method()\n", "print('Eigenvalues:')\n", "print(e)\n", "for (i, j), entry in np.ndenumerate(v):\n", "    entry.gamma_method()\n", "print('Eigenvectors:')\n", "print(v)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can check that we got the correct result" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Check eigenvector 1\n", "[Obs[-5.551115123125783e-17] Obs[0.0]]\n", "Check eigenvector 2\n", "[Obs[0.0] Obs[-2.220446049250313e-16]]\n" ] } ], "source": [ "for i in range(2):\n", "    print('Check eigenvector', i + 1)\n", "    print(matrix @ v[:, i] - v[:, i] * e[i])" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.9" } }, "nbformat": 4, "nbformat_minor": 4 }