feat: example 5 updated to new version

Fabian Joswig, 2022-01-06 12:15:26 +01:00
parent 7b3d7a76a5
commit 615411337c
@@ -97,6 +97,31 @@
 "print(matrix @ np.identity(2))"
 ]
 },
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"For large matrices, overloading the standard operator `@` can become inefficient, as pyerrors has to perform a large number of elementary operations. For these situations pyerrors provides the function `linalg.matmul`, which optimizes the required automatic differentiation. The function can take an arbitrary number of operands."
+]
+},
+{
+"cell_type": "code",
+"execution_count": 16,
+"metadata": {},
+"outputs": [
+{
+"name": "stdout",
+"output_type": "stream",
+"text": [
+"[[Obs[78.12099999999998] Obs[-22.909999999999997]]\n",
+" [Obs[-22.909999999999997] Obs[7.1]]]\n"
+]
+}
+],
+"source": [
+"print(pe.linalg.matmul(matrix, matrix, matrix))  # Equivalent to matrix @ matrix @ matrix but faster for large matrices"
+]
+},
 {
 "cell_type": "markdown",
 "metadata": {},
@@ -167,7 +192,7 @@
 "name": "stdout",
 "output_type": "stream",
 "text": [
-"[Obs[7.2(1.7)] Obs[-1.00(45)]]\n"
+"[Obs[7.2(1.7)] Obs[-1.00(46)]]\n"
 ]
 }
 ],
@@ -182,8 +207,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## Matrix to scalar operations\n",
-"If we want to apply a numpy matrix function with a scalar return value we can use `scalar_mat_op`. __Here we need to use the autograd wrapped version of numpy__ (imported as anp) to use automatic differentiation."
+"`pyerrors` provides the user with wrappers to the `numpy.linalg` functions which work on `Obs`-valued matrices. We can, for example, calculate the determinant of the matrix via"
 ]
 },
 {
@@ -195,50 +219,14 @@
 "name": "stdout",
 "output_type": "stream",
 "text": [
-"det \t Obs[3.10(28)]\n",
-"trace \t Obs[5.10(20)]\n",
-"norm \t Obs[4.45(19)]\n"
+"3.10(28)\n"
 ]
 }
 ],
 "source": [
-"import autograd.numpy as anp  # Thinly-wrapped numpy\n",
-"funcs = [anp.linalg.det, anp.trace, anp.linalg.norm]\n",
-"\n",
-"for i, func in enumerate(funcs):\n",
-"    res = pe.linalg.scalar_mat_op(func, matrix)\n",
-"    res.gamma_method()\n",
-"    print(func.__name__, '\\t', res)"
-]
-},
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"For matrix operations which are not supported by autograd we can use numerical differentiation"
-]
-},
-{
-"cell_type": "code",
-"execution_count": 9,
-"metadata": {},
-"outputs": [
-{
-"name": "stdout",
-"output_type": "stream",
-"text": [
-"cond \t Obs[6.23(58)]\n",
-"expm_cond \t Obs[4.45(19)]\n"
-]
-}
-],
-"source": [
-"funcs = [np.linalg.cond, scipy.linalg.expm_cond]\n",
-"\n",
-"for i, func in enumerate(funcs):\n",
-"    res = pe.linalg.scalar_mat_op(func, matrix, num_grad=True)\n",
-"    res.gamma_method()\n",
-"    print(func.__name__, ' \\t', res)"
+"det = pe.linalg.det(matrix)\n",
+"det.gamma_method()\n",
+"print(det)"
 ]
 },
 {
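
The removed `scalar_mat_op`/`anp` pattern above is replaced by dedicated wrappers such as `pe.linalg.det`. A sketch of a cross-check one could run against the new API, assuming the 2x2 symmetric `matrix` from earlier in the notebook:

```python
# The wrapped determinant should match the explicit 2x2 formula a*d - b*c,
# including the propagated error, since both go through the same Obs arithmetic.
det = pe.linalg.det(matrix)
manual = matrix[0, 0] * matrix[1, 1] - matrix[0, 1] * matrix[1, 0]
det.gamma_method()
manual.gamma_method()
print(det, manual)  # expected to agree, e.g. 3.10(28) for the notebook's matrix
```
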
@@ -251,7 +239,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 10,
+"execution_count": 9,
 "metadata": {},
 "outputs": [
 {
@@ -259,12 +247,12 @@
 "output_type": "stream",
 "text": [
 "[[Obs[2.025(49)] Obs[0.0]]\n",
-" [Obs[-0.494(51)] Obs[0.870(29)]]]\n"
+" [Obs[-0.494(50)] Obs[0.870(29)]]]\n"
 ]
 }
 ],
 "source": [
-"cholesky = pe.linalg.mat_mat_op(anp.linalg.cholesky, matrix)\n",
+"cholesky = pe.linalg.cholesky(matrix)\n",
 "for (i, j), entry in np.ndenumerate(cholesky):\n",
 "    entry.gamma_method()\n",
 "print(cholesky)"
@@ -279,7 +267,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 11,
+"execution_count": 10,
 "metadata": {},
 "outputs": [
 {
@@ -305,7 +293,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 12,
+"execution_count": 11,
 "metadata": {},
 "outputs": [
 {
@@ -313,7 +301,7 @@
 "output_type": "stream",
 "text": [
 "[[Obs[0.494(12)] Obs[0.0]]\n",
-" [Obs[0.280(40)] Obs[1.150(39)]]]\n",
+" [Obs[0.280(39)] Obs[1.150(38)]]]\n",
 "Check:\n",
 "[[Obs[1.0] Obs[0.0]]\n",
 " [Obs[0.0] Obs[1.0]]]\n"
@@ -321,7 +309,7 @@
 }
 ],
 "source": [
-"inv = pe.linalg.mat_mat_op(anp.linalg.inv, cholesky)\n",
+"inv = pe.linalg.inv(cholesky)\n",
 "for (i, j), entry in np.ndenumerate(inv):\n",
 "    entry.gamma_method()\n",
 "print(inv)\n",
@@ -330,51 +318,6 @@
 "print(check_inv)"
 ]
 },
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"Matrix to matrix operations which are not supported by autograd can also be computed with numeric differentiation"
-]
-},
-{
-"cell_type": "code",
-"execution_count": 13,
-"metadata": {},
-"outputs": [
-{
-"name": "stdout",
-"output_type": "stream",
-"text": [
-"orth\n",
-"[[Obs[-0.9592(76)] Obs[0.283(26)]]\n",
-" [Obs[0.283(26)] Obs[0.9592(76)]]]\n",
-"expm\n",
-"[[Obs[75(15)] Obs[-21.4(4.1)]]\n",
-" [Obs[-21.4(4.1)] Obs[8.3(1.4)]]]\n",
-"logm\n",
-"[[Obs[1.334(57)] Obs[-0.496(61)]]\n",
-" [Obs[-0.496(61)] Obs[-0.203(50)]]]\n",
-"sinhm\n",
-"[[Obs[37.3(7.4)] Obs[-10.8(2.1)]]\n",
-" [Obs[-10.8(2.1)] Obs[3.94(68)]]]\n",
-"sqrtm\n",
-"[[Obs[1.996(51)] Obs[-0.341(37)]]\n",
-" [Obs[-0.341(37)] Obs[0.940(14)]]]\n"
-]
-}
-],
-"source": [
-"funcs = [scipy.linalg.orth, scipy.linalg.expm, scipy.linalg.logm, scipy.linalg.sinhm, scipy.linalg.sqrtm]\n",
-"\n",
-"for i, func in enumerate(funcs):\n",
-"    res = pe.linalg.mat_mat_op(func, matrix, num_grad=True)\n",
-"    for (i, j), entry in np.ndenumerate(res):\n",
-"        entry.gamma_method()\n",
-"    print(func.__name__)\n",
-"    print(res)"
-]
-},
 {
 "cell_type": "markdown",
 "metadata": {},
@@ -385,7 +328,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 14,
+"execution_count": 12,
 "metadata": {},
 "outputs": [
 {
@@ -393,10 +336,10 @@
 "output_type": "stream",
 "text": [
 "Eigenvalues:\n",
-"[Obs[0.705(57)] Obs[4.39(19)]]\n",
+"[Obs[0.705(56)] Obs[4.39(20)]]\n",
 "Eigenvectors:\n",
-"[[Obs[-0.283(26)] Obs[-0.9592(76)]]\n",
-" [Obs[-0.9592(76)] Obs[0.283(26)]]]\n"
+"[[Obs[-0.283(25)] Obs[-0.9592(74)]]\n",
+" [Obs[-0.9592(74)] Obs[0.283(25)]]]\n"
 ]
 }
 ],
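
The eigendecomposition output above presumably comes from a wrapper like `pe.linalg.eigh`; the source cell lies outside this hunk, so the exact call is an assumption. A sketch of the defining property one would check:

```python
# Assuming eigh returns eigenvalues w and eigenvectors v (as columns), each
# pair should satisfy matrix @ v[:, i] == w[i] * v[:, i] within the errors.
w, v = pe.linalg.eigh(matrix)
for i in range(len(w)):
    residual = matrix @ v[:, i] - w[i] * v[:, i]
    print([entry.value for entry in residual])  # central values should be ~0
```
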
@@ -421,7 +364,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 15,
+"execution_count": 13,
 "metadata": {},
 "outputs": [
 {
@@ -451,7 +394,7 @@
 ],
 "metadata": {
 "kernelspec": {
-"display_name": "Python 3",
+"display_name": "Python 3 (ipykernel)",
 "language": "python",
 "name": "python3"
 },
@@ -465,7 +408,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.6.9"
+"version": "3.8.10"
 }
 },
 "nbformat": 4,