From d97d76bf115c60ca897ce13981e6fe0b766d94c8 Mon Sep 17 00:00:00 2001
From: Fabian Joswig
Date: Tue, 16 Nov 2021 11:45:16 +0000
Subject: [PATCH] docs: mainpage extended

---
 CONTRIBUTING.md      |  7 ++++---
 pyerrors/__init__.py | 30 ++++++++++++------------------
 2 files changed, 16 insertions(+), 21 deletions(-)

diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index a6705807..cc4b5132 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -9,13 +9,14 @@ and create your own branch
 cd pyerrors
 git checkout -b feature/my_feature
 ```
-I find it convenient to install the package in editable mode in the local python environment
+It can be convenient to install the package in editable mode in the local python environment
 when developing new features
 ```
 pip install -e .
 ```
 Please send any pull requests to the `develop` branch.
+
 ### Documentation
-Please add docstrings to any new function, class or method you implement.
+Please add docstrings to any new function, class or method you implement. The documentation is automatically generated from these docstrings. The startpage of the documentation is generated from the docstring of `pyerrors/__init__.py`.
 ### Tests
 When implementing a new feature or fixing a bug please add meaningful tests to the files in the `tests` directory which cover the new code.
@@ -23,7 +24,7 @@ When implementing a new feature or fixing a bug please add meaningful tests to t
 ### Continuous integration
 For all pull requests tests are executed for the most recent python releases via
 ```
-pytest --cov=pyerrors -v
+pytest --cov=pyerrors -vv
 ```
 requiring `pytest`, `pytest-cov` and `pytest-benchmark` and the linter `flake8` is executed
 with the command
diff --git a/pyerrors/__init__.py b/pyerrors/__init__.py
index 7d1a7261..a91b3e11 100644
--- a/pyerrors/__init__.py
+++ b/pyerrors/__init__.py
@@ -8,22 +8,17 @@ It is based on the **gamma method** [arXiv:hep-lat/0306017](https://arxiv.org/ab
 - **non-linear fits with x- and y-errors** and exact linear error propagation based on automatic differentiation as introduced in [arXiv:1809.01289](https://arxiv.org/abs/1809.01289)
 - **real and complex matrix operations** and their error propagation based on automatic differentiation (Cholesky decomposition, calculation of eigenvalues and eigenvectors, singular value decomposition...)
-## Getting started
+## Basic example
 ```python
 import numpy as np
 import pyerrors as pe
 
-my_obs = pe.Obs([samples], ['ensemble_name'])
-my_new_obs = 2 * np.log(my_obs) / my_obs ** 2
-my_new_obs.gamma_method()
-print(my_new_obs)
+my_obs = pe.Obs([samples], ['ensemble_name'])  # Initialize an Obs object with Monte Carlo samples
+my_new_obs = 2 * np.log(my_obs) / my_obs ** 2  # Construct a derived Obs object
+my_new_obs.gamma_method()  # Estimate the error with the gamma_method
+print(my_new_obs)  # Print the result to stdout
 > 0.31498(72)
-
-iamzero = my_new_obs - my_new_obs
-iamzero.gamma_method()
-print(iamzero == 0.0)
-> True
 ```
 
 # The `Obs` class
 
@@ -43,7 +38,7 @@ my_obs = pe.Obs([samples], ['ensemble_name'])
 ## Error propagation
 
 When performing mathematical operations on `Obs` objects the correct error propagation is intrinsically taken care of using a first order Taylor expansion
-$$\delta_f^i=\sum_\alpha \bar{f}_\alpha \delta_\alpha^i\,,\quad \delta_\alpha^i=a_\alpha^i-\bar{a}_\alpha$$
+$$\delta_f^i=\sum_\alpha \bar{f}_\alpha \delta_\alpha^i\,,\quad \delta_\alpha^i=a_\alpha^i-\bar{a}_\alpha\,,$$
 as introduced in [arXiv:hep-lat/0306017](https://arxiv.org/abs/hep-lat/0306017). The required derivatives $\bar{f}_\alpha$ are evaluated up to machine precision via automatic differentiation as suggested in [arXiv:1809.01289](https://arxiv.org/abs/1809.01289).
@@ -61,6 +56,11 @@ my_obs2 = pe.Obs([samples2], ['ensemble_name'])
 my_sum = my_obs1 + my_obs2
 my_m_eff = np.log(my_obs1 / my_obs2)
+
+iamzero = my_m_eff - my_m_eff
+# Check that value and fluctuations are zero within machine precision
+print(iamzero == 0.0)
+> True
 ```
 
 ## Error estimation
@@ -82,7 +82,7 @@ my_sum.details()
 ```
 
 We use the following definition of the integrated autocorrelation time established in [Madras & Sokal 1988](https://link.springer.com/article/10.1007/BF01022990)
-$$\tau_\mathrm{int}=\frac{1}{2}+\sum_{t=1}^{W}\rho(t)\geq \frac{1}{2}$$
+$$\tau_\mathrm{int}=\frac{1}{2}+\sum_{t=1}^{W}\rho(t)\geq \frac{1}{2}\,.$$
 The window $W$ is determined via the automatic windowing procedure described in [arXiv:hep-lat/0306017](https://arxiv.org/abs/hep-lat/0306017).
 The standard value for the parameter $S$ of this automatic windowing procedure is $S=2$. Other values for $S$ can be passed to the `gamma_method` as a parameter.
@@ -99,12 +99,6 @@ my_sum.details()
 The integrated autocorrelation time $\tau_\mathrm{int}$ and the autocorrelation function $\rho(W)$ can be monitored via the methods `pyerrors.obs.Obs.plot_tauint` and `pyerrors.obs.Obs.plot_rho`.
 
-Example:
-```python
-my_sum.plot_tauint()
-my_sum.plot_rho()
-```
-
 ### Exponential tails
 Slow modes in the Monte Carlo history can be accounted for by attaching an exponential tail to the autocorrelation function $\rho$ as suggested in [arXiv:1009.5228](https://arxiv.org/abs/1009.5228). The longest autocorrelation time in the history, $\tau_\mathrm{exp}$, can be passed to the `gamma_method` as a parameter. In this case the automatic windowing procedure is bypassed and the parameter $S$ does not affect the error estimate.