Merge branch 'develop' into documentation
Commit bf1cf43a2c
2 changed files with 13 additions and 10 deletions
@@ -1,17 +1,14 @@
r'''
# What is pyerrors?

`pyerrors` is a Python package for error computation and propagation of Markov chain Monte Carlo data.

It is based on the **gamma method** [arXiv:hep-lat/0306017](https://arxiv.org/abs/hep-lat/0306017). Some of its features are:
- **automatic differentiation** as suggested in [arXiv:1809.01289](https://arxiv.org/abs/1809.01289) (partly based on the [autograd](https://github.com/HIPS/autograd) package)
- **treatment of slow modes** in the simulation as suggested in [arXiv:1009.5228](https://arxiv.org/abs/1009.5228)
- coherent **error propagation** for data from **different Markov chains** (see the sketch after this list)
- **non-linear fits with x- and y-errors** and exact linear error propagation based on automatic differentiation as introduced in [arXiv:1809.01289](https://arxiv.org/abs/1809.01289)
- **real and complex matrix operations** and their error propagation based on automatic differentiation (Cholesky decomposition, calculation of eigenvalues and eigenvectors, singular value decomposition...)
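
For illustration, here is a minimal sketch of combining data from two Markov chains. It assumes only the `pe.Obs` constructor (which also appears in the test snippet further down in this diff) and the `gamma_method()` error analysis; the ensemble names and synthetic samples are made up:

```python
import numpy as np
import pyerrors as pe

# Synthetic Monte Carlo histories standing in for two independent chains.
samples_a = np.random.normal(1.00, 0.10, 1000)
samples_b = np.random.normal(2.00, 0.15, 1000)

obs_a = pe.Obs([samples_a], ['ensemble_A'])   # observable from chain A
obs_b = pe.Obs([samples_b], ['ensemble_B'])   # observable from chain B

ratio = obs_a / obs_b   # errors from both chains are propagated through the division
ratio.gamma_method()    # gamma-method error estimate including autocorrelations
print(ratio)
```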

It is based on the gamma method [arXiv:hep-lat/0306017](https://arxiv.org/abs/hep-lat/0306017). Some of its features are:

- automatic differentiation for exact linear error propagation as suggested in [arXiv:1809.01289](https://arxiv.org/abs/1809.01289) (partly based on the [autograd](https://github.com/HIPS/autograd) package).
- treatment of slow modes in the simulation as suggested in [arXiv:1009.5228](https://arxiv.org/abs/1009.5228).
- coherent error propagation for data from different Markov chains.
- non-linear fits with x- and y-errors and exact linear error propagation based on automatic differentiation as introduced in [arXiv:1809.01289](https://arxiv.org/abs/1809.01289).
- real and complex matrix operations and their error propagation based on automatic differentiation (matrix inverse, Cholesky decomposition, calculation of eigenvalues and eigenvectors, singular value decomposition...); see the sketch after this list.
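
As a rough illustration of the matrix feature, a hedged sketch follows. It assumes that `pyerrors` exposes matrix routines such as `pe.linalg.inv` operating on NumPy object arrays of `Obs` (an assumption not confirmed by this diff), and the helper `noisy_obs` is made up for the example:

```python
import numpy as np
import pyerrors as pe

def noisy_obs(mean, spread, name):
    # Made-up helper: build an Obs from synthetic samples on ensemble `name`.
    return pe.Obs([np.random.normal(mean, spread, 500)], [name])

# 2x2 matrix of observables; all entries live on the same (synthetic) ensemble.
mat = np.array([[noisy_obs(4.0, 0.1, 'ens'), noisy_obs(1.0, 0.1, 'ens')],
                [noisy_obs(1.0, 0.1, 'ens'), noisy_obs(3.0, 0.1, 'ens')]])

inv_mat = pe.linalg.inv(mat)   # matrix inverse with error propagation (assumed API)
for entry in inv_mat.flatten():
    entry.gamma_method()       # error estimate for every element of the inverse
print(inv_mat[0, 0])
```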

There exist similar publicly available implementations of gamma method error analysis suites in

- [Fortran](https://gitlab.ift.uam-csic.es/alberto/aderrors)
- [Julia](https://gitlab.ift.uam-csic.es/alberto/aderrors.jl)
- [Python](https://github.com/mbruno46/pyobs)

There exist similar publicly available implementations of gamma method error analysis suites in [Fortran](https://gitlab.ift.uam-csic.es/alberto/aderrors), [Julia](https://gitlab.ift.uam-csic.es/alberto/aderrors.jl) and [Python](https://github.com/mbruno46/pyobs).

## Basic example
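The body of this section lies outside the hunk shown above. Purely as an illustration, a basic workflow could look roughly like the following sketch, which again assumes only `pe.Obs` and `gamma_method()` and uses made-up sample data:

```python
import numpy as np
import pyerrors as pe

# Synthetic Monte Carlo history standing in for real simulation data.
samples = np.random.normal(1.0, 0.05, 1000)

my_obs = pe.Obs([samples], ['ensemble_name'])
my_new_obs = 2 * np.log(my_obs) / my_obs ** 2   # derived quantity, errors propagated automatically
my_new_obs.gamma_method()                       # run the gamma-method error analysis
print(my_new_obs)                               # value with estimated error
```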
@@ -586,6 +586,12 @@ def test_covariance2_symmetry():
    assert np.abs(cov_ab) < test_obs1.dvalue * test_obs2.dvalue * (1 + 10 * np.finfo(np.float64).eps)


def test_empty_obs():
    # Adding an empty Obs should leave the original observable unchanged.
    o = pe.Obs([np.random.rand(100)], ['test'])
    q = o + pe.Obs([], [])
    assert q == o


def test_jackknife():
    full_data = np.random.normal(1.1, 0.87, 5487)