Mirror of https://github.com/fjosw/pyerrors.git, synced 2025-05-14 19:43:41 +02:00

Commit c9789a34e6: Resolved merge conflict in tests
27 changed files with 695 additions and 142 deletions
.github/workflows/binder.yml (vendored, new file, 15 additions)

@@ -0,0 +1,15 @@
+name: binder
+
+on:
+  push:
+    branches:
+      - develop
+
+jobs:
+  Create-MyBinderOrg-Cache:
+    runs-on: ubuntu-latest
+    steps:
+      - name: cache binder build on mybinder.org
+        uses: jupyterhub/repo2docker-action@master
+        with:
+          NO_PUSH: true
+          MYBINDERORG_TAG: ${{ github.event.ref }} # This builds the container on mybinder.org with the branch that was pushed on.
.github/workflows/pytest.yml (vendored, 7 changes)

@@ -13,15 +13,15 @@ jobs:
   pytest:
     runs-on: ${{ matrix.os }}
     strategy:
       fail-fast: true
       matrix:
         os: [ubuntu-latest]
         python-version: ["3.7", "3.8", "3.9", "3.10"]
         include:
           - os: macos-latest
             python-version: 3.9
           - os: windows-latest
             python-version: 3.9

     steps:
       - name: Checkout source
@@ -40,6 +40,7 @@ jobs:
           pip install pytest
           pip install pytest-cov
           pip install pytest-benchmark
+          pip freeze

       - name: Run tests
         run: pytest --cov=pyerrors -vv
.gitignore (vendored, 4 additions)

@@ -6,6 +6,10 @@ __pycache__
 .cache
 examples/B1k2_pcac_plateau.p
 examples/Untitled.*
+examples/pcac_plateau_test_ensemble.json.gz
 core.*
 *.swp
 htmlcov
+build
+pyerrors.egg-info
+dist
CHANGELOG.md (33 additions)

@@ -2,6 +2,39 @@

 All notable changes to this project will be documented in this file.

+## [2.1.3] - 2022-06-13
+### Fixed
+- Further bugs in connection with correlator objects which have arrays with None entries as content fixed.
+
+## [2.1.2] - 2022-06-10
+### Fixed
+- Bug in `Corr.matrix_symmetric` fixed which appeared when a time slice contained an array with `None` entries.
+
+## [2.1.1] - 2022-06-06
+### Fixed
+- Bug in error propagation of correlated least square fits fixed.
+- `Fit_result.gamma_method` can now be called with kwargs
+
+## [2.1.0] - 2022-05-31
+### Added
+- `obs.covariance` now has the option to smooth small eigenvalues of the matrix with the method described in hep-lat/9412087.
+- `Corr.prune` was added which can reduce the size of a correlator matrix before solving the GEVP.
+- `Corr.show` has two additional optional arguments. `hide_sigma` to hide data points with large errors and `references` to display reference values as horizontal lines.
+- I/O routines for ALPHA dobs format added.
+- `input.hadrons` functionality extended.
+
+### Changed
+- The standard behavior of the `Corr.GEVP` method has changed. It now returns all eigenvectors of the system instead of only the specified ones as default. The standard way of sorting the eigenvectors was changed to `Eigenvalue`. The argument `sorted_list` was deprecated in favor of `sort`.
+- Before performing a correlated fit the routine first runs an uncorrelated one to obtain a better guess for the initial parameters.
+
+### Fixed
+- `obs.covariance` now also gives correct estimators if data defined on non-identical configurations is passed to the function.
+- Rounding errors in estimating derivatives of fit parameters with respect to input data from the inverse hessian reduced. This should only make a difference when the magnitude of the errors of different fit parameters vary vastly.
+- Bug in json.gz format fixed which did not properly store the replica mean values. Format version bumped to 1.1.
+- The GEVP matrix is now symmetrized before solving the system for all sorting options not only the one with fixed `ts`.
+- Automatic range estimation improved in `fits.residual_plot`.
+- Bugs in `input.bdio` fixed.
+
 ## [2.0.0] - 2022-03-31
 ### Added
 - The possibility to work with Monte Carlo histories which are evenly or unevenly spaced was added.
README.md (13 changes)

@@ -1,4 +1,4 @@
-[](https://github.com/fjosw/pyerrors/actions/workflows/flake8.yml) [](https://github.com/fjosw/pyerrors/actions/workflows/pytest.yml) [](https://github.com/fjosw/pyerrors/actions/workflows/examples.yml) [](https://github.com/fjosw/pyerrors/actions/workflows/docs.yml) [](https://www.python.org/downloads/) [](https://opensource.org/licenses/MIT)
+[](https://github.com/fjosw/pyerrors/actions/workflows/flake8.yml) [](https://github.com/fjosw/pyerrors/actions/workflows/pytest.yml) [](https://www.python.org/downloads/) [](https://opensource.org/licenses/MIT) [](https://mybinder.org/v2/gh/fjosw/pyerrors/HEAD?labpath=examples)
 # pyerrors
 `pyerrors` is a python package for error computation and propagation of Markov chain Monte Carlo data.

@@ -8,11 +8,12 @@
 - **Bug reports:** https://github.com/fjosw/pyerrors/issues

 ## Installation
-To install the current `develop` version run
+To install the most recent release run
+```bash
+pip install pyerrors     # Fresh install
+pip install -U pyerrors  # Upgrade
+```
+to install the current `develop` version run
 ```bash
 pip install git+https://github.com/fjosw/pyerrors.git@develop
 ```
-to install the most recent release run
-```bash
-pip install git+https://github.com/fjosw/pyerrors.git@master
-```
examples notebook (filename not shown in this view)

@@ -21,6 +21,7 @@
 "outputs": [],
 "source": [
 "import numpy as np\n",
+"import matplotlib\n",
 "import matplotlib.pyplot as plt\n",
 "import pyerrors as pe"
 ]
@@ -32,7 +33,8 @@
 "outputs": [],
 "source": [
 "plt.style.use('./base_style.mplstyle')\n",
-"plt.rc('text', usetex=True)"
+"usetex = matplotlib.checkdep_usetex(True)\n",
+"plt.rc('text', usetex=usetex)"
 ]
 },
 {
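The notebook changes above (repeated in the notebooks that follow) replace an unconditional `plt.rc('text', usetex=True)` with a guard via `matplotlib.checkdep_usetex`, so the examples still run on machines without a TeX installation. A rough stdlib-only sketch of such a guard (an assumed simplification; matplotlib's helper additionally checks for dvipng and ghostscript):

```python
import shutil


def usetex_available():
    """Return True only when a `latex` executable is found on PATH,
    mirroring the spirit of matplotlib.checkdep_usetex."""
    return shutil.which("latex") is not None


usetex = usetex_available()
# plt.rc('text', usetex=usetex)  # enable TeX rendering only when available
print(type(usetex).__name__)
```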
examples notebook (filename not shown in this view)

@@ -8,6 +8,7 @@
 "outputs": [],
 "source": [
 "import numpy as np\n",
+"import matplotlib\n",
 "import matplotlib.pyplot as plt\n",
 "import pyerrors as pe"
 ]
@@ -20,7 +21,8 @@
 "outputs": [],
 "source": [
 "plt.style.use('./base_style.mplstyle')\n",
-"plt.rc('text', usetex=True)"
+"usetex = matplotlib.checkdep_usetex(True)\n",
+"plt.rc('text', usetex=usetex)"
 ]
 },
 {
@@ -480,7 +482,7 @@
 ],
 "metadata": {
 "kernelspec": {
-"display_name": "Python 3",
+"display_name": "Python 3 (ipykernel)",
 "language": "python",
 "name": "python3"
 },
@@ -494,7 +496,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.8.8"
+"version": "3.8.10"
 }
 },
 "nbformat": 4,
examples notebook (filename not shown in this view)

@@ -7,6 +7,7 @@
 "outputs": [],
 "source": [
 "import numpy as np\n",
+"import matplotlib\n",
 "import matplotlib.pyplot as plt\n",
 "import pyerrors as pe"
 ]
@@ -18,7 +19,8 @@
 "outputs": [],
 "source": [
 "plt.style.use('./base_style.mplstyle')\n",
-"plt.rc('text', usetex=True)"
+"usetex = matplotlib.checkdep_usetex(True)\n",
+"plt.rc('text', usetex=usetex)"
 ]
 },
 {
examples notebook (filename not shown in this view)

@@ -6,9 +6,10 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"import pyerrors as pe\n",
 "import numpy as np\n",
-"import matplotlib.pyplot as plt"
+"import matplotlib\n",
+"import matplotlib.pyplot as plt\n",
+"import pyerrors as pe"
 ]
 },
 {
@@ -18,7 +19,8 @@
 "outputs": [],
 "source": [
 "plt.style.use('./base_style.mplstyle')\n",
-"plt.rc('text', usetex=True)"
+"usetex = matplotlib.checkdep_usetex(True)\n",
+"plt.rc('text', usetex=usetex)"
 ]
 },
 {
@@ -439,7 +441,7 @@
 "hash": "916dbcbb3f70747c44a77c7bcd40155683ae19c65e1c03b4aa3499c5328201f1"
 },
 "kernelspec": {
-"display_name": "Python 3.8.10 64-bit",
+"display_name": "Python 3 (ipykernel)",
 "language": "python",
 "name": "python3"
 },
examples notebook (filename not shown in this view)

@@ -393,7 +393,7 @@
 ],
 "metadata": {
 "kernelspec": {
-"display_name": "Python 3",
+"display_name": "Python 3 (ipykernel)",
 "language": "python",
 "name": "python3"
 },
@@ -407,7 +407,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.8.8"
+"version": "3.8.10"
 }
 },
 "nbformat": 4,
examples notebook (filename not shown in this view)

@@ -7,6 +7,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
+"import matplotlib\n",
 "import matplotlib.pyplot as plt\n",
 "import pyerrors as pe"
 ]
@@ -18,7 +19,9 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"plt.style.use('./base_style.mplstyle'); plt.rc('text', usetex=True)"
+"plt.style.use('./base_style.mplstyle')\n",
+"usetex = matplotlib.checkdep_usetex(True)\n",
+"plt.rc('text', usetex=usetex)"
 ]
 },
 {
@@ -305,7 +308,7 @@
 ],
 "metadata": {
 "kernelspec": {
-"display_name": "Python 3",
+"display_name": "Python 3 (ipykernel)",
 "language": "python",
 "name": "python3"
 },
pyerrors/__init__.py (filename not shown in this view; inferred from `from .obs import *`)

@@ -8,6 +8,8 @@ It is based on the gamma method [arXiv:hep-lat/0306017](https://arxiv.org/abs/hep-lat/0306017).
 - non-linear fits with x- and y-errors and exact linear error propagation based on automatic differentiation as introduced in [arXiv:1809.01289](https://arxiv.org/abs/1809.01289).
 - real and complex matrix operations and their error propagation based on automatic differentiation (Matrix inverse, Cholesky decomposition, calculation of eigenvalues and eigenvectors, singular value decomposition...).

+More detailed examples can found in the [GitHub repository](https://github.com/fjosw/pyerrors/tree/develop/examples) [](https://mybinder.org/v2/gh/fjosw/pyerrors/HEAD?labpath=examples).
+
 There exist similar publicly available implementations of gamma method error analysis suites in [Fortran](https://gitlab.ift.uam-csic.es/alberto/aderrors), [Julia](https://gitlab.ift.uam-csic.es/alberto/aderrors.jl) and [Python](https://github.com/mbruno46/pyobs).

 ## Basic example
@@ -443,8 +445,10 @@ Julia I/O routines for the json.gz format, compatible with [ADerrors.jl](https:/
 # Citing
 If you use `pyerrors` for research that leads to a publication please consider citing:
 - Ulli Wolff, *Monte Carlo errors with less errors*. Comput.Phys.Commun. 156 (2004) 143-153, Comput.Phys.Commun. 176 (2007) 383 (erratum).
-- Stefan Schaefer, Rainer Sommer, Francesco Virotta, *Critical slowing down and error analysis in lattice QCD simulations*. Nucl.Phys.B 845 (2011) 93-119.
 - Alberto Ramos, *Automatic differentiation for error analysis of Monte Carlo data*. Comput.Phys.Commun. 238 (2019) 19-35.
+and
+- Stefan Schaefer, Rainer Sommer, Francesco Virotta, *Critical slowing down and error analysis in lattice QCD simulations*. Nucl.Phys.B 845 (2011) 93-119.
+where applicable.
 '''
 from .obs import *
 from .correlators import *
pyerrors/correlators.py (filename not shown in this view; inferred from `class Corr` in the hunk headers; context-line indentation reconstructed)

@@ -155,7 +155,7 @@ class Corr:
             raise Exception("Vectors are of wrong shape!")
         if normalize:
             vector_l, vector_r = vector_l / np.sqrt((vector_l @ vector_l)), vector_r / np.sqrt(vector_r @ vector_r)
-        newcontent = [None if (item is None) else np.asarray([vector_l.T @ item @ vector_r]) for item in self.content]
+        newcontent = [None if _check_for_none(self, item) else np.asarray([vector_l.T @ item @ vector_r]) for item in self.content]

     else:
         # There are no checks here yet. There are so many possible scenarios, where this can go wrong.
@@ -163,7 +163,7 @@ class Corr:
         for t in range(self.T):
             vector_l[t], vector_r[t] = vector_l[t] / np.sqrt((vector_l[t] @ vector_l[t])), vector_r[t] / np.sqrt(vector_r[t] @ vector_r[t])

-        newcontent = [None if (self.content[t] is None or vector_l[t] is None or vector_r[t] is None) else np.asarray([vector_l[t].T @ self.content[t] @ vector_r[t]]) for t in range(self.T)]
+        newcontent = [None if (_check_for_none(self, self.content[t]) or vector_l[t] is None or vector_r[t] is None) else np.asarray([vector_l[t].T @ self.content[t] @ vector_r[t]]) for t in range(self.T)]
     return Corr(newcontent)

     def item(self, i, j):
@@ -197,6 +197,8 @@ class Corr:

     def symmetric(self):
         """ Symmetrize the correlator around x0=0."""
+        if self.N != 1:
+            raise Exception('symmetric cannot be safely applied to multi-dimensional correlators.')
         if self.T % 2 != 0:
             raise Exception("Can not symmetrize odd T")

@@ -215,6 +217,8 @@ class Corr:

     def anti_symmetric(self):
         """Anti-symmetrize the correlator around x0=0."""
+        if self.N != 1:
+            raise Exception('anti_symmetric cannot be safely applied to multi-dimensional correlators.')
         if self.T % 2 != 0:
             raise Exception("Can not symmetrize odd T")

@@ -236,7 +240,7 @@ class Corr:
     def matrix_symmetric(self):
         """Symmetrizes the correlator matrices on every timeslice."""
         if self.N > 1:
-            transposed = [None if (G is None) else G.T for G in self.content]
+            transposed = [None if _check_for_none(self, G) else G.T for G in self.content]
             return 0.5 * (Corr(transposed) + self)
         if self.N == 1:
             raise Exception("Trying to symmetrize a correlator matrix, that already has N=1.")
@@ -419,13 +423,15 @@ class Corr:
         Can either be an Obs which is correlated with all entries of the
         correlator or a Corr of same length.
         """
+        if self.N != 1:
+            raise Exception("Only one-dimensional correlators can be safely correlated.")
         new_content = []
         for x0, t_slice in enumerate(self.content):
-            if t_slice is None:
+            if _check_for_none(self, t_slice):
                 new_content.append(None)
             else:
                 if isinstance(partner, Corr):
-                    if partner.content[x0] is None:
+                    if _check_for_none(partner, partner.content[x0]):
                         new_content.append(None)
                     else:
                         new_content.append(np.array([correlate(o, partner.content[x0][0]) for o in t_slice]))
@@ -449,9 +455,11 @@ class Corr:
         the reweighting factor on all configurations in weight.idl and not
         on the configurations in obs[i].idl.
         """
+        if self.N != 1:
+            raise Exception("Reweighting only implemented for one-dimensional correlators.")
         new_content = []
         for t_slice in self.content:
-            if t_slice is None:
+            if _check_for_none(self, t_slice):
                 new_content.append(None)
             else:
                 new_content.append(np.array(reweight(weight, t_slice, **kwargs)))
@@ -467,6 +475,8 @@ class Corr:
         partity : int
             Parity quantum number of the correlator, can be +1 or -1
         """
+        if self.N != 1:
+            raise Exception("T_symmetry only implemented for one-dimensional correlators.")
         if not isinstance(partner, Corr):
             raise Exception("T partner has to be a Corr object.")
         if parity not in [+1, -1]:
@@ -494,6 +504,8 @@ class Corr:
             decides which definition of the finite differences derivative is used.
             Available choice: symmetric, forward, backward, improved, default: symmetric
         """
+        if self.N != 1:
+            raise Exception("deriv only implemented for one-dimensional correlators.")
         if variant == "symmetric":
             newcontent = []
             for t in range(1, self.T - 1):
@@ -546,6 +558,8 @@ class Corr:
             decides which definition of the finite differences derivative is used.
             Available choice: symmetric, improved, default: symmetric
         """
+        if self.N != 1:
+            raise Exception("second_deriv only implemented for one-dimensional correlators.")
         if variant == "symmetric":
             newcontent = []
             for t in range(1, self.T - 1):
@@ -588,7 +602,7 @@ class Corr:
         if variant == 'log':
             newcontent = []
             for t in range(self.T - 1):
-                if (self.content[t] is None) or (self.content[t + 1] is None):
+                if ((self.content[t] is None) or (self.content[t + 1] is None)) or (self.content[t + 1][0].value == 0):
                     newcontent.append(None)
                 else:
                     newcontent.append(self.content[t] / self.content[t + 1])
@@ -608,7 +622,7 @@ class Corr:

             newcontent = []
             for t in range(self.T - 1):
-                if (self.content[t] is None) or (self.content[t + 1] is None):
+                if (self.content[t] is None) or (self.content[t + 1] is None) or (self.content[t + 1][0].value == 0):
                     newcontent.append(None)
                 # Fill the two timeslices in the middle of the lattice with their predecessors
                 elif variant == 'sinh' and t in [self.T / 2, self.T / 2 - 1]:
@@ -623,7 +637,7 @@ class Corr:
         elif variant == 'arccosh':
             newcontent = []
             for t in range(1, self.T - 1):
-                if (self.content[t] is None) or (self.content[t + 1] is None) or (self.content[t - 1] is None):
+                if (self.content[t] is None) or (self.content[t + 1] is None) or (self.content[t - 1] is None) or (self.content[t][0].value == 0):
                     newcontent.append(None)
                 else:
                     newcontent.append((self.content[t + 1] + self.content[t - 1]) / (2 * self.content[t]))
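The effective-mass hunks above add zero checks so that a vanishing correlator value yields a `None` slot instead of a division by zero. A scalar sketch of the guarded log variant (plain floats instead of `Obs` objects; the function name is hypothetical):

```python
import math


def m_eff_log(values):
    """Log effective mass log(C(t)/C(t+1)). Slots where either entry is
    missing or C(t+1) vanishes become None, mirroring the added guard."""
    out = []
    for t in range(len(values) - 1):
        a, b = values[t], values[t + 1]
        if a is None or b is None or b == 0:
            out.append(None)
        else:
            out.append(math.log(a / b))
    return out


print(m_eff_log([4.0, 2.0, None, 1.0]))
# -> [0.693147..., None, None]
```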
@@ -758,7 +772,7 @@ class Corr:

         x, y, y_err = self.plottable()
         if hide_sigma:
-            hide_from = np.argmax((hide_sigma * np.array(y_err)) > np.abs(y)) - 1
+            hide_from = np.argmax((hide_sigma * np.array(y_err[1:])) > np.abs(y[1:])) - 1
         else:
             hide_from = None
         ax1.errorbar(x[:hide_from], y[:hide_from], y_err[:hide_from], label=self.tag)
@@ -781,7 +795,7 @@ class Corr:
             corr.gamma_method()
             x, y, y_err = corr.plottable()
             if hide_sigma:
-                hide_from = np.argmax((hide_sigma * np.array(y_err)) > np.abs(y)) - 1
+                hide_from = np.argmax((hide_sigma * np.array(y_err[1:])) > np.abs(y[1:])) - 1
             else:
                 hide_from = None
             plt.errorbar(x[:hide_from], y[:hide_from], y_err[:hide_from], label=corr.tag, mfc=plt.rcParams['axes.facecolor'])
@@ -883,12 +897,14 @@ class Corr:
         else:
             raise Exception("Unknown datatype " + str(datatype))

-    def print(self, range=[0, None]):
-        print(self.__repr__(range))
+    def print(self, print_range=None):
+        print(self.__repr__(print_range))

+    def __repr__(self, print_range=None):
+        if print_range is None:
+            print_range = [0, None]
+
-    def __repr__(self, range=[0, None]):
         content_string = ""

         content_string += "Corr T=" + str(self.T) + " N=" + str(self.N) + "\n"  # +" filled with"+ str(type(self.content[0][0])) there should be a good solution here

         if self.tag is not None:
@@ -896,14 +912,14 @@ class Corr:
         if self.N != 1:
             return content_string

-        if range[1]:
-            range[1] += 1
+        if print_range[1]:
+            print_range[1] += 1
         content_string += 'x0/a\tCorr(x0/a)\n------------------\n'
-        for i, sub_corr in enumerate(self.content[range[0]:range[1]]):
+        for i, sub_corr in enumerate(self.content[print_range[0]:print_range[1]]):
             if sub_corr is None:
-                content_string += str(i + range[0]) + '\n'
+                content_string += str(i + print_range[0]) + '\n'
             else:
-                content_string += str(i + range[0])
+                content_string += str(i + print_range[0])
                 for element in sub_corr:
                     content_string += '\t' + ' ' * int(element >= 0) + str(element)
                 content_string += '\n'
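The `print`/`__repr__` refactor above does two things: it stops shadowing the builtin `range`, and it replaces the mutable default argument `range=[0, None]` with `print_range=None` plus an in-body initialization. The second point matters because Python evaluates default values once, at function definition time, so any in-place mutation sticks to the shared default. A toy illustration (hypothetical function names):

```python
def with_mutable_default(acc=[]):
    """Anti-pattern: the default list is created once and shared by
    every call that omits the argument."""
    acc.append(1)
    return acc


def with_none_default(acc=None):
    """The pattern adopted in the diff above: build a fresh list
    on every call."""
    if acc is None:
        acc = []
    acc.append(1)
    return acc


print(with_mutable_default(), with_mutable_default())  # -> [1, 1] [1, 1] (one shared list)
print(with_none_default(), with_none_default())        # -> [1] [1]
```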
@ -923,7 +939,7 @@ class Corr:
|
||||||
raise Exception("Addition of Corrs with different shape")
|
raise Exception("Addition of Corrs with different shape")
|
||||||
newcontent = []
|
newcontent = []
|
||||||
for t in range(self.T):
|
for t in range(self.T):
|
||||||
if (self.content[t] is None) or (y.content[t] is None):
|
if _check_for_none(self, self.content[t]) or _check_for_none(y, y.content[t]):
|
||||||
newcontent.append(None)
|
newcontent.append(None)
|
||||||
else:
|
else:
|
||||||
newcontent.append(self.content[t] + y.content[t])
|
newcontent.append(self.content[t] + y.content[t])
|
||||||
|
@ -932,7 +948,7 @@ class Corr:
|
||||||
elif isinstance(y, (Obs, int, float, CObs)):
|
elif isinstance(y, (Obs, int, float, CObs)):
|
||||||
newcontent = []
|
newcontent = []
|
||||||
for t in range(self.T):
|
for t in range(self.T):
|
||||||
if (self.content[t] is None):
|
if _check_for_none(self, self.content[t]):
|
||||||
newcontent.append(None)
|
newcontent.append(None)
|
||||||
else:
|
else:
|
||||||
newcontent.append(self.content[t] + y)
|
newcontent.append(self.content[t] + y)
|
||||||
|
@ -951,7 +967,7 @@ class Corr:
|
||||||
raise Exception("Multiplication of Corr object requires N=N or N=1 and T=T")
|
raise Exception("Multiplication of Corr object requires N=N or N=1 and T=T")
|
||||||
newcontent = []
|
newcontent = []
|
||||||
for t in range(self.T):
|
for t in range(self.T):
|
||||||
if (self.content[t] is None) or (y.content[t] is None):
|
if _check_for_none(self, self.content[t]) or _check_for_none(y, y.content[t]):
|
||||||
newcontent.append(None)
|
newcontent.append(None)
|
||||||
else:
|
else:
|
||||||
newcontent.append(self.content[t] * y.content[t])
|
newcontent.append(self.content[t] * y.content[t])
|
||||||
|
@ -960,7 +976,7 @@ class Corr:
|
||||||
elif isinstance(y, (Obs, int, float, CObs)):
|
elif isinstance(y, (Obs, int, float, CObs)):
|
||||||
newcontent = []
|
newcontent = []
|
||||||
for t in range(self.T):
|
for t in range(self.T):
|
||||||
if (self.content[t] is None):
|
if _check_for_none(self, self.content[t]):
|
||||||
newcontent.append(None)
|
newcontent.append(None)
|
||||||
else:
|
else:
|
||||||
newcontent.append(self.content[t] * y)
|
newcontent.append(self.content[t] * y)
|
||||||
@@ -979,12 +995,12 @@ class Corr:
                raise Exception("Multiplication of Corr object requires N=N or N=1 and T=T")
            newcontent = []
            for t in range(self.T):
-                if (self.content[t] is None) or (y.content[t] is None):
+                if _check_for_none(self, self.content[t]) or _check_for_none(y, y.content[t]):
                    newcontent.append(None)
                else:
                    newcontent.append(self.content[t] / y.content[t])
            for t in range(self.T):
-                if newcontent[t] is None:
+                if _check_for_none(self, newcontent[t]):
                    continue
                if np.isnan(np.sum(newcontent[t]).value):
                    newcontent[t] = None
@@ -1003,7 +1019,7 @@ class Corr:

            newcontent = []
            for t in range(self.T):
-                if (self.content[t] is None):
+                if _check_for_none(self, self.content[t]):
                    newcontent.append(None)
                else:
                    newcontent.append(self.content[t] / y)
@@ -1014,7 +1030,7 @@ class Corr:
                raise Exception('Division by zero will return undefined correlator')
            newcontent = []
            for t in range(self.T):
-                if (self.content[t] is None):
+                if _check_for_none(self, self.content[t]):
                    newcontent.append(None)
                else:
                    newcontent.append(self.content[t] / y)
@@ -1028,7 +1044,7 @@ class Corr:
            raise TypeError('Corr / wrong type')

    def __neg__(self):
-        newcontent = [None if (item is None) else -1. * item for item in self.content]
+        newcontent = [None if _check_for_none(self, item) else -1. * item for item in self.content]
        return Corr(newcontent, prange=self.prange)

    def __sub__(self, y):
@@ -1036,31 +1052,31 @@ class Corr:

    def __pow__(self, y):
        if isinstance(y, (Obs, int, float, CObs)):
-            newcontent = [None if (item is None) else item**y for item in self.content]
+            newcontent = [None if _check_for_none(self, item) else item**y for item in self.content]
            return Corr(newcontent, prange=self.prange)
        else:
            raise TypeError('Type of exponent not supported')

    def __abs__(self):
-        newcontent = [None if (item is None) else np.abs(item) for item in self.content]
+        newcontent = [None if _check_for_none(self, item) else np.abs(item) for item in self.content]
        return Corr(newcontent, prange=self.prange)

    # The numpy functions:
    def sqrt(self):
-        return self**0.5
+        return self ** 0.5

    def log(self):
-        newcontent = [None if (item is None) else np.log(item) for item in self.content]
+        newcontent = [None if _check_for_none(self, item) else np.log(item) for item in self.content]
        return Corr(newcontent, prange=self.prange)

    def exp(self):
-        newcontent = [None if (item is None) else np.exp(item) for item in self.content]
+        newcontent = [None if _check_for_none(self, item) else np.exp(item) for item in self.content]
        return Corr(newcontent, prange=self.prange)

    def _apply_func_to_corr(self, func):
-        newcontent = [None if (item is None) else func(item) for item in self.content]
+        newcontent = [None if _check_for_none(self, item) else func(item) for item in self.content]
        for t in range(self.T):
-            if newcontent[t] is None:
+            if _check_for_none(self, newcontent[t]):
                continue
            if np.isnan(np.sum(newcontent[t]).value):
                newcontent[t] = None
@@ -1222,6 +1238,11 @@ def _sort_vectors(vec_set, ts):
    return sorted_vec_set


+def _check_for_none(corr, entry):
+    """Checks if entry for correlator corr is None"""
+    return len(list(filter(None, np.asarray(entry).flatten()))) < corr.N ** 2
+
+
def _GEVP_solver(Gt, G0):
    """Helper function for solving the GEVP and sorting the eigenvectors.
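The `_check_for_none` helper added above treats a time slice of an N x N correlator as missing when it holds fewer than N squared non-None entries. A minimal standalone sketch of the same logic (the function name and the explicit `N` parameter are illustrative, not part of pyerrors):

```python
import numpy as np


def check_for_none(entry, N):
    """Return True if the time slice `entry` of an N x N correlator should be
    treated as missing, i.e. it has fewer than N ** 2 non-None elements.
    Note that filter(None, ...) drops every falsy element, so exact zeros
    are counted as missing as well."""
    return len(list(filter(None, np.asarray(entry).flatten()))) < N ** 2


print(check_for_none(None, 1))                         # missing slice, prints True
print(check_for_none([[1.0]], 1))                      # filled 1x1 slice, prints False
print(check_for_none([[1.0, None], [None, 1.0]], 2))   # partially filled 2x2, prints True
```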
@@ -8,7 +8,6 @@ import scipy.stats
import matplotlib.pyplot as plt
from matplotlib import gridspec
from scipy.odr import ODR, Model, RealData
-from scipy.stats import chi2
import iminuit
from autograd import jacobian
from autograd import elementwise_grad as egrad
@@ -34,9 +33,9 @@ class Fit_result(Sequence):
    def __len__(self):
        return len(self.fit_parameters)

-    def gamma_method(self):
+    def gamma_method(self, **kwargs):
        """Apply the gamma method to all fit parameters"""
-        [o.gamma_method() for o in self.fit_parameters]
+        [o.gamma_method(**kwargs) for o in self.fit_parameters]

    def __str__(self):
        my_str = 'Goodness of fit:\n'
@@ -95,7 +94,8 @@ def least_squares(x, y, func, priors=None, silent=False, **kwargs):
        If true all output to the console is omitted (default False).
    initial_guess : list
        can provide an initial guess for the input parameters. Relevant for
-        non-linear fits with many parameters.
+        non-linear fits with many parameters. In case of correlated fits the guess is used to perform
+        an uncorrelated fit which then serves as guess for the correlated fit.
    method : str, optional
        can be used to choose an alternative method for the minimization of chisquare.
        The possible methods are the ones which can be used for scipy.optimize.minimize and
@@ -264,7 +264,7 @@ def total_least_squares(x, y, func, silent=False, **kwargs):

    fitp = out.beta
    try:
-        hess_inv = np.linalg.pinv(jacobian(jacobian(odr_chisquare))(np.concatenate((fitp, out.xplus.ravel()))))
+        hess = jacobian(jacobian(odr_chisquare))(np.concatenate((fitp, out.xplus.ravel())))
    except TypeError:
        raise Exception("It is required to use autograd.numpy instead of numpy within fit functions, see the documentation for details.") from None
@@ -275,7 +275,11 @@ def total_least_squares(x, y, func, silent=False, **kwargs):

    jac_jac_x = jacobian(jacobian(odr_chisquare_compact_x))(np.concatenate((fitp, out.xplus.ravel(), x_f.ravel())))

-    deriv_x = -hess_inv @ jac_jac_x[:n_parms + m, n_parms + m:]
+    # Compute hess^{-1} @ jac_jac_x[:n_parms + m, n_parms + m:] using LAPACK dgesv
+    try:
+        deriv_x = -scipy.linalg.solve(hess, jac_jac_x[:n_parms + m, n_parms + m:])
+    except np.linalg.LinAlgError:
+        raise Exception("Cannot invert hessian matrix.")

    def odr_chisquare_compact_y(d):
        model = func(d[:n_parms], d[n_parms:n_parms + m].reshape(x_shape))
@@ -284,7 +288,11 @@ def total_least_squares(x, y, func, silent=False, **kwargs):

    jac_jac_y = jacobian(jacobian(odr_chisquare_compact_y))(np.concatenate((fitp, out.xplus.ravel(), y_f)))

-    deriv_y = -hess_inv @ jac_jac_y[:n_parms + m, n_parms + m:]
+    # Compute hess^{-1} @ jac_jac_y[:n_parms + m, n_parms + m:] using LAPACK dgesv
+    try:
+        deriv_y = -scipy.linalg.solve(hess, jac_jac_y[:n_parms + m, n_parms + m:])
+    except np.linalg.LinAlgError:
+        raise Exception("Cannot invert hessian matrix.")

    result = []
    for i in range(n_parms):
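The hunks above replace an explicitly formed pseudo-inverse of the Hessian with a direct linear solve, which avoids building the inverse at all. A small sketch of why the two agree for a well-conditioned matrix (random stand-in data, purely illustrative):

```python
import numpy as np
import scipy.linalg

rng = np.random.default_rng(1)
a = rng.normal(size=(4, 4))
hess = a @ a.T + 4 * np.eye(4)   # symmetric positive definite stand-in for the Hessian
b = rng.normal(size=(4, 2))      # stand-in for the jac_jac block

# Old approach: build the (pseudo-)inverse explicitly, then multiply.
deriv_pinv = -np.linalg.pinv(hess) @ b
# New approach: solve the linear system directly (LAPACK gesv), no explicit inverse.
deriv_solve = -scipy.linalg.solve(hess, b)

print(np.allclose(deriv_pinv, deriv_solve))  # prints True
```

The solve variant is both cheaper and numerically more robust; the `except np.linalg.LinAlgError` branch in the diff covers the singular case that `pinv` used to paper over silently.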
@@ -294,7 +302,7 @@ def total_least_squares(x, y, func, silent=False, **kwargs):

    output.odr_chisquare = odr_chisquare(np.concatenate((out.beta, out.xplus.ravel())))
    output.dof = x.shape[-1] - n_parms
-    output.p_value = 1 - chi2.cdf(output.odr_chisquare, output.dof)
+    output.p_value = 1 - scipy.stats.chi2.cdf(output.odr_chisquare, output.dof)

    return output

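The now fully qualified `scipy.stats.chi2.cdf` call computes the fit's p value as the upper-tail probability of the chisquare distribution at the observed chisquare. A quick sketch with made-up numbers:

```python
import scipy.stats

# Made-up fit results, purely for illustration
chisquare, dof = 12.3, 10
p_value = 1 - scipy.stats.chi2.cdf(chisquare, dof)
print(0.0 < p_value < 1.0)  # prints True
```

For large chisquare values the survival function `scipy.stats.chi2.sf(chisquare, dof)` yields the same quantity with better numerical accuracy than `1 - cdf`.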
@@ -469,18 +477,17 @@ def _standard_fit(x, y, func, silent=False, **kwargs):
        if condn > 1 / np.sqrt(np.finfo(float).eps):
            warnings.warn("Correlation matrix may be ill-conditioned, condition number: {%1.2e}" % (condn), RuntimeWarning)
        chol = np.linalg.cholesky(corr)
-        chol_inv = np.linalg.inv(chol)
-        chol_inv = np.dot(chol_inv, covdiag)
+        chol_inv = scipy.linalg.solve_triangular(chol, covdiag, lower=True)

-        def chisqfunc(p):
+        def chisqfunc_corr(p):
            model = func(p, x)
            chisq = anp.sum(anp.dot(chol_inv, (y_f - model)) ** 2)
            return chisq
-    else:
-        def chisqfunc(p):
-            model = func(p, x)
-            chisq = anp.sum(((y_f - model) / dy_f) ** 2)
-            return chisq
+
+    def chisqfunc(p):
+        model = func(p, x)
+        chisq = anp.sum(((y_f - model) / dy_f) ** 2)
+        return chisq

    output.method = kwargs.get('method', 'Levenberg-Marquardt')
    if not silent:
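The correlated-fit branch above now obtains `chol_inv` from a single triangular solve instead of inverting the Cholesky factor and multiplying afterwards. A sketch with stand-in matrices showing the two are equivalent (names chosen to mirror the hunk, data random):

```python
import numpy as np
import scipy.linalg

rng = np.random.default_rng(0)
a = rng.normal(size=(3, 3))
corr = a @ a.T + 3 * np.eye(3)               # stand-in for the correlation matrix
covdiag = np.diag(rng.uniform(1, 2, 3))      # stand-in for the diagonal weight matrix

chol = np.linalg.cholesky(corr)
# Old: invert the Cholesky factor explicitly, then multiply.
old = np.dot(np.linalg.inv(chol), covdiag)
# New: one triangular solve of chol @ X = covdiag, cheaper and more stable.
new = scipy.linalg.solve_triangular(chol, covdiag, lower=True)

print(np.allclose(old, new))  # prints True
```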
@@ -489,29 +496,38 @@ def _standard_fit(x, y, func, silent=False, **kwargs):
    if output.method != 'Levenberg-Marquardt':
        if output.method == 'migrad':
            fit_result = iminuit.minimize(chisqfunc, x0, tol=1e-4)  # Stopping criterion 0.002 * tol * errordef
+            if kwargs.get('correlated_fit') is True:
+                fit_result = iminuit.minimize(chisqfunc_corr, fit_result.x, tol=1e-4)  # Stopping criterion 0.002 * tol * errordef
            output.iterations = fit_result.nfev
        else:
            fit_result = scipy.optimize.minimize(chisqfunc, x0, method=kwargs.get('method'), tol=1e-12)
+            if kwargs.get('correlated_fit') is True:
+                fit_result = scipy.optimize.minimize(chisqfunc_corr, fit_result.x, method=kwargs.get('method'), tol=1e-12)
            output.iterations = fit_result.nit

        chisquare = fit_result.fun

    else:
        if kwargs.get('correlated_fit') is True:
-            def chisqfunc_residuals(p):
+            def chisqfunc_residuals_corr(p):
                model = func(p, x)
                chisq = anp.dot(chol_inv, (y_f - model))
                return chisq

-        else:
-            def chisqfunc_residuals(p):
-                model = func(p, x)
-                chisq = ((y_f - model) / dy_f)
-                return chisq
+        def chisqfunc_residuals(p):
+            model = func(p, x)
+            chisq = ((y_f - model) / dy_f)
+            return chisq
+
        fit_result = scipy.optimize.least_squares(chisqfunc_residuals, x0, method='lm', ftol=1e-15, gtol=1e-15, xtol=1e-15)
+        if kwargs.get('correlated_fit') is True:
+            fit_result = scipy.optimize.least_squares(chisqfunc_residuals_corr, fit_result.x, method='lm', ftol=1e-15, gtol=1e-15, xtol=1e-15)

        chisquare = np.sum(fit_result.fun ** 2)
+        if kwargs.get('correlated_fit') is True:
+            assert np.isclose(chisquare, chisqfunc_corr(fit_result.x), atol=1e-14)
+        else:
+            assert np.isclose(chisquare, chisqfunc(fit_result.x), atol=1e-14)

        output.iterations = fit_result.nfev

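This hunk wires up the two-stage strategy announced in the docstring change earlier: the uncorrelated minimum seeds the correlated minimization. A toy version of the idea with a linear model and an assumed correlation matrix (all numbers made up, not pyerrors code):

```python
import numpy as np
import scipy.optimize

# Fake data around y = 2x + 1 with small, correlated errors
x = np.arange(5, dtype=float)
y = 2.0 * x + 1.0 + np.array([0.05, -0.02, 0.03, -0.04, 0.01])
dy = np.full(5, 0.05)
corr = 0.3 * np.ones((5, 5)) + 0.7 * np.eye(5)      # assumed correlation matrix
cov_inv = np.linalg.inv(np.outer(dy, dy) * corr)


def model(p, x):
    return p[0] * x + p[1]


def chisq_uncorr(p):
    return np.sum(((y - model(p, x)) / dy) ** 2)


def chisq_corr(p):
    r = y - model(p, x)
    return r @ cov_inv @ r


# Stage 1: uncorrelated fit from a rough guess; stage 2: correlated fit seeded by it.
stage1 = scipy.optimize.minimize(chisq_uncorr, [1.0, 0.0], tol=1e-12)
stage2 = scipy.optimize.minimize(chisq_corr, stage1.x, tol=1e-12)
print(stage2.x)
```

Seeding the correlated minimization this way makes it much less sensitive to a poor initial guess, which is exactly the scenario the docstring addition warns about.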
@@ -542,7 +558,10 @@ def _standard_fit(x, y, func, silent=False, **kwargs):

    fitp = fit_result.x
    try:
-        hess_inv = np.linalg.pinv(jacobian(jacobian(chisqfunc))(fitp))
+        if kwargs.get('correlated_fit') is True:
+            hess = jacobian(jacobian(chisqfunc_corr))(fitp)
+        else:
+            hess = jacobian(jacobian(chisqfunc))(fitp)
    except TypeError:
        raise Exception("It is required to use autograd.numpy instead of numpy within fit functions, see the documentation for details.") from None
@@ -560,7 +579,11 @@ def _standard_fit(x, y, func, silent=False, **kwargs):

    jac_jac = jacobian(jacobian(chisqfunc_compact))(np.concatenate((fitp, y_f)))

-    deriv = -hess_inv @ jac_jac[:n_parms, n_parms:]
+    # Compute hess^{-1} @ jac_jac[:n_parms, n_parms:] using LAPACK dgesv
+    try:
+        deriv = -scipy.linalg.solve(hess, jac_jac[:n_parms, n_parms:])
+    except np.linalg.LinAlgError:
+        raise Exception("Cannot invert hessian matrix.")

    result = []
    for i in range(n_parms):
@@ -568,9 +591,9 @@ def _standard_fit(x, y, func, silent=False, **kwargs):

    output.fit_parameters = result

-    output.chisquare = chisqfunc(fit_result.x)
+    output.chisquare = chisquare
    output.dof = x.shape[-1] - n_parms
-    output.p_value = 1 - chi2.cdf(output.chisquare, output.dof)
+    output.p_value = 1 - scipy.stats.chi2.cdf(output.chisquare, output.dof)

    if kwargs.get('resplot') is True:
        residual_plot(x, y, func, result)
@@ -61,7 +61,7 @@ def read_ADerrors(file_path, bdio_path='./libbdio.so', **kwargs):
    return_list = []

    print('Reading of bdio file started')
-    while 1 > 0:
+    while True:
        bdio_seek_record(fbdio)
        ruinfo = bdio_get_ruinfo(fbdio)
@@ -373,7 +373,7 @@ def read_mesons(file_path, bdio_path='./libbdio.so', **kwargs):
    fbdio = bdio_open(ctypes.c_char_p(b_path), ctypes.c_char_p(b_read), ctypes.c_char_p(b_form))

    print('Reading of bdio file started')
-    while 1 > 0:
+    while True:
        bdio_seek_record(fbdio)
        ruinfo = bdio_get_ruinfo(fbdio)
        if ruinfo < 0:
@@ -582,7 +582,7 @@ def read_dSdm(file_path, bdio_path='./libbdio.so', **kwargs):
    fbdio = bdio_open(ctypes.c_char_p(b_path), ctypes.c_char_p(b_read), ctypes.c_char_p(b_form))

    print('Reading of bdio file started')
-    while 1 > 0:
+    while True:
        bdio_seek_record(fbdio)
        ruinfo = bdio_get_ruinfo(fbdio)
        if ruinfo < 0:
@@ -648,7 +648,7 @@ def _dobsdict_to_xmlstring_spaces(d, space=' '):
    return o


-def create_dobs_string(obsl, name, spec='dobs v1.0', origin='', symbol=[], who=None, enstags={}):
+def create_dobs_string(obsl, name, spec='dobs v1.0', origin='', symbol=[], who=None, enstags=None):
    """Generate the string for the export of a list of Obs or structures containing Obs
    to a .xml.gz file according to the Zeuthen dobs format.

@@ -674,6 +674,8 @@ def create_dobs_string(obsl, name, spec='dobs v1.0', origin='', symbol=[], who=None, enstags=None):
        Provide alternative enstag for ensembles in the form enstags = {ename: enstag}
        Otherwise, the ensemble name is used.
    """
+    if enstags is None:
+        enstags = {}
    od = {}
    r_names = []
    for o in obsl:
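Switching the default from `enstags={}` to `enstags=None` (here and in `write_dobs`) avoids Python's shared-mutable-default pitfall: a default dict or list is created once at definition time and reused by every call. A standalone demonstration with hypothetical functions, not pyerrors code:

```python
def bad_append(item, bucket=[]):
    # The default list is created once and shared across all calls.
    bucket.append(item)
    return bucket


def good_append(item, bucket=None):
    # A fresh list is created on every call that omits `bucket`.
    if bucket is None:
        bucket = []
    bucket.append(item)
    return bucket


bad_append(1)
print(bad_append(2))   # prints [1, 2] -- the leftover 1 is still in the shared default
good_append(1)
print(good_append(2))  # prints [2] -- each call gets a clean list
```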
@@ -831,7 +833,7 @@ def create_dobs_string(obsl, name, spec='dobs v1.0', origin='', symbol=[], who=None, enstags=None):
    return rs


-def write_dobs(obsl, fname, name, spec='dobs v1.0', origin='', symbol=[], who=None, enstags={}, gz=True):
+def write_dobs(obsl, fname, name, spec='dobs v1.0', origin='', symbol=[], who=None, enstags=None, gz=True):
    """Export a list of Obs or structures containing Obs to a .xml.gz file
    according to the Zeuthen dobs format.

@@ -861,6 +863,8 @@ def write_dobs(obsl, fname, name, spec='dobs v1.0', origin='', symbol=[], who=None, enstags=None, gz=True):
    gz : bool
        If True, the output is a gzipped XML. If False, the output is a XML file.
    """
+    if enstags is None:
+        enstags = {}

    dobsstring = create_dobs_string(obsl, name, spec, origin, symbol, who, enstags=enstags)

@@ -55,8 +55,8 @@ def _get_files(path, filestem, idl):
    return filtered_files, idx


-def read_meson_hd5(path, filestem, ens_id, meson='meson_0', idl=None):
-    """Read hadrons meson hdf5 file and extract the meson labeled 'meson'
+def read_meson_hd5(path, filestem, ens_id, meson='meson_0', idl=None, gammas=None):
+    r'''Read hadrons meson hdf5 file and extract the meson labeled 'meson'

    Parameters
    -----------------
@@ -69,13 +69,31 @@ def read_meson_hd5(path, filestem, ens_id, meson='meson_0', idl=None):
    meson : str
        label of the meson to be extracted, standard value meson_0 which
        corresponds to the pseudoscalar pseudoscalar two-point function.
+    gammas : tuple of strings
+        Instead of a meson label one can also provide a tuple of two strings
+        indicating the gamma matrices at source and sink.
+        ("Gamma5", "Gamma5") corresponds to the pseudoscalar pseudoscalar
+        two-point function. The gammas argument dominates over meson.
    idl : range
        If specified only configurations in the given range are read in.
-    """
+    '''

    files, idx = _get_files(path, filestem, idl)

    tree = meson.rsplit('_')[0]
+    if gammas is not None:
+        h5file = h5py.File(path + '/' + files[0], "r")
+        found_meson = None
+        for key in h5file[tree].keys():
+            if gammas[0] == h5file[tree][key].attrs["gamma_snk"][0].decode() and h5file[tree][key].attrs["gamma_src"][0].decode() == gammas[1]:
+                found_meson = key
+                break
+        h5file.close()
+        if found_meson:
+            meson = found_meson
+        else:
+            raise Exception("Source Sink combination " + str(gammas) + " not found.")

    corr_data = []
    infos = []
    for hd5_file in files:
@@ -153,10 +171,6 @@ def read_DistillationContraction_hd5(path, ens_id, diagrams=["direct"], idl=None):
        identifier = tuple(identifier)
        # "DistillationContraction/Metadata/DmfSuffix" contains info about different quarks, irrelevant in the SU(3) case.
-
-        check_traj = h5file["DistillationContraction/Metadata"].attrs.get("Traj")[0]
-
-        assert check_traj == n_traj

        for diagram in diagrams:
            real_data = np.zeros(Nt)
            for x0 in range(Nt):
@@ -165,7 +165,7 @@ def create_json_string(ol, description='', indent=1):

    d = {}
    d['program'] = 'pyerrors %s' % (pyerrorsversion.__version__)
-    d['version'] = '1.0'
+    d['version'] = '1.1'
    d['who'] = getpass.getuser()
    d['date'] = datetime.datetime.now().astimezone().strftime('%Y-%m-%d %H:%M:%S %z')
    d['host'] = socket.gethostname() + ', ' + platform.platform()
@@ -294,6 +294,7 @@ def _parse_json_dict(json_dict, verbose=True, full_output=False):

        if od:
            ret = Obs([[ddi[0] + values[0] for ddi in di] for di in od['deltas']], od['names'], idl=od['idl'])
+            ret._value = values[0]
            ret.is_merged = od['is_merged']
        else:
            ret = Obs([], [], means=[])
@@ -319,6 +320,7 @@ def _parse_json_dict(json_dict, verbose=True, full_output=False):
        for i in range(layout):
            if od:
                ret.append(Obs([list(di[:, i] + values[i]) for di in od['deltas']], od['names'], idl=od['idl']))
+                ret[-1]._value = values[i]
                ret[-1].is_merged = od['is_merged']
            else:
                ret.append(Obs([], [], means=[]))
@@ -346,6 +348,7 @@ def _parse_json_dict(json_dict, verbose=True, full_output=False):
        for i in range(N):
            if od:
                ret.append(Obs([di[:, i] + values[i] for di in od['deltas']], od['names'], idl=od['idl']))
+                ret[-1]._value = values[i]
                ret[-1].is_merged = od['is_merged']
            else:
                ret.append(Obs([], [], means=[]))
@@ -17,8 +17,6 @@ def read_pbp(path, prefix, **kwargs):
        list which contains the last config to be read for each replicum
    """

-    extract_nfct = 1
-
    ls = []
    for (dirpath, dirnames, filenames) in os.walk(path):
        ls.extend(filenames)
@@ -78,14 +76,10 @@ def read_pbp(path, prefix, **kwargs):

            # This block is necessary for openQCD1.6 ms1 files
            nfct = []
-            if extract_nfct == 1:
-                for i in range(nrw):
-                    t = fp.read(4)
-                    nfct.append(struct.unpack('i', t)[0])
-                print('nfct: ', nfct)  # Hasenbusch factor, 1 for rat reweighting
-            else:
-                for i in range(nrw):
-                    nfct.append(1)
+            for i in range(nrw):
+                t = fp.read(4)
+                nfct.append(struct.unpack('i', t)[0])
+            print('nfct: ', nfct)  # Hasenbusch factor, 1 for rat reweighting

            nsrc = []
            for i in range(nrw):
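With the dead `extract_nfct` flag gone, the simplified block always reads `nrw` 4-byte integers from the binary file. A minimal sketch of the read loop, with an in-memory buffer standing in for the openQCD ms1 file:

```python
import io
import struct

# Three packed native ints standing in for the nfct header fields of an ms1 file
fp = io.BytesIO(struct.pack('3i', 2, 1, 1))
nrw = 3
nfct = []
for i in range(nrw):
    t = fp.read(4)
    nfct.append(struct.unpack('i', t)[0])
print(nfct)  # prints [2, 1, 1]
```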
@@ -93,7 +87,7 @@ def read_pbp(path, prefix, **kwargs):
                nsrc.append(struct.unpack('i', t)[0])

            # body
-            while 0 < 1:
+            while True:
                t = fp.read(4)
                if len(t) < 4:
                    break

@ -150,7 +150,7 @@ def read_rwms(path, prefix, version='2.0', names=None, **kwargs):
|
||||||
print('something is wrong!')
|
print('something is wrong!')
|
||||||
|
|
||||||
configlist.append([])
|
configlist.append([])
|
||||||
while 0 < 1:
|
while True:
|
||||||
t = fp.read(4)
|
t = fp.read(4)
|
||||||
if len(t) < 4:
|
if len(t) < 4:
|
||||||
break
|
break
|
||||||
@@ -362,7 +362,7 @@ def extract_t0(path, prefix, dtr_read, xmin, spatial_extent, fit_range=5, **kwargs):
            Ysl = []

            configlist.append([])
-            while 0 < 1:
+            while True:
                t = fp.read(4)
                if(len(t) < 4):
                    break
@@ -657,7 +657,7 @@ def read_qtop(path, prefix, c, dtr_cnfg=1, version="openQCD", **kwargs):
        nfl = 1
    iobs = 8 * nfl  # number of flow observables calculated

-    while 0 < 1:
+    while True:
        t = fp.read(4)
        if(len(t) < 4):
            break
@@ -682,7 +682,7 @@ def read_qtop(path, prefix, c, dtr_cnfg=1, version="openQCD", **kwargs):
        t = fp.read(8)
        eps = struct.unpack('d', t)[0]

-        while 0 < 1:
+        while True:
            t = fp.read(4)
            if(len(t) < 4):
                break
--- a/pyerrors/obs.py
+++ b/pyerrors/obs.py
@@ -1,5 +1,7 @@
import warnings
import pickle
+from math import gcd
+from functools import reduce
import numpy as np
import autograd.numpy as anp  # Thinly-wrapped numpy
from autograd import jacobian
@@ -981,6 +983,39 @@ def _merge_idx(idl):
    return sorted(set().union(*idl))


+def _intersection_idx(idl):
+    """Returns the intersection of all lists in idl as sorted list
+
+    Parameters
+    ----------
+    idl : list
+        List of lists or ranges.
+    """
+
+    def _lcm(*args):
+        """Returns the lowest common multiple of args.
+
+        From python 3.9 onwards the math library contains an lcm function."""
+        return reduce(lambda a, b: a * b // gcd(a, b), args)
+
+    # Use groupby to efficiently check whether all elements of idl are identical
+    try:
+        g = groupby(idl)
+        if next(g, True) and not next(g, False):
+            return idl[0]
+    except Exception:
+        pass
+
+    if np.all([type(idx) is range for idx in idl]):
+        if len(set([idx[0] for idx in idl])) == 1:
+            idstart = max([idx.start for idx in idl])
+            idstop = min([idx.stop for idx in idl])
+            idstep = _lcm(*[idx.step for idx in idl])
+            return range(idstart, idstop, idstep)
+
+    return sorted(set.intersection(*[set(o) for o in idl]))
+
+
def _expand_deltas_for_merge(deltas, idx, shape, new_idx):
    """Expand deltas defined on idx to the list of configs that is defined by new_idx.
    New, empty entries are filled by 0. If idx and new_idx are of type range, the smallest
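The range fast path of the new `_intersection_idx` rests on the fact that ranges with a common first element intersect in a range whose step is the lowest common multiple of their steps. A standalone sketch for two ranges (helper names illustrative, not pyerrors API):

```python
from functools import reduce
from math import gcd


def lcm(*args):
    # math.lcm exists from Python 3.9 on; this mirrors the _lcm helper in the hunk.
    return reduce(lambda a, b: a * b // gcd(a, b), args)


def intersection_ranges(r1, r2):
    """Fast intersection of two ranges that start at the same value:
    the result is again a range, stepping by the lcm of the two steps."""
    assert r1.start == r2.start  # precondition, matching the check in the hunk
    return range(r1.start, min(r1.stop, r2.stop), lcm(r1.step, r2.step))


r = intersection_ranges(range(1, 100, 2), range(1, 50, 3))
print(list(r)[:5])  # prints [1, 7, 13, 19, 25]
```

This keeps the result as a `range` object instead of materializing a list, which is what lets the committed code avoid the generic set intersection on the common case.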
|
@ -1008,6 +1043,34 @@ def _expand_deltas_for_merge(deltas, idx, shape, new_idx):
|
||||||
     return np.array([ret[new_idx[i] - new_idx[0]] for i in range(len(new_idx))])


+def _collapse_deltas_for_merge(deltas, idx, shape, new_idx):
+    """Collapse deltas defined on idx to the list of configs that is defined by new_idx.
+    If idx and new_idx are of type range, the smallest
+    common divisor of the step sizes is used as new step size.
+
+    Parameters
+    ----------
+    deltas : list
+        List of fluctuations
+    idx : list
+        List or range of configs on which the deltas are defined.
+        Has to be a subset of new_idx and has to be sorted in ascending order.
+    shape : list
+        Number of configs in idx.
+    new_idx : list
+        List of configs that defines the new range, has to be sorted in ascending order.
+    """
+
+    if type(idx) is range and type(new_idx) is range:
+        if idx == new_idx:
+            return deltas
+    ret = np.zeros(new_idx[-1] - new_idx[0] + 1)
+    for i in range(shape):
+        if idx[i] in new_idx:
+            ret[idx[i] - new_idx[0]] = deltas[i]
+    return np.array([ret[new_idx[i] - new_idx[0]] for i in range(len(new_idx))])
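To see what the new helper computes, here is a hedged standalone sketch with the same scatter/gather logic (renamed `collapse_deltas` to keep it clearly separate from the library function): it keeps only the fluctuations of those configs in `idx` that also appear in `new_idx`.

```python
import numpy as np


def collapse_deltas(deltas, idx, shape, new_idx):
    # Same algorithm as the diff above: scatter the deltas into a dense
    # buffer indexed by config number, then gather at the new_idx positions.
    if isinstance(idx, range) and isinstance(new_idx, range) and idx == new_idx:
        return deltas
    ret = np.zeros(new_idx[-1] - new_idx[0] + 1)
    for i in range(shape):
        if idx[i] in new_idx:
            ret[idx[i] - new_idx[0]] = deltas[i]
    return np.array([ret[n - new_idx[0]] for n in new_idx])


# deltas live on configs 1, 3, 5, 7; keep only the ones at configs 3 and 7
out = collapse_deltas(np.array([1.0, 2.0, 3.0, 4.0]), range(1, 9, 2), 4, [3, 7])
print(out)  # [2. 4.]
```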
 def _filter_zeroes(deltas, idx, eps=Obs.filter_eps):
     """Filter out all configurations with vanishing fluctuation such that they do not
     contribute to the error estimate anymore. Returns the new deltas and

@@ -1389,13 +1452,6 @@ def covariance(obs, visualize=False, correlation=False, smooth=None, **kwargs):
     if isinstance(smooth, int):
         corr = _smooth_eigenvalues(corr, smooth)

-    errors = [o.dvalue for o in obs]
-    cov = np.diag(errors) @ corr @ np.diag(errors)
-
-    eigenvalues = np.linalg.eigh(cov)[0]
-    if not np.all(eigenvalues >= 0):
-        warnings.warn("Covariance matrix is not positive semi-definite (Eigenvalues: " + str(eigenvalues) + ")", RuntimeWarning)
-
     if visualize:
         plt.matshow(corr, vmin=-1, vmax=1)
         plt.set_cmap('RdBu')

@@ -1404,8 +1460,15 @@ def covariance(obs, visualize=False, correlation=False, smooth=None, **kwargs):
     if correlation is True:
         return corr
-    else:
-        return cov
+
+    errors = [o.dvalue for o in obs]
+    cov = np.diag(errors) @ corr @ np.diag(errors)
+
+    eigenvalues = np.linalg.eigh(cov)[0]
+    if not np.all(eigenvalues >= 0):
+        warnings.warn("Covariance matrix is not positive semi-definite (Eigenvalues: " + str(eigenvalues) + ")", RuntimeWarning)
+
+    return cov


 def _smooth_eigenvalues(corr, E):
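The reordering above first applies optional smoothing to the correlation matrix and only then rescales it by the errors to obtain the covariance. A minimal numpy sketch of that final step (toy numbers, not the pyerrors API):

```python
import numpy as np

corr = np.array([[1.0, 0.5], [0.5, 1.0]])  # (possibly smoothed) correlation matrix
errors = np.array([0.1, 0.2])              # dvalues of the observables

# cov_ij = err_i * corr_ij * err_j
cov = np.diag(errors) @ corr @ np.diag(errors)

eigenvalues = np.linalg.eigh(cov)[0]
assert np.all(eigenvalues >= 0), "not positive semi-definite"
print(cov)
```

Rescaling a valid correlation matrix by positive errors preserves positive semi-definiteness, which is why the eigenvalue warning can be issued on the final `cov`.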
@@ -1429,8 +1492,8 @@ def _covariance_element(obs1, obs2):
     """Estimates the covariance of two Obs objects, neglecting autocorrelations."""

     def calc_gamma(deltas1, deltas2, idx1, idx2, new_idx):
-        deltas1 = _expand_deltas_for_merge(deltas1, idx1, len(idx1), new_idx)
-        deltas2 = _expand_deltas_for_merge(deltas2, idx2, len(idx2), new_idx)
+        deltas1 = _collapse_deltas_for_merge(deltas1, idx1, len(idx1), new_idx)
+        deltas2 = _collapse_deltas_for_merge(deltas2, idx2, len(idx2), new_idx)
         return np.sum(deltas1 * deltas2)

     if set(obs1.names).isdisjoint(set(obs2.names)):

@@ -1450,29 +1513,30 @@ def _covariance_element(obs1, obs2):
         for r_name in obs1.e_content[e_name]:
             if r_name not in obs2.e_content[e_name]:
                 continue
-            idl_d[r_name] = _merge_idx([obs1.idl[r_name], obs2.idl[r_name]])
+            idl_d[r_name] = _intersection_idx([obs1.idl[r_name], obs2.idl[r_name]])

         gamma = 0.0

         for r_name in obs1.e_content[e_name]:
             if r_name not in obs2.e_content[e_name]:
                 continue
+            if len(idl_d[r_name]) == 0:
+                continue
             gamma += calc_gamma(obs1.deltas[r_name], obs2.deltas[r_name], obs1.idl[r_name], obs2.idl[r_name], idl_d[r_name])

         if gamma == 0.0:
             continue

         gamma_div = 0.0
-        e_N = 0
         for r_name in obs1.e_content[e_name]:
             if r_name not in obs2.e_content[e_name]:
                 continue
-            gamma_div += calc_gamma(np.ones(obs1.shape[r_name]), np.ones(obs2.shape[r_name]), obs1.idl[r_name], obs2.idl[r_name], idl_d[r_name])
-            e_N += len(idl_d[r_name])
-        gamma /= max(gamma_div, 1.0)
+            if len(idl_d[r_name]) == 0:
+                continue
+            gamma_div += np.sqrt(calc_gamma(obs1.deltas[r_name], obs1.deltas[r_name], obs1.idl[r_name], obs1.idl[r_name], idl_d[r_name]) * calc_gamma(obs2.deltas[r_name], obs2.deltas[r_name], obs2.idl[r_name], obs2.idl[r_name], idl_d[r_name]))
+        gamma /= gamma_div

-        # Bias correction hep-lat/0306017 eq. (49)
-        dvalue += (1 + 1 / e_N) * gamma / e_N
+        dvalue += gamma

     for e_name in obs1.cov_names:
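The key change in `_covariance_element` is the normalization: instead of dividing the mixed Gamma by a count-based factor with a bias correction, it is now divided by the geometric mean of the two auto-Gammas evaluated on the same intersection of configs, so the resulting off-diagonal element is bounded like a correlation by the Cauchy-Schwarz inequality. A toy illustration of that bound with synthetic fluctuations (not the library API):

```python
import numpy as np

rng = np.random.default_rng(1)
d1 = rng.normal(0.0, 1.0, 200)
d2 = 0.7 * d1 + rng.normal(0.0, 0.5, 200)
# Center the fluctuations, as deltas are by construction.
d1 -= d1.mean()
d2 -= d2.mean()

gamma = np.sum(d1 * d2)
gamma_div = np.sqrt(np.sum(d1 * d1) * np.sum(d2 * d2))
rho = gamma / gamma_div  # a correlation-like number in [-1, 1]
print(rho)
```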
@@ -1 +1 @@
-__version__ = "2.1.0+dev"
+__version__ = "2.2.0+dev"

28	setup.py
@@ -1,16 +1,40 @@
 from setuptools import setup, find_packages
 from pathlib import Path
+from distutils.util import convert_path

 this_directory = Path(__file__).parent
 long_description = (this_directory / "README.md").read_text()

+version = {}
+with open(convert_path('pyerrors/version.py')) as ver_file:
+    exec(ver_file.read(), version)

 setup(name='pyerrors',
-      version='2.1.0+dev',
+      version=version['__version__'],
       description='Error analysis for lattice QCD',
       long_description=long_description,
       long_description_content_type='text/markdown',
+      url="https://github.com/fjosw/pyerrors",
+      project_urls={
+          'Documentation': 'https://fjosw.github.io/pyerrors/pyerrors.html',
+          'Bug Tracker': 'https://github.com/fjosw/pyerrors/issues',
+          'Changelog': 'https://github.com/fjosw/pyerrors/blob/master/CHANGELOG.md'
+      },
       author='Fabian Joswig',
       author_email='fabian.joswig@ed.ac.uk',
+      license="MIT",
       packages=find_packages(),
       python_requires='>=3.6.0',
-      install_requires=['numpy>=1.16', 'autograd>=1.4', 'numdifftools', 'matplotlib>=3.3', 'scipy', 'iminuit>=2', 'h5py', 'lxml', 'python-rapidjson']
+      install_requires=['numpy>=1.16', 'autograd>=1.4', 'numdifftools', 'matplotlib>=3.3', 'scipy>=1', 'iminuit>=2', 'h5py>=3', 'lxml>=4', 'python-rapidjson>=1'],
+      classifiers=[
+          'Development Status :: 5 - Production/Stable',
+          'Intended Audience :: Science/Research',
+          'License :: OSI Approved :: MIT License',
+          'Programming Language :: Python :: 3',
+          'Programming Language :: Python :: 3.7',
+          'Programming Language :: Python :: 3.8',
+          'Programming Language :: Python :: 3.9',
+          'Programming Language :: Python :: 3.10',
+          'Topic :: Scientific/Engineering :: Physics'
+      ],
       )
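The setup.py change single-sources the version: `pyerrors/version.py` stays the only place where the number lives, and setup.py executes that file into a dict instead of repeating the string. A minimal sketch of the pattern, with the file contents inlined as a string stand-in for `ver_file.read()`:

```python
# Stand-in for the contents of pyerrors/version.py
version_file_contents = '__version__ = "2.2.0+dev"\n'

version = {}
exec(version_file_contents, version)  # mirrors exec(ver_file.read(), version)
print(version['__version__'])  # 2.2.0+dev
```

This avoids importing the package at build time, which would fail before its dependencies are installed.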
|
@@ -1,6 +1,7 @@
 import os
 import numpy as np
 import scipy
+import matplotlib.pyplot as plt
 import pyerrors as pe
 import pytest

@@ -246,6 +247,21 @@ def test_matrix_corr():
     corr_mat.Eigenvalue(2, state=0)


+def test_corr_none_entries():
+    a = pe.pseudo_Obs(1.0, 0.1, 'a')
+    l = np.asarray([[a, a], [a, a]])
+    n = np.asarray([[None, None], [None, None]])
+    x = [l, n]
+    matr = pe.Corr(x)
+    matr.projected(np.asarray([1.0, 0.0]))
+
+    matr * 2 - 2 * matr
+    matr * matr + matr ** 2 / matr
+
+    for func in [np.sqrt, np.log, np.exp, np.sin, np.cos, np.tan, np.sinh, np.cosh, np.tanh]:
+        func(matr)
+
+
 def test_GEVP_warnings():
     corr_aa = _gen_corr(1)
     corr_ab = 0.5 * corr_aa
@@ -332,6 +348,15 @@ def test_matrix_symmetric():

     assert np.all([np.all(o == o.T) for o in sym_corr_mat])

+    t_obs = pe.pseudo_Obs(1.0, 0.1, 'test')
+    o_mat = np.array([[t_obs, t_obs], [t_obs, t_obs]])
+    corr1 = pe.Corr([o_mat, None, o_mat])
+    corr2 = pe.Corr([o_mat, np.array([[None, None], [None, None]]), o_mat])
+    corr3 = pe.Corr([o_mat, np.array([[t_obs, None], [None, t_obs]], dtype=object), o_mat])
+    corr1.matrix_symmetric()
+    corr2.matrix_symmetric()
+    corr3.matrix_symmetric()


 def test_GEVP_solver():
@@ -347,6 +372,17 @@ def test_GEVP_solver():
     assert np.allclose(sp_vecs, pe.correlators._GEVP_solver(mat1, mat2), atol=1e-14)


+def test_GEVP_none_entries():
+    t_obs = pe.pseudo_Obs(1.0, 0.1, 'test')
+    t_obs2 = pe.pseudo_Obs(0.1, 0.1, 'test')
+
+    o_mat = np.array([[t_obs, t_obs2], [t_obs2, t_obs2]])
+    n_arr = np.array([[None, None], [None, None]])
+
+    corr = pe.Corr([o_mat, o_mat, o_mat, o_mat, o_mat, o_mat, None, o_mat, n_arr, None, o_mat])
+    corr.GEVP(t0=2)
+
+
 def test_hankel():
     corr_content = []
     for t in range(8):
@@ -405,6 +441,7 @@ def test_spaghetti_plot():

     corr.spaghetti_plot(True)
     corr.spaghetti_plot(False)
+    plt.close('all')


 def _gen_corr(val, samples=2000):
@@ -149,8 +149,10 @@ def test_correlated_fit():
         return p[1] * anp.exp(-p[0] * x)

     fitp = pe.least_squares(x, data, fitf, expected_chisquare=True)
+    assert np.isclose(fitp.chisquare / fitp.dof, fitp.chisquare_by_dof, atol=1e-14)

     fitpc = pe.least_squares(x, data, fitf, correlated_fit=True)
+    assert np.isclose(fitpc.chisquare / fitpc.dof, fitpc.chisquare_by_dof, atol=1e-14)
     for i in range(2):
         diff = fitp[i] - fitpc[i]
         diff.gamma_method()
@@ -171,12 +173,35 @@ def test_fit_corr_independent():
         y = a[0] * anp.exp(-a[1] * x)
         return y

-    out = pe.least_squares(x, oy, func)
-    out_corr = pe.least_squares(x, oy, func, correlated_fit=True)
+    for method in ["Levenberg-Marquardt", "migrad", "Nelder-Mead"]:
+        out = pe.least_squares(x, oy, func, method=method)
+        out_corr = pe.least_squares(x, oy, func, correlated_fit=True, method=method)

-    assert np.isclose(out.chisquare, out_corr.chisquare)
-    assert (out[0] - out_corr[0]).is_zero(atol=1e-5)
-    assert (out[1] - out_corr[1]).is_zero(atol=1e-5)
+        assert np.isclose(out.chisquare, out_corr.chisquare)
+        assert np.isclose(out.dof, out_corr.dof)
+        assert np.isclose(out.chisquare_by_dof, out_corr.chisquare_by_dof)
+        assert (out[0] - out_corr[0]).is_zero(atol=1e-5)
+        assert (out[1] - out_corr[1]).is_zero(atol=1e-5)
+
+
+def test_linear_fit_guesses():
+    for err in [10, 0.1, 0.001]:
+        xvals = []
+        yvals = []
+        for x in range(1, 8, 2):
+            xvals.append(x)
+            yvals.append(pe.pseudo_Obs(x + np.random.normal(0.0, err), err, 'test1') + pe.pseudo_Obs(0, err / 100, 'test2', samples=87))
+        lin_func = lambda a, x: a[0] + a[1] * x
+        with pytest.raises(Exception):
+            pe.least_squares(xvals, yvals, lin_func)
+        [o.gamma_method() for o in yvals]
+        with pytest.raises(Exception):
+            pe.least_squares(xvals, yvals, lin_func, initial_guess=[5])
+
+        bad_guess = pe.least_squares(xvals, yvals, lin_func, initial_guess=[999, 999])
+        good_guess = pe.least_squares(xvals, yvals, lin_func, initial_guess=[0, 1])
+        assert np.isclose(bad_guess.chisquare, good_guess.chisquare, atol=1e-8)
+        assert np.all([(go - ba).is_zero(atol=1e-6) for (go, ba) in zip(good_guess, bad_guess)])


 def test_total_least_squares():
@@ -218,7 +243,7 @@ def test_total_least_squares():
         beta[i].gamma_method(S=1.0)
         assert math.isclose(beta[i].value, output.beta[i], rel_tol=1e-5)
         assert math.isclose(output.cov_beta[i, i], beta[i].dvalue ** 2, rel_tol=2.5e-1), str(output.cov_beta[i, i]) + ' ' + str(beta[i].dvalue ** 2)
-    assert math.isclose(pe.covariance([beta[0], beta[1]])[0, 1], output.cov_beta[0, 1], rel_tol=2.5e-1)
+    assert math.isclose(pe.covariance([beta[0], beta[1]])[0, 1], output.cov_beta[0, 1], rel_tol=3.5e-1)

     out = pe.total_least_squares(ox, oy, func, const_par=[beta[1]])
@@ -241,7 +266,7 @@ def test_total_least_squares():
         betac[i].gamma_method(S=1.0)
         assert math.isclose(betac[i].value, output.beta[i], rel_tol=1e-3)
         assert math.isclose(output.cov_beta[i, i], betac[i].dvalue ** 2, rel_tol=2.5e-1), str(output.cov_beta[i, i]) + ' ' + str(betac[i].dvalue ** 2)
-    assert math.isclose(pe.covariance([betac[0], betac[1]])[0, 1], output.cov_beta[0, 1], rel_tol=2.5e-1)
+    assert math.isclose(pe.covariance([betac[0], betac[1]])[0, 1], output.cov_beta[0, 1], rel_tol=3.5e-1)

     outc = pe.total_least_squares(oxc, oyc, func, const_par=[betac[1]])
@@ -256,7 +281,7 @@ def test_total_least_squares():
         betac[i].gamma_method(S=1.0)
         assert math.isclose(betac[i].value, output.beta[i], rel_tol=1e-3)
         assert math.isclose(output.cov_beta[i, i], betac[i].dvalue ** 2, rel_tol=2.5e-1), str(output.cov_beta[i, i]) + ' ' + str(betac[i].dvalue ** 2)
-    assert math.isclose(pe.covariance([betac[0], betac[1]])[0, 1], output.cov_beta[0, 1], rel_tol=2.5e-1)
+    assert math.isclose(pe.covariance([betac[0], betac[1]])[0, 1], output.cov_beta[0, 1], rel_tol=3.5e-1)

     outc = pe.total_least_squares(oxc, oy, func, const_par=[betac[1]])
@@ -376,6 +401,80 @@ def test_error_band():
     pe.fits.error_band(x, f, fitp.fit_parameters)


+def test_fit_vs_jackknife():
+    od = 0.9999999999
+    cov1 = np.array([[1, od, od], [od, 1.0, od], [od, od, 1.0]])
+    cov1 *= 0.05
+    nod = -0.4
+    cov2 = np.array([[1, nod, nod], [nod, 1.0, nod], [nod, nod, 1.0]])
+    cov2 *= 0.05
+    cov3 = np.identity(3)
+    cov3 *= 0.05
+    samples = 500
+
+    for i, cov in enumerate([cov1, cov2, cov3]):
+        dat = pe.misc.gen_correlated_data(np.arange(1, 4), cov, 'test', 0.5, samples=samples)
+        [o.gamma_method(S=0) for o in dat]
+        func = lambda a, x: a[0] + a[1] * x
+        fr = pe.least_squares(np.arange(1, 4), dat, func)
+        fr.gamma_method(S=0)
+
+        jd = np.array([o.export_jackknife() for o in dat]).T
+        jfr = []
+        for jacks in jd:
+
+            def chisqfunc_residuals(p):
+                model = func(p, np.arange(1, 4))
+                chisq = ((jacks - model) / [o.dvalue for o in dat])
+                return chisq
+
+            tf = scipy.optimize.least_squares(chisqfunc_residuals, [0.0, 0.0], method='lm', ftol=1e-15, gtol=1e-15, xtol=1e-15)
+            jfr.append(tf.x)
+        ajfr = np.array(jfr).T
+        err = np.array([np.sqrt(np.var(ajfr[j][1:], ddof=0) * (samples - 1)) for j in range(2)])
+        assert np.allclose(err, [o.dvalue for o in fr], atol=1e-8)
+
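The error formula used in this jackknife cross-check, `sqrt(var(replicas, ddof=0) * (N - 1))`, is the standard jackknife variance estimate; for a plain mean it reproduces the usual standard error exactly. A self-contained check of that identity on synthetic data, independent of pyerrors:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(1.0, 0.5, 500)
N = len(data)

# Jackknife replicas: the sample mean with one observation left out each time.
jacks = (data.sum() - data) / (N - 1)
err = np.sqrt(np.var(jacks, ddof=0) * (N - 1))

# Standard error of the mean for comparison.
ref = data.std(ddof=1) / np.sqrt(N)
print(err, ref)
```

Algebraically each replica deviates from the mean by `-(x_j - x̄)/(N - 1)`, so the jackknife formula collapses to `s / sqrt(N)`.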
+def test_correlated_fit_vs_jackknife():
+    od = 0.999999
+    cov1 = np.array([[1, od, od], [od, 1.0, od], [od, od, 1.0]])
+    cov1 *= 0.1
+    nod = -0.44
+    cov2 = np.array([[1, nod, nod], [nod, 1.0, nod], [nod, nod, 1.0]])
+    cov2 *= 0.1
+    cov3 = np.identity(3)
+    cov3 *= 0.01
+
+    samples = 250
+    x_val = np.arange(1, 6, 2)
+    for i, cov in enumerate([cov1, cov2, cov3]):
+        dat = pe.misc.gen_correlated_data(x_val + x_val ** 2 + np.random.normal(0.0, 0.1, 3), cov, 'test', 0.5, samples=samples)
+        [o.gamma_method(S=0) for o in dat]
+        func = lambda a, x: a[0] * x + a[1] * x ** 2
+        fr = pe.least_squares(x_val, dat, func, correlated_fit=True, silent=True)
+        [o.gamma_method(S=0) for o in fr]
+
+        cov = pe.covariance(dat)
+        chol = np.linalg.cholesky(cov)
+        chol_inv = np.linalg.inv(chol)
+
+        jd = np.array([o.export_jackknife() for o in dat]).T
+        jfr = []
+        for jacks in jd:
+
+            def chisqfunc_residuals(p):
+                model = func(p, x_val)
+                chisq = np.dot(chol_inv, (jacks - model))
+                return chisq
+
+            tf = scipy.optimize.least_squares(chisqfunc_residuals, [0.0, 0.0], method='lm', ftol=1e-15, gtol=1e-15, xtol=1e-15)
+            jfr.append(tf.x)
+        ajfr = np.array(jfr).T
+        err = np.array([np.sqrt(np.var(ajfr[j][1:], ddof=0) * (samples - 1)) for j in range(2)])
+        assert np.allclose(err, [o.dvalue for o in fr], atol=1e-7)
+        assert np.allclose(ajfr.T[0], [o.value for o in fr], atol=1e-8)
+
+
 def test_fit_no_autograd():
     dim = 10
     x = np.arange(dim)
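The correlated residuals in the test above are whitened with the inverse Cholesky factor of the data covariance, so the plain sum of squared residuals the least-squares solver minimizes equals the correlated chi-square. A minimal sketch of the equivalence:

```python
import numpy as np

C = np.array([[0.04, 0.01], [0.01, 0.09]])  # covariance of the data
r = np.array([0.1, -0.2])                   # residuals data - model

chol = np.linalg.cholesky(C)
white = np.linalg.solve(chol, r)  # equivalent to chol_inv @ r
chisq = float(white @ white)

# Same number as the explicit quadratic form r^T C^{-1} r
ref = float(r @ np.linalg.inv(C) @ r)
print(chisq, ref)
```

Since `C = L L^T`, the quadratic form factors as `|L^{-1} r|^2`, which is why uncorrelated-style fitting of the whitened residuals is valid.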
@@ -354,3 +354,55 @@ def test_dobsio():
         if isinstance(ol[i], pe.Obs):
             for name in ol[i].r_values:
                 assert(np.isclose(ol[i].r_values[name], rl[i].r_values[name]))
+
+
+def test_reconstruct_non_linear_r_obs(tmp_path):
+    to = pe.Obs([np.random.rand(500), np.random.rand(500), np.random.rand(111)],
+                ["e|r1", "e|r2", "my_new_ensemble_54^£$|8'[@124435%6^7&()~#"],
+                idl=[range(1, 501), range(0, 500), range(1, 999, 9)])
+    to = np.log(to ** 2) / to
+    to.dump((tmp_path / "test_equality").as_posix())
+    ro = pe.input.json.load_json((tmp_path / "test_equality").as_posix())
+    assert assert_equal_Obs(to, ro)
+
+
+def test_reconstruct_non_linear_r_obs_list(tmp_path):
+    to = pe.Obs([np.random.rand(500), np.random.rand(500), np.random.rand(111)],
+                ["e|r1", "e|r2", "my_new_ensemble_54^£$|8'[@124435%6^7&()~#"],
+                idl=[range(1, 501), range(0, 500), range(1, 999, 9)])
+    to = np.log(to ** 2) / to
+    for to_list in [[to, to, to], np.array([to, to, to])]:
+        pe.input.json.dump_to_json(to_list, (tmp_path / "test_equality_list").as_posix())
+        ro_list = pe.input.json.load_json((tmp_path / "test_equality_list").as_posix())
+        for oa, ob in zip(to_list, ro_list):
+            assert assert_equal_Obs(oa, ob)
+
+
+def assert_equal_Obs(to, ro):
+    for kw in ["N", "cov_names", "covobs", "ddvalue", "dvalue", "e_content",
+               "e_names", "idl", "mc_names", "names",
+               "reweighted", "shape", "tag"]:
+        if not getattr(to, kw) == getattr(ro, kw):
+            print(kw, "does not match.")
+            return False
+
+    for kw in ["value"]:
+        if not np.isclose(getattr(to, kw), getattr(ro, kw), atol=1e-14):
+            print(kw, "does not match.")
+            return False
+
+    for kw in ["r_values", "deltas"]:
+        for (k, v), (k2, v2) in zip(getattr(to, kw).items(), getattr(ro, kw).items()):
+            assert k == k2
+            if not np.allclose(v, v2, atol=1e-14):
+                print(kw, "does not match.")
+                return False
+
+    m_to = getattr(to, "is_merged")
+    m_ro = getattr(ro, "is_merged")
+    if not m_to == m_ro:
+        if not (all(value is False for value in m_ro.values()) and all(value is False for value in m_to.values())):
+            print("is_merged", "does not match.")
+            return False
+    return True
@@ -1,6 +1,7 @@
 import autograd.numpy as np
 import os
 import copy
+import matplotlib.pyplot as plt
 import pyerrors as pe
 import pytest

@@ -56,6 +57,7 @@ def test_Obs_exceptions():
     one.gamma_method()
     with pytest.raises(Exception):
         one.plot_piechart()
+    plt.close('all')

 def test_dump():
     value = np.random.normal(5, 10)
@@ -368,6 +370,7 @@ def test_utils():
     assert my_obs < (my_obs + 1)
     float(my_obs)
     str(my_obs)
+    plt.close('all')


 def test_cobs():
@@ -515,6 +518,35 @@ def test_merge_idx():
     assert pe.obs._merge_idx([range(500, 6050, 50), range(500, 6250, 250)]) == range(500, 6250, 50)


+def test_intersection_idx():
+    assert pe.obs._intersection_idx([range(1, 100), range(1, 100), range(1, 100)]) == range(1, 100)
+    assert pe.obs._intersection_idx([range(1, 100, 10), range(1, 100, 2)]) == range(1, 100, 10)
+    assert pe.obs._intersection_idx([range(10, 1010, 10), range(10, 1010, 50)]) == range(10, 1010, 50)
+    assert pe.obs._intersection_idx([range(500, 6050, 50), range(500, 6250, 250)]) == range(500, 6050, 250)
+
+    for ids in [[list(range(1, 80, 3)), list(range(1, 100, 2))], [range(1, 80, 3), range(1, 100, 2), range(1, 100, 7)]]:
+        assert list(pe.obs._intersection_idx(ids)) == pe.obs._intersection_idx([list(o) for o in ids])
+
+
+def test_merge_intersection():
+    for idl_list in [[range(1, 100), range(1, 100), range(1, 100)],
+                     [range(4, 80, 6), range(4, 80, 6)],
+                     [[0, 2, 8, 19, 205], [0, 2, 8, 19, 205]]]:
+        assert pe.obs._merge_idx(idl_list) == pe.obs._intersection_idx(idl_list)
+
+
+def test_intersection_collapse():
+    range1 = range(1, 2000, 2)
+    range2 = range(2, 2001, 8)
+
+    obs1 = pe.Obs([np.random.normal(1.0, 0.1, len(range1))], ["ens"], idl=[range1])
+    obs_merge = obs1 + pe.Obs([np.random.normal(1.0, 0.1, len(range2))], ["ens"], idl=[range2])
+
+    intersection = pe.obs._intersection_idx([o.idl["ens"] for o in [obs1, obs_merge]])
+    coll = pe.obs._collapse_deltas_for_merge(obs_merge.deltas["ens"], obs_merge.idl["ens"], len(obs_merge.idl["ens"]), range1)
+
+    assert np.all(coll == obs1.deltas["ens"])
+
+
 def test_irregular_error_propagation():
     obs_list = [pe.Obs([np.random.rand(100)], ['t']),
                 pe.Obs([np.random.rand(50)], ['t'], idl=[range(1, 100, 2)]),
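The range shortcut that `test_intersection_idx` exercises has a simple arithmetic core: two arithmetic progressions with the same start intersect in a progression whose step is the least common multiple of the two steps. A hedged standalone version of just that case (the library handles the general case by falling back to a set intersection):

```python
import math


def intersect_ranges(r1, r2):
    # Assumes r1.start == r2.start, as the range shortcut above does.
    assert r1.start == r2.start
    step = math.lcm(r1.step, r2.step)
    return range(r1.start, min(r1.stop, r2.stop), step)


print(intersect_ranges(range(1, 100, 10), range(1, 100, 2)))          # range(1, 100, 10)
print(intersect_ranges(range(500, 6050, 50), range(500, 6250, 250)))  # range(500, 6050, 250)
```

Both printed results match the assertions in `test_intersection_idx` above. `math.lcm` requires Python 3.9+.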
@@ -619,6 +651,26 @@ def test_covariance_is_variance():
     assert np.isclose(test_obs.dvalue ** 2, pe.covariance([test_obs, test_obs])[0, 1])


+def test_covariance_vs_numpy():
+    N = 1078
+    data1 = np.random.normal(2.5, 0.2, N)
+    data2 = np.random.normal(0.5, 0.08, N)
+    data3 = np.random.normal(-178, 5, N)
+    uncorr = np.row_stack([data1, data2, data3])
+    corr = np.random.multivariate_normal([0.0, 17, -0.0487], [[1.0, 0.6, -0.22], [0.6, 0.8, 0.01], [-0.22, 0.01, 1.9]], N).T
+
+    for X in [uncorr, corr]:
+        obs1 = pe.Obs([X[0]], ["ens1"])
+        obs2 = pe.Obs([X[1]], ["ens1"])
+        obs3 = pe.Obs([X[2]], ["ens1"])
+        obs1.gamma_method(S=0.0)
+        obs2.gamma_method(S=0.0)
+        obs3.gamma_method(S=0.0)
+        pe_cov = pe.covariance([obs1, obs2, obs3])
+        np_cov = np.cov(X) / N
+        assert np.allclose(pe_cov, np_cov, atol=1e-14)
+
+
 def test_covariance_symmetry():
     value1 = np.random.normal(5, 10)
     dvalue1 = np.abs(np.random.normal(0, 1))
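The comparison in `test_covariance_vs_numpy` pins down the normalization convention: `pe.covariance` estimates the covariance of the sample means, which relates to the per-sample covariance from `np.cov` by an extra factor of `1 / N`. A standalone sketch of just that relation:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 1000
X = rng.normal(0.0, 1.0, (3, N))  # three observables, N samples each

# np.cov estimates the covariance of single samples (ddof=1 by default);
# the covariance of the sample means carries an extra factor 1 / N.
cov_of_means = np.cov(X) / N

print(cov_of_means.shape)  # (3, 3)
```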
@@ -687,6 +739,23 @@ def test_covariance_factorizing():
     assert np.isclose(pe.covariance([mt0, tt[1]])[0, 1], -pe.covariance(tt)[0, 1])


+def test_covariance_smooth_eigenvalues():
+    for c_coeff in range(0, 14, 2):
+        length = 14
+        sm = 5
+        t_fac = 1.5
+        tt = pe.misc.gen_correlated_data(np.zeros(length), 1 - 0.1 ** c_coeff * np.ones((length, length)) + 0.1 ** c_coeff * np.diag(np.ones(length)), 'test', tau=0.5 + t_fac * np.random.rand(length), samples=200)
+        [o.gamma_method() for o in tt]
+        full_corr = pe.covariance(tt, correlation=True)
+        cov = pe.covariance(tt, smooth=sm, correlation=True)
+
+        full_evals = np.linalg.eigh(full_corr)[0]
+        sm_length = np.where(full_evals < np.mean(full_evals[:-sm]))[0][-1]
+
+        evals = np.linalg.eigh(cov)[0]
+        assert np.all(np.isclose(evals[:sm_length], evals[0], atol=1e-8))
+
+
 def test_covariance_alternation():
     length = 12
     t_fac = 2.5
@@ -729,6 +798,86 @@ def test_covariance_idl():
     pe.covariance([obs1, obs2])


+def test_correlation_intersection_of_idls():
+    range1 = range(1, 2000, 2)
+    range2 = range(2, 2001, 2)
+
+    obs1 = pe.Obs([np.random.normal(1.0, 0.1, len(range1))], ["ens"], idl=[range1])
+    obs2_a = 0.4 * pe.Obs([np.random.normal(1.0, 0.1, len(range1))], ["ens"], idl=[range1]) + 0.6 * obs1
+    obs1.gamma_method()
+    obs2_a.gamma_method()
+
+    cov1 = pe.covariance([obs1, obs2_a])
+    corr1 = pe.covariance([obs1, obs2_a], correlation=True)
+
+    obs2_b = obs2_a + pe.Obs([np.random.normal(1.0, 0.1, len(range2))], ["ens"], idl=[range2])
+    obs2_b.gamma_method()
+
+    cov2 = pe.covariance([obs1, obs2_b])
+    corr2 = pe.covariance([obs1, obs2_b], correlation=True)
+
+    assert np.isclose(corr1[0, 1], corr2[0, 1], atol=1e-14)
+    assert cov1[0, 1] > cov2[0, 1]
+
+    obs2_c = pe.Obs([np.random.normal(1.0, 0.1, len(range2))], ["ens"], idl=[range2])
+    obs2_c.gamma_method()
+    assert np.isclose(0, pe.covariance([obs1, obs2_c])[0, 1], atol=1e-14)
+
+
+def test_covariance_non_identical_objects():
+    obs1 = pe.Obs([np.random.normal(1.0, 0.1, 1000), np.random.normal(1.0, 0.1, 1000), np.random.normal(1.0, 0.1, 732)], ["ens|r1", "ens|r2", "ens2"])
+    obs1.gamma_method()
+    obs2 = obs1 + 1e-18
+    obs2.gamma_method()
+    assert obs1 == obs2
+    assert obs1 is not obs2
+    assert np.allclose(np.ones((2, 2)), pe.covariance([obs1, obs2], correlation=True), atol=1e-14)
+
+
+def test_covariance_additional_non_overlapping_data():
+    range1 = range(1, 20, 2)
+
+    data2 = np.random.normal(0.0, 0.1, len(range1))
+
+    obs1 = pe.Obs([np.random.normal(1.0, 0.1, len(range1))], ["ens"], idl=[range1])
+    obs2_a = pe.Obs([data2], ["ens"], idl=[range1])
+    obs1.gamma_method()
+    obs2_a.gamma_method()
+
+    corr1 = pe.covariance([obs1, obs2_a], correlation=True)
+
+    added_data = np.random.normal(0.0, 0.1, len(range1))
+    added_data -= np.mean(added_data) - np.mean(data2)
+    data2_extended = np.ravel([data2, added_data], 'F')
+
+    obs2_b = pe.Obs([data2_extended], ["ens"])
+    obs2_b.gamma_method()
+
+    corr2 = pe.covariance([obs1, obs2_b], correlation=True)
+
+    assert np.isclose(corr1[0, 1], corr2[0, 1], atol=1e-14)
+
+
+def test_coavariance_reorder_non_overlapping_data():
+    range1 = range(1, 20, 2)
+    range2 = range(1, 41, 2)
+
+    obs1 = pe.Obs([np.random.normal(1.0, 0.1, len(range1))], ["ens"], idl=[range1])
+    obs2_b = pe.Obs([np.random.normal(1.0, 0.1, len(range2))], ["ens"], idl=[range2])
+    obs1.gamma_method()
+    obs2_b.gamma_method()
+
+    corr1 = pe.covariance([obs1, obs2_b], correlation=True)
+
+    deltas = list(obs2_b.deltas['ens'][:len(range1)]) + sorted(obs2_b.deltas['ens'][len(range1):])
+    obs2_a = pe.Obs([obs2_b.value + np.array(deltas)], ["ens"], idl=[range2])
+    obs2_a.gamma_method()
+
+    corr2 = pe.covariance([obs1, obs2_a], correlation=True)
+
+    assert np.isclose(corr1[0, 1], corr2[0, 1], atol=1e-14)
+
+
 def test_empty_obs():
     o = pe.Obs([np.random.rand(100)], ['test'])
     q = o + pe.Obs([], [], means=[])