Mirror of https://github.com/fjosw/pyerrors.git, synced 2025-03-15 14:50:25 +01:00

Merge branch 'develop' into feature/boot_matmul

This commit is contained in commit b187b2ec7c.
47 changed files with 3552 additions and 2605 deletions.
.github/workflows/flake8.yml (vendored), 2 changes:

@@ -21,6 +21,6 @@ jobs:
       - name: flake8 Lint
         uses: py-actions/flake8@v1
         with:
-          ignore: "E501,E722"
+          ignore: "E501"
           exclude: "__init__.py, input/__init__.py"
           path: "pyerrors"
.github/workflows/pytest.yml (vendored), 13 changes:

@@ -6,14 +6,22 @@ on:
       - master
       - develop
   pull_request:
   schedule:
     - cron: '0 4 1 * *'

 jobs:
   pytest:
-    runs-on: ubuntu-latest
+    runs-on: ${{ matrix.os }}
     strategy:
       fail-fast: true
       matrix:
-        python-version: ["3.6", "3.7", "3.8", "3.9"]
+        os: [ubuntu-latest]
+        python-version: ["3.7", "3.8", "3.9", "3.10"]
+        include:
+          - os: macos-latest
+            python-version: 3.9
+          - os: windows-latest
+            python-version: 3.9

     steps:
       - name: Checkout source

@@ -27,6 +35,7 @@ jobs:
       - name: Install
         run: |
           python -m pip install --upgrade pip
+          pip install wheel
           pip install .
           pip install pytest
           pip install pytest-cov
.gitignore (vendored), 1 change:

@@ -8,3 +8,4 @@ examples/B1k2_pcac_plateau.p
 examples/Untitled.*
 core.*
 *.swp
+htmlcov
CHANGELOG.md, 27 changes:

@@ -4,20 +4,33 @@ All notable changes to this project will be documented in this file.

 ## [2.0.0] - 2021-??-??
 ### Added
-- `CObs` class added which can handle complex valued Markov chain Monte Carlo data and the corresponding error propagation
-- Matrix to matrix operations like the matrix inverse now also work for complex matrices and matrices containing entries that are not `Obs` but `float` or `int`
-- `Obs` objects now have methods `is_zero` and `is_zero_within_error`
+- `CObs` class added which can handle complex valued Markov chain Monte Carlo data and the corresponding error propagation.
+- Matrix to matrix operations like the matrix inverse now also work for complex matrices and matrices containing entries that are not `Obs` but `float` or `int`.
+- The possibility to work with Monte Carlo histories which are evenly or unevenly spaced was added.
+- The `Corr` class now has additional methods like `reverse`, `T_symmetry`, `correlate` and `reweight`.
+- The `linalg` module now has explicit functions `inv` and `cholesky`.
+- `Obs` objects now have methods `is_zero` and `is_zero_within_error` as well as overloaded comparison operations.
+- Functions to convert `Obs` data to or from jackknife samples were added.
+- An alternative matrix multiplication routine `jack_matmul` was added to the `linalg` module which makes use of the jackknife approximation and is much faster for large matrices.
+- Additional input routines for npr data added to `input.hadrons`.
+- Version number added.

 ### Changed
 - Additional attributes can no longer be added to existing `Obs`. This makes it no longer possible to import `Obs` created with previous versions of pyerrors.
-- The default value for `Corr.prange` is now `None`
-- The `input` module was restructured to contain one submodule per data source
+- The internal bookkeeping system for ensembles/replica was changed. The separator for replica is now `|`.
+- The fit functions were renamed to `least_squares` and `total_least_squares`.
+- The fit functions can now deal with provided covariance matrices.
+- The convention for the fit range in the `Corr` class has been changed.
+- `Obs.print` was renamed to `Obs.details` and the output was improved.
+- The default value for `Corr.prange` is now `None`.
+- The `input` module was restructured to contain one submodule per data source.
+- Performance of `Obs.__init__` improved.

 ### Deprecated
 - The function `plot_corrs` was deprecated as all its functionality is now contained within `Corr.show`.
 - The kwarg `bias_correction` in `derived_observable` was removed.
 - `Obs` no longer have an attribute `e_Q`.
 - Removed `fits.fit_exp`.
 - Removed the `jackknife` module.

 ## [1.1.0] - 2021-10-11
 ### Added

@@ -77,7 +90,7 @@ All notable changes to this project will be documented in this file.

 ## [0.7.0] - 2020-03-10
 ### Added
-- New fit funtions for fitting with and without x-errors added which use automatic differentiation and should be more reliable than the old ones.
+- New fit functions for fitting with and without x-errors added which use automatic differentiation and should be more reliable than the old ones.
 - Fitting with Bayesian priors added.
 - New functions for visualization of fits which can be activated via the kwargs resplot and qqplot.
 - chisquare/expected_chisquare which takes into account correlations in the data and non-linearities in the fit function can now be activated with the kwarg expected_chisquare.
@@ -20,15 +20,16 @@ Please add docstrings to any new function, class or method you implement. The do

 ### Tests
 When implementing a new feature or fixing a bug please add meaningful tests to the files in the `tests` directory which cover the new code.

 ### Continuous integration
 For all pull requests tests are executed for the most recent python releases via
 ```
 pytest --cov=pyerrors -vv
 ```
-requiring `pytest`, `pytest-cov` and `pytest-benchmark`
-and the linter `flake8` is executed with the command
-```
-flake8 --ignore=E501,E722 --exclude=__init__.py pyerrors
-```
+requiring `pytest`, `pytest-cov` and `pytest-benchmark`. To get a coverage report in html run
+```
+pytest --cov=pyerrors --cov-report html
+```
+The linter `flake8` is executed with the command
+```
+flake8 --ignore=E501 --exclude=__init__.py pyerrors
+```
 Please make sure that all tests pass for new pull requests.
@@ -1,4 +1,4 @@
 [](https://github.com/fjosw/pyerrors/actions/workflows/flake8.yml) [](https://github.com/fjosw/pyerrors/actions/workflows/pytest.yml) [](https://github.com/fjosw/pyerrors/actions/workflows/docs.yml) [](https://www.python.org/downloads/) [](https://opensource.org/licenses/MIT)
 # pyerrors
 `pyerrors` is a python package for error computation and propagation of Markov chain Monte Carlo data.
@@ -16,9 +16,3 @@ to install the most recent release run
 ```bash
 pip install git+https://github.com/fjosw/pyerrors.git@master
 ```
-
-## Other implementations
-There exist similar publicly available implementations of gamma method error analysis suites in
-- [Fortran](https://gitlab.ift.uam-csic.es/alberto/aderrors)
-- [Julia](https://gitlab.ift.uam-csic.es/alberto/aderrors.jl)
-- [Python](https://github.com/mbruno46/pyobs)
File diffs suppressed because one or more lines are too long.
@@ -51,7 +51,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "The standard matrix product can be performed with @"
+    "The standard matrix product can be performed with `@`"
    ]
   },
   {
@@ -101,13 +101,38 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Mathematical functions work elementwise"
+    "For large matrices overloading the standard operator `@` can become inefficient as pyerrors has to perform a large number of elementary operations. For these situations pyerrors provides the function `linalg.matmul` which optimizes the required automatic differentiation. The function can take an arbitrary number of operands."
    ]
   },
   {
+   "cell_type": "code",
+   "execution_count": 5,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "[[Obs[78.12099999999998] Obs[-22.909999999999997]]\n",
+      " [Obs[-22.909999999999997] Obs[7.1]]]\n"
+     ]
+    }
+   ],
+   "source": [
+    "print(pe.linalg.matmul(matrix, matrix, matrix)) # Equivalent to matrix @ matrix @ matrix but faster for large matrices"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Mathematical functions work elementwise"
+   ]
+  },
+  {
    "cell_type": "code",
-   "execution_count": 5,
+   "execution_count": 6,
    "metadata": {},
    "outputs": [
     {
      "name": "stdout",
@@ -126,12 +151,12 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "For a vector of `Obs`, we again use np.asarray to end up with the correct object"
+    "For a vector of `Obs`, we again use `np.asarray` to end up with the correct object"
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": 6,
+   "execution_count": 7,
    "metadata": {},
    "outputs": [
     {
@@ -160,14 +185,14 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 7,
+   "execution_count": 8,
    "metadata": {},
    "outputs": [
     {
      "name": "stdout",
      "output_type": "stream",
      "text": [
-      "[Obs[7.2(1.7)] Obs[-1.00(45)]]\n"
+      "[Obs[7.2(1.7)] Obs[-1.00(46)]]\n"
      ]
     }
    ],
@@ -182,40 +207,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## Matrix to scalar operations\n",
-    "If we want to apply a numpy matrix function with a scalar return value we can use `scalar_mat_op`. __Here we need to use the autograd wrapped version of numpy__ (imported as anp) to use automatic differentiation."
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": 8,
-   "metadata": {},
-   "outputs": [
-    {
-     "name": "stdout",
-     "output_type": "stream",
-     "text": [
-      "det \t Obs[3.10(28)]\n",
-      "trace \t Obs[5.10(20)]\n",
-      "norm \t Obs[4.45(19)]\n"
-     ]
-    }
-   ],
-   "source": [
-    "import autograd.numpy as anp # Thinly-wrapped numpy\n",
-    "funcs = [anp.linalg.det, anp.trace, anp.linalg.norm]\n",
-    "\n",
-    "for i, func in enumerate(funcs):\n",
-    "    res = pe.linalg.scalar_mat_op(func, matrix)\n",
-    "    res.gamma_method()\n",
-    "    print(func.__name__, '\\t', res)"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "For matrix operations which are not supported by autograd we can use numerical differentiation"
+    "`pyerrors` provides the user with wrappers to the `numpy.linalg` functions which work on `Obs` valued matrices. We can for example calculate the determinant of the matrix via"
    ]
   },
   {
@@ -227,26 +219,21 @@
      "name": "stdout",
      "output_type": "stream",
      "text": [
-      "cond \t Obs[6.23(58)]\n",
-      "expm_cond \t Obs[4.45(19)]\n"
+      "3.10(28)\n"
      ]
     }
    ],
    "source": [
-    "funcs = [np.linalg.cond, scipy.linalg.expm_cond]\n",
-    "\n",
-    "for i, func in enumerate(funcs):\n",
-    "    res = pe.linalg.scalar_mat_op(func, matrix, num_grad=True)\n",
-    "    res.gamma_method()\n",
-    "    print(func.__name__, ' \\t', res)"
+    "det = pe.linalg.det(matrix)\n",
+    "det.gamma_method()\n",
+    "print(det)"
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## Matrix to matrix operations\n",
-    "For matrix operations with a matrix as return value we can use another wrapper `mat_mat_op`. Take as an example the cholesky decompostion. __Here we need to use the autograd wrapped version of numpy__ (imported as anp) to use automatic differentiation."
+    "The cholesky decomposition can be obtained as follows"
    ]
   },
   {
@@ -259,12 +246,12 @@
    "output_type": "stream",
    "text": [
     "[[Obs[2.025(49)] Obs[0.0]]\n",
-    " [Obs[-0.494(51)] Obs[0.870(29)]]]\n"
+    " [Obs[-0.494(50)] Obs[0.870(29)]]]\n"
    ]
   }
  ],
  "source": [
-  "cholesky = pe.linalg.mat_mat_op(anp.linalg.cholesky, matrix)\n",
+  "cholesky = pe.linalg.cholesky(matrix)\n",
   "for (i, j), entry in np.ndenumerate(cholesky):\n",
   "    entry.gamma_method()\n",
   "print(cholesky)"
@@ -321,7 +308,7 @@
   }
  ],
  "source": [
-  "inv = pe.linalg.mat_mat_op(anp.linalg.inv, cholesky)\n",
+  "inv = pe.linalg.inv(cholesky)\n",
   "for (i, j), entry in np.ndenumerate(inv):\n",
   "    entry.gamma_method()\n",
   "print(inv)\n",
@@ -334,59 +321,14 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Matrix to matrix operations which are not supported by autograd can also be computed with numeric differentiation"
+    "## Eigenvalues and eigenvectors\n",
+    "We can also compute eigenvalues and eigenvectors of symmetric matrices with a special wrapper `eigh`"
    ]
   },
   {
-   "cell_type": "code",
-   "execution_count": 13,
-   "metadata": {},
-   "outputs": [
-    {
-     "name": "stdout",
-     "output_type": "stream",
-     "text": [
-      "orth\n",
-      "[[Obs[-0.9592(76)] Obs[0.283(26)]]\n",
-      " [Obs[0.283(26)] Obs[0.9592(76)]]]\n",
-      "expm\n",
-      "[[Obs[75(15)] Obs[-21.4(4.1)]]\n",
-      " [Obs[-21.4(4.1)] Obs[8.3(1.4)]]]\n",
-      "logm\n",
-      "[[Obs[1.334(57)] Obs[-0.496(61)]]\n",
-      " [Obs[-0.496(61)] Obs[-0.203(50)]]]\n",
-      "sinhm\n",
-      "[[Obs[37.3(7.4)] Obs[-10.8(2.1)]]\n",
-      " [Obs[-10.8(2.1)] Obs[3.94(68)]]]\n",
-      "sqrtm\n",
-      "[[Obs[1.996(51)] Obs[-0.341(37)]]\n",
-      " [Obs[-0.341(37)] Obs[0.940(14)]]]\n"
-     ]
-    }
-   ],
-   "source": [
-    "funcs = [scipy.linalg.orth, scipy.linalg.expm, scipy.linalg.logm, scipy.linalg.sinhm, scipy.linalg.sqrtm]\n",
-    "\n",
-    "for i,func in enumerate(funcs):\n",
-    "    res = pe.linalg.mat_mat_op(func, matrix, num_grad=True)\n",
-    "    for (i, j), entry in np.ndenumerate(res):\n",
-    "        entry.gamma_method()\n",
-    "    print(func.__name__)\n",
-    "    print(res)"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "## Eigenvalues and eigenvectors\n",
-    "We can also compute eigenvalues and eigenvectors of symmetric matrices with a special wrapper `eigh`"
-   ]
-  },
-  {
    "cell_type": "code",
-   "execution_count": 14,
    "metadata": {},
    "outputs": [
     {
      "name": "stdout",
@@ -395,8 +337,8 @@
     "Eigenvalues:\n",
     "[Obs[0.705(57)] Obs[4.39(19)]]\n",
     "Eigenvectors:\n",
-    "[[Obs[-0.283(26)] Obs[-0.9592(76)]]\n",
-    " [Obs[-0.9592(76)] Obs[0.283(26)]]]\n"
+    "[[Obs[-0.283(26)] Obs[-0.9592(75)]]\n",
+    " [Obs[-0.9592(75)] Obs[0.283(26)]]]\n"
    ]
   }
  ],
@@ -421,7 +363,7 @@
  },
  {
   "cell_type": "code",
-   "execution_count": 15,
+   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
|
|||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3",
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
|
@ -465,7 +407,7 @@
|
|||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.6.9"
|
||||
"version": "3.8.10"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
|
|
examples/data/correlator_test.json.gz (new binary file, not shown)
examples/data/f_A.json.gz (new binary file, not shown)
examples/data/f_P.json.gz (new binary file, not shown)
pyerrors/__init__.py:

@@ -1,12 +1,14 @@
 r'''
 # What is pyerrors?
 `pyerrors` is a python package for error computation and propagation of Markov chain Monte Carlo data.
-It is based on the **gamma method** [arXiv:hep-lat/0306017](https://arxiv.org/abs/hep-lat/0306017). Some of its features are:
-- **automatic differentiation** as suggested in [arXiv:1809.01289](https://arxiv.org/abs/1809.01289) (partly based on the [autograd](https://github.com/HIPS/autograd) package)
-- **treatment of slow modes** in the simulation as suggested in [arXiv:1009.5228](https://arxiv.org/abs/1009.5228)
-- coherent **error propagation** for data from **different Markov chains**
-- **non-linear fits with x- and y-errors** and exact linear error propagation based on automatic differentiation as introduced in [arXiv:1809.01289](https://arxiv.org/abs/1809.01289)
-- **real and complex matrix operations** and their error propagation based on automatic differentiation (Cholesky decomposition, calculation of eigenvalues and eigenvectors, singular value decomposition...)
+It is based on the gamma method [arXiv:hep-lat/0306017](https://arxiv.org/abs/hep-lat/0306017). Some of its features are:
+- automatic differentiation for exact linear error propagation as suggested in [arXiv:1809.01289](https://arxiv.org/abs/1809.01289) (partly based on the [autograd](https://github.com/HIPS/autograd) package).
+- treatment of slow modes in the simulation as suggested in [arXiv:1009.5228](https://arxiv.org/abs/1009.5228).
+- coherent error propagation for data from different Markov chains.
+- non-linear fits with x- and y-errors and exact linear error propagation based on automatic differentiation as introduced in [arXiv:1809.01289](https://arxiv.org/abs/1809.01289).
+- real and complex matrix operations and their error propagation based on automatic differentiation (matrix inverse, Cholesky decomposition, calculation of eigenvalues and eigenvectors, singular value decomposition...).
+
+There exist similar publicly available implementations of gamma method error analysis suites in [Fortran](https://gitlab.ift.uam-csic.es/alberto/aderrors), [Julia](https://gitlab.ift.uam-csic.es/alberto/aderrors.jl) and [Python](https://github.com/mbruno46/pyobs).

 ## Basic example
@@ -28,7 +30,6 @@ An `Obs` object can be initialized with two arguments, the first is a list conta
 The samples can either be provided as python list or as numpy array.
 The second argument is a list containing the names of the respective Monte Carlo chains as strings. These strings uniquely identify a Monte Carlo chain/ensemble.

-Example:
 ```python
 import pyerrors as pe
@@ -44,7 +45,6 @@ The required derivatives $\bar{f}_\alpha$ are evaluated up to machine precision

 The `Obs` class is designed such that mathematical numpy functions can be used on `Obs` just as for regular floats.

-Example:
 ```python
 import numpy as np
 import pyerrors as pe
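
A minimal sketch of what such a snippet can look like (the sample lists and the derived quantity are illustrative assumptions, not part of the diff):

```python
import numpy as np
import pyerrors as pe

my_obs1 = pe.Obs([samples1], ['ensemble_name'])  # samples1: python list or numpy array
my_obs2 = pe.Obs([samples2], ['ensemble_name'])

# numpy functions act on Obs just like on floats; derivatives are tracked automatically
my_sum = np.log(my_obs1 ** 2 / my_obs2 ** 4)
my_sum.gamma_method()  # error estimation via the gamma method
print(my_sum)
```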
@@ -67,7 +67,6 @@ print(iamzero == 0.0)
 The error estimation within `pyerrors` is based on the gamma method introduced in [arXiv:hep-lat/0306017](https://arxiv.org/abs/hep-lat/0306017).
 After having arrived at the derived quantity of interest the `gamma_method` can be called as detailed in the following example.

-Example:
 ```python
 my_sum.gamma_method()
 print(my_sum)
@@ -82,10 +81,9 @@ my_sum.details()

 We use the following definition of the integrated autocorrelation time established in [Madras & Sokal 1988](https://link.springer.com/article/10.1007/BF01022990)
 $$\tau_\mathrm{int}=\frac{1}{2}+\sum_{t=1}^{W}\rho(t)\geq \frac{1}{2}\,.$$
-The window $W$ is determined via the automatic windowing procedure described in [arXiv:hep-lat/0306017](https://arxiv.org/abs/hep-lat/0306017)
+The window $W$ is determined via the automatic windowing procedure described in [arXiv:hep-lat/0306017](https://arxiv.org/abs/hep-lat/0306017).
 The standard value for the parameter $S$ of this automatic windowing procedure is $S=2$. Other values for $S$ can be passed to the `gamma_method` as parameter.

-Example:
 ```python
 my_sum.gamma_method(S=3.0)
 my_sum.details()
@@ -105,7 +103,6 @@ In this case the error estimate is identical to the sample standard error.

 Slow modes in the Monte Carlo history can be accounted for by attaching an exponential tail to the autocorrelation function $\rho$ as suggested in [arXiv:1009.5228](https://arxiv.org/abs/1009.5228). The longest autocorrelation time in the history, $\tau_\mathrm{exp}$, can be passed to the `gamma_method` as parameter. In this case the automatic windowing procedure is vacated and the parameter $S$ does not affect the error estimate.

-Example:
 ```python
 my_sum.gamma_method(tau_exp=7.2)
 my_sum.details()
@@ -115,13 +112,12 @@ my_sum.details()
 > · Ensemble 'ensemble_name' : 1000 configurations (from 1 to 1000)
 ```

-For the full API see `pyerrors.obs.Obs.gamma_method`
+For the full API see `pyerrors.obs.Obs.gamma_method`.

 ## Multiple ensembles/replica

 Error propagation for multiple ensembles (Markov chains with different simulation parameters) is handled automatically. Ensembles are uniquely identified by their `name`.

-Example:
 ```python
 obs1 = pe.Obs([samples1], ['ensemble1'])
 obs2 = pe.Obs([samples2], ['ensemble2'])
@@ -136,7 +132,6 @@ my_sum.details()

 `pyerrors` identifies multiple replica (independent Markov chains with identical simulation parameters) by the vertical bar `|` in the name of the data set.

-Example:
 ```python
 obs1 = pe.Obs([samples1], ['ensemble1|r01'])
 obs2 = pe.Obs([samples2], ['ensemble1|r02'])
@@ -154,7 +149,6 @@ obs2 = pe.Obs([samples2], ['ensemble1|r02'])

 In order to keep track of different error analysis parameters for different ensembles one can make use of global dictionaries as detailed in the following example.

-Example:
 ```python
 pe.Obs.S_dict['ensemble1'] = 2.5
 pe.Obs.tau_exp_dict['ensemble2'] = 8.0
@@ -167,9 +161,8 @@ Passing arguments to the `gamma_method` still dominates over the dictionaries.

 ## Irregular Monte Carlo chains

-Irregular Monte Carlo chains can be initialized with the parameter `idl`.
+`Obs` objects defined on irregular Monte Carlo chains can be initialized with the parameter `idl`.

-Example:
 ```python
 # Observable defined on configurations 20 to 519
 obs1 = pe.Obs([samples1], ['ensemble1'], idl=[range(20, 520)])
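
A sketch of the unevenly spaced cases hinted at by the `obs3.details()` context below (the concrete configuration choices are illustrative assumptions):

```python
# Observable defined on every second configuration between 5 and 1003
obs2 = pe.Obs([samples2], ['ensemble1'], idl=[range(5, 1005, 2)])

# Observable defined on an explicit list of configurations
obs3 = pe.Obs([samples3], ['ensemble1'], idl=[[2, 9, 28, 29, 501]])
obs3.details()
```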
@@ -194,20 +187,71 @@ obs3.details()

 ```

 `Obs` objects defined on regular and irregular histories of the same ensemble can be computed with each other and the correct error propagation and estimation is automatically taken care of.

 **Warning:** Irregular Monte Carlo chains can result in odd patterns in the autocorrelation functions.
 Make sure to check the autocorrelation time with e.g. `pyerrors.obs.Obs.plot_rho` or `pyerrors.obs.Obs.plot_tauint`.

-For the full API see `pyerrors.obs.Obs`
+For the full API see `pyerrors.obs.Obs`.

 # Correlators
-For the full API see `pyerrors.correlators.Corr`
+When one is not interested in single observables but correlation functions, `pyerrors` offers the `Corr` class which simplifies the corresponding error propagation and provides the user with a set of standard methods. In order to initialize a `Corr` object one needs to arrange the data as a list of `Obs`
 ```python
 my_corr = pe.Corr([obs_0, obs_1, obs_2, obs_3])
 print(my_corr)
 > x0/a  Corr(x0/a)
 > ------------------
 > 0  0.7957(80)
 > 1  0.5156(51)
 > 2  0.3227(33)
 > 3  0.2041(21)
 ```
 In case the correlation functions are not defined on the outermost timeslices, for example because of fixed boundary conditions, a padding can be introduced.
 ```python
 my_corr = pe.Corr([obs_0, obs_1, obs_2, obs_3], padding=[1, 1])
 print(my_corr)
 > x0/a  Corr(x0/a)
 > ------------------
 > 0
 > 1  0.7957(80)
 > 2  0.5156(51)
 > 3  0.3227(33)
 > 4  0.2041(21)
 > 5
 ```
 The individual entries of a correlator can be accessed via slicing
 ```python
 print(my_corr[3])
 > 0.3227(33)
 ```
 Error propagation with the `Corr` class works very similarly to `Obs` objects. Mathematical operations are overloaded and `Corr` objects can be computed together with other `Corr` objects, `Obs` objects or real numbers and integers.
 ```python
 my_new_corr = 0.3 * my_corr[2] * my_corr * my_corr + 12 / my_corr
 ```

 `pyerrors` provides the user with a set of regularly used methods for the manipulation of correlator objects (a short usage sketch follows after this list):
 - `Corr.gamma_method` applies the gamma method to all entries of the correlator.
 - `Corr.m_eff` to construct effective masses. Various variants for periodic and fixed temporal boundary conditions are available.
 - `Corr.deriv` returns the first derivative of the correlator as `Corr`. Different discretizations of the numerical derivative are available.
 - `Corr.second_deriv` returns the second derivative of the correlator as `Corr`. Different discretizations of the numerical derivative are available.
 - `Corr.symmetric` symmetrizes parity even correlation functions, assuming periodic boundary conditions.
 - `Corr.anti_symmetric` anti-symmetrizes parity odd correlation functions, assuming periodic boundary conditions.
 - `Corr.T_symmetry` averages a correlator with its time symmetry partner, assuming fixed boundary conditions.
 - `Corr.plateau` extracts a plateau value from the correlator in a given range.
 - `Corr.roll` periodically shifts the correlator.
 - `Corr.reverse` reverses the time ordering of the correlator.
 - `Corr.correlate` constructs a disconnected correlation function from the correlator and another `Corr` or `Obs` object.
 - `Corr.reweight` reweights the correlator.

 `pyerrors` can also handle matrices of correlation functions and extract energy states from these matrices via a generalized eigenvalue problem (see `pyerrors.correlators.Corr.GEVP`).

 For the full API see `pyerrors.correlators.Corr`.
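
A minimal sketch of how these methods chain together (assuming the `my_corr` from the examples above; the plateau range is picked purely for illustration):

```python
m_eff = my_corr.m_eff(variant='log')  # effective mass as a Corr object
m_eff.gamma_method()                  # error estimation on every timeslice
print(m_eff.plateau([5, 10]))         # plateau value as an Obs
```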

 # Complex observables

 `pyerrors` can handle complex valued observables via the class `pyerrors.obs.CObs`.
 `CObs` are initialized with a real and an imaginary part which both can be `Obs` valued.

 Example:
 ```python
 my_real_part = pe.Obs([samples1], ['ensemble1'])
 my_imag_part = pe.Obs([samples2], ['ensemble1'])
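 # A hedged continuation (not part of the diff): combining the two parts
 # into a complex observable and propagating the error
 my_cobs = pe.CObs(my_real_part, my_imag_part)
 my_cobs.gamma_method()
 print(my_cobs)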
@@ -231,27 +275,35 @@ print(my_derived_cobs)
 `pyerrors.roots`

 # Matrix operations
-`pyerrors.linalg`
+`pyerrors` provides wrappers for `Obs`-valued matrix operations based on `numpy.linalg`. The supported functions include:
+- `inv` for the matrix inverse.
+- `cholesky` for the Cholesky decomposition.
+- `det` for the matrix determinant.
+- `eigh` for eigenvalues and eigenvectors of Hermitian matrices.
+- `eig` for eigenvalues of general matrices.
+- `pinv` for the Moore-Penrose pseudoinverse.
+- `svd` for the singular value decomposition.
+
+For the full API see `pyerrors.linalg`.
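
A minimal usage sketch mirroring the matrix operations notebook above (the entries `obs11`, `obs12`, `obs22` are assumed to be `Obs` on the same ensemble):

```python
import numpy as np
import pyerrors as pe

matrix = np.asarray([[obs11, obs12], [obs12, obs22]])  # Obs-valued 2x2 matrix
inv = pe.linalg.inv(matrix)                            # inverse with exact error propagation
for (i, j), entry in np.ndenumerate(inv):
    entry.gamma_method()
print(inv)
```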

 # Export data
-The preferred exported file format within `pyerrors` is
+The preferred exported file format within `pyerrors` is json.gz.

 ## Jackknife samples
-For comparison with other analysis workflows `pyerrors` can generate jackknife samples from an `Obs` object.
-See `pyerrors.obs.Obs.export_jackknife` for details.
+For comparison with other analysis workflows `pyerrors` can generate jackknife samples from an `Obs` object or import jackknife samples into an `Obs` object.
+See `pyerrors.obs.Obs.export_jackknife` and `pyerrors.obs.import_jackknife` for details.
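
A hedged round-trip sketch using the functions referenced above (`my_obs` and the ensemble name are assumptions):

```python
jacks = my_obs.export_jackknife()                     # numpy array of jackknife samples
my_new_obs = pe.import_jackknife(jacks, 'ensemble1')  # reconstruct an Obs from them
```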

 # Input
-`pyerrors.input`
+`pyerrors` includes an `input` submodule in which input routines and parsers for the output of various numerical programs are contained. For details see `pyerrors.input`.
 '''
 from .obs import *
 from .correlators import *
 from .fits import *
 from .misc import *
 from . import dirac
 from . import input
 from . import linalg
 from . import misc
 from . import mpm
 from . import npr
 from . import roots

 from .version import __version__
pyerrors/correlators.py:

@@ -3,7 +3,8 @@ import numpy as np
 import autograd.numpy as anp
 import matplotlib.pyplot as plt
 import scipy.linalg
-from .obs import Obs, dump_object, reweight, correlate
+from .obs import Obs, reweight, correlate, CObs
+from .misc import dump_object, _assert_equal_properties
 from .fits import least_squares
 from .linalg import eigh, inv, cholesky
 from .roots import find_root
@@ -18,60 +19,68 @@ class Corr:
     to iterate over all timeslices for every operation. This is especially true, when dealing with smearing matrices.

     The correlator can have two types of content: An Obs at every timeslice OR a GEVP
-    smearing matrix at every timeslice. Other dependency (eg. spacial) are not supported.
+    smearing matrix at every timeslice. Other dependencies (e.g. spatial) are not supported.

     """

-    def __init__(self, data_input, padding_front=0, padding_back=0, prange=None):
-        # All data_input should be a list of things at different timeslices. This needs to be verified
+    def __init__(self, data_input, padding=[0, 0], prange=None):
+        """ Initialize a Corr object.
+
+        Parameters
+        ----------
+        data_input : list
+            list of Obs or list of arrays of Obs.
+        padding : list, optional
+            List with two entries where the first labels the padding
+            at the front of the correlator and the second the padding
+            at the back.
+        prange : list, optional
+            List containing the first and last timeslice of the plateau
+            region identified for this correlator.
+        """

         if not isinstance(data_input, list):
             raise TypeError('Corr__init__ expects a list of timeslices.')
-        # data_input can have multiple shapes. The simplest one is a list of Obs.
-        # We check, if this is the case
-        if all([isinstance(item, Obs) for item in data_input]):
-            self.content = [np.asarray([item]) for item in data_input]
-            # Wrapping the Obs in an array ensures that the data structure is consistent with smearing matrices.
-            self.N = 1  # number of smearings

         # data_input in the form [np.array(Obs,NxN)]
+        if all([(isinstance(item, Obs) or isinstance(item, CObs)) for item in data_input]):
+            _assert_equal_properties(data_input)
+            self.content = [np.asarray([item]) for item in data_input]
+            self.N = 1

         elif all([isinstance(item, np.ndarray) or item is None for item in data_input]) and any([isinstance(item, np.ndarray) for item in data_input]):
             self.content = data_input

             noNull = [a for a in self.content if not (a is None)]  # To check if the matrices are correct for all undefined elements
             self.N = noNull[0].shape[0]
-            # The checks are now identical to the case above
             if self.N > 1 and noNull[0].shape[0] != noNull[0].shape[1]:
                 raise Exception("Smearing matrices are not NxN")
             if (not all([item.shape == noNull[0].shape for item in noNull])):
                 raise Exception("Items in data_input are not of identical shape." + str(noNull))
-        else:  # In case its a list of something else.
+        else:
             raise Exception("data_input contains item of wrong type")

         self.tag = None

         # We now apply some padding to our list. In case that our list represents a correlator of length T but is not defined at every value.
         # An undefined timeslice is represented by the None object
-        self.content = [None] * padding_front + self.content + [None] * padding_back
-        self.T = len(self.content)  # for convenience: will be used a lot
+        self.content = [None] * padding[0] + self.content + [None] * padding[1]
+        self.T = len(self.content)

         # The attribute "range" [start,end] marks a range of two timeslices.
         # This is useful for keeping track of plateaus and fitranges.
         # The range can be inherited from other Corrs, if the operation should not alter a chosen range eg. multiplication with a constant.
         self.prange = prange

         self.gamma_method()

     def __getitem__(self, idx):
         """Return the content of timeslice idx"""
-        if len(self.content[idx]) == 1:
+        if self.content[idx] is None:
+            return None
+        elif len(self.content[idx]) == 1:
             return self.content[idx][0]
         else:
             return self.content[idx]

     @property
     def reweighted(self):
-        bool_array = np.array([list(map(lambda x: x.reweighted, o)) for o in self.content])
+        bool_array = np.array([list(map(lambda x: x.reweighted, o)) for o in [x for x in self.content if x is not None]])
         if np.all(bool_array == 1):
             return True
         elif np.all(bool_array == 0):
@@ -90,15 +99,15 @@ class Corr:
             for j in range(self.N):
                 item[i, j].gamma_method(**kwargs)

-    # We need to project the Correlator with a Vector to get a single value at each timeslice.
-    # The method can use one or two vectors.
-    # If two are specified it returns v1@G@v2 (the order might be very important.)
-    # By default it will return the lowest source, which usually means unsmeared-unsmeared (0,0), but it does not have to
-    def projected(self, vector_l=None, vector_r=None):
+    def projected(self, vector_l=None, vector_r=None, normalize=False):
+        """We need to project the Correlator with a Vector to get a single value at each timeslice.
+
+        The method can use one or two vectors.
+        If two are specified it returns v1@G@v2 (the order might be very important.)
+        By default it will return the lowest source, which usually means unsmeared-unsmeared (0,0), but it does not have to
+        """
         if self.N == 1:
             raise Exception("Trying to project a Corr, that already has N=1.")
             # This Exception is in no way necessary. One could just return self
             # But there is no scenario, where a user would want that to happen and the error message might be more informative.

         self.gamma_method()
@@ -106,17 +115,32 @@ class Corr:
             vector_l, vector_r = np.asarray([1.] + (self.N - 1) * [0.]), np.asarray([1.] + (self.N - 1) * [0.])
         elif(vector_r is None):
             vector_r = vector_l
+        if isinstance(vector_l, list) and not isinstance(vector_r, list):
+            if len(vector_l) != self.T:
+                raise Exception("Length of vector list must be equal to T")
+            vector_r = [vector_r] * self.T
+        if isinstance(vector_r, list) and not isinstance(vector_l, list):
+            if len(vector_r) != self.T:
+                raise Exception("Length of vector list must be equal to T")
+            vector_l = [vector_l] * self.T

+        if not isinstance(vector_l, list):
             if not vector_l.shape == vector_r.shape == (self.N,):
                 raise Exception("Vectors are of wrong shape!")
-
-        # We always normalize before projecting! But we only raise a warning, when it is clear, they where not meant to be normalized.
-        if (not (0.95 < vector_r @ vector_r < 1.05)) or (not (0.95 < vector_l @ vector_l < 1.05)):
-            print("Vectors are normalized before projection!")
-
-        vector_l, vector_r = vector_l / np.sqrt((vector_l @ vector_l)), vector_r / np.sqrt(vector_r @ vector_r)
+            if normalize:
+                vector_l, vector_r = vector_l / np.sqrt((vector_l @ vector_l)), vector_r / np.sqrt(vector_r @ vector_r)
+            # if (not (0.95 < vector_r @ vector_r < 1.05)) or (not (0.95 < vector_l @ vector_l < 1.05)):
+            #     print("Vectors are normalized before projection!")

+            newcontent = [None if (item is None) else np.asarray([vector_l.T @ item @ vector_r]) for item in self.content]

+        else:
+            # There are no checks here yet. There are so many possible scenarios, where this can go wrong.
+            if normalize:
+                for t in range(self.T):
+                    vector_l[t], vector_r[t] = vector_l[t] / np.sqrt((vector_l[t] @ vector_l[t])), vector_r[t] / np.sqrt(vector_r[t] @ vector_r[t])

+            newcontent = [None if (self.content[t] is None or vector_l[t] is None or vector_r[t] is None) else np.asarray([vector_l[t].T @ self.content[t] @ vector_r[t]]) for t in range(self.T)]
         return Corr(newcontent)

     def sum(self):
@@ -129,8 +153,6 @@ class Corr:
             newcontent = [None if(item is None) else item[i, j] for item in self.content]
         return Corr(newcontent)

-    # Obs and Matplotlib do not play nicely
-    # We often want to retrieve x,y,y_err as lists to pass them to something like pyplot.errorbar
     def plottable(self):
         """Outputs the correlator in a plotable format.


@@ -145,9 +167,6 @@ class Corr:

         return x_list, y_list, y_err_list

-    # symmetric returns a Corr, that has been symmetrized.
-    # A symmetry checker is still to be implemented
-    # The method will not delete any redundant timeslices (Bad for memory, Great for convenience)
     def symmetric(self):
         """ Symmetrize the correlator around x0=0."""
         if self.T % 2 != 0:
@@ -184,16 +203,19 @@ class Corr:
             raise Exception("Corr could not be symmetrized: No redundant values")
         return Corr(newcontent, prange=self.prange)

-    # This method will symmetrice the matrices and therefore make them positive definit.
     def smearing_symmetric(self):
+        """Symmetrizes the matrices and therefore makes them positive definite."""
         if self.N > 1:
             transposed = [None if (G is None) else G.T for G in self.content]
             return 0.5 * (Corr(transposed) + self)
         if self.N == 1:
             raise Exception("Trying to symmetrize a smearing matrix, that already has N=1.")

-    # We also include a simple GEVP method based on Scipy.linalg
-    def GEVP(self, t0, ts, state=1):
+    # There are two ways the GEVP method can be called.
+    # 1. return_list=False will return a single eigenvector, normalized according to V*C(t_0)*V=1
+    # 2. return_list=True will return a new eigenvector for every timeslice. The time t_s is used to order the vectors according to. arXiv:2004.10472 [hep-lat]
+    def GEVP(self, t0, ts, state=0, sorting="Eigenvalue", return_list=False):
+        if not return_list:
             if (self.content[t0] is None) or (self.content[ts] is None):
                 raise Exception("Corr not defined at t0/ts")
             G0, Gt = np.empty([self.N, self.N], dtype="double"), np.empty([self.N, self.N], dtype="double")

@@ -202,10 +224,32 @@ class Corr:
                     G0[i, j] = self.content[t0][i, j].value
                     Gt[i, j] = self.content[ts][i, j].value

-            sp_val, sp_vec = scipy.linalg.eig(Gt, G0)
-            sp_vec = sp_vec[:, np.argsort(sp_val)[-state]]  # We only want the eigenvector belonging to the selected state
-            sp_vec = sp_vec / np.sqrt(sp_vec @ sp_vec)
+            sp_vecs = _GEVP_solver(Gt, G0)
+            sp_vec = sp_vecs[state]
             return sp_vec
+        if return_list:
+            all_vecs = []
+            for t in range(self.T):
+                try:
+                    G0, Gt = np.empty([self.N, self.N], dtype="double"), np.empty([self.N, self.N], dtype="double")
+                    for i in range(self.N):
+                        for j in range(self.N):
+                            G0[i, j] = self.content[t0][i, j].value
+                            Gt[i, j] = self.content[t][i, j].value
+
+                    sp_vecs = _GEVP_solver(Gt, G0)
+                    if sorting == "Eigenvalue":
+                        sp_vec = sp_vecs[state]
+                        all_vecs.append(sp_vec)
+                    else:
+                        all_vecs.append(sp_vecs)
+                except Exception:
+                    all_vecs.append(None)
+            if sorting == "Eigenvector":
+                all_vecs = _sort_vectors(all_vecs, ts)
+                all_vecs = [a[state] for a in all_vecs]
+
+            return all_vecs
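
A hedged usage sketch of the two calling modes described in the comments above (the correlator matrix `matrix_corr` and the timeslices are illustrative assumptions):

```python
# single ground-state eigenvector, normalized via C(t0)
vec = matrix_corr.GEVP(t0=2, ts=6, state=0)

# one vector per timeslice, matched across time relative to ts
vecs = matrix_corr.GEVP(t0=2, ts=6, state=0, sorting="Eigenvector", return_list=True)
```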

     def Eigenvalue(self, t0, state=1):
         G = self.smearing_symmetric()

@@ -216,6 +260,9 @@ class Corr:
         LTi = inv(LT)
         newcontent = []
         for t in range(self.T):
+            if self.content[t] is None:
+                newcontent.append(None)
+            else:
                 Gt = G.content[t]
                 M = Li @ Gt @ LTi
                 eigenvalues = eigh(M)[0]
@@ -223,6 +270,47 @@ class Corr:
             newcontent.append(eigenvalue)
         return Corr(newcontent)

+    def Hankel(self, N, periodic=False):
+        """Constructs an NxN Hankel matrix
+
+        C(t)        C(t+1)   ... C(t+n-1)
+        C(t+1)      C(t+2)   ... C(t+n)
+        ...................
+        C(t+(n-1))  C(t+n)   ... C(t+2(n-1))
+
+        Parameters
+        ----------
+        N : int
+            Dimension of the Hankel matrix
+        periodic : bool, optional
+            determines whether the matrix is extended periodically
+        """
+
+        if self.N != 1:
+            raise Exception("Multi-operator Prony not implemented!")
+
+        array = np.empty([N, N], dtype="object")
+        new_content = []
+        for t in range(self.T):
+            new_content.append(array.copy())
+
+        def wrap(i):
+            if i >= self.T:
+                return i - self.T
+            return i
+
+        for t in range(self.T):
+            for i in range(N):
+                for j in range(N):
+                    if periodic:
+                        new_content[t][i, j] = self.content[wrap(t + i + j)][0]
+                    elif (t + i + j) >= self.T:
+                        new_content[t] = None
+                    else:
+                        new_content[t][i, j] = self.content[t + i + j][0]
+
+        return Corr(new_content)
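
A brief usage sketch (the correlator `my_corr` with N=1 is a hypothetical assumption):

```python
hankel = my_corr.Hankel(3)                          # entry at time t is the 3x3 matrix C(t+i+j)
periodic_hankel = my_corr.Hankel(3, periodic=True)  # time indices wrap around at T
```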

     def roll(self, dt):
         """Periodically shift the correlator by dt timeslices


@@ -235,7 +323,7 @@ class Corr:

     def reverse(self):
         """Reverse the time ordering of the Corr"""
-        return Corr(self.content[::-1])
+        return Corr(self.content[:: -1])

     def correlate(self, partner):
         """Correlate the correlator with another correlator or Obs


@@ -257,7 +345,7 @@ class Corr:
                     new_content.append(None)
                 else:
                     new_content.append(np.array([correlate(o, partner.content[x0][0]) for o in t_slice]))
-            elif isinstance(partner, Obs):
+            elif isinstance(partner, Obs):  # Should this include CObs?
                 new_content.append(np.array([correlate(o, partner) for o in t_slice]))
             else:
                 raise Exception("Can only correlate with an Obs or a Corr.")
@@ -328,7 +416,7 @@ class Corr:
             newcontent.append(self.content[t + 1] - self.content[t])
         if(all([x is None for x in newcontent])):
             raise Exception("Derivative is undefined at all timeslices")
-        return Corr(newcontent, padding_back=1)
+        return Corr(newcontent, padding=[0, 1])
         if symmetric:
             newcontent = []
             for t in range(1, self.T - 1):

@@ -338,7 +426,7 @@ class Corr:
             newcontent.append(0.5 * (self.content[t + 1] - self.content[t - 1]))
         if(all([x is None for x in newcontent])):
             raise Exception('Derivative is undefined at all timeslices')
-        return Corr(newcontent, padding_back=1, padding_front=1)
+        return Corr(newcontent, padding=[1, 1])

     def second_deriv(self):
         """Return the second derivative of the correlator with respect to x0."""

@@ -350,7 +438,7 @@ class Corr:
             newcontent.append((self.content[t + 1] - 2 * self.content[t] + self.content[t - 1]))
         if(all([x is None for x in newcontent])):
             raise Exception("Derivative is undefined at all timeslices")
-        return Corr(newcontent, padding_back=1, padding_front=1)
+        return Corr(newcontent, padding=[1, 1])

     def m_eff(self, variant='log', guess=1.0):
         """Returns the effective mass of the correlator as correlator object

@@ -378,7 +466,7 @@ class Corr:
         if(all([x is None for x in newcontent])):
             raise Exception('m_eff is undefined at all timeslices')

-        return np.log(Corr(newcontent, padding_back=1))
+        return np.log(Corr(newcontent, padding=[0, 1]))

         elif variant in ['periodic', 'cosh', 'sinh']:
             if variant in ['periodic', 'cosh']:

@@ -401,7 +489,7 @@ class Corr:
         if(all([x is None for x in newcontent])):
             raise Exception('m_eff is undefined at all timeslices')

-        return Corr(newcontent, padding_back=1)
+        return Corr(newcontent, padding=[0, 1])

         elif variant == 'arccosh':
             newcontent = []

@@ -412,7 +500,7 @@ class Corr:
             newcontent.append((self.content[t + 1] + self.content[t - 1]) / (2 * self.content[t]))
         if(all([x is None for x in newcontent])):
             raise Exception("m_eff is undefined at all timeslices")
-        return np.arccosh(Corr(newcontent, padding_back=1, padding_front=1))
+        return np.arccosh(Corr(newcontent, padding=[1, 1]))

         else:
             raise Exception('Unknown variant.')
@@ -433,16 +521,11 @@ class Corr:
         if self.N != 1:
             raise Exception("Correlator must be projected before fitting")

-        # The default behavior is:
-        # 1 use explicit fitrange
-        # if none is provided, use the range of the corr
-        # if this is also not set, use the whole length of the corr (This could come with a warning!)
-
         if fitrange is None:
             if self.prange:
                 fitrange = self.prange
             else:
-                fitrange = [0, self.T]
+                fitrange = [0, self.T - 1]

         xs = [x for x in range(fitrange[0], fitrange[1] + 1) if not self.content[x] is None]
         ys = [self.content[x][0] for x in range(fitrange[0], fitrange[1] + 1) if not self.content[x] is None]

@@ -519,7 +602,7 @@ class Corr:
         if self.N != 1:
             raise Exception("Correlator must be projected before plotting")
         if x_range is None:
-            x_range = [0, self.T]
+            x_range = [0, self.T - 1]

         fig = plt.figure()
         ax1 = fig.add_subplot(111)
@@ -535,7 +618,7 @@ class Corr:
             y_min = min([(x[0].value - x[0].dvalue) for x in self.content[x_range[0]: x_range[1] + 1] if (x is not None) and x[0].dvalue < 2 * np.abs(x[0].value)])
             y_max = max([(x[0].value + x[0].dvalue) for x in self.content[x_range[0]: x_range[1] + 1] if (x is not None) and x[0].dvalue < 2 * np.abs(x[0].value)])
             ax1.set_ylim([y_min - 0.1 * (y_max - y_min), y_max + 0.1 * (y_max - y_min)])
-        except:
+        except Exception:
             pass
         else:
             ax1.set_ylim(y_range)
@@ -581,24 +664,45 @@ class Corr:

         return

-    def dump(self, filename):
-        """Dumps the Corr into a pickle file
-
+    def dump(self, filename, datatype="json.gz", **kwargs):
+        """Dumps the Corr into a file of chosen type
         Parameters
         ----------
         filename : str
-            Name of the file
+            Name of the file to be saved.
+        datatype : str
+            Format of the exported file. Supported formats include
+            "json.gz" and "pickle"
+        path : str
+            specifies a custom path for the file (default '.')
         """
-        dump_object(self, filename)
-        return
+        if datatype == "json.gz":
+            from .input.json import dump_to_json
+            if 'path' in kwargs:
+                file_name = kwargs.get('path') + '/' + filename
+            else:
+                file_name = filename
+            dump_to_json(self, file_name)
+        elif datatype == "pickle":
+            dump_object(self, filename, **kwargs)
+        else:
+            raise Exception("Unknown datatype " + str(datatype))

     def print(self, range=[0, None]):
         print(self.__repr__(range))

     def __repr__(self, range=[0, None]):
         content_string = ""

         content_string += "Corr T=" + str(self.T) + " N=" + str(self.N) + "\n"  # +" filled with"+ str(type(self.content[0][0])) there should be a good solution here

         if self.tag is not None:
             content_string += "Description: " + self.tag + "\n"
         if self.N != 1:
             return content_string
             # This avoids a crash for N>1. I do not know, what else to do here. I like the list representation for N==1. We could print only one "smearing" or one matrix. Printing everything will just
             # be a wall of numbers.

         if range[1]:
             range[1] += 1
         content_string += 'x0/a\tCorr(x0/a)\n------------------\n'
@@ -632,7 +736,7 @@ class Corr:
                     newcontent.append(self.content[t] + y.content[t])
             return Corr(newcontent)

-        elif isinstance(y, Obs) or isinstance(y, int) or isinstance(y, float):
+        elif isinstance(y, Obs) or isinstance(y, int) or isinstance(y, float) or isinstance(y, CObs):
             newcontent = []
             for t in range(self.T):
                 if (self.content[t] is None):

@@ -655,7 +759,7 @@ class Corr:
                     newcontent.append(self.content[t] * y.content[t])
             return Corr(newcontent)

-        elif isinstance(y, Obs) or isinstance(y, int) or isinstance(y, float):
+        elif isinstance(y, Obs) or isinstance(y, int) or isinstance(y, float) or isinstance(y, CObs):
             newcontent = []
             for t in range(self.T):
                 if (self.content[t] is None):

@@ -688,9 +792,14 @@ class Corr:
                 raise Exception("Division returns completely undefined correlator")
             return Corr(newcontent)

-        elif isinstance(y, Obs):
+        elif isinstance(y, Obs) or isinstance(y, CObs):
+            if isinstance(y, Obs):
                 if y.value == 0:
                     raise Exception('Division by zero will return undefined correlator')
+            if isinstance(y, CObs):
+                if y.is_zero():
+                    raise Exception('Division by zero will return undefined correlator')

             newcontent = []
             for t in range(self.T):
                 if (self.content[t] is None):

@@ -720,7 +829,7 @@ class Corr:
             return self + (-y)

     def __pow__(self, y):
-        if isinstance(y, Obs) or isinstance(y, int) or isinstance(y, float):
+        if isinstance(y, Obs) or isinstance(y, int) or isinstance(y, float) or isinstance(y, CObs):
             newcontent = [None if (item is None) else item**y for item in self.content]
             return Corr(newcontent, prange=self.prange)
         else:
@@ -801,3 +910,70 @@ class Corr:

     def __rtruediv__(self, y):
         return (self / y) ** (-1)

+    @property
+    def real(self):
+        def return_real(obs_OR_cobs):
+            if isinstance(obs_OR_cobs, CObs):
+                return obs_OR_cobs.real
+            else:
+                return obs_OR_cobs
+
+        return self._apply_func_to_corr(return_real)
+
+    @property
+    def imag(self):
+        def return_imag(obs_OR_cobs):
+            if isinstance(obs_OR_cobs, CObs):
+                return obs_OR_cobs.imag
+            else:
+                return obs_OR_cobs * 0  # So it stays the right type
+
+        return self._apply_func_to_corr(return_imag)
+
+
+def _sort_vectors(vec_set, ts):
+    """Helper function used to find a set of Eigenvectors consistent over all timeslices"""
+    reference_sorting = np.array(vec_set[ts])
+    N = reference_sorting.shape[0]
+    sorted_vec_set = []
+    for t in range(len(vec_set)):
+        if vec_set[t] is None:
+            sorted_vec_set.append(None)
+        elif not t == ts:
+            perms = permutation([i for i in range(N)])
+            best_score = 0
+            for perm in perms:
+                current_score = 1
+                for k in range(N):
+                    new_sorting = reference_sorting.copy()
+                    new_sorting[perm[k], :] = vec_set[t][k]
+                    current_score *= abs(np.linalg.det(new_sorting))
+                if current_score > best_score:
+                    best_score = current_score
+                    best_perm = perm
+            sorted_vec_set.append([vec_set[t][k] for k in best_perm])
+        else:
+            sorted_vec_set.append(vec_set[t])
+
+    return sorted_vec_set
+
+
+def permutation(lst):  # Shamelessly copied
+    if len(lst) == 1:
+        return [lst]
+    ll = []
+    for i in range(len(lst)):
+        m = lst[i]
+        remLst = lst[:i] + lst[i + 1:]
+        # Generating all permutations where m is first
+        for p in permutation(remLst):
+            ll.append([m] + p)
+    return ll
+
+
+def _GEVP_solver(Gt, G0):  # Just so normalization and sorting does not need to be repeated. Here we could later put in some checks
+    sp_val, sp_vecs = scipy.linalg.eig(Gt, G0)
+    sp_vecs = [sp_vecs[:, np.argsort(sp_val)[-i]] for i in range(1, sp_vecs.shape[0] + 1)]
+    sp_vecs = [v / np.sqrt((v.T @ G0 @ v)) for v in sp_vecs]
+    return sp_vecs
pyerrors/covobs.py (new file, 75 lines):

@@ -0,0 +1,75 @@
+import numpy as np
+
+
+class Covobs:
+
+    def __init__(self, mean, cov, name, pos=None, grad=None):
+        """ Initialize Covobs object.
+
+        Parameters
+        ----------
+        mean : float
+            Mean value of the new Obs
+        cov : list or array
+            2d Covariance matrix or 1d diagonal entries
+        name : str
+            identifier for the covariance matrix
+        pos : int
+            Position of the variance belonging to mean in cov.
+            Is taken to be 1 if cov is 0-dimensional
+        grad : list or array
+            Gradient of the Covobs wrt. the means belonging to cov.
+        """
+        self._set_cov(cov)
+        if '|' in name:
+            raise Exception("Covobs name must not contain replica separator '|'.")
+        self.name = name
+        if grad is None:
+            if pos is None:
+                if self.N == 1:
+                    pos = 0
+                else:
+                    raise Exception('Have to specify position of cov-element belonging to mean!')
+            else:
+                if pos > self.N:
+                    raise Exception('pos %d too large for covariance matrix with dimension %dx%d!' % (pos, self.N, self.N))
+            self._grad = np.zeros((self.N, 1))
+            self._grad[pos] = 1.
+        else:
+            self._set_grad(grad)
+        self.value = mean
+
+    def errsq(self):
+        """ Return the variance (= square of the error) of the Covobs
+        """
+        return float(np.dot(np.transpose(self.grad), np.dot(self.cov, self.grad)))
+
+    def _set_cov(self, cov):
+        self._cov = np.array(cov)
+        if self._cov.ndim == 0:
+            self.N = 1
+            self._cov = np.diag([self._cov])
+        elif self._cov.ndim == 1:
+            self.N = len(self._cov)
+            self._cov = np.diag(self._cov)
+        elif self._cov.ndim == 2:
+            self.N = self._cov.shape[0]
+            if self._cov.shape[1] != self.N:
+                raise Exception('Covariance matrix has to be a square matrix!')
+        else:
+            raise Exception('Covariance matrix has to be a 2 dimensional square matrix!')
+
+    def _set_grad(self, grad):
+        self._grad = np.array(grad)
+        if self._grad.ndim in [0, 1]:
+            self._grad = np.reshape(self._grad, (self.N, 1))
+        elif self._grad.ndim != 2:
+            raise Exception('Invalid dimension of grad!')
+
+    @property
+    def cov(self):
+        return self._cov
+
+    @property
+    def grad(self):
+        return self._grad
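
A hedged usage sketch: `Covobs` objects are typically created through the `cov_Obs` helper that the fits module below imports from `pyerrors.obs` (signature assumed from its use there: mean, covariance, name; the numbers are illustrative):

```python
import pyerrors as pe

# external input with value 0.80 and error 0.02, assuming cov_Obs is re-exported as pe.cov_Obs
zfactor = pe.cov_Obs(0.80, 0.02 ** 2, 'renormalization factor')
zfactor.gamma_method()
print(zfactor)
```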

pyerrors/dirac.py:

@@ -22,6 +22,30 @@ identity = np.array(
     dtype=complex)


+def epsilon_tensor(i, j, k):
+    """Rank-3 epsilon tensor
+
+    Based on https://codegolf.stackexchange.com/a/160375
+    """
+    test_set = set((i, j, k))
+    if not (test_set <= set((1, 2, 3)) or test_set <= set((0, 1, 2))):
+        raise Exception("Unexpected input", i, j, k)
+
+    return (i - j) * (j - k) * (k - i) / 2
+
+
+def epsilon_tensor_rank4(i, j, k, o):
+    """Rank-4 epsilon tensor
+
+    Extension of https://codegolf.stackexchange.com/a/160375
+    """
+    test_set = set((i, j, k, o))
+    if not (test_set <= set((1, 2, 3, 4)) or test_set <= set((0, 1, 2, 3))):
+        raise Exception("Unexpected input", i, j, k, o)
+
+    return (i - j) * (j - k) * (k - i) * (i - o) * (j - o) * (o - k) / 12
|
||||
|
||||
|
||||
def Grid_gamma(gamma_tag):
|
||||
"""Returns gamma matrix in Grid labeling."""
|
||||
if gamma_tag == 'Identity':
|
||||
|
|
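A quick check of the added tensors, computed by hand from the closed-form products above:

    epsilon_tensor(1, 2, 3)           # ->  1.0 (even permutation)
    epsilon_tensor(2, 1, 3)           # -> -1.0 (odd permutation)
    epsilon_tensor(1, 1, 3)           # ->  0.0 (repeated index)
    epsilon_tensor_rank4(1, 2, 3, 4)  # ->  1.0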
180  pyerrors/fits.py

@@ -8,10 +8,11 @@ import scipy.stats
 import matplotlib.pyplot as plt
 from matplotlib import gridspec
 from scipy.odr import ODR, Model, RealData
+from scipy.stats import chi2
 import iminuit
 from autograd import jacobian
 from autograd import elementwise_grad as egrad
-from .obs import Obs, derived_observable, covariance, pseudo_Obs
+from .obs import Obs, derived_observable, covariance, cov_Obs


 class Fit_result(Sequence):
@@ -46,6 +47,8 @@ class Fit_result(Sequence):
             my_str += 'residual variance = ' + f'{self.residual_variance:2.6f}' + '\n'
         if hasattr(self, 'chisquare_by_expected_chisquare'):
             my_str += '\u03C7\u00b2/\u03C7\u00b2exp = ' + f'{self.chisquare_by_expected_chisquare:2.6f}' + '\n'
+        if hasattr(self, 'p_value'):
+            my_str += 'p-value = ' + f'{self.p_value:2.4f}' + '\n'
         my_str += 'Fit parameters:\n'
         for i_par, par in enumerate(self.fit_parameters):
             my_str += str(i_par) + '\t' + ' ' * int(par >= 0) + str(par).rjust(int(par < 0.0)) + '\n'
@@ -188,7 +191,7 @@ def total_least_squares(x, y, func, silent=False, **kwargs):
     for i in range(25):
         try:
             func(np.arange(i), x.T[0])
-        except:
+        except Exception:
             pass
         else:
             break
@@ -301,12 +304,13 @@ def total_least_squares(x, y, func, silent=False, **kwargs):
     result = []
     for i in range(n_parms):
-        result.append(derived_observable(lambda x, **kwargs: x[0], [pseudo_Obs(out.beta[i], 0.0, y[0].names[0], y[0].shape[y[0].names[0]])] + list(x.ravel()) + list(y), man_grad=[0] + list(deriv_x[i]) + list(deriv_y[i])))
+        result.append(derived_observable(lambda my_var, **kwargs: (my_var[0] + np.finfo(np.float64).eps) / (x.ravel()[0].value + np.finfo(np.float64).eps) * out.beta[i], list(x.ravel()) + list(y), man_grad=list(deriv_x[i]) + list(deriv_y[i])))

     output.fit_parameters = result + const_par

     output.odr_chisquare = odr_chisquare(np.concatenate((out.beta, out.xplus.ravel())))
     output.dof = x.shape[-1] - n_parms
+    output.p_value = 1 - chi2.cdf(output.odr_chisquare, output.dof)

     return output

@@ -329,7 +333,7 @@ def _prior_fit(x, y, func, priors, silent=False, **kwargs):
     for i in range(100):
         try:
             func(np.arange(i), 0)
-        except:
+        except Exception:
             pass
         else:
             break
@@ -353,7 +357,7 @@ def _prior_fit(x, y, func, priors, silent=False, **kwargs):
             loc_priors.append(i_prior)
         else:
             loc_val, loc_dval = extract_val_and_dval(i_prior)
-            loc_priors.append(pseudo_Obs(loc_val, loc_dval, 'p' + str(i_n)))
+            loc_priors.append(cov_Obs(loc_val, loc_dval ** 2, '#prior' + str(i_n) + f"_{np.random.randint(2147483647):010d}"))

     output.priors = loc_priors

@@ -387,13 +391,15 @@ def _prior_fit(x, y, func, priors, silent=False, **kwargs):
     if not silent:
         print('Method: migrad')

-    m = iminuit.Minuit.from_array_func(chisqfunc, x0, error=np.asarray(x0) * 0.01, errordef=1, print_level=0)
+    m = iminuit.Minuit(chisqfunc, x0)
+    m.errordef = 1
+    m.print_level = 0
     if 'tol' in kwargs:
         m.tol = kwargs.get('tol')
     else:
         m.tol = 1e-4
     m.migrad()
-    params = np.asarray(m.values.values())
+    params = np.asarray(m.values)

     output.chisquare_by_dof = m.fval / len(x)
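For readers tracking the iminuit migration above: the new calls follow the iminuit 2.x style, where errordef and print_level are attributes instead of constructor keywords. A hedged sketch of the same pattern outside pyerrors (toy chi-square, made-up minimum):

    import numpy as np
    import iminuit

    def chisq(p):  # stands in for the fit's chisqfunc, takes a single parameter array
        return (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2

    m = iminuit.Minuit(chisq, (0.0, 0.0))
    m.errordef = 1
    m.migrad()
    print(np.asarray(m.values), m.fmin.is_valid)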
@@ -402,7 +408,7 @@ def _prior_fit(x, y, func, priors, silent=False, **kwargs):
     if not silent:
         print('chisquare/d.o.f.:', output.chisquare_by_dof)

-    if not m.get_fmin().is_valid:
+    if not m.fmin.is_valid:
         raise Exception('The minimization procedure did not converge.')

     hess_inv = np.linalg.pinv(jacobian(jacobian(chisqfunc))(params))
@@ -418,7 +424,7 @@ def _prior_fit(x, y, func, priors, silent=False, **kwargs):
     result = []
     for i in range(n_parms):
-        result.append(derived_observable(lambda x, **kwargs: x[0], [pseudo_Obs(params[i], 0.0, y[0].names[0], y[0].shape[y[0].names[0]])] + list(y) + list(loc_priors), man_grad=[0] + list(deriv[i])))
+        result.append(derived_observable(lambda x, **kwargs: (x[0] + np.finfo(np.float64).eps) / (y[0].value + np.finfo(np.float64).eps) * params[i], list(y) + list(loc_priors), man_grad=list(deriv[i])))

     output.fit_parameters = result
     output.chisquare = chisqfunc(np.asarray(params))
@@ -468,7 +474,7 @@ def _standard_fit(x, y, func, silent=False, **kwargs):
     for i in range(25):
         try:
             func(np.arange(i), x.T[0])
-        except:
+        except Exception:
             pass
         else:
             break
@@ -612,12 +618,13 @@ def _standard_fit(x, y, func, silent=False, **kwargs):
     result = []
     for i in range(n_parms):
-        result.append(derived_observable(lambda x, **kwargs: x[0], [pseudo_Obs(fit_result.x[i], 0.0, y[0].names[0], y[0].shape[y[0].names[0]])] + list(y), man_grad=[0] + list(deriv[i])))
+        result.append(derived_observable(lambda x, **kwargs: (x[0] + np.finfo(np.float64).eps) / (y[0].value + np.finfo(np.float64).eps) * fit_result.x[i], list(y), man_grad=list(deriv[i])))

     output.fit_parameters = result + const_par

     output.chisquare = chisqfunc(fit_result.x)
     output.dof = x.shape[-1] - n_parms
+    output.p_value = 1 - chi2.cdf(output.chisquare, output.dof)

     if kwargs.get('resplot') is True:
         residual_plot(x, y, func, result)
@@ -646,10 +653,10 @@ def fit_lin(x, y, **kwargs):
         return y

     if all(isinstance(n, Obs) for n in x):
-        out = odr_fit(x, y, f, **kwargs)
+        out = total_least_squares(x, y, f, **kwargs)
         return out.fit_parameters
     elif all(isinstance(n, float) or isinstance(n, int) for n in x) or isinstance(x, np.ndarray):
-        out = standard_fit(x, y, f, **kwargs)
+        out = least_squares(x, y, f, **kwargs)
         return out.fit_parameters
     else:
         raise Exception('Unsupported types for x')
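A usage sketch for fit_lin after this rename (y built from pseudo-observables with made-up values; fit_lin dispatches to least_squares for plain-float x):

    import numpy as np
    import pyerrors as pe

    x = np.arange(4)
    y = [pe.pseudo_Obs(0.9 + 0.5 * i, 0.05, 'ensemble1') for i in x]  # fake linear data
    intercept, slope = pe.fits.fit_lin(x, y)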
@@ -678,7 +685,7 @@ def qqplot(x, o_y, func, p):
     plt.xlabel('Theoretical quantiles')
     plt.ylabel('Ordered Values')
     plt.legend()
-    plt.show()
+    plt.draw()


 def residual_plot(x, y, func, fit_res):
@@ -702,12 +709,12 @@ def residual_plot(x, y, func, fit_res):
     ax1.plot(x, residuals, 'ko', ls='none', markersize=5)
     ax1.tick_params(direction='out')
     ax1.tick_params(axis="x", bottom=True, top=True, labelbottom=True)
-    ax1.axhline(y=0.0, ls='--', color='k')
+    ax1.axhline(y=0.0, ls='--', color='k', marker=" ")
     ax1.fill_between(x_samples, -1.0, 1.0, alpha=0.1, facecolor='k')
     ax1.set_xlim([xstart, xstop])
     ax1.set_ylabel('Residuals')
     plt.subplots_adjust(wspace=None, hspace=None)
-    plt.show()
+    plt.draw()


 def covariance_matrix(y):
@@ -741,148 +748,41 @@ def error_band(x, func, beta):
     return err


-def ks_test(obs=None):
-    """Performs a Kolmogorov–Smirnov test for the Q-values of all fit object.
+def ks_test(objects=None):
+    """Performs a Kolmogorov–Smirnov test for the p-values of all fit objects.

     If no list is given all Obs in memory are used.

-    Disclaimer: The determination of the individual Q-values as well as this function have not been tested yet.
+    Parameters
+    ----------
+    objects : list
+        List of fit results to include in the analysis (optional).
     """

-    raise Exception('Not yet implemented')
-
-    if obs is None:
+    if objects is None:
         obs_list = []
         for obj in gc.get_objects():
-            if isinstance(obj, Obs):
+            if isinstance(obj, Fit_result):
                 obs_list.append(obj)
     else:
-        obs_list = obs
+        obs_list = objects

-    # TODO: Rework to apply to Q-values of all fits in memory
-    Qs = []
-    for obs_i in obs_list:
-        for ens in obs_i.e_names:
-            if obs_i.e_Q[ens] is not None:
-                Qs.append(obs_i.e_Q[ens])
+    p_values = [o.p_value for o in obs_list]

-    bins = len(Qs)
+    bins = len(p_values)
     x = np.arange(0, 1.001, 0.001)
     plt.plot(x, x, 'k', zorder=1)
     plt.xlim(0, 1)
     plt.ylim(0, 1)
-    plt.xlabel('Q value')
+    plt.xlabel('p-value')
     plt.ylabel('Cumulative probability')
-    plt.title(str(bins) + ' Q values')
+    plt.title(str(bins) + ' p-values')

     n = np.arange(1, bins + 1) / np.float64(bins)
-    Xs = np.sort(Qs)
+    Xs = np.sort(p_values)
     plt.step(Xs, n)
     diffs = n - Xs
     loc_max_diff = np.argmax(np.abs(diffs))
     loc = Xs[loc_max_diff]
-    plt.annotate(s='', xy=(loc, loc), xytext=(loc, loc + diffs[loc_max_diff]), arrowprops=dict(arrowstyle='<->', shrinkA=0, shrinkB=0))
-    plt.show()
+    plt.annotate('', xy=(loc, loc), xytext=(loc, loc + diffs[loc_max_diff]), arrowprops=dict(arrowstyle='<->', shrinkA=0, shrinkB=0))
+    plt.draw()

-    print(scipy.stats.kstest(Qs, 'uniform'))
-
-
-def fit_general(x, y, func, silent=False, **kwargs):
-    """Performs a non-linear fit to y = func(x) and returns a list of Obs corresponding to the fit parameters.
-
-    Plausibility of the results should be checked. To control the numerical differentiation
-    the kwargs of numdifftools.step_generators.MaxStepGenerator can be used.
-
-    func has to be of the form
-
-    def func(a, x):
-        y = a[0] + a[1] * x + a[2] * np.sinh(x)
-        return y
-
-    y has to be a list of Obs, the dvalues of the Obs are used as yerror for the fit.
-    x can either be a list of floats in which case no xerror is assumed, or
-    a list of Obs, where the dvalues of the Obs are used as xerror for the fit.
-
-    Keyword arguments
-    -----------------
-    silent -- If true all output to the console is omitted (default False).
-    initial_guess -- can provide an initial guess for the input parameters. Relevant for non-linear fits
-                     with many parameters.
-    """
-
-    warnings.warn("New fit functions with exact error propagation are now available as alternative.", DeprecationWarning)
-
-    if not callable(func):
-        raise TypeError('func has to be a function.')
-
-    for i in range(10):
-        try:
-            func(np.arange(i), 0)
-        except:
-            pass
-        else:
-            break
-    n_parms = i
-    if not silent:
-        print('Fit with', n_parms, 'parameters')
-
-    global print_output, beta0
-    print_output = 1
-    if 'initial_guess' in kwargs:
-        beta0 = kwargs.get('initial_guess')
-        if len(beta0) != n_parms:
-            raise Exception('Initial guess does not have the correct length.')
-    else:
-        beta0 = np.arange(n_parms)
-
-    if len(x) != len(y):
-        raise Exception('x and y have to have the same length')
-
-    if all(isinstance(n, Obs) for n in x):
-        obs = x + y
-        x_constants = None
-        xerr = [o.dvalue for o in x]
-        yerr = [o.dvalue for o in y]
-    elif all(isinstance(n, float) or isinstance(n, int) for n in x) or isinstance(x, np.ndarray):
-        obs = y
-        x_constants = x
-        xerr = None
-        yerr = [o.dvalue for o in y]
-    else:
-        raise Exception('Unsupported types for x')
-
-    def do_the_fit(obs, **kwargs):
-
-        global print_output, beta0
-
-        func = kwargs.get('function')
-        yerr = kwargs.get('yerr')
-        length = len(yerr)
-
-        xerr = kwargs.get('xerr')
-
-        if length == len(obs):
-            assert 'x_constants' in kwargs
-            data = RealData(kwargs.get('x_constants'), obs, sy=yerr)
-            fit_type = 2
-        elif length == len(obs) // 2:
-            data = RealData(obs[:length], obs[length:], sx=xerr, sy=yerr)
-            fit_type = 0
-        else:
-            raise Exception('x and y do not fit together.')
-
-        model = Model(func)
-
-        odr = ODR(data, model, beta0, partol=np.finfo(np.float64).eps)
-        odr.set_job(fit_type=fit_type, deriv=1)
-        output = odr.run()
-        if print_output and not silent:
-            print(*output.stopreason)
-            print('chisquare/d.o.f.:', output.res_var)
-            print_output = 0
-        beta0 = output.beta
-        return output.beta[kwargs.get('n')]
-    res = []
-    for n in range(n_parms):
-        res.append(derived_observable(do_the_fit, obs, function=func, xerr=xerr, yerr=yerr, x_constants=x_constants, num_grad=True, n=n, **kwargs))
-    return res
+    print(scipy.stats.kstest(p_values, 'uniform'))
pyerrors/input/__init__.py

@@ -1,5 +1,6 @@
 from . import bdio
 from . import hadrons
-from . import sfcf
-from . import openQCD
+from . import json
+from . import misc
+from . import openQCD
+from . import sfcf
pyerrors/input/bdio.py

@@ -226,7 +226,7 @@ def write_ADerrors(obs_list, file_path, bdio_path='./libbdio.so', **kwargs):
     for key in keys:
         try:  # Try to convert key to integer
             ids.append(int(key))
-        except:  # If not possible construct a hash
+        except Exception:  # If not possible construct a hash
             ids.append(int(hashlib.sha256(key.encode('utf-8')).hexdigest(), 16) % 10 ** 8)
     print('ids', ids)
     nt = []
pyerrors/input/hadrons.py

@@ -1,25 +1,15 @@
-#!/usr/bin/env python
-# coding: utf-8
-
 import os
 import h5py
 import numpy as np
 from ..obs import Obs, CObs
 from ..correlators import Corr
-from ..npr import Npr_matrix


-def _get_files(path, filestem):
-    ls = []
-    for (dirpath, dirnames, filenames) in os.walk(path):
-        ls.extend(filenames)
-        break
+def _get_files(path, filestem, idl):
+    ls = os.listdir(path)

     # Clean up file list
-    files = []
-    for line in ls:
-        if line.startswith(filestem):
-            files.append(line)
+    files = list(filter(lambda x: x.startswith(filestem), ls))

     if not files:
         raise Exception('No files starting with', filestem, 'in folder', path)
@@ -30,18 +20,31 @@ def _get_files(path, filestem):
     # Sort according to configuration number
     files.sort(key=get_cnfg_number)

-    # Check that configurations are evenly spaced
     cnfg_numbers = []
+    filtered_files = []
     for line in files:
-        cnfg_numbers.append(get_cnfg_number(line))
+        no = get_cnfg_number(line)
+        if idl:
+            if no in list(idl):
+                filtered_files.append(line)
+                cnfg_numbers.append(no)
+        else:
+            filtered_files.append(line)
+            cnfg_numbers.append(no)

-    if not all(np.diff(cnfg_numbers) == np.diff(cnfg_numbers)[0]):
+    # Check that configurations are evenly spaced
+    dc = np.unique(np.diff(cnfg_numbers))
+    if np.any(dc < 0):
+        raise Exception("Unsorted files")
+    if len(dc) == 1:
+        idx = range(cnfg_numbers[0], cnfg_numbers[-1] + dc[0], dc[0])
+    else:
         raise Exception('Configurations are not evenly spaced.')

-    return files, cnfg_numbers
+    return filtered_files, idx


-def read_meson_hd5(path, filestem, ens_id, meson='meson_0', tree='meson'):
+def read_meson_hd5(path, filestem, ens_id, meson='meson_0', tree='meson', idl=None):
     """Read hadrons meson hdf5 file and extract the meson labeled 'meson'

     Parameters
@@ -55,14 +58,13 @@ def read_meson_hd5(path, filestem, ens_id, meson='meson_0', tree='meson'):
     meson : str
         label of the meson to be extracted, standard value meson_0 which
         corresponds to the pseudoscalar pseudoscalar two-point function.
-    tree : str
-        Label of the upmost directory in the hdf5 file, default 'meson'
-        for outputs of the Meson module. Can be altered to read input
-        from other modules with similar structures.
+    idl : range
+        If specified only configurations in the given range are read in.
     """

-    files, cnfg_numbers = _get_files(path, filestem)
+    files, idx = _get_files(path, filestem, idl)

+    tree = meson.rsplit('_')[0]
     corr_data = []
     infos = []
     for hd5_file in files:
@@ -78,27 +80,69 @@ def read_meson_hd5(path, filestem, ens_id, meson='meson_0', tree='meson'):
     l_obs = []
     for c in corr_data.T:
-        l_obs.append(Obs([c], [ens_id], idl=[cnfg_numbers]))
+        l_obs.append(Obs([c], [ens_id], idl=[idx]))

     corr = Corr(l_obs)
     corr.tag = r", ".join(infos)
     return corr
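A hedged usage sketch of the new idl filtering (path, file stem and ensemble name are placeholders):

    corr = read_meson_hd5('/path/to/data', 'meson_prefix', 'ensembleA',
                          idl=range(100, 500, 10))  # only configs 100, 110, ..., 490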
-def read_ExternalLeg_hd5(path, filestem, ens_id, order='F'):
+class Npr_matrix(np.ndarray):
+
+    def __new__(cls, input_array, mom_in=None, mom_out=None):
+        obj = np.asarray(input_array).view(cls)
+        obj.mom_in = mom_in
+        obj.mom_out = mom_out
+        return obj
+
+    @property
+    def g5H(self):
+        """Gamma_5 hermitean conjugate
+
+        Uses the fact that the propagator is gamma5 hermitean, so just the
+        in and out momenta of the propagator are exchanged.
+        """
+        return Npr_matrix(self,
+                          mom_in=self.mom_out,
+                          mom_out=self.mom_in)
+
+    def _propagate_mom(self, other, name):
+        s_mom = getattr(self, name, None)
+        o_mom = getattr(other, name, None)
+        if s_mom is not None and o_mom is not None:
+            if not np.allclose(s_mom, o_mom):
+                raise Exception(name + ' does not match.')
+        return o_mom if o_mom is not None else s_mom
+
+    def __matmul__(self, other):
+        return self.__new__(Npr_matrix,
+                            super().__matmul__(other),
+                            self._propagate_mom(other, 'mom_in'),
+                            self._propagate_mom(other, 'mom_out'))
+
+    def __array_finalize__(self, obj):
+        if obj is None:
+            return
+        self.mom_in = getattr(obj, 'mom_in', None)
+        self.mom_out = getattr(obj, 'mom_out', None)
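A sketch of the momentum bookkeeping the new Npr_matrix provides (toy 2x2 data, made-up momenta):

    import numpy as np

    a = Npr_matrix(np.identity(2), mom_in=[1, 0, 0, 0], mom_out=[0, 1, 0, 0])
    b = a.g5H   # in and out momenta swapped
    c = a @ a   # matmul keeps consistent momenta, raises on mismatch
    print(b.mom_in, c.mom_out)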
+def read_ExternalLeg_hd5(path, filestem, ens_id, idl=None):
     """Read hadrons ExternalLeg hdf5 file and output an array of CObs

     Parameters
-    -----------------
-    path -- path to the files to read
-    filestem -- namestem of the files to read
-    ens_id -- name of the ensemble, required for internal bookkeeping
-    order -- order in which the array is to be reshaped,
-             'F' for the first index changing fastest (9 4x4 matrices) default.
-             'C' for the last index changing fastest (16 3x3 matrices),
+    ----------
+    path : str
+        path to the files to read
+    filestem : str
+        namestem of the files to read
+    ens_id : str
+        name of the ensemble, required for internal bookkeeping
+    idl : range
+        If specified only configurations in the given range are read in.
     """

-    files, cnfg_numbers = _get_files(path, filestem)
+    files, idx = _get_files(path, filestem, idl)

     mom = None

@@ -107,9 +151,7 @@ def read_ExternalLeg_hd5(path, filestem, ens_id, idl=None):
         file = h5py.File(path + '/' + hd5_file, "r")
         raw_data = file['ExternalLeg/corr'][0][0].view('complex')
         corr_data.append(raw_data)
-        if mom is not None:
-            assert np.allclose(mom, np.array(str(file['ExternalLeg/info'].attrs['pIn'])[3:-2].strip().split(' '), dtype=int))
-        else:
+        if mom is None:
             mom = np.array(str(file['ExternalLeg/info'].attrs['pIn'])[3:-2].strip().split(' '), dtype=int)
         file.close()
     corr_data = np.array(corr_data)
@@ -118,28 +160,29 @@ def read_ExternalLeg_hd5(path, filestem, ens_id, idl=None):
     matrix = np.empty((rolled_array.shape[:-1]), dtype=object)
     for si, sj, ci, cj in np.ndindex(rolled_array.shape[:-1]):
-        real = Obs([rolled_array[si, sj, ci, cj].real], [ens_id], idl=[cnfg_numbers])
-        imag = Obs([rolled_array[si, sj, ci, cj].imag], [ens_id], idl=[cnfg_numbers])
+        real = Obs([rolled_array[si, sj, ci, cj].real], [ens_id], idl=[idx])
+        imag = Obs([rolled_array[si, sj, ci, cj].imag], [ens_id], idl=[idx])
         matrix[si, sj, ci, cj] = CObs(real, imag)
         matrix[si, sj, ci, cj].gamma_method()

-    return Npr_matrix(matrix.swapaxes(1, 2).reshape((12, 12), order=order), mom_in=mom)
+    return Npr_matrix(matrix, mom_in=mom)


-def read_Bilinear_hd5(path, filestem, ens_id, order='F'):
+def read_Bilinear_hd5(path, filestem, ens_id, idl=None):
     """Read hadrons Bilinear hdf5 file and output an array of CObs

     Parameters
-    -----------------
-    path -- path to the files to read
-    filestem -- namestem of the files to read
-    ens_id -- name of the ensemble, required for internal bookkeeping
-    order -- order in which the array is to be reshaped,
-             'F' for the first index changing fastest (9 4x4 matrices) default.
-             'C' for the last index changing fastest (16 3x3 matrices),
+    ----------
+    path : str
+        path to the files to read
+    filestem : str
+        namestem of the files to read
+    ens_id : str
+        name of the ensemble, required for internal bookkeeping
+    idl : range
+        If specified only configurations in the given range are read in.
     """

-    files, cnfg_numbers = _get_files(path, filestem)
+    files, idx = _get_files(path, filestem, idl)

     mom_in = None
     mom_out = None

@@ -153,13 +196,9 @@ def read_Bilinear_hd5(path, filestem, ens_id, idl=None):
             corr_data[name] = []
             raw_data = file['Bilinear/Bilinear_' + str(i) + '/corr'][0][0].view('complex')
             corr_data[name].append(raw_data)
-            if mom_in is not None:
-                assert np.allclose(mom_in, np.array(str(file['Bilinear/Bilinear_' + str(i) + '/info'].attrs['pIn'])[3:-2].strip().split(' '), dtype=int))
-            else:
+            if mom_in is None:
                 mom_in = np.array(str(file['Bilinear/Bilinear_' + str(i) + '/info'].attrs['pIn'])[3:-2].strip().split(' '), dtype=int)
-            if mom_out is not None:
-                assert np.allclose(mom_out, np.array(str(file['Bilinear/Bilinear_' + str(i) + '/info'].attrs['pOut'])[3:-2].strip().split(' '), dtype=int))
-            else:
+            if mom_out is None:
                 mom_out = np.array(str(file['Bilinear/Bilinear_' + str(i) + '/info'].attrs['pOut'])[3:-2].strip().split(' '), dtype=int)

         file.close()
@@ -173,11 +212,117 @@ def read_Bilinear_hd5(path, filestem, ens_id, idl=None):
         matrix = np.empty((rolled_array.shape[:-1]), dtype=object)
         for si, sj, ci, cj in np.ndindex(rolled_array.shape[:-1]):
-            real = Obs([rolled_array[si, sj, ci, cj].real], [ens_id], idl=[cnfg_numbers])
-            imag = Obs([rolled_array[si, sj, ci, cj].imag], [ens_id], idl=[cnfg_numbers])
+            real = Obs([rolled_array[si, sj, ci, cj].real], [ens_id], idl=[idx])
+            imag = Obs([rolled_array[si, sj, ci, cj].imag], [ens_id], idl=[idx])
             matrix[si, sj, ci, cj] = CObs(real, imag)
             matrix[si, sj, ci, cj].gamma_method()

-        result_dict[key] = Npr_matrix(matrix.swapaxes(1, 2).reshape((12, 12), order=order), mom_in=mom_in, mom_out=mom_out)
+        result_dict[key] = Npr_matrix(matrix, mom_in=mom_in, mom_out=mom_out)

     return result_dict


+def read_Fourquark_hd5(path, filestem, ens_id, idl=None, vertices=["VA", "AV"]):
+    """Read hadrons FourquarkFullyConnected hdf5 file and output an array of CObs
+
+    Parameters
+    ----------
+    path : str
+        path to the files to read
+    filestem : str
+        namestem of the files to read
+    ens_id : str
+        name of the ensemble, required for internal bookkeeping
+    idl : range
+        If specified only configurations in the given range are read in.
+    vertices : list
+        Vertex functions to be extracted.
+    """
+
+    files, idx = _get_files(path, filestem, idl)
+
+    mom_in = None
+    mom_out = None
+
+    vertex_names = []
+    for vertex in vertices:
+        vertex_names += _get_lorentz_names(vertex)
+
+    corr_data = {}
+
+    tree = 'FourQuarkFullyConnected/FourQuarkFullyConnected_'
+
+    for hd5_file in files:
+        file = h5py.File(path + '/' + hd5_file, "r")
+
+        for i in range(32):
+            name = (file[tree + str(i) + '/info'].attrs['gammaA'][0].decode('UTF-8'), file[tree + str(i) + '/info'].attrs['gammaB'][0].decode('UTF-8'))
+            if name in vertex_names:
+                if name not in corr_data:
+                    corr_data[name] = []
+                raw_data = file[tree + str(i) + '/corr'][0][0].view('complex')
+                corr_data[name].append(raw_data)
+                if mom_in is None:
+                    mom_in = np.array(str(file[tree + str(i) + '/info'].attrs['pIn'])[3:-2].strip().split(' '), dtype=int)
+                if mom_out is None:
+                    mom_out = np.array(str(file[tree + str(i) + '/info'].attrs['pOut'])[3:-2].strip().split(' '), dtype=int)
+
+        file.close()
+
+    intermediate_dict = {}
+
+    for vertex in vertices:
+        lorentz_names = _get_lorentz_names(vertex)
+        for v_name in lorentz_names:
+            if vertex not in intermediate_dict:
+                intermediate_dict[vertex] = np.array(corr_data[v_name])
+            else:
+                intermediate_dict[vertex] += np.array(corr_data[v_name])
+
+    result_dict = {}
+
+    for key, data in intermediate_dict.items():
+
+        rolled_array = np.moveaxis(data, 0, 8)
+
+        matrix = np.empty((rolled_array.shape[:-1]), dtype=object)
+        for index in np.ndindex(rolled_array.shape[:-1]):
+            real = Obs([rolled_array[index].real], [ens_id], idl=[idx])
+            imag = Obs([rolled_array[index].imag], [ens_id], idl=[idx])
+            matrix[index] = CObs(real, imag)
+
+        result_dict[key] = Npr_matrix(matrix, mom_in=mom_in, mom_out=mom_out)
+
+    return result_dict
+
+
+def _get_lorentz_names(name):
+    assert len(name) == 2
+
+    res = []
+
+    if not set(name) <= set(['S', 'P', 'V', 'A', 'T']):
+        raise Exception("Name can only contain 'S', 'P', 'V', 'A' or 'T'")
+
+    if 'S' in name or 'P' in name:
+        if not set(name) <= set(['S', 'P']):
+            raise Exception("'" + name + "' is not a Lorentz scalar")
+
+        g_names = {'S': 'Identity',
+                   'P': 'Gamma5'}
+
+        res.append((g_names[name[0]], g_names[name[1]]))
+
+    elif 'T' in name:
+        if not set(name) <= set(['T']):
+            raise Exception("'" + name + "' is not a Lorentz scalar")
+        raise Exception("Tensor operators not yet implemented.")
+    else:
+        if not set(name) <= set(['V', 'A']):
+            raise Exception("'" + name + "' is not a Lorentz scalar")
+        lorentz_index = ['X', 'Y', 'Z', 'T']
+
+        for ind in lorentz_index:
+            res.append(('Gamma' + ind + (name[0] == 'A') * 'Gamma5',
+                        'Gamma' + ind + (name[1] == 'A') * 'Gamma5'))

+    return res
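For orientation, what `_get_lorentz_names` returns for one of the default vertices, derived by following the V/A branch above:

    _get_lorentz_names('VA')
    # [('GammaX', 'GammaXGamma5'), ('GammaY', 'GammaYGamma5'),
    #  ('GammaZ', 'GammaZGamma5'), ('GammaT', 'GammaTGamma5')]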
477  pyerrors/input/json.py  Normal file

@@ -0,0 +1,477 @@
import json
import gzip
import numpy as np
import getpass
import socket
import datetime
import platform
import warnings
from ..obs import Obs
from ..covobs import Covobs
from ..correlators import Corr
from ..misc import _assert_equal_properties
from .. import version as pyerrorsversion


def create_json_string(ol, description='', indent=1):
    """Generate the string for the export of a list of Obs or structures containing Obs
    to a .json(.gz) file

    Parameters
    ----------
    ol : list
        List of objects that will be exported. At the moment, these objects can be
        either of: Obs, list, numpy.ndarray, Corr.
        All Obs inside a structure have to be defined on the same set of configurations.
    description : str
        Optional string that describes the contents of the json file.
    indent : int
        Specify the indentation level of the json file. None or 0 is permissible and
        saves disk space.
    """

    def _default(self, obj):
        return str(obj)
    my_encoder = json.JSONEncoder
    _default.default = json.JSONEncoder().default
    my_encoder.default = _default

    class Deltalist:
        def __init__(self, li):
            self.cnfg = li[0]
            self.deltas = li[1:]

        def __repr__(self):
            s = '[%d' % (self.cnfg)
            for d in self.deltas:
                s += ', %1.15e' % (d)
            s += ']'
            return s

        def __str__(self):
            return self.__repr__()

    class Floatlist:
        def __init__(self, li):
            self.li = list(li)

        def __repr__(self):
            s = '['
            for i in range(len(self.li)):
                if i > 0:
                    s += ', '
                s += '%1.15e' % (self.li[i])
            s += ']'
            return s

        def __str__(self):
            return self.__repr__()

    def _gen_data_d_from_list(ol):
        dl = []
        for name in ol[0].mc_names:
            ed = {}
            ed['id'] = name
            ed['replica'] = []
            for r_name in ol[0].e_content[name]:
                rd = {}
                rd['name'] = r_name
                if ol[0].is_merged.get(r_name, False):
                    rd['is_merged'] = True
                rd['deltas'] = []
                for i in range(len(ol[0].idl[r_name])):
                    rd['deltas'].append([ol[0].idl[r_name][i]])
                    for o in ol:
                        rd['deltas'][-1].append(o.deltas[r_name][i])
                    rd['deltas'][-1] = Deltalist(rd['deltas'][-1])
                ed['replica'].append(rd)
            dl.append(ed)
        return dl

    def _gen_cdata_d_from_list(ol):
        dl = []
        for name in ol[0].cov_names:
            ed = {}
            ed['id'] = name
            ed['layout'] = str(ol[0].covobs[name].cov.shape).lstrip('(').rstrip(')').rstrip(',')
            ed['cov'] = Floatlist(np.ravel(ol[0].covobs[name].cov))
            ncov = ol[0].covobs[name].cov.shape[0]
            ed['grad'] = []
            for i in range(ncov):
                ed['grad'].append([])
                for o in ol:
                    ed['grad'][-1].append(o.covobs[name].grad[i][0])
                ed['grad'][-1] = Floatlist(ed['grad'][-1])
            dl.append(ed)
        return dl

    def write_Obs_to_dict(o):
        d = {}
        d['type'] = 'Obs'
        d['layout'] = '1'
        if o.tag:
            d['tag'] = [o.tag]
        if o.reweighted:
            d['reweighted'] = o.reweighted
        d['value'] = [o.value]
        data = _gen_data_d_from_list([o])
        if len(data) > 0:
            d['data'] = data
        cdata = _gen_cdata_d_from_list([o])
        if len(cdata) > 0:
            d['cdata'] = cdata
        return d

    def write_List_to_dict(ol):
        _assert_equal_properties(ol)
        d = {}
        d['type'] = 'List'
        d['layout'] = '%d' % len(ol)
        taglist = [o.tag for o in ol]
        if np.any([tag is not None for tag in taglist]):
            d['tag'] = taglist
        if ol[0].reweighted:
            d['reweighted'] = ol[0].reweighted
        d['value'] = [o.value for o in ol]
        data = _gen_data_d_from_list(ol)
        if len(data) > 0:
            d['data'] = data
        cdata = _gen_cdata_d_from_list(ol)
        if len(cdata) > 0:
            d['cdata'] = cdata
        return d

    def write_Array_to_dict(oa):
        ol = np.ravel(oa)
        _assert_equal_properties(ol)
        d = {}
        d['type'] = 'Array'
        d['layout'] = str(oa.shape).lstrip('(').rstrip(')').rstrip(',')
        taglist = [o.tag for o in ol]
        if np.any([tag is not None for tag in taglist]):
            d['tag'] = taglist
        if ol[0].reweighted:
            d['reweighted'] = ol[0].reweighted
        d['value'] = [o.value for o in ol]
        data = _gen_data_d_from_list(ol)
        if len(data) > 0:
            d['data'] = data
        cdata = _gen_cdata_d_from_list(ol)
        if len(cdata) > 0:
            d['cdata'] = cdata
        return d

    def write_Corr_to_dict(my_corr):
        front_padding = next(i for i, j in enumerate(my_corr.content) if np.all(j))
        back_padding_start = front_padding + next((i for i, j in enumerate(my_corr.content[front_padding:]) if not np.all(j)), my_corr.T)
        dat = write_Array_to_dict(np.array(my_corr.content[front_padding:back_padding_start]))
        dat['type'] = 'Corr'
        corr_meta_data = str(front_padding) + '|' + str(my_corr.T - back_padding_start) + '|' + str(my_corr.tag)
        if 'tag' in dat.keys():
            dat['tag'].append(corr_meta_data)
        else:
            dat['tag'] = [corr_meta_data]
        return dat

    if not isinstance(ol, list):
        ol = [ol]

    d = {}
    d['program'] = 'pyerrors %s' % (pyerrorsversion.__version__)
    d['version'] = '0.1'
    d['who'] = getpass.getuser()
    d['date'] = datetime.datetime.now().astimezone().strftime('%Y-%m-%d %H:%M:%S %z')
    d['host'] = socket.gethostname() + ', ' + platform.platform()

    if description:
        d['description'] = description
    d['obsdata'] = []
    for io in ol:
        if isinstance(io, Obs):
            d['obsdata'].append(write_Obs_to_dict(io))
        elif isinstance(io, list):
            d['obsdata'].append(write_List_to_dict(io))
        elif isinstance(io, np.ndarray):
            d['obsdata'].append(write_Array_to_dict(io))
        elif isinstance(io, Corr):
            d['obsdata'].append(write_Corr_to_dict(io))
        else:
            raise Exception("Unknown datatype.")

    jsonstring = json.dumps(d, indent=indent, cls=my_encoder, ensure_ascii=False)

    def remove_quotationmarks(s):
        """Workaround for un-quoting of delta lists, adds 5% of work
        but is safe, compared to a simple replace that could destroy the structure
        """
        deltas = False
        split = s.split('\n')
        for i in range(len(split)):
            if '"deltas":' in split[i] or '"cov":' in split[i] or '"grad":' in split[i]:
                deltas = True
            if deltas:
                split[i] = split[i].replace('"[', '[').replace(']"', ']')
                if split[i][-1] == ']':
                    deltas = False
        return '\n'.join(split)

    jsonstring = remove_quotationmarks(jsonstring)
    return jsonstring


def dump_to_json(ol, fname, description='', indent=1, gz=True):
    """Export a list of Obs or structures containing Obs to a .json(.gz) file

    Parameters
    ----------
    ol : list
        List of objects that will be exported. At the moment, these objects can be
        either of: Obs, list, numpy.ndarray, Corr.
        All Obs inside a structure have to be defined on the same set of configurations.
    fname : str
        Filename of the output file.
    description : str
        Optional string that describes the contents of the json file.
    indent : int
        Specify the indentation level of the json file. None or 0 is permissible and
        saves disk space.
    gz : bool
        If True, the output is a gzipped json. If False, the output is a json file.
    """

    jsonstring = create_json_string(ol, description, indent)

    if not fname.endswith('.json') and not fname.endswith('.gz'):
        fname += '.json'

    if gz:
        if not fname.endswith('.gz'):
            fname += '.gz'

        fp = gzip.open(fname, 'wb')
        fp.write(jsonstring.encode('utf-8'))
    else:
        fp = open(fname, 'w', encoding='utf-8')
        fp.write(jsonstring)
    fp.close()


def import_json_string(json_string, verbose=True, full_output=False):
    """Reconstruct a list of Obs or structures containing Obs from a json string.

    The following structures are supported: Obs, list, numpy.ndarray, Corr
    If the list contains only one element, it is unpacked from the list.

    Parameters
    ----------
    json_string : str
        json string containing the data.
    verbose : bool
        Print additional information that was written to the file.
    full_output : bool
        If True, a dict containing auxiliary information and the data is returned.
        If False, only the data is returned.
    """

    def _gen_obsd_from_datad(d):
        retd = {}
        if d:
            retd['names'] = []
            retd['idl'] = []
            retd['deltas'] = []
            retd['is_merged'] = {}
            for ens in d:
                for rep in ens['replica']:
                    retd['names'].append(rep['name'])
                    retd['idl'].append([di[0] for di in rep['deltas']])
                    retd['deltas'].append(np.array([di[1:] for di in rep['deltas']]))
                    retd['is_merged'][rep['name']] = rep.get('is_merged', False)
        return retd

    def _gen_covobsd_from_cdatad(d):
        retd = {}
        for ens in d:
            retl = []
            name = ens['id']
            layouts = ens.get('layout', '1').strip()
            layout = [int(ls.strip()) for ls in layouts.split(',') if len(ls) > 0]
            cov = np.reshape(ens['cov'], layout)
            grad = ens['grad']
            nobs = len(grad[0])
            for i in range(nobs):
                retl.append({'name': name, 'cov': cov, 'grad': [g[i] for g in grad]})
            retd[name] = retl
        return retd

    def get_Obs_from_dict(o):
        layouts = o.get('layout', '1').strip()
        if layouts != '1':
            raise Exception("layout is %s, has to be 1 for type Obs." % (layouts), RuntimeWarning)

        values = o['value']
        od = _gen_obsd_from_datad(o.get('data', {}))
        cd = _gen_covobsd_from_cdatad(o.get('cdata', {}))

        if od:
            ret = Obs([[ddi[0] + values[0] for ddi in di] for di in od['deltas']], od['names'], idl=od['idl'])
            ret.is_merged = od['is_merged']
        else:
            ret = Obs([], [])
            ret._value = values[0]
        for name in cd:
            co = cd[name][0]
            ret._covobs[name] = Covobs(None, co['cov'], co['name'], grad=co['grad'])
            ret.names.append(co['name'])

        ret.reweighted = o.get('reweighted', False)
        ret.tag = o.get('tag', [None])[0]
        return ret

    def get_List_from_dict(o):
        layouts = o.get('layout', '1').strip()
        layout = int(layouts)
        values = o['value']
        od = _gen_obsd_from_datad(o.get('data', {}))
        cd = _gen_covobsd_from_cdatad(o.get('cdata', {}))

        ret = []
        taglist = o.get('tag', layout * [None])
        for i in range(layout):
            if od:
                ret.append(Obs([list(di[:, i] + values[i]) for di in od['deltas']], od['names'], idl=od['idl']))
                ret[-1].is_merged = od['is_merged']
            else:
                ret.append(Obs([], []))
                ret[-1]._value = values[i]
                print('Created Obs with means= ', values[i])
            for name in cd:
                co = cd[name][i]
                ret[-1]._covobs[name] = Covobs(None, co['cov'], co['name'], grad=co['grad'])
                ret[-1].names.append(co['name'])

            ret[-1].reweighted = o.get('reweighted', False)
            ret[-1].tag = taglist[i]
        return ret

    def get_Array_from_dict(o):
        layouts = o.get('layout', '1').strip()
        layout = [int(ls.strip()) for ls in layouts.split(',') if len(ls) > 0]
        N = np.prod(layout)
        values = o['value']
        od = _gen_obsd_from_datad(o.get('data', {}))
        cd = _gen_covobsd_from_cdatad(o.get('cdata', {}))

        ret = []
        taglist = o.get('tag', N * [None])
        for i in range(N):
            if od:
                ret.append(Obs([di[:, i] + values[i] for di in od['deltas']], od['names'], idl=od['idl']))
                ret[-1].is_merged = od['is_merged']
            else:
                ret.append(Obs([], []))
                ret[-1]._value = values[i]
            for name in cd:
                co = cd[name][i]
                ret[-1]._covobs[name] = Covobs(None, co['cov'], co['name'], grad=co['grad'])
                ret[-1].names.append(co['name'])
            ret[-1].reweighted = o.get('reweighted', False)
            ret[-1].tag = taglist[i]
        return np.reshape(ret, layout)

    def get_Corr_from_dict(o):
        taglist = o.get('tag')
        corr_meta_data = taglist[-1].split('|')
        padding_front = int(corr_meta_data[0])
        padding_back = int(corr_meta_data[1])
        corr_tag = corr_meta_data[2]
        tmp_o = o
        tmp_o['tag'] = taglist[:-1]
        if len(tmp_o['tag']) == 0:
            del tmp_o['tag']
        dat = get_Array_from_dict(tmp_o)
        my_corr = Corr(list(dat), padding=[padding_front, padding_back])
        if corr_tag != 'None':
            my_corr.tag = corr_tag
        return my_corr

    json_dict = json.loads(json_string)

    prog = json_dict.get('program', '')
    version = json_dict.get('version', '')
    who = json_dict.get('who', '')
    date = json_dict.get('date', '')
    host = json_dict.get('host', '')
    if prog and verbose:
        print('Data has been written using %s.' % (prog))
    if version and verbose:
        print('Format version %s' % (version))
    if np.any([who, date, host]) and verbose:
        print('Written by %s on %s on host %s' % (who, date, host))
    description = json_dict.get('description', '')
    if description and verbose:
        print()
        print('Description: ', description)
    obsdata = json_dict['obsdata']
    ol = []
    for io in obsdata:
        if io['type'] == 'Obs':
            ol.append(get_Obs_from_dict(io))
        elif io['type'] == 'List':
            ol.append(get_List_from_dict(io))
        elif io['type'] == 'Array':
            ol.append(get_Array_from_dict(io))
        elif io['type'] == 'Corr':
            ol.append(get_Corr_from_dict(io))
        else:
            raise Exception("Unknown datatype.")

    if full_output:
        retd = {}
        retd['program'] = prog
        retd['version'] = version
        retd['who'] = who
        retd['date'] = date
        retd['host'] = host
        retd['description'] = description
        retd['obsdata'] = ol

        return retd
    else:
        if len(obsdata) == 1:
            ol = ol[0]

        return ol


def load_json(fname, verbose=True, gz=True, full_output=False):
    """Import a list of Obs or structures containing Obs from a .json.gz file.

    The following structures are supported: Obs, list, numpy.ndarray, Corr
    If the list contains only one element, it is unpacked from the list.

    Parameters
    ----------
    fname : str
        Filename of the input file.
    verbose : bool
        Print additional information that was written to the file.
    gz : bool
        If True, assumes that data is gzipped. If False, assumes JSON file.
    full_output : bool
        If True, a dict containing auxiliary information and the data is returned.
        If False, only the data is returned.
    """
    if not fname.endswith('.json') and not fname.endswith('.gz'):
        fname += '.json'
    if gz:
        if not fname.endswith('.gz'):
            fname += '.gz'
        with gzip.open(fname, 'r') as fin:
            d = fin.read().decode('utf-8')
    else:
        if fname.endswith('.gz'):
            warnings.warn("Trying to read from %s without unzipping!" % fname, UserWarning)
        with open(fname, 'r', encoding='utf-8') as fin:
            d = fin.read()

    return import_json_string(d, verbose, full_output)
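A quick round trip through the new json module (file name and ensemble name are arbitrary):

    import numpy as np
    import pyerrors as pe

    my_obs = pe.Obs([np.random.normal(1.0, 0.1, 100)], ['ensembleA'])
    pe.input.json.dump_to_json([my_obs], 'test_output', description='test data')
    reconstructed = pe.input.json.load_json('test_output')  # reads test_output.json.gz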
pyerrors/input/misc.py

@@ -1,6 +1,3 @@
-#!/usr/bin/env python
-# coding: utf-8
-
 import os
 import fnmatch
 import re
@@ -12,11 +9,12 @@ from ..obs import Obs
 def read_pbp(path, prefix, **kwargs):
     """Read pbp format from given folder structure. Returns a list of length nrw

-    Keyword arguments
-    -----------------
-    r_start -- list which contains the first config to be read for each replicum
-    r_stop -- list which contains the last config to be read for each replicum
+    Parameters
+    ----------
+    r_start : list
+        list which contains the first config to be read for each replicum
+    r_stop : list
+        list which contains the last config to be read for each replicum
     """

     extract_nfct = 1
@@ -66,7 +64,6 @@ def read_pbp(path, prefix, **kwargs):
         tmp_array = []
         with open(path + '/' + ls[rep], 'rb') as fp:
-
             # header
             t = fp.read(4)  # number of reweighting factors
             if rep == 0:
                 nrw = struct.unpack('i', t)[0]
@@ -74,7 +71,7 @@ def read_pbp(path, prefix, **kwargs):
                     deltas.append([])
             else:
                 if nrw != struct.unpack('i', t)[0]:
-                    raise Exception('Error: different number of reweighting factors for replicum', rep)
+                    raise Exception('Error: different number of factors for replicum', rep)

             for k in range(nrw):
                 tmp_array.append([])
pyerrors/input/openQCD.py

@@ -1,6 +1,3 @@
-#!/usr/bin/env python
-# coding: utf-8
-
 import os
 import fnmatch
 import re
@@ -39,8 +36,9 @@ def read_rwms(path, prefix, version='2.0', names=None, **kwargs):
     if not ls:
         raise Exception('Error, directory not found')

     # Exclude files with different names
     if 'files' in kwargs:
         ls = kwargs.get('files')
     else:
         for exc in ls:
             if not fnmatch.fnmatch(exc, prefix + '*' + postfix + '.dat'):
                 ls = list(set(ls) - set([exc]))
@@ -64,9 +62,9 @@ def read_rwms(path, prefix, version='2.0', names=None, **kwargs):
     else:
         r_stop = [None] * replica

-    print('Read reweighting factors from', prefix[:-1], ',', replica, 'replica', end='')
+    print('Read reweighting factors from', prefix[:-1], ',',
+          replica, 'replica', end='')

     # Adjust replica names to new bookmarking system
     if names is None:
         rep_names = []
         for entry in ls:
@@ -85,7 +83,6 @@ def read_rwms(path, prefix, version='2.0', names=None, **kwargs):
         tmp_array = []
         with open(path + '/' + ls[rep], 'rb') as fp:
-
             # header
             t = fp.read(4)  # number of reweighting factors
             if rep == 0:
                 nrw = struct.unpack('i', t)[0]
@@ -94,7 +91,7 @@ def read_rwms(path, prefix, version='2.0', names=None, **kwargs):
                     for k in range(nrw):
                         deltas.append([])
             else:
-                if ((nrw != struct.unpack('i', t)[0] and (not version == '2.0')) or (nrw != struct.unpack('i', t)[0] / 2 and version == '2.0')):  # little weird if-clause due to the /2 operation needed.
+                if ((nrw != struct.unpack('i', t)[0] and (not version == '2.0')) or (nrw != struct.unpack('i', t)[0] / 2 and version == '2.0')):
                     raise Exception('Error: different number of reweighting factors for replicum', rep)

             for k in range(nrw):
@@ -106,7 +103,6 @@ def read_rwms(path, prefix, version='2.0', names=None, **kwargs):
                 for i in range(nrw):
                     t = fp.read(4)
                     nfct.append(struct.unpack('i', t)[0])
-                # print('nfct: ', nfct)  # Hasenbusch factor, 1 for rat reweighting
             else:
                 for i in range(nrw):
                     nfct.append(1)
@@ -119,7 +115,6 @@ def read_rwms(path, prefix, version='2.0', names=None, **kwargs):
                 if not struct.unpack('i', fp.read(4))[0] == 0:
                     print('something is wrong!')

             # body
             while 0 < 1:
                 t = fp.read(4)
                 if len(t) < 4:
@@ -135,8 +130,11 @@ def read_rwms(path, prefix, version='2.0', names=None, **kwargs):
                         for j in range(tmpd['n'][0]):
                             tmp_nfct *= np.mean(np.exp(-np.asarray(tmp_rw[j])))
                             if print_err:
-                                print(config_no, i, j, np.mean(np.exp(-np.asarray(tmp_rw[j]))), np.std(np.exp(-np.asarray(tmp_rw[j]))))
-                                print('Sources:', np.exp(-np.asarray(tmp_rw[j])))
+                                print(config_no, i, j,
+                                      np.mean(np.exp(-np.asarray(tmp_rw[j]))),
+                                      np.std(np.exp(-np.asarray(tmp_rw[j]))))
+                                print('Sources:',
+                                      np.exp(-np.asarray(tmp_rw[j])))
                                 print('Partial factor:', tmp_nfct)
                     elif version == '1.6' or version == '1.4':
                         tmp_nfct = 1.0
@@ -146,7 +144,9 @@ def read_rwms(path, prefix, version='2.0', names=None, **kwargs):
                             tmp_rw = struct.unpack('d' * nsrc[i], t)
                             tmp_nfct *= np.mean(np.exp(-np.asarray(tmp_rw)))
                             if print_err:
-                                print(config_no, i, j, np.mean(np.exp(-np.asarray(tmp_rw))), np.std(np.exp(-np.asarray(tmp_rw))))
+                                print(config_no, i, j,
+                                      np.mean(np.exp(-np.asarray(tmp_rw))),
+                                      np.std(np.exp(-np.asarray(tmp_rw))))
                                 print('Sources:', np.exp(-np.asarray(tmp_rw)))
                                 print('Partial factor:', tmp_nfct)
                     tmp_array[i].append(tmp_nfct)
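A hedged usage sketch for read_rwms (directory and prefix are placeholders):

    rwms = read_rwms('/path/to/ms1_files', 'ensembleA', version='2.0')
    print(len(rwms))  # one Obs per reweighting factor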
@@ -165,11 +165,14 @@ def read_rwms(path, prefix, version='2.0', names=None, **kwargs):
     return result


-def extract_t0(path, prefix, dtr_read, xmin, spatial_extent, fit_range=5, **kwargs):
+def extract_t0(path, prefix, dtr_read, xmin,
+               spatial_extent, fit_range=5, **kwargs):
     """Extract t0 from given .ms.dat files. Returns t0 as Obs.

-    It is assumed that all boundary effects have sufficiently decayed at x0=xmin.
-    The data around the zero crossing of t^2<E> - 0.3 is fitted with a linear function
+    It is assumed that all boundary effects have
+    sufficiently decayed at x0=xmin.
+    The data around the zero crossing of t^2<E> - 0.3
+    is fitted with a linear function
     from which the exact root is extracted.
     Only works with openQCD v 1.2.

@@ -180,14 +183,17 @@ def extract_t0(path, prefix, dtr_read, xmin, spatial_extent, fit_range=5, **kwar
     prefix : str
         Ensemble prefix
     dtr_read : int
-        Determines how many trajectories should be skipped when reading the ms.dat files.
+        Determines how many trajectories should be skipped
+        when reading the ms.dat files.
         Corresponds to dtr_cnfg / dtr_ms in the openQCD input file.
     xmin : int
-        First timeslice where the boundary effects have sufficiently decayed.
+        First timeslice where the boundary
+        effects have sufficiently decayed.
     spatial_extent : int
         spatial extent of the lattice, required for normalization.
     fit_range : int
-        Number of data points left and right of the zero crossing to be included in the linear fit. (Default: 5)
+        Number of data points left and right of the zero
+        crossing to be included in the linear fit. (Default: 5)
     r_start : list
         list which contains the first config to be read for each replicum.
     r_stop: list
@@ -204,7 +210,6 @@ def extract_t0(path, prefix, dtr_read, xmin, spatial_extent, fit_range=5, **kwar
     if not ls:
         raise Exception('Error, directory not found')

     # Exclude files with different names
     for exc in ls:
         if not fnmatch.fnmatch(exc, prefix + '*.ms.dat'):
             ls = list(set(ls) - set([exc]))
@@ -216,7 +221,6 @@ def extract_t0(path, prefix, dtr_read, xmin, spatial_extent, fit_range=5, **kwar
         r_start = kwargs.get('r_start')
         if len(r_start) != replica:
             raise Exception('r_start does not match number of replicas')
         # Adjust Configuration numbering to python index
         r_start = [o - 1 if o else None for o in r_start]
     else:
         r_start = [None] * replica
@@ -235,7 +239,6 @@ def extract_t0(path, prefix, dtr_read, xmin, spatial_extent, fit_range=5, **kwar
     for rep in range(replica):

         with open(path + '/' + ls[rep], 'rb') as fp:
             # Read header
             t = fp.read(12)
             header = struct.unpack('iii', t)
             if rep == 0:
@@ -254,7 +257,6 @@ def extract_t0(path, prefix, dtr_read, xmin, spatial_extent, fit_range=5, **kwar
             Ysl = []

             # Read body
             while 0 < 1:
                 t = fp.read(4)
                 if(len(t) < 4):
@@ -273,7 +275,9 @@ def extract_t0(path, prefix, dtr_read, xmin, spatial_extent, fit_range=5, **kwar
         Ysum.append([])
         for i, item in enumerate(Ysl):
-            Ysum[-1].append([np.mean(item[current + xmin:current + tmax - xmin]) for current in range(0, len(item), tmax)])
+            Ysum[-1].append([np.mean(item[current + xmin:
+                             current + tmax - xmin])
+                             for current in range(0, len(item), tmax)])

     t2E_dict = {}
     for n in range(nn + 1):
@@ -286,10 +290,13 @@ def extract_t0(path, prefix, dtr_read, xmin, spatial_extent, fit_range=5, **kwar
         new_obs = Obs(samples, [(w.split('.'))[0] for w in ls])
         t2E_dict[n * dn * eps] = (n * dn * eps) ** 2 * new_obs / (spatial_extent ** 3) - 0.3

-    zero_crossing = np.argmax(np.array([o.value for o in t2E_dict.values()]) > 0.0)
+    zero_crossing = np.argmax(np.array(
+        [o.value for o in t2E_dict.values()]) > 0.0)

-    x = list(t2E_dict.keys())[zero_crossing - fit_range: zero_crossing + fit_range]
-    y = list(t2E_dict.values())[zero_crossing - fit_range: zero_crossing + fit_range]
+    x = list(t2E_dict.keys())[zero_crossing - fit_range:
+                              zero_crossing + fit_range]
+    y = list(t2E_dict.values())[zero_crossing - fit_range:
+                                zero_crossing + fit_range]
     [o.gamma_method() for o in y]

     fit_result = fit_lin(x, y)
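A hedged usage sketch for extract_t0 (all arguments are placeholders for a concrete ensemble):

    t0 = extract_t0('/path/to/ms_files', 'ensembleA', dtr_read=2,
                    xmin=5, spatial_extent=32)
    t0.gamma_method()
    print(t0)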
@@ -313,12 +320,6 @@ def _parse_array_openQCD2(d, n, size, wa, quadrupel=False):
     return arr


-# mimic the read_array routine of openQCD-2.0.
-# fp is the opened file handle
-# returns the dict array
-# at this point we only parse a 2d array
-# d = 2
-# n = [nfct[irw], 2*nsrc[irw]]
 def _read_array_openQCD2(fp):
     t = fp.read(4)
     d = struct.unpack('i', t)[0]
@ -343,3 +344,232 @@ def _read_array_openQCD2(fp):
|
|||
|
||||
arr = _parse_array_openQCD2(d, n, size, tmp, quadrupel=True)
|
||||
return {'d': d, 'n': n, 'size': size, 'arr': arr}
|
||||
|
||||
|
||||
def read_qtop(path, prefix, c, dtr_cnfg=1, version="1.2", **kwargs):
|
||||
"""Read qtop format from given folder structure.
|
||||
|
||||
Parameters
|
||||
----------
|
||||
path:
|
||||
path of the measurement files
|
||||
prefix:
|
||||
prefix of the measurement files, e.g. <prefix>_id0_r0.ms.dat
|
||||
c: double
|
||||
Smearing radius in units of the lattice extent, c = sqrt(8 t0) / L
|
||||
dtr_cnfg: int
|
||||
(optional) parameter that specifies the number of trajectories
|
||||
between two configs.
|
||||
if it is not set, the distance between two measurements
|
||||
in the file is assumed to be
|
||||
the distance between two configurations.
|
||||
steps: int
|
||||
(optional) (maybe only necessary for openQCD2.0)
|
||||
nt step size, guessed if not given
|
||||
version: str
|
||||
version string of the openQCD (sfqcd) version used to create
|
||||
the ensemble
|
||||
L: int
|
||||
spatial length of the lattice in L/a.
|
||||
HAS to be set if version != sfqcd, since openQCD does not provide
|
||||
this in the header
|
||||
r_start: list
|
||||
offset of the first ensemble, making it easier to match
|
||||
later on with other Obs
|
||||
r_stop: list
|
||||
last configurations that need to be read (per replicum)
|
||||
files: list
|
||||
specify the exact files that need to be read
|
||||
from path, practical if e.g. only one replicum is needed
|
||||
names: list
|
||||
Alternative labeling for replicas/ensembles.
|
||||
Has to have the appropriate length
|
||||
"""
|
||||
known_versions = ["1.0", "1.2", "1.4", "1.6", "2.0", "sfqcd"]
|
||||
|
||||
if version not in known_versions:
|
||||
raise Exception("Unknown openQCD version.")
|
||||
if "steps" in kwargs:
|
||||
steps = kwargs.get("steps")
|
||||
if version == "sfqcd":
|
||||
if "L" in kwargs:
|
||||
supposed_L = kwargs.get("L")
|
||||
else:
|
||||
if "L" not in kwargs:
|
||||
raise Exception("This version of openQCD needs you to provide the spatial length of the lattice as parameter 'L'.")
|
||||
else:
|
||||
L = kwargs.get("L")
|
||||
r_start = 1
|
||||
if "r_start" in kwargs:
|
||||
r_start = kwargs.get("r_start")
|
||||
if "r_stop" in kwargs:
|
||||
r_stop = kwargs.get("r_stop")
|
||||
if "files" in kwargs:
|
||||
files = kwargs.get("files")
|
||||
else:
|
||||
found = []
|
||||
files = []
|
||||
for (dirpath, dirnames, filenames) in os.walk(path + "/"):
|
||||
# print(filenames)
|
||||
found.extend(filenames)
|
||||
break
|
||||
for f in found:
|
||||
if fnmatch.fnmatch(f, prefix + "*" + ".ms.dat"):
|
||||
files.append(f)
|
||||
print(files)
|
||||
rep_names = []
|
||||
|
||||
deltas = []
|
||||
idl = []
|
||||
for rep, file in enumerate(files):
|
||||
with open(path + "/" + file, "rb") as fp:
|
||||
t = fp.read(12)
|
||||
header = struct.unpack('<iii', t)
|
||||
# step size in integration steps "dnms"
|
||||
dn = header[0]
|
||||
# number of measurements, so "ntot"/dn
|
||||
nn = header[1]
|
||||
# lattice T/a
|
||||
tmax = header[2]
|
||||
if version == "sfqcd":
|
||||
t = fp.read(12)
|
||||
Ls = struct.unpack('<iii', t)
|
||||
if(Ls[0] == Ls[1] and Ls[1] == Ls[2]):
|
||||
L = Ls[0]
|
||||
if supposed_L != L:
|
||||
raise Exception("The spatial length in the file header contradicts the value you provided for 'L'.")
|
||||
else:
|
||||
raise Exception("Found more than one spatial length in header!")
|
||||
|
||||
print('dnms:', dn)
|
||||
print('nn:', nn)
|
||||
print('tmax:', tmax)
|
||||
t = fp.read(8)
|
||||
eps = struct.unpack('d', t)[0]
|
||||
print('eps:', eps)
|
||||
|
||||
Q = []
|
||||
ncs = []
|
||||
while True:
|
||||
t = fp.read(4)
|
||||
if(len(t) < 4):
|
||||
break
|
||||
ncs.append(struct.unpack('i', t)[0])
|
||||
# Wsl
|
||||
t = fp.read(8 * tmax * (nn + 1))
|
||||
# Ysl
|
||||
t = fp.read(8 * tmax * (nn + 1))
|
||||
# Qsl, which is asked for in this method
|
||||
t = fp.read(8 * tmax * (nn + 1))
|
||||
# unpack the array of Qtops,
|
||||
# on each timeslice t=0,...,tmax-1 and the
|
||||
# measurement number in = 0...nn (see README.qcd1)
|
||||
tmpd = struct.unpack('d' * tmax * (nn + 1), t)
|
||||
Q.append(tmpd)
|
||||
|
||||
if len(set([ncs[i] - ncs[i - 1] for i in range(1, len(ncs))])) > 1:
|
||||
raise Exception("Irregularities in stepsize found")
|
||||
else:
|
||||
if 'steps' in kwargs:
|
||||
if steps != ncs[1] - ncs[0]:
|
||||
raise Exception("steps and the found stepsize are not the same")
|
||||
else:
|
||||
steps = ncs[1] - ncs[0]
|
||||
|
||||
print(len(Q))
|
||||
print('max_t:', dn * (nn) * eps)
|
||||
|
||||
t_aim = (c * L) ** 2 / 8
|
||||
|
||||
print('t_aim:', t_aim)
|
||||
index_aim = round(t_aim / eps / dn)
|
||||
print('index_aim:', index_aim)
|
||||
|
||||
Q_sum = []
|
||||
for i, item in enumerate(Q):
|
||||
Q_sum.append([sum(item[current:current + tmax])
|
||||
for current in range(0, len(item), tmax)])
|
||||
print(len(Q_sum))
|
||||
print(len(Q_sum[0]))
|
||||
Q_round = []
|
||||
for i in range(len(Q) // dtr_cnfg):
|
||||
Q_round.append(round(Q_sum[dtr_cnfg * i][index_aim]))
|
||||
if len(Q_round) != len(ncs) // dtr_cnfg:
|
||||
raise Exception("qtops and ncs dont have the same length")
|
||||
|
||||
truncated_file = file[:-7]
|
||||
print(truncated_file)
|
||||
idl_start = 1
|
||||
|
||||
if "r_start" in kwargs:
|
||||
Q_round = Q_round[r_start[rep]:]
|
||||
idl_start = r_start[rep]
|
||||
if "r_stop" in kwargs:
|
||||
Q_round = Q_round[:r_stop[rep]]
|
||||
idl_stop = idl_start + len(Q_round)
|
||||
# keyword "names" prevails over "ens_name"
|
||||
if "names" not in kwargs:
|
||||
try:
|
||||
idx = truncated_file.index('r')
|
||||
except Exception:
|
||||
raise Exception("Automatic recognition of replicum failed, please provide the keyword 'names'.")
|
||||
if "ens_name" in kwargs:
|
||||
ens_name = kwargs.get("ens_name")
|
||||
else:
|
||||
ens_name = truncated_file[:idx]
|
||||
rep_names.append(ens_name + '|' + truncated_file[idx:])
|
||||
else:
|
||||
names = kwargs.get("names")
|
||||
rep_names = names
|
||||
deltas.append(np.array(Q_round))
|
||||
idl.append(range(idl_start, idl_stop))
|
||||
result = Obs(deltas, rep_names, idl=idl)
|
||||
return result
|
||||
|
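A minimal usage sketch for read_qtop; the path, prefix and parameter values below are hypothetical, and the module location (pyerrors.input.openQCD) is assumed from the restructured input package:

    # read the topological charge of a hypothetical ensemble with L/a = 32,
    # smearing radius c = 0.35 and one measurement every second trajectory
    from pyerrors.input import openQCD
    q_top = openQCD.read_qtop("./measurements", "ensembleA", 0.35,
                              dtr_cnfg=2, version="1.6", L=32)
    q_top.gamma_method()
    print(q_top)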
||||
|
||||
def read_qtop_sector(target=0, **kwargs):
|
||||
"""target: int
|
||||
specifies the topological sector to be reweighted to (default 0)
|
||||
q_top: Obs
|
||||
alternatively takes args of read_qtop method as kwargs
|
||||
"""
|
||||
if "q_top" in kwargs:
|
||||
qtop = kwargs.get("q_top")
|
||||
else:
|
||||
if "path" in kwargs:
|
||||
path = kwargs.get("path")
|
||||
del kwargs["path"]
|
||||
else:
|
||||
raise Exception("If you are not providing q_top, please provide path")
|
||||
if "prefix" in kwargs:
|
||||
prefix = kwargs.get("prefix")
|
||||
del kwargs["prefix"]
|
||||
else:
|
||||
raise Exception("If you are not providing q_top, please provide prefix")
|
||||
if "c" in kwargs:
|
||||
c = kwargs.get("c")
|
||||
del kwargs["c"]
|
||||
else:
|
||||
raise Exception("If you are not providing q_top, please provide c")
|
||||
if "version" in kwargs:
|
||||
version = kwargs.get("version")
|
||||
del kwargs["version"]
|
||||
else:
|
||||
version = "1.2"
|
||||
if "dtr_cnfg" in kwargs:
|
||||
dtr_cnfg = kwargs.get("dtr_cnfg")
|
||||
del kwargs["dtr_cnfg"]
|
||||
else:
|
||||
dtr_cnfg = 1
|
||||
qtop = read_qtop(path, prefix, c, dtr_cnfg=dtr_cnfg,
|
||||
version=version, **kwargs)
|
||||
names = qtop.names
|
||||
print(names)
|
||||
print(qtop.deltas.keys())
|
||||
proj_qtop = []
|
||||
for n in qtop.deltas:
|
||||
proj_qtop.append(np.array([1 if int(qtop.value + q) == target else 0 for q in qtop.deltas[n]]))
|
||||
|
||||
result = Obs(proj_qtop, qtop.names)
|
||||
return result
|
||||
|
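The projector Obs constructed here is meant to be used for topological-sector reweighting; a hedged sketch, where obs is a hypothetical Obs defined on the same Monte Carlo history:

    # expectation value restricted to the Q = 0 sector,
    # <O>_{Q=0} = <P_0 * O> / <P_0>
    p_0 = read_qtop_sector(target=0, path="./measurements",
                           prefix="ensembleA", c=0.35, L=32, version="1.6")
    o_in_sector = (p_0 * obs) / p_0
    o_in_sector.gamma_method()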
|
|
@ -6,17 +6,62 @@ import fnmatch
|
|||
import re
|
||||
import numpy as np # Thinly-wrapped numpy
|
||||
from ..obs import Obs
|
||||
from . import utils
|
||||
|
||||
|
||||
def read_sfcf(path, prefix, name, **kwargs):
|
||||
"""Read sfcf C format from given folder structure.
|
||||
def read_sfcf(path, prefix, name, quarks='.*', noffset=0, wf=0, wf2=0,
|
||||
version="1.0c", **kwargs):
|
||||
"""Read sfcf format from given folder structure.
|
||||
|
||||
Parameters
|
||||
----------
|
||||
im -- if True, read imaginary instead of real part of the correlation function.
|
||||
single -- if True, read a boundary-to-boundary correlation function with a single value
|
||||
b2b -- if True, read a time-dependent boundary-to-boundary correlation function
|
||||
names -- Alternative labeling for replicas/ensembles. Has to have the appropriate length
|
||||
path : str
|
||||
Path to the measurement files.
|
||||
prefix : str
|
||||
Ensemble prefix for which the data is to be extracted.
|
||||
name : str
|
||||
Name of the correlation function to be extracted from the file
|
||||
quarks: str, optional
|
||||
Label of the quarks used in the sfcf input file. e.g. "quark quark"
|
||||
for version 0.0 this does NOT need to be given with the typical " - "
|
||||
that is present in the output file;
|
||||
this is handled automatically for this version
|
||||
noffset: int, optional
|
||||
Offset of the source (only relevant when wavefunctions are used)
|
||||
wf: int, optional
|
||||
ID of wave function
|
||||
wf2: int, optional
|
||||
ID of the second wavefunction
|
||||
(only relevant for boundary-to-boundary correlation functions)
|
||||
im: bool, optional
|
||||
if True, read imaginary instead of real part
|
||||
of the correlation function.
|
||||
b2b: bool, optional
|
||||
if True, read a time-dependent boundary-to-boundary
|
||||
correlation function
|
||||
single: bool, optional
|
||||
if True, read time independent boundary to boundary
|
||||
correlation function
|
||||
names: list, optional
|
||||
Alternative labeling for replicas/ensembles.
|
||||
Has to have the appropriate length
|
||||
ens_name : str, optional
|
||||
replaces the name of the ensemble
|
||||
version: str, optional
|
||||
version of SFCF, with which the measurement was done.
|
||||
if the compact output option (-c) was specified,
|
||||
append a "c" to the version (e.g. "1.0c")
|
||||
if the append output option (-a) was specified,
|
||||
append an "a" to the version. Currently supported versions
|
||||
are "0.0", "1.0", "2.0", "1.0c", "2.0c", "1.0a" and "2.0a".
|
||||
replica: list, optional
|
||||
list of replica to be read, default is all
|
||||
files: list, optional
|
||||
list of files to be read per replica, default is all.
|
||||
for non-compact output format, hand the folders to be read here.
|
||||
check_configs: list, optional
|
||||
list of lists of expected configs, e.g. [range(1, 1000)]
|
||||
for one replicum with 1000 configs
|
||||
"""
|
||||
if kwargs.get('im'):
|
||||
im = 1
|
||||
|
@ -29,175 +74,144 @@ def read_sfcf(path, prefix, name, **kwargs):
|
|||
b2b = 1
|
||||
single = 1
|
||||
else:
|
||||
b2b = 0
|
||||
single = 0
|
||||
|
||||
if kwargs.get('b2b'):
|
||||
b2b = 1
|
||||
else:
|
||||
b2b = 0
|
||||
single = 0
|
||||
if "replica" in kwargs:
|
||||
reps = kwargs.get("replica")
|
||||
|
||||
compact = True
|
||||
appended = False
|
||||
known_versions = ["0.0", "1.0", "2.0", "1.0c", "2.0c", "1.0a", "2.0a"]
|
||||
|
||||
if version not in known_versions:
|
||||
raise Exception("This version is not known!")
|
||||
if(version[-1] == "c"):
|
||||
appended = False
|
||||
compact = True
|
||||
version = version[:-1]
|
||||
elif(version[-1] == "a"):
|
||||
appended = True
|
||||
compact = False
|
||||
version = version[:-1]
|
||||
else:
|
||||
compact = False
|
||||
appended = False
|
||||
read = 0
|
||||
T = 0
|
||||
start = 0
|
||||
ls = []
|
||||
if "replica" in kwargs:
|
||||
ls = reps
|
||||
else:
|
||||
for (dirpath, dirnames, filenames) in os.walk(path):
|
||||
if not appended:
|
||||
ls.extend(dirnames)
|
||||
break
|
||||
if not ls:
|
||||
raise Exception('Error, directory not found')
|
||||
for exc in ls:
|
||||
if fnmatch.fnmatch(exc, prefix + '*'):
|
||||
ls = list(set(ls) - set(exc))
|
||||
if len(ls) > 1:
|
||||
ls.sort(key=lambda x: int(re.findall(r'\d+', x[len(prefix):])[0]))
|
||||
replica = len(ls)
|
||||
print('Read', part, 'part of', name, 'from', prefix, ',', replica, 'replica')
|
||||
if 'names' in kwargs:
|
||||
new_names = kwargs.get('names')
|
||||
if len(new_names) != replica:
|
||||
raise Exception("'names' does not have the required length", replica)
|
||||
else:
|
||||
# Adjust replica names to new bookmarking system
|
||||
new_names = []
|
||||
for entry in ls:
|
||||
idx = entry.index('r')
|
||||
new_names.append(entry[:idx] + '|' + entry[idx:])
|
||||
|
||||
print(replica, 'replica')
|
||||
for i, item in enumerate(ls):
|
||||
print(item)
|
||||
sub_ls = []
|
||||
for (dirpath, dirnames, filenames) in os.walk(path + '/' + item):
|
||||
sub_ls.extend(dirnames)
|
||||
ls.extend(filenames)
|
||||
break
|
||||
for exc in sub_ls:
|
||||
if fnmatch.fnmatch(exc, 'cfg*'):
|
||||
sub_ls = list(set(sub_ls) - set(exc))
|
||||
sub_ls.sort(key=lambda x: int(x[3:]))
|
||||
no_cfg = len(sub_ls)
|
||||
print(no_cfg, 'configurations')
|
||||
|
||||
if i == 0:
|
||||
with open(path + '/' + item + '/' + sub_ls[0] + '/' + name) as fp:
|
||||
for k, line in enumerate(fp):
|
||||
if read == 1 and not line.strip() and k > start + 1:
|
||||
break
|
||||
if read == 1 and k >= start:
|
||||
T += 1
|
||||
if '[correlator]' in line:
|
||||
read = 1
|
||||
start = k + 7 + b2b
|
||||
T -= b2b
|
||||
|
||||
deltas = []
|
||||
for j in range(T):
|
||||
deltas.append([])
|
||||
|
||||
sublength = len(sub_ls)
|
||||
for j in range(T):
|
||||
deltas[j].append(np.zeros(sublength))
|
||||
|
||||
for cnfg, subitem in enumerate(sub_ls):
|
||||
with open(path + '/' + item + '/' + subitem + '/' + name) as fp:
|
||||
for k, line in enumerate(fp):
|
||||
if(k >= start and k < start + T):
|
||||
floats = list(map(float, line.split()))
|
||||
deltas[k - start][i][cnfg] = floats[1 + im - single]
|
||||
|
||||
result = []
|
||||
for t in range(T):
|
||||
result.append(Obs(deltas[t], new_names))
|
||||
|
||||
return result
|
||||
|
||||
|
||||
def read_sfcf_c(path, prefix, name, quarks='.*', noffset=0, wf=0, wf2=0, **kwargs):
|
||||
"""Read sfcf c format from given folder structure.
|
||||
|
||||
Parameters
|
||||
----------
|
||||
quarks -- Label of the quarks used in the sfcf input file
|
||||
noffset -- Offset of the source (only relevant when wavefunctions are used)
|
||||
wf -- ID of wave function
|
||||
wf2 -- ID of the second wavefunction (only relevant for boundary-to-boundary correlation functions)
|
||||
im -- if True, read imaginary instead of real part of the correlation function.
|
||||
b2b -- if True, read a time-dependent boundary-to-boundary correlation function
|
||||
names -- Alternative labeling for replicas/ensembles. Has to have the appropriate length
|
||||
ens_name : str
|
||||
replaces the name of the ensemble
|
||||
"""
|
||||
|
||||
if kwargs.get('im'):
|
||||
im = 1
|
||||
part = 'imaginary'
|
||||
else:
|
||||
im = 0
|
||||
part = 'real'
|
||||
|
||||
if kwargs.get('b2b'):
|
||||
b2b = 1
|
||||
else:
|
||||
b2b = 0
|
||||
|
||||
T = 0
|
||||
ls = []
|
||||
for (dirpath, dirnames, filenames) in os.walk(path):
|
||||
ls.extend(dirnames)
|
||||
break
|
||||
if not ls:
|
||||
raise Exception('Error, directory not found')
|
||||
# Exclude folders with different names
|
||||
for exc in ls:
|
||||
if not fnmatch.fnmatch(exc, prefix + '*'):
|
||||
ls = list(set(ls) - set([exc]))
|
||||
if not ls:
|
||||
raise Exception('No matching directories found.')
|
||||
if len(ls) > 1:
|
||||
ls.sort(key=lambda x: int(re.findall(r'\d+', x[len(prefix):])[0])) # New version, to cope with ids, etc.
|
||||
ls.sort(key=lambda x: int(re.findall(r'\d+', x[len(prefix):])[0]))
|
||||
|
||||
if not appended:
|
||||
replica = len(ls)
|
||||
else:
|
||||
replica = len([file.split(".")[-1] for file in ls]) // len(set([file.split(".")[-1] for file in ls]))
|
||||
print("Read", part, "part of '" + str(name) + "' with prefix '" + str(prefix) + "' (" + str(replica) + " replica)")
|
||||
if 'names' in kwargs:
|
||||
new_names = kwargs.get('names')
|
||||
if len(new_names) != len(set(new_names)):
|
||||
raise Exception("names are not unique!")
|
||||
if len(new_names) != replica:
|
||||
raise Exception("'names' does not have the required length", replica)
|
||||
else:
|
||||
# Adjust replica names to new bookmarking system
|
||||
new_names = []
|
||||
if not appended:
|
||||
for entry in ls:
|
||||
try:
|
||||
idx = entry.index('r')
|
||||
except Exception:
|
||||
raise Exception("Automatic recognition of replicum failed, please enter the key word 'names'.")
|
||||
|
||||
if 'ens_name' in kwargs:
|
||||
new_names.append(kwargs.get('ens_name') + '|' + entry[idx:])
|
||||
else:
|
||||
new_names.append(entry[:idx] + '|' + entry[idx:])
|
||||
else:
|
||||
|
||||
print('Read', part, 'part of', name, 'from', prefix[:-1], ',', replica, 'replica')
|
||||
for exc in ls:
|
||||
if not fnmatch.fnmatch(exc, prefix + '*.' + name):
|
||||
ls = list(set(ls) - set([exc]))
|
||||
ls.sort(key=lambda x: int(re.findall(r'\d+', x)[-1]))
|
||||
for entry in ls:
|
||||
myentry = entry[:-len(name) - 1]
|
||||
try:
|
||||
idx = myentry.index('r')
|
||||
except Exception:
|
||||
raise Exception("Automatic recognition of replicum failed, please enter the key word 'names'.")
|
||||
|
||||
if 'ens_name' in kwargs:
|
||||
new_names.append(kwargs.get('ens_name') + '|' + myentry[idx:])
|
||||
else:
|
||||
new_names.append(myentry[:idx] + '|' + myentry[idx:])
|
||||
idl = []
|
||||
if not appended:
|
||||
for i, item in enumerate(ls):
|
||||
sub_ls = []
|
||||
if "files" in kwargs:
|
||||
sub_ls = kwargs.get("files")
|
||||
sub_ls.sort(key=lambda x: int(re.findall(r'\d+', x)[-1]))
|
||||
else:
|
||||
for (dirpath, dirnames, filenames) in os.walk(path + '/' + item):
|
||||
if compact:
|
||||
sub_ls.extend(filenames)
|
||||
else:
|
||||
sub_ls.extend(dirnames)
|
||||
break
|
||||
|
||||
for exc in sub_ls:
|
||||
if compact:
|
||||
if not fnmatch.fnmatch(exc, prefix + '*'):
|
||||
sub_ls = list(set(sub_ls) - set([exc]))
|
||||
sub_ls.sort(key=lambda x: int(re.findall(r'\d+', x)[-1]))
|
||||
|
||||
first_cfg = int(re.findall(r'\d+', sub_ls[0])[-1])
|
||||
|
||||
last_cfg = len(sub_ls) + first_cfg - 1
|
||||
|
||||
for cfg in range(1, len(sub_ls)):
|
||||
if int(re.findall(r'\d+', sub_ls[cfg])[-1]) != first_cfg + cfg:
|
||||
last_cfg = cfg + first_cfg - 1
|
||||
break
|
||||
|
||||
no_cfg = last_cfg - first_cfg + 1
|
||||
print(item, ':', no_cfg, 'evenly spaced configurations (', first_cfg, '-', last_cfg, ') ,', len(sub_ls) - no_cfg, 'configs omitted\n')
|
||||
|
||||
sub_ls.sort(key=lambda x:
|
||||
int(re.findall(r'\d+', x)[-1]))
|
||||
else:
|
||||
if not fnmatch.fnmatch(exc, 'cfg*'):
|
||||
sub_ls = list(set(sub_ls) - set([exc]))
|
||||
sub_ls.sort(key=lambda x: int(x[3:]))
|
||||
rep_idl = []
|
||||
no_cfg = len(sub_ls)
|
||||
for cfg in sub_ls:
|
||||
try:
|
||||
if compact:
|
||||
rep_idl.append(int(cfg.split("n")[-1]))
|
||||
else:
|
||||
rep_idl.append(int(cfg[3:]))
|
||||
except Exception:
|
||||
raise Exception("Couldn't parse idl from directroy, problem with file " + cfg)
|
||||
rep_idl.sort()
|
||||
print(item, ':', no_cfg, ' configurations')
|
||||
idl.append(rep_idl)
|
||||
if i == 0:
|
||||
if compact:
|
||||
pattern = 'name ' + name + '\nquarks ' + quarks + '\noffset ' + str(noffset) + '\nwf ' + str(wf)
|
||||
if b2b:
|
||||
pattern += '\nwf_2 ' + str(wf2)
|
||||
|
||||
with open(path + '/' + item + '/' + sub_ls[0], 'r') as file:
|
||||
content = file.read()
|
||||
match = re.search(pattern, content)
|
||||
if match:
|
||||
# the start and end point of the correlator
|
||||
# in question is extracted for later use in
|
||||
# the other files
|
||||
start_read = content.count('\n', 0, match.start()) + 5 + b2b
|
||||
end_match = re.search(r'\n\s*\n', content[match.start():])
|
||||
T = content[match.start():].count('\n', 0, end_match.start()) - 4 - b2b
|
||||
|
@ -205,90 +219,138 @@ def read_sfcf_c(path, prefix, name, quarks='.*', noffset=0, wf=0, wf2=0, **kwarg
|
|||
print(T, 'entries, starting to read in line', start_read)
|
||||
else:
|
||||
raise Exception('Correlator with pattern\n' + pattern + '\nnot found.')
|
||||
else:
|
||||
# this part does the same as above,
|
||||
# but for non-compactified versions of the files
|
||||
with open(path + '/' + item + '/' + sub_ls[0] + '/' + name) as fp:
|
||||
for k, line in enumerate(fp):
|
||||
if version == "0.0":
|
||||
# check if this is really the right file
|
||||
# by matching pattern similar to above
|
||||
pattern = "# " + name + " : offset " + str(noffset) + ", wf " + str(wf)
|
||||
# if b2b, a second wf is needed
|
||||
if b2b:
|
||||
pattern += ", wf_2 " + str(wf2)
|
||||
qs = quarks.split(" ")
|
||||
pattern += " : " + qs[0] + " - " + qs[1]
|
||||
if read == 1 and not line.strip() and k > start + 1:
|
||||
break
|
||||
if read == 1 and k >= start:
|
||||
T += 1
|
||||
|
||||
if version == "0.0":
|
||||
if pattern in line:
|
||||
# print(line)
|
||||
read = 1
|
||||
start = k + 1
|
||||
else:
|
||||
if '[correlator]' in line:
|
||||
read = 1
|
||||
start = k + 7 + b2b
|
||||
T -= b2b
|
||||
print(str(T) + " entries found.")
|
||||
deltas = []
|
||||
for j in range(T):
|
||||
deltas.append([])
|
||||
|
||||
sublength = no_cfg
|
||||
for j in range(T):
|
||||
deltas[j].append(np.zeros(sublength))
|
||||
|
||||
for t in range(T):
|
||||
deltas[t].append(np.zeros(no_cfg))
|
||||
# we iterate through all measurement files in the path given...
|
||||
if compact:
|
||||
for cfg in range(no_cfg):
|
||||
with open(path + '/' + item + '/' + sub_ls[cfg]) as fp:
|
||||
for k, line in enumerate(fp):
|
||||
lines = fp.readlines()
|
||||
if(start_read + T > len(lines)):
|
||||
raise Exception("EOF before end of correlator data! Maybe " + path + '/' + item + '/' + sub_ls[cfg] + " is corrupted?")
|
||||
for k in range(start_read - 6, start_read + T):
|
||||
if k == start_read - 5 - b2b:
|
||||
if line.strip() != 'name ' + name:
|
||||
raise Exception('Wrong format', sub_ls[cfg])
|
||||
if lines[k].strip() != 'name ' + name:
|
||||
raise Exception('Wrong format',
|
||||
sub_ls[cfg])
|
||||
if(k >= start_read and k < start_read + T):
|
||||
floats = list(map(float, line.split()))
|
||||
floats = list(map(float, lines[k].split()))
|
||||
deltas[k - start_read][i][cfg] = floats[-2:][im]
|
||||
else:
|
||||
for cnfg, subitem in enumerate(sub_ls):
|
||||
with open(path + '/' + item + '/' + subitem + '/' + name) as fp:
|
||||
for k, line in enumerate(fp):
|
||||
if(k >= start and k < start + T):
|
||||
floats = list(map(float, line.split()))
|
||||
if version == "0.0":
|
||||
deltas[k - start][i][cnfg] = floats[im]
|
||||
else:
|
||||
deltas[k - start][i][cnfg] = floats[1 + im - single]
|
||||
|
||||
else:
|
||||
if "files" in kwargs:
|
||||
ls = kwargs.get("files")
|
||||
else:
|
||||
for exc in ls:
|
||||
if not fnmatch.fnmatch(exc, prefix + '*.' + name):
|
||||
ls = list(set(ls) - set([exc]))
|
||||
ls.sort(key=lambda x: int(re.findall(r'\d+', x)[-1]))
|
||||
if len(ls) == 0:
|
||||
raise Exception('File(s) for correlator ' + name + ' not found.')
|
||||
pattern = 'name ' + name + '\nquarks ' + quarks + '\noffset ' + str(noffset) + '\nwf ' + str(wf)
|
||||
if b2b:
|
||||
pattern += '\nwf_2 ' + str(wf2)
|
||||
for rep, file in enumerate(ls):
|
||||
rep_idl = []
|
||||
with open(path + '/' + file, 'r') as fp:
|
||||
content = fp.readlines()
|
||||
data_starts = []
|
||||
for linenumber, line in enumerate(content):
|
||||
if "[run]" in line:
|
||||
data_starts.append(linenumber)
|
||||
if len(set([data_starts[i] - data_starts[i - 1] for i in
|
||||
range(1, len(data_starts))])) > 1:
|
||||
raise Exception("Irregularities in file structure found, not all runs have the same output length")
|
||||
chunk = content[:data_starts[1]]
|
||||
for linenumber, line in enumerate(chunk):
|
||||
if line.startswith("gauge_name"):
|
||||
gauge_line = linenumber
|
||||
elif line.startswith("[correlator]"):
|
||||
corr_line = linenumber
|
||||
found_pat = ""
|
||||
for li in chunk[corr_line + 1:corr_line + 6 + b2b]:
|
||||
found_pat += li
|
||||
if re.search(pattern, found_pat):
|
||||
start_read = corr_line + 7 + b2b
|
||||
T = len(chunk) - 1 - start_read
|
||||
if rep == 0:
|
||||
deltas = []
|
||||
for t in range(T):
|
||||
deltas.append([])
|
||||
for t in range(T):
|
||||
deltas[t].append(np.zeros(len(data_starts)))
|
||||
for cnfg in range(len(data_starts)):
|
||||
start = data_starts[cnfg]
|
||||
stop = start + data_starts[1]
|
||||
chunk = content[start:stop]
|
||||
try:
|
||||
rep_idl.append(int(chunk[gauge_line].split("n")[-1]))
|
||||
except Exception:
|
||||
raise Exception("Couldn't parse idl from directroy, problem with chunk around line " + gauge_line)
|
||||
|
||||
found_pat = ""
|
||||
for li in chunk[corr_line + 1:corr_line + 6 + b2b]:
|
||||
found_pat += li
|
||||
if re.search(pattern, found_pat):
|
||||
for t, line in enumerate(chunk[start_read:start_read + T]):
|
||||
floats = list(map(float, line.split()))
|
||||
deltas[t][rep][cnfg] = floats[-2:][im]
|
||||
idl.append(rep_idl)
|
||||
|
||||
if "check_configs" in kwargs:
|
||||
print("Checking for missing configs...")
|
||||
che = kwargs.get("check_configs")
|
||||
if len(che) != len(idl):
|
||||
raise Exception("check_configs has to be the same length as replica!")
|
||||
for r in range(len(idl)):
|
||||
print("checking " + new_names[r])
|
||||
utils.check_idl(idl[r], che[r])
|
||||
print("Done")
|
||||
result = []
|
||||
for t in range(T):
|
||||
result.append(Obs(deltas[t], new_names))
|
||||
return result
|
||||
|
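A hedged usage sketch for the unified read_sfcf; path, prefix and correlator name are made up, and the module is assumed to be pyerrors.input.sfcf:

    # read the real part of a hypothetical correlator "f_A" from compact
    # SFCF 2.0 output (-c flag) and apply the gamma method per timeslice
    from pyerrors.input import sfcf
    f_a = sfcf.read_sfcf("./raw_data", "ensembleA", "f_A",
                         quarks="lquark lquark", version="2.0c")
    [o.gamma_method() for o in f_a]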
||||
|
||||
def read_qtop(path, prefix, **kwargs):
|
||||
"""Read qtop format from given folder structure.
|
||||
|
||||
Parameters
|
||||
----------
|
||||
target -- specifies the topological sector to be reweighted to (default 0)
|
||||
full -- if true read the charge instead of the reweighting factor.
|
||||
"""
|
||||
|
||||
if 'target' in kwargs:
|
||||
target = kwargs.get('target')
|
||||
else:
|
||||
target = 0
|
||||
|
||||
if kwargs.get('full'):
|
||||
full = 1
|
||||
else:
|
||||
full = 0
|
||||
|
||||
ls = []
|
||||
for (dirpath, dirnames, filenames) in os.walk(path):
|
||||
ls.extend(filenames)
|
||||
break
|
||||
|
||||
if not ls:
|
||||
raise Exception('Error, directory not found')
|
||||
|
||||
# Exclude files with different names
|
||||
for exc in ls:
|
||||
if not fnmatch.fnmatch(exc, prefix + '*'):
|
||||
ls = list(set(ls) - set([exc]))
|
||||
if len(ls) > 1:
|
||||
ls.sort(key=lambda x: int(re.findall(r'\d+', x[len(prefix):])[0])) # New version, to cope with ids, etc.
|
||||
replica = len(ls)
|
||||
print('Read Q_top from', prefix[:-1], ',', replica, 'replica')
|
||||
|
||||
deltas = []
|
||||
|
||||
for rep in range(replica):
|
||||
tmp = []
|
||||
with open(path + '/' + ls[rep]) as fp:
|
||||
for k, line in enumerate(fp):
|
||||
floats = list(map(float, line.split()))
|
||||
if full == 1:
|
||||
tmp.append(floats[1])
|
||||
else:
|
||||
if int(floats[1]) == target:
|
||||
tmp.append(1.0)
|
||||
else:
|
||||
tmp.append(0.0)
|
||||
|
||||
deltas.append(np.array(tmp))
|
||||
|
||||
rep_names = []
|
||||
for entry in ls:
|
||||
truncated_entry = entry.split('.')[0]
|
||||
idx = truncated_entry.index('r')
|
||||
rep_names.append(truncated_entry[:idx] + '|' + truncated_entry[idx:])
|
||||
|
||||
result = Obs(deltas, rep_names)
|
||||
|
||||
result.append(Obs(deltas[t], new_names, idl=idl))
|
||||
return result
|
||||
|
|
24
pyerrors/input/utils.py
Normal file
|
@ -0,0 +1,24 @@
|
|||
"""Utilities for the input"""
|
||||
|
||||
|
||||
def check_idl(idl, che):
|
||||
"""Checks if list of configurations is contained in an idl
|
||||
|
||||
Parameters
|
||||
----------
|
||||
idl : range or list
|
||||
idl of the current replicum
|
||||
che : list
|
||||
list of configurations to be checked against
|
||||
"""
|
||||
missing = []
|
||||
for c in che:
|
||||
if c not in idl:
|
||||
missing.append(c)
|
||||
# print missing configurations as a comma-separated list that can be passed on directly, e.g. to slurm
|
||||
if len(missing) > 0:
|
||||
print(len(missing), "configs missing")
|
||||
miss_str = str(missing[0])
|
||||
for i in missing[1:]:
|
||||
miss_str += "," + str(i)
|
||||
print(miss_str)
|
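A small self-contained sketch of check_idl; the idl and the list of expected configurations are made up:

    # configuration 3 is missing from the measured idl, so this prints
    # "1 configs missing" followed by the parseable list "3"
    from pyerrors.input.utils import check_idl
    check_idl([1, 2, 4, 5], range(1, 6))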
|
@ -1,121 +1,6 @@
|
|||
import numpy as np
|
||||
from autograd import jacobian
|
||||
import autograd.numpy as anp # Thinly-wrapped numpy
|
||||
from .obs import derived_observable, CObs, Obs, _merge_idx, _expand_deltas_for_merge, _filter_zeroes, import_jackknife
|
||||
|
||||
from functools import partial
|
||||
from autograd.extend import defvjp
|
||||
|
||||
|
||||
def derived_array(func, data, **kwargs):
|
||||
"""Construct a derived Obs for a matrix valued function according to func(data, **kwargs) using automatic differentiation.
|
||||
|
||||
Parameters
|
||||
----------
|
||||
func : object
|
||||
arbitrary function of the form func(data, **kwargs). For the
|
||||
automatic differentiation to work, all numpy functions have to have
|
||||
the autograd wrapper (use 'import autograd.numpy as anp').
|
||||
data : list
|
||||
list of Obs, e.g. [obs1, obs2, obs3].
|
||||
man_grad : list
|
||||
manually supply a list or an array which contains the jacobian
|
||||
of func. Use cautiously, supplying the wrong derivative will
|
||||
not be intercepted.
|
||||
"""
|
||||
|
||||
data = np.asarray(data)
|
||||
raveled_data = data.ravel()
|
||||
|
||||
# Workaround for matrix operations containing non Obs data
|
||||
for i_data in raveled_data:
|
||||
if isinstance(i_data, Obs):
|
||||
first_name = i_data.names[0]
|
||||
first_shape = i_data.shape[first_name]
|
||||
first_idl = i_data.idl[first_name]
|
||||
break
|
||||
|
||||
for i in range(len(raveled_data)):
|
||||
if isinstance(raveled_data[i], (int, float)):
|
||||
raveled_data[i] = Obs([raveled_data[i] + np.zeros(first_shape)], [first_name], idl=[first_idl])
|
||||
|
||||
n_obs = len(raveled_data)
|
||||
new_names = sorted(set([y for x in [o.names for o in raveled_data] for y in x]))
|
||||
|
||||
is_merged = {name: (len(list(filter(lambda o: o.is_merged.get(name, False) is True, raveled_data))) > 0) for name in new_names}
|
||||
reweighted = len(list(filter(lambda o: o.reweighted is True, raveled_data))) > 0
|
||||
new_idl_d = {}
|
||||
for name in new_names:
|
||||
idl = []
|
||||
for i_data in raveled_data:
|
||||
tmp = i_data.idl.get(name)
|
||||
if tmp is not None:
|
||||
idl.append(tmp)
|
||||
new_idl_d[name] = _merge_idx(idl)
|
||||
if not is_merged[name]:
|
||||
is_merged[name] = (1 != len(set([len(idx) for idx in [*idl, new_idl_d[name]]])))
|
||||
|
||||
if data.ndim == 1:
|
||||
values = np.array([o.value for o in data])
|
||||
else:
|
||||
values = np.vectorize(lambda x: x.value)(data)
|
||||
|
||||
new_values = func(values, **kwargs)
|
||||
|
||||
new_r_values = {}
|
||||
for name in new_names:
|
||||
tmp_values = np.zeros(n_obs)
|
||||
for i, item in enumerate(raveled_data):
|
||||
tmp = item.r_values.get(name)
|
||||
if tmp is None:
|
||||
tmp = item.value
|
||||
tmp_values[i] = tmp
|
||||
tmp_values = np.array(tmp_values).reshape(data.shape)
|
||||
new_r_values[name] = func(tmp_values, **kwargs)
|
||||
|
||||
if 'man_grad' in kwargs:
|
||||
deriv = np.asarray(kwargs.get('man_grad'))
|
||||
if new_values.shape + data.shape != deriv.shape:
|
||||
raise Exception('Manual derivative does not have correct shape.')
|
||||
else:
|
||||
deriv = jacobian(func)(values, **kwargs)
|
||||
|
||||
final_result = np.zeros(new_values.shape, dtype=object)
|
||||
|
||||
d_extracted = {}
|
||||
for name in new_names:
|
||||
d_extracted[name] = []
|
||||
for i_dat, dat in enumerate(data):
|
||||
ens_length = len(new_idl_d[name])
|
||||
d_extracted[name].append(np.array([_expand_deltas_for_merge(o.deltas[name], o.idl[name], o.shape[name], new_idl_d[name]) for o in dat.reshape(np.prod(dat.shape))]).reshape(dat.shape + (ens_length, )))
|
||||
|
||||
for i_val, new_val in np.ndenumerate(new_values):
|
||||
new_deltas = {}
|
||||
for name in new_names:
|
||||
ens_length = d_extracted[name][0].shape[-1]
|
||||
new_deltas[name] = np.zeros(ens_length)
|
||||
for i_dat, dat in enumerate(d_extracted[name]):
|
||||
new_deltas[name] += np.tensordot(deriv[i_val + (i_dat, )], dat)
|
||||
|
||||
new_samples = []
|
||||
new_means = []
|
||||
new_idl = []
|
||||
for name in new_names:
|
||||
if is_merged[name]:
|
||||
filtered_deltas, filtered_idl_d = _filter_zeroes(new_deltas[name], new_idl_d[name])
|
||||
else:
|
||||
filtered_deltas = new_deltas[name]
|
||||
filtered_idl_d = new_idl_d[name]
|
||||
|
||||
new_samples.append(filtered_deltas)
|
||||
new_idl.append(filtered_idl_d)
|
||||
new_means.append(new_r_values[name][i_val])
|
||||
final_result[i_val] = Obs(new_samples, new_names, means=new_means, idl=new_idl)
|
||||
final_result[i_val]._value = new_val
|
||||
final_result[i_val].is_merged = is_merged
|
||||
final_result[i_val].reweighted = reweighted
|
||||
|
||||
return final_result
|
||||
from .obs import derived_observable, CObs, Obs, import_jackknife
|
||||
|
||||
|
||||
def matmul(*operands):
|
||||
|
@ -157,8 +42,8 @@ def matmul(*operands):
|
|||
def multi_dot_i(operands):
|
||||
return multi_dot(operands, 'Imag')
|
||||
|
||||
Nr = derived_array(multi_dot_r, extended_operands)
|
||||
Ni = derived_array(multi_dot_i, extended_operands)
|
||||
Nr = derived_observable(multi_dot_r, extended_operands, array_mode=True)
|
||||
Ni = derived_observable(multi_dot_i, extended_operands, array_mode=True)
|
||||
|
||||
res = np.empty_like(Nr)
|
||||
for (n, m), entry in np.ndenumerate(Nr):
|
||||
|
@ -171,57 +56,142 @@ def matmul(*operands):
|
|||
for op in operands[1:]:
|
||||
stack = stack @ op
|
||||
return stack
|
||||
return derived_array(multi_dot, operands)
|
||||
return derived_observable(multi_dot, operands, array_mode=True)
|
||||
|
||||
|
||||
def jack_matmul(a, b):
|
||||
def jack_matmul(*operands):
|
||||
"""Matrix multiply both operands making use of the jackknife approximation.
|
||||
|
||||
Parameters
|
||||
----------
|
||||
a : numpy.ndarray
|
||||
First matrix, can be real or complex Obs valued
|
||||
b : numpy.ndarray
|
||||
Second matrix, can be real or complex Obs valued
|
||||
operands : numpy.ndarray
|
||||
Arbitrary number of 2d-numpy arrays which can be real or complex
|
||||
Obs valued.
|
||||
|
||||
For large matrices this is considerably faster compared to matmul.
|
||||
"""
|
||||
|
||||
if any(isinstance(o[0, 0], CObs) for o in [a, b]):
|
||||
def _exp_to_jack(matrix):
|
||||
base_matrix = np.empty_like(matrix)
|
||||
for (n, m), entry in np.ndenumerate(matrix):
|
||||
base_matrix[n, m] = entry.real.export_jackknife() + 1j * entry.imag.export_jackknife()
|
||||
for index, entry in np.ndenumerate(matrix):
|
||||
base_matrix[index] = entry.export_jackknife()
|
||||
return base_matrix
|
||||
|
||||
def _imp_from_jack(matrix, name):
|
||||
def _imp_from_jack(matrix, name, idl):
|
||||
base_matrix = np.empty_like(matrix)
|
||||
for (n, m), entry in np.ndenumerate(matrix):
|
||||
base_matrix[n, m] = CObs(import_jackknife(entry.real, name),
|
||||
import_jackknife(entry.imag, name))
|
||||
for index, entry in np.ndenumerate(matrix):
|
||||
base_matrix[index] = import_jackknife(entry, name, [idl])
|
||||
return base_matrix
|
||||
|
||||
j_a = _exp_to_jack(a)
|
||||
j_b = _exp_to_jack(b)
|
||||
r = j_a @ j_b
|
||||
return _imp_from_jack(r, a.ravel()[0].real.names[0])
|
||||
def _exp_to_jack_c(matrix):
|
||||
base_matrix = np.empty_like(matrix)
|
||||
for index, entry in np.ndenumerate(matrix):
|
||||
base_matrix[index] = entry.real.export_jackknife() + 1j * entry.imag.export_jackknife()
|
||||
return base_matrix
|
||||
|
||||
def _imp_from_jack_c(matrix, name, idl):
|
||||
base_matrix = np.empty_like(matrix)
|
||||
for index, entry in np.ndenumerate(matrix):
|
||||
base_matrix[index] = CObs(import_jackknife(entry.real, name, [idl]),
|
||||
import_jackknife(entry.imag, name, [idl]))
|
||||
return base_matrix
|
||||
|
||||
if any(isinstance(o.flat[0], CObs) for o in operands):
|
||||
name = operands[0].flat[0].real.names[0]
|
||||
idl = operands[0].flat[0].real.idl[name]
|
||||
|
||||
r = _exp_to_jack_c(operands[0])
|
||||
for op in operands[1:]:
|
||||
if isinstance(op.flat[0], CObs):
|
||||
r = r @ _exp_to_jack_c(op)
|
||||
else:
|
||||
r = r @ op
|
||||
return _imp_from_jack_c(r, name, idl)
|
||||
else:
|
||||
name = operands[0].flat[0].names[0]
|
||||
idl = operands[0].flat[0].idl[name]
|
||||
|
||||
r = _exp_to_jack(operands[0])
|
||||
for op in operands[1:]:
|
||||
if isinstance(op.flat[0], Obs):
|
||||
r = r @ _exp_to_jack(op)
|
||||
else:
|
||||
r = r @ op
|
||||
return _imp_from_jack(r, name, idl)
|
||||
|
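A hedged comparison of the two multiplication routines; mat_a and mat_b are hypothetical 2d numpy arrays of Obs defined on the same ensemble:

    from pyerrors import linalg
    prod_exact = linalg.matmul(mat_a, mat_b)      # exact error propagation
    prod_jack = linalg.jack_matmul(mat_a, mat_b)  # jackknife approximation,
                                                  # much faster for large matrices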
||||
|
||||
def einsum(subscripts, *operands):
|
||||
"""Wrapper for numpy.einsum
|
||||
|
||||
Parameters
|
||||
----------
|
||||
subscripts : str
|
||||
Subscripts for summation (see numpy documentation for details)
|
||||
operands : numpy.ndarray
|
||||
Arbitrary number of 2d-numpy arrays which can be real or complex
|
||||
Obs valued.
|
||||
"""
|
||||
|
||||
def _exp_to_jack(matrix):
|
||||
base_matrix = np.empty_like(matrix)
|
||||
for (n, m), entry in np.ndenumerate(matrix):
|
||||
base_matrix[n, m] = entry.export_jackknife()
|
||||
base_matrix = []
|
||||
for index, entry in np.ndenumerate(matrix):
|
||||
base_matrix.append(entry.export_jackknife())
|
||||
return np.asarray(base_matrix).reshape(matrix.shape + base_matrix[0].shape)
|
||||
|
||||
def _exp_to_jack_c(matrix):
|
||||
base_matrix = []
|
||||
for index, entry in np.ndenumerate(matrix):
|
||||
base_matrix.append(entry.real.export_jackknife() + 1j * entry.imag.export_jackknife())
|
||||
return np.asarray(base_matrix).reshape(matrix.shape + base_matrix[0].shape)
|
||||
|
||||
def _imp_from_jack(matrix, name, idl):
|
||||
base_matrix = np.empty(shape=matrix.shape[:-1], dtype=object)
|
||||
for index in np.ndindex(matrix.shape[:-1]):
|
||||
base_matrix[index] = import_jackknife(matrix[index], name, [idl])
|
||||
return base_matrix
|
||||
|
||||
def _imp_from_jack(matrix, name):
|
||||
base_matrix = np.empty_like(matrix)
|
||||
for (n, m), entry in np.ndenumerate(matrix):
|
||||
base_matrix[n, m] = import_jackknife(entry, name)
|
||||
def _imp_from_jack_c(matrix, name, idl):
|
||||
base_matrix = np.empty(shape=matrix.shape[:-1], dtype=object)
|
||||
for index in np.ndindex(matrix.shape[:-1]):
|
||||
base_matrix[index] = CObs(import_jackknife(matrix[index].real, name, [idl]),
|
||||
import_jackknife(matrix[index].imag, name, [idl]))
|
||||
return base_matrix
|
||||
|
||||
j_a = _exp_to_jack(a)
|
||||
j_b = _exp_to_jack(b)
|
||||
r = j_a @ j_b
|
||||
return _imp_from_jack(r, a.ravel()[0].names[0])
|
||||
for op in operands:
|
||||
if isinstance(op.flat[0], CObs):
|
||||
name = op.flat[0].real.names[0]
|
||||
idl = op.flat[0].real.idl[name]
|
||||
break
|
||||
elif isinstance(op.flat[0], Obs):
|
||||
name = op.flat[0].names[0]
|
||||
idl = op.flat[0].idl[name]
|
||||
break
|
||||
|
||||
conv_operands = []
|
||||
for op in operands:
|
||||
if isinstance(op.flat[0], CObs):
|
||||
conv_operands.append(_exp_to_jack_c(op))
|
||||
elif isinstance(op.flat[0], Obs):
|
||||
conv_operands.append(_exp_to_jack(op))
|
||||
else:
|
||||
conv_operands.append(op)
|
||||
|
||||
tmp_subscripts = ','.join([o + '...' for o in subscripts.split(',')])
|
||||
extended_subscripts = '->'.join([o + '...' for o in tmp_subscripts.split('->')[:-1]] + [tmp_subscripts.split('->')[-1]])
|
||||
einsum_path = np.einsum_path(extended_subscripts, *conv_operands, optimize='optimal')[0]
|
||||
jack_einsum = np.einsum(extended_subscripts, *conv_operands, optimize=einsum_path)
|
||||
|
||||
if jack_einsum.dtype == complex:
|
||||
result = _imp_from_jack_c(jack_einsum, name, idl)
|
||||
elif jack_einsum.dtype == float:
|
||||
result = _imp_from_jack(jack_einsum, name, idl)
|
||||
else:
|
||||
raise Exception("Result has unexpected datatype")
|
||||
|
||||
if result.shape == ():
|
||||
return result.flat[0]
|
||||
else:
|
||||
return result
|
||||
|
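The wrapper accepts standard einsum subscripts; a hedged sketch with hypothetical Obs-valued matrices mat_a and mat_b:

    # trace of a matrix product, Tr(A B), evaluated in jackknife space;
    # a scalar subscript result comes back as a single Obs
    from pyerrors import linalg
    tr_ab = linalg.einsum('ij,ji->', mat_a, mat_b)
    tr_ab.gamma_method()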
||||
|
||||
def boot_matmul(a, b):
|
||||
|
@ -320,12 +290,15 @@ def cholesky(x):
|
|||
return _mat_mat_op(anp.linalg.cholesky, x)
|
||||
|
||||
|
||||
def scalar_mat_op(op, obs, **kwargs):
|
||||
def det(x):
|
||||
"""Determinant of Obs valued matrices."""
|
||||
return _scalar_mat_op(anp.linalg.det, x)
|
||||
|
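A one-line hedged usage sketch for the new explicit wrapper; mat is a hypothetical square numpy array of Obs:

    from pyerrors import linalg
    d = linalg.det(mat)   # a single Obs with propagated uncertainty
    d.gamma_method()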
||||
|
||||
def _scalar_mat_op(op, obs, **kwargs):
|
||||
"""Computes the matrix to scalar operation op to a given matrix of Obs."""
|
||||
def _mat(x, **kwargs):
|
||||
dim = int(np.sqrt(len(x)))
|
||||
if np.sqrt(len(x)) != dim:
|
||||
raise Exception('Input has to have dim**2 entries')
|
||||
|
||||
mat = []
|
||||
for i in range(dim):
|
||||
|
@ -338,8 +311,6 @@ def scalar_mat_op(op, obs, **kwargs):
|
|||
|
||||
if isinstance(obs, np.ndarray):
|
||||
raveled_obs = (1 * (obs.ravel())).tolist()
|
||||
elif isinstance(obs, list):
|
||||
raveled_obs = obs
|
||||
else:
|
||||
raise TypeError('Improper type of input.')
|
||||
return derived_observable(_mat, raveled_obs, **kwargs)
|
||||
|
@ -359,10 +330,7 @@ def _mat_mat_op(op, obs, **kwargs):
|
|||
A[n, m] = entry
|
||||
B[n, m] = 0.0
|
||||
big_matrix = np.block([[A, -B], [B, A]])
|
||||
if kwargs.get('num_grad') is True:
|
||||
op_big_matrix = _num_diff_mat_mat_op(op, big_matrix, **kwargs)
|
||||
else:
|
||||
op_big_matrix = derived_array(lambda x, **kwargs: op(x), [big_matrix])[0]
|
||||
op_big_matrix = derived_observable(lambda x, **kwargs: op(x), [big_matrix], array_mode=True)[0]
|
||||
dim = op_big_matrix.shape[0]
|
||||
op_A = op_big_matrix[0: dim // 2, 0: dim // 2]
|
||||
op_B = op_big_matrix[dim // 2:, 0: dim // 2]
|
||||
|
@ -371,15 +339,11 @@ def _mat_mat_op(op, obs, **kwargs):
|
|||
res[n, m] = CObs(op_A[n, m], op_B[n, m])
|
||||
return res
|
||||
else:
|
||||
if kwargs.get('num_grad') is True:
|
||||
return _num_diff_mat_mat_op(op, obs, **kwargs)
|
||||
return derived_array(lambda x, **kwargs: op(x), [obs])[0]
|
||||
return derived_observable(lambda x, **kwargs: op(x), [obs], array_mode=True)[0]
|
||||
|
||||
|
||||
def eigh(obs, **kwargs):
|
||||
"""Computes the eigenvalues and eigenvectors of a given hermitian matrix of Obs according to np.linalg.eigh."""
|
||||
if kwargs.get('num_grad') is True:
|
||||
return _num_diff_eigh(obs, **kwargs)
|
||||
w = derived_observable(lambda x, **kwargs: anp.linalg.eigh(x)[0], obs)
|
||||
v = derived_observable(lambda x, **kwargs: anp.linalg.eigh(x)[1], obs)
|
||||
return w, v
|
||||
|
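Hedged usage sketch; corr_matrix is a hypothetical hermitian numpy array of Obs:

    # eigenvalues are returned as a 1d array of Obs, eigenvectors as columns of v
    w, v = eigh(corr_matrix)
    [o.gamma_method() for o in w]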
@ -387,305 +351,18 @@ def eigh(obs, **kwargs):
|
|||
|
||||
def eig(obs, **kwargs):
|
||||
"""Computes the eigenvalues of a given matrix of Obs according to np.linalg.eig."""
|
||||
if kwargs.get('num_grad') is True:
|
||||
return _num_diff_eig(obs, **kwargs)
|
||||
# Note: Automatic differentiation of eig is implemented in the git of autograd
|
||||
# but not yet released to PyPi (1.3)
|
||||
w = derived_observable(lambda x, **kwargs: anp.real(anp.linalg.eig(x)[0]), obs)
|
||||
return w
|
||||
|
||||
|
||||
def pinv(obs, **kwargs):
|
||||
"""Computes the Moore-Penrose pseudoinverse of a matrix of Obs."""
|
||||
if kwargs.get('num_grad') is True:
|
||||
return _num_diff_pinv(obs, **kwargs)
|
||||
return derived_observable(lambda x, **kwargs: anp.linalg.pinv(x), obs)
|
||||
|
||||
|
||||
def svd(obs, **kwargs):
|
||||
"""Computes the singular value decomposition of a matrix of Obs."""
|
||||
if kwargs.get('num_grad') is True:
|
||||
return _num_diff_svd(obs, **kwargs)
|
||||
u = derived_observable(lambda x, **kwargs: anp.linalg.svd(x, full_matrices=False)[0], obs)
|
||||
s = derived_observable(lambda x, **kwargs: anp.linalg.svd(x, full_matrices=False)[1], obs)
|
||||
vh = derived_observable(lambda x, **kwargs: anp.linalg.svd(x, full_matrices=False)[2], obs)
|
||||
return (u, s, vh)
|
||||
|
||||
|
||||
def slogdet(obs, **kwargs):
|
||||
"""Computes the determinant of a matrix of Obs via np.linalg.slogdet."""
|
||||
def _mat(x):
|
||||
dim = int(np.sqrt(len(x)))
|
||||
if np.sqrt(len(x)) != dim:
|
||||
raise Exception('Input has to have dim**2 entries')
|
||||
|
||||
mat = []
|
||||
for i in range(dim):
|
||||
row = []
|
||||
for j in range(dim):
|
||||
row.append(x[j + dim * i])
|
||||
mat.append(row)
|
||||
|
||||
(sign, logdet) = anp.linalg.slogdet(np.array(mat))
|
||||
return sign * anp.exp(logdet)
|
||||
|
||||
if isinstance(obs, np.ndarray):
|
||||
return derived_observable(_mat, (1 * (obs.ravel())).tolist(), **kwargs)
|
||||
elif isinstance(obs, list):
|
||||
return derived_observable(_mat, obs, **kwargs)
|
||||
else:
|
||||
raise TypeError('Improper type of input.')
|
||||
|
||||
|
||||
# Variants for numerical differentiation
|
||||
|
||||
def _num_diff_mat_mat_op(op, obs, **kwargs):
|
||||
"""Computes the matrix to matrix operation op to a given matrix of Obs elementwise
|
||||
which is suitable for numerical differentiation."""
|
||||
def _mat(x, **kwargs):
|
||||
dim = int(np.sqrt(len(x)))
|
||||
if np.sqrt(len(x)) != dim:
|
||||
raise Exception('Input has to have dim**2 entries')
|
||||
|
||||
mat = []
|
||||
for i in range(dim):
|
||||
row = []
|
||||
for j in range(dim):
|
||||
row.append(x[j + dim * i])
|
||||
mat.append(row)
|
||||
|
||||
return op(np.array(mat))[kwargs.get('i')][kwargs.get('j')]
|
||||
|
||||
if isinstance(obs, np.ndarray):
|
||||
raveled_obs = (1 * (obs.ravel())).tolist()
|
||||
elif isinstance(obs, list):
|
||||
raveled_obs = obs
|
||||
else:
|
||||
raise TypeError('Improper type of input.')
|
||||
|
||||
dim = int(np.sqrt(len(raveled_obs)))
|
||||
|
||||
res_mat = []
|
||||
for i in range(dim):
|
||||
row = []
|
||||
for j in range(dim):
|
||||
row.append(derived_observable(_mat, raveled_obs, i=i, j=j, **kwargs))
|
||||
res_mat.append(row)
|
||||
|
||||
return np.array(res_mat) @ np.identity(dim)
|
||||
|
||||
|
||||
def _num_diff_eigh(obs, **kwargs):
|
||||
"""Computes the eigenvalues and eigenvectors of a given hermitian matrix of Obs according to np.linalg.eigh
|
||||
elementwise which is suitable for numerical differentiation."""
|
||||
def _mat(x, **kwargs):
|
||||
dim = int(np.sqrt(len(x)))
|
||||
if np.sqrt(len(x)) != dim:
|
||||
raise Exception('Input has to have dim**2 entries')
|
||||
|
||||
mat = []
|
||||
for i in range(dim):
|
||||
row = []
|
||||
for j in range(dim):
|
||||
row.append(x[j + dim * i])
|
||||
mat.append(row)
|
||||
|
||||
n = kwargs.get('n')
|
||||
res = np.linalg.eigh(np.array(mat))[n]
|
||||
|
||||
if n == 0:
|
||||
return res[kwargs.get('i')]
|
||||
else:
|
||||
return res[kwargs.get('i')][kwargs.get('j')]
|
||||
|
||||
if isinstance(obs, np.ndarray):
|
||||
raveled_obs = (1 * (obs.ravel())).tolist()
|
||||
elif isinstance(obs, list):
|
||||
raveled_obs = obs
|
||||
else:
|
||||
raise TypeError('Improper type of input.')
|
||||
|
||||
dim = int(np.sqrt(len(raveled_obs)))
|
||||
|
||||
res_vec = []
|
||||
for i in range(dim):
|
||||
res_vec.append(derived_observable(_mat, raveled_obs, n=0, i=i, **kwargs))
|
||||
|
||||
res_mat = []
|
||||
for i in range(dim):
|
||||
row = []
|
||||
for j in range(dim):
|
||||
row.append(derived_observable(_mat, raveled_obs, n=1, i=i, j=j, **kwargs))
|
||||
res_mat.append(row)
|
||||
|
||||
return (np.array(res_vec) @ np.identity(dim), np.array(res_mat) @ np.identity(dim))
|
||||
|
||||
|
||||
def _num_diff_eig(obs, **kwargs):
|
||||
"""Computes the eigenvalues of a given matrix of Obs according to np.linalg.eig
|
||||
elementwise which is suitable for numerical differentiation."""
|
||||
def _mat(x, **kwargs):
|
||||
dim = int(np.sqrt(len(x)))
|
||||
if np.sqrt(len(x)) != dim:
|
||||
raise Exception('Input has to have dim**2 entries')
|
||||
|
||||
mat = []
|
||||
for i in range(dim):
|
||||
row = []
|
||||
for j in range(dim):
|
||||
row.append(x[j + dim * i])
|
||||
mat.append(row)
|
||||
|
||||
n = kwargs.get('n')
|
||||
res = np.linalg.eig(np.array(mat))[n]
|
||||
|
||||
if n == 0:
|
||||
# Discard imaginary part of eigenvalue here
|
||||
return np.real(res[kwargs.get('i')])
|
||||
else:
|
||||
return res[kwargs.get('i')][kwargs.get('j')]
|
||||
|
||||
if isinstance(obs, np.ndarray):
|
||||
raveled_obs = (1 * (obs.ravel())).tolist()
|
||||
elif isinstance(obs, list):
|
||||
raveled_obs = obs
|
||||
else:
|
||||
raise TypeError('Improper type of input.')
|
||||
|
||||
dim = int(np.sqrt(len(raveled_obs)))
|
||||
|
||||
res_vec = []
|
||||
for i in range(dim):
|
||||
# Note: Automatic differentiation of eig is implemented in the git of autograd
|
||||
# but not yet released to PyPi (1.3)
|
||||
res_vec.append(derived_observable(_mat, raveled_obs, n=0, i=i, **kwargs))
|
||||
|
||||
return np.array(res_vec) @ np.identity(dim)
|
||||
|
||||
|
||||
def _num_diff_pinv(obs, **kwargs):
|
||||
"""Computes the Moore-Penrose pseudoinverse of a matrix of Obs elementwise which is suitable
|
||||
for numerical differentiation."""
|
||||
def _mat(x, **kwargs):
|
||||
shape = kwargs.get('shape')
|
||||
|
||||
mat = []
|
||||
for i in range(shape[0]):
|
||||
row = []
|
||||
for j in range(shape[1]):
|
||||
row.append(x[j + shape[1] * i])
|
||||
mat.append(row)
|
||||
|
||||
return np.linalg.pinv(np.array(mat))[kwargs.get('i')][kwargs.get('j')]
|
||||
|
||||
if isinstance(obs, np.ndarray):
|
||||
shape = obs.shape
|
||||
raveled_obs = (1 * (obs.ravel())).tolist()
|
||||
else:
|
||||
raise TypeError('Improper type of input.')
|
||||
|
||||
res_mat = []
|
||||
for i in range(shape[1]):
|
||||
row = []
|
||||
for j in range(shape[0]):
|
||||
row.append(derived_observable(_mat, raveled_obs, shape=shape, i=i, j=j, **kwargs))
|
||||
res_mat.append(row)
|
||||
|
||||
return np.array(res_mat) @ np.identity(shape[0])
|
||||
|
||||
|
||||
def _num_diff_svd(obs, **kwargs):
|
||||
"""Computes the singular value decomposition of a matrix of Obs elementwise which
|
||||
is suitable for numerical differentiation."""
|
||||
def _mat(x, **kwargs):
|
||||
shape = kwargs.get('shape')
|
||||
|
||||
mat = []
|
||||
for i in range(shape[0]):
|
||||
row = []
|
||||
for j in range(shape[1]):
|
||||
row.append(x[j + shape[1] * i])
|
||||
mat.append(row)
|
||||
|
||||
res = np.linalg.svd(np.array(mat), full_matrices=False)
|
||||
|
||||
if kwargs.get('n') == 1:
|
||||
return res[1][kwargs.get('i')]
|
||||
else:
|
||||
return res[kwargs.get('n')][kwargs.get('i')][kwargs.get('j')]
|
||||
|
||||
if isinstance(obs, np.ndarray):
|
||||
shape = obs.shape
|
||||
raveled_obs = (1 * (obs.ravel())).tolist()
|
||||
else:
|
||||
raise TypeError('Improper type of input.')
|
||||
|
||||
mid_index = min(shape[0], shape[1])
|
||||
|
||||
res_mat0 = []
|
||||
for i in range(shape[0]):
|
||||
row = []
|
||||
for j in range(mid_index):
|
||||
row.append(derived_observable(_mat, raveled_obs, shape=shape, n=0, i=i, j=j, **kwargs))
|
||||
res_mat0.append(row)
|
||||
|
||||
res_mat1 = []
|
||||
for i in range(mid_index):
|
||||
res_mat1.append(derived_observable(_mat, raveled_obs, shape=shape, n=1, i=i, **kwargs))
|
||||
|
||||
res_mat2 = []
|
||||
for i in range(mid_index):
|
||||
row = []
|
||||
for j in range(shape[1]):
|
||||
row.append(derived_observable(_mat, raveled_obs, shape=shape, n=2, i=i, j=j, **kwargs))
|
||||
res_mat2.append(row)
|
||||
|
||||
return (np.array(res_mat0) @ np.identity(mid_index), np.array(res_mat1) @ np.identity(mid_index), np.array(res_mat2) @ np.identity(shape[1]))
|
||||
|
||||
|
||||
# This code block is directly taken from the current master branch of autograd and remains
|
||||
# only until the new version is released on PyPi
|
||||
_dot = partial(anp.einsum, '...ij,...jk->...ik')
|
||||
|
||||
|
||||
# batched diag
|
||||
def _diag(a):
|
||||
return anp.eye(a.shape[-1]) * a
|
||||
|
||||
|
||||
# batched diagonal, similar to matrix_diag in tensorflow
|
||||
def _matrix_diag(a):
|
||||
reps = anp.array(a.shape)
|
||||
reps[:-1] = 1
|
||||
reps[-1] = a.shape[-1]
|
||||
newshape = list(a.shape) + [a.shape[-1]]
|
||||
return _diag(anp.tile(a, reps).reshape(newshape))
|
||||
|
||||
# https://arxiv.org/pdf/1701.00392.pdf Eq(4.77)
|
||||
# Note the formula from Sec3.1 in https://people.maths.ox.ac.uk/gilesm/files/NA-08-01.pdf is incomplete
|
||||
|
||||
|
||||
def grad_eig(ans, x):
|
||||
"""Gradient of a general square (complex valued) matrix"""
|
||||
e, u = ans # eigenvalues as 1d array, eigenvectors in columns
|
||||
n = e.shape[-1]
|
||||
|
||||
def vjp(g):
|
||||
ge, gu = g
|
||||
ge = _matrix_diag(ge)
|
||||
f = 1 / (e[..., anp.newaxis, :] - e[..., :, anp.newaxis] + 1.e-20)
|
||||
f -= _diag(f)
|
||||
ut = anp.swapaxes(u, -1, -2)
|
||||
r1 = f * _dot(ut, gu)
|
||||
r2 = -f * (_dot(_dot(ut, anp.conj(u)), anp.real(_dot(ut, gu)) * anp.eye(n)))
|
||||
r = _dot(_dot(anp.linalg.inv(ut), ge + r1 + r2), ut)
|
||||
if not anp.iscomplexobj(x):
|
||||
r = anp.real(r)
|
||||
# the derivative is still complex for real input (imaginary delta is allowed), real output
|
||||
# but the derivative should be real in real input case when imaginary delta is forbidden
|
||||
return r
|
||||
return vjp
|
||||
|
||||
|
||||
defvjp(anp.linalg.eig, grad_eig)
|
||||
# End of the code block from autograd.master
|
||||
|
|
|
@ -1,18 +1,56 @@
|
|||
import pickle
|
||||
import numpy as np
|
||||
from .obs import Obs
|
||||
|
||||
|
||||
def dump_object(obj, name, **kwargs):
|
||||
"""Dump object into pickle file.
|
||||
|
||||
Parameters
|
||||
----------
|
||||
obj : object
|
||||
object to be saved in the pickle file
|
||||
name : str
|
||||
name of the file
|
||||
path : str
|
||||
specifies a custom path for the file (default '.')
|
||||
"""
|
||||
if 'path' in kwargs:
|
||||
file_name = kwargs.get('path') + '/' + name + '.p'
|
||||
else:
|
||||
file_name = name + '.p'
|
||||
with open(file_name, 'wb') as fb:
|
||||
pickle.dump(obj, fb)
|
||||
|
||||
|
||||
def load_object(path):
|
||||
"""Load object from pickle file.
|
||||
|
||||
Parameters
|
||||
----------
|
||||
path : str
|
||||
path to the file
|
||||
"""
|
||||
with open(path, 'rb') as file:
|
||||
return pickle.load(file)
|
||||
|
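A round-trip sketch for the two pickle helpers (object and file name are made up, and the module is assumed to be pyerrors.misc); note that dump_object appends ".p" itself while load_object expects the full path:

    from pyerrors.misc import dump_object, load_object
    dump_object(my_obs, 'my_obs', path='.')
    my_obs_reread = load_object('./my_obs.p')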
||||
|
||||
def gen_correlated_data(means, cov, name, tau=0.5, samples=1000):
|
||||
""" Generate observables with given covariance and autocorrelation times.
|
||||
|
||||
Parameters
|
||||
----------
|
||||
means -- list containing the mean value of each observable.
|
||||
cov -- covariance matrix for the data to be generated.
|
||||
name -- ensemble name for the data to be generated.
|
||||
tau -- can either be a real number or a list with an entry for
|
||||
means : list
|
||||
list containing the mean value of each observable.
|
||||
cov : numpy.ndarray
|
||||
covariance matrix for the data to be generated.
|
||||
name : str
|
||||
ensemble name for the data to be generated.
|
||||
tau : float or list
|
||||
can either be a real number or a list with an entry for
|
||||
every dataset.
|
||||
samples -- number of samples to be generated for each observable.
|
||||
samples : int
|
||||
number of samples to be generated for each observable.
|
||||
"""
|
||||
|
||||
assert len(means) == cov.shape[-1]
|
||||
|
@ -32,3 +70,19 @@ def gen_correlated_data(means, cov, name, tau=0.5, samples=1000):
|
|||
data.append(np.sqrt(1 - a ** 2) * rand[i] + a * data[-1])
|
||||
corr_data = np.array(data) - np.mean(data, axis=0) + means
|
||||
return [Obs([dat], [name]) for dat in corr_data.T]
|
||||
|
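A hedged sketch of generating synthetic test data with this helper (module location assumed to be pyerrors.misc):

    # two correlated observables with autocorrelation time tau = 4
    import numpy as np
    from pyerrors.misc import gen_correlated_data
    cov = np.array([[0.5, -0.2], [-0.2, 0.3]])
    obs_a, obs_b = gen_correlated_data([1.0, 2.0], cov, 'test_ens',
                                       tau=4, samples=1000)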
||||
|
||||
def _assert_equal_properties(ol, otype=Obs):
|
||||
if not isinstance(ol[0], otype):
|
||||
raise Exception("Wrong data type in list.")
|
||||
for o in ol[1:]:
|
||||
if not isinstance(o, otype):
|
||||
raise Exception("Wrong data type in list.")
|
||||
if not ol[0].is_merged == o.is_merged:
|
||||
raise Exception("All Obs in list have to be defined on the same set of configs.")
|
||||
if not ol[0].reweighted == o.reweighted:
|
||||
raise Exception("All Obs in list have to have the same property 'reweighted'.")
|
||||
if not ol[0].e_content == o.e_content:
|
||||
raise Exception("All Obs in list have to be defined on the same set of configs.")
|
||||
if not ol[0].idl == o.idl:
|
||||
raise Exception("All Obs in list have to be defined on the same set of configurations.")
|
||||
|
|
108
pyerrors/npr.py
|
@ -1,108 +0,0 @@
|
|||
import warnings
|
||||
import numpy as np
|
||||
from .linalg import inv, matmul
|
||||
from .dirac import gamma, gamma5
|
||||
|
||||
|
||||
L = None
|
||||
T = None
|
||||
|
||||
|
||||
class Npr_matrix(np.ndarray):
|
||||
|
||||
def __new__(cls, input_array, mom_in=None, mom_out=None):
|
||||
obj = np.asarray(input_array).view(cls)
|
||||
obj.mom_in = mom_in
|
||||
obj.mom_out = mom_out
|
||||
return obj
|
||||
|
||||
@property
|
||||
def g5H(self):
|
||||
"""Gamma_5 hermitean conjugate
|
||||
|
||||
Returns gamma_5 @ M.T.conj() @ gamma_5 and exchanges in and out going
|
||||
momenta. Works only for 12x12 matrices.
|
||||
"""
|
||||
if self.shape != (12, 12):
|
||||
raise Exception('g5H only works for 12x12 matrices.')
|
||||
extended_g5 = np.kron(np.eye(3, dtype=int), gamma5)
|
||||
return Npr_matrix(matmul(extended_g5, self.conj().T, extended_g5),
|
||||
mom_in=self.mom_out,
|
||||
mom_out=self.mom_in)
|
||||
|
||||
def _propagate_mom(self, other, name):
|
||||
s_mom = getattr(self, name, None)
|
||||
o_mom = getattr(other, name, None)
|
||||
if s_mom is not None and o_mom is not None:
|
||||
if not np.allclose(s_mom, o_mom):
|
||||
raise Exception(name + ' does not match.')
|
||||
return o_mom if o_mom is not None else s_mom
|
||||
|
||||
def __matmul__(self, other):
|
||||
return self.__new__(Npr_matrix,
|
||||
super().__matmul__(other),
|
||||
self._propagate_mom(other, 'mom_in'),
|
||||
self._propagate_mom(other, 'mom_out'))
|
||||
|
||||
def __array_finalize__(self, obj):
|
||||
if obj is None:
|
||||
return
|
||||
self.mom_in = getattr(obj, 'mom_in', None)
|
||||
self.mom_out = getattr(obj, 'mom_out', None)
|
||||
|
||||
|
||||
def _check_geometry():
|
||||
if L is None:
|
||||
raise Exception("Spatial extent 'L' not set.")
|
||||
else:
|
||||
if not isinstance(L, int):
|
||||
raise Exception("Spatial extent 'L' must be an integer.")
|
||||
if T is None:
|
||||
raise Exception("Temporal extent 'T' not set.")
|
||||
if not isinstance(T, int):
|
||||
raise Exception("Temporal extent 'T' must be an integer.")
|
||||
|
||||
|
||||
def inv_propagator(prop):
|
||||
""" Inverts a 12x12 quark propagator"""
|
||||
if prop.shape != (12, 12):
|
||||
raise Exception("Only 12x12 propagators can be inverted.")
|
||||
return Npr_matrix(inv(prop), prop.mom_in)
|
||||
|
||||
|
||||
def Zq(inv_prop, fermion='Wilson'):
|
||||
""" Calculates the quark field renormalization constant Zq
|
||||
|
||||
Parameters
|
||||
----------
|
||||
inv_prop : array
|
||||
Inverted 12x12 quark propagator
|
||||
fermion : str
|
||||
Fermion type for which the tree-level propagator is used
|
||||
in the calculation of Zq. Default Wilson.
|
||||
"""
|
||||
_check_geometry()
|
||||
mom = np.copy(inv_prop.mom_in)
|
||||
mom[3] /= T / L
|
||||
sin_mom = np.sin(2 * np.pi / L * mom)
|
||||
|
||||
if fermion == 'Wilson':
|
||||
p_slash = -1j * (sin_mom[0] * gamma[0] + sin_mom[1] * gamma[1] + sin_mom[2] * gamma[2] + sin_mom[3] * gamma[3]) / np.sum(sin_mom ** 2)
|
||||
elif fermion == 'Continuum':
|
||||
p_mom = 2 * np.pi / L * mom
|
||||
p_slash = -1j * (p_mom[0] * gamma[0] + p_mom[1] * gamma[1] + p_mom[2] * gamma[2] + p_mom[3] * gamma[3]) / np.sum(p_mom ** 2)
|
||||
elif fermion == 'DWF':
|
||||
W = np.sum(1 - np.cos(2 * np.pi / L * mom))
|
||||
s2 = np.sum(sin_mom ** 2)
|
||||
p_slash = -1j * (sin_mom[0] * gamma[0] + sin_mom[1] * gamma[1] + sin_mom[2] * gamma[2] + sin_mom[3] * gamma[3])
|
||||
p_slash /= 2 * (W - 1 + np.sqrt((1 - W) ** 2 + s2))
|
||||
else:
|
||||
raise Exception("Fermion type '" + fermion + "' not implemented")
|
||||
|
||||
res = 1 / 12. * np.trace(matmul(inv_prop, np.kron(np.eye(3, dtype=int), p_slash)))
|
||||
res.gamma_method()
|
||||
|
||||
if not res.imag.is_zero_within_error(5):
|
||||
warnings.warn("Imaginary part of Zq is not zero within 5 sigma")
|
||||
return res
|
||||
return res.real
|

514
pyerrors/obs.py
@@ -4,8 +4,10 @@ import numpy as np
 import autograd.numpy as anp  # Thinly-wrapped numpy
 from autograd import jacobian
 import matplotlib.pyplot as plt
+from scipy.stats import skew, skewtest, kurtosis, kurtosistest
 import numdifftools as nd
 from itertools import groupby
+from .covobs import Covobs


 class Obs:
@@ -41,7 +43,7 @@ class Obs:
                  'ddvalue', 'reweighted', 'S', 'tau_exp', 'N_sigma',
                  'e_dvalue', 'e_ddvalue', 'e_tauint', 'e_dtauint',
                  'e_windowsize', 'e_rho', 'e_drho', 'e_n_tauint', 'e_n_dtauint',
-                 'idl', 'is_merged', 'tag', '__dict__']
+                 'idl', 'is_merged', 'tag', '_covobs', '__dict__']

     S_global = 2.0
     S_dict = {}
@@ -67,16 +69,21 @@ class Obs:
            already subtracted from the samples
        """

-        if means is None:
+        if means is None and len(samples):
            if len(samples) != len(names):
                raise Exception('Length of samples and names incompatible.')
            if idl is not None:
                if len(idl) != len(names):
                    raise Exception('Length of idl incompatible with samples and names.')
-            if len(names) != len(set(names)):
-                raise Exception('names are not unique.')
-            if not all(isinstance(x, str) for x in names):
-                raise TypeError('All names have to be strings.')
+            name_length = len(names)
+            if name_length > 1:
+                if name_length != len(set(names)):
+                    raise Exception('names are not unique.')
+                if not all(isinstance(x, str) for x in names):
+                    raise TypeError('All names have to be strings.')
+            else:
+                if not isinstance(names[0], str):
+                    raise TypeError('All names have to be strings.')
            if min(len(x) for x in samples) <= 4:
                raise Exception('Samples have to have at least 5 entries.')
@@ -84,8 +91,10 @@ class Obs:
        self.shape = {}
        self.r_values = {}
        self.deltas = {}
+        self._covobs = {}

        self.idl = {}
+        if len(samples):
        if idl is not None:
            for name, idx in sorted(zip(names, idl)):
                if isinstance(idx, range):
@@ -104,29 +113,32 @@ class Obs:
            for name, sample in sorted(zip(names, samples)):
                self.idl[name] = range(1, len(sample) + 1)

-        self._value = 0
-        self.N = 0
        if means is not None:
            for name, sample, mean in sorted(zip(names, samples, means)):
                self.shape[name] = len(self.idl[name])
                if len(sample) != self.shape[name]:
                    raise Exception('Incompatible samples and idx for %s: %d vs. %d' % (name, len(sample), self.shape[name]))
-                self.N += self.shape[name]
                self.r_values[name] = mean
                self.deltas[name] = sample
        else:
            for name, sample in sorted(zip(names, samples)):
                self.shape[name] = len(self.idl[name])
-                self.N += self.shape[name]
                if len(sample) != self.shape[name]:
                    raise Exception('Incompatible samples and idx for %s: %d vs. %d' % (name, len(sample), self.shape[name]))
                self.r_values[name] = np.mean(sample)
                self.deltas[name] = sample - self.r_values[name]
+        self.is_merged = {}
+        self.N = sum(list(self.shape.values()))
+
+        self._value = 0
+        if means is None:
+            for name in self.names:
+                self._value += self.shape[name] * self.r_values[name]
+            self._value /= self.N

-        self.is_merged = {}

+        else:
+            self._value = 0
+            self.is_merged = {}
+            self.N = 0

        self._dvalue = 0.0
        self.ddvalue = 0.0
        self.reweighted = False
@@ -145,6 +157,14 @@ class Obs:
     def e_names(self):
         return sorted(set([o.split('|')[0] for o in self.names]))

+    @property
+    def cov_names(self):
+        return sorted(set([o for o in self.covobs.keys()]))
+
+    @property
+    def mc_names(self):
+        return sorted(set([o.split('|')[0] for o in self.names if o not in self.cov_names]))
+
     @property
     def e_content(self):
         res = {}
@@ -154,6 +174,10 @@ class Obs:
                 res[e_name].append(e_name)
         return res

+    @property
+    def covobs(self):
+        return self._covobs
+
     def gamma_method(self, **kwargs):
         """Estimate the error and related properties of the Obs.

@@ -219,8 +243,7 @@ class Obs:
         _parse_kwarg('tau_exp')
         _parse_kwarg('N_sigma')

-        for e, e_name in enumerate(self.e_names):
-
+        for e, e_name in enumerate(self.mc_names):
             r_length = []
             for r_name in e_content[e_name]:
                 if isinstance(self.idl[r_name], range):
@@ -306,11 +329,16 @@ class Obs:
             self._dvalue += self.e_dvalue[e_name] ** 2
             self.ddvalue += (self.e_dvalue[e_name] * self.e_ddvalue[e_name]) ** 2

-        self._dvalue = np.sqrt(self.dvalue)
+        for e_name in self.cov_names:
+            self.e_dvalue[e_name] = np.sqrt(self.covobs[e_name].errsq())
+            self.e_ddvalue[e_name] = 0
+            self._dvalue += self.e_dvalue[e_name]**2
+
+        self._dvalue = np.sqrt(self._dvalue)
         if self._dvalue == 0.0:
             self.ddvalue = 0.0
         else:
-            self.ddvalue = np.sqrt(self.ddvalue) / self.dvalue
+            self.ddvalue = np.sqrt(self.ddvalue) / self._dvalue
         return

     def _calc_gamma(self, deltas, idx, shape, w_max, fft):
@@ -362,17 +390,19 @@ class Obs:
         if self.value == 0.0:
             percentage = np.nan
         else:
-            percentage = np.abs(self.dvalue / self.value) * 100
-        print('Result\t %3.8e +/- %3.8e +/- %3.8e (%3.3f%%)' % (self.value, self.dvalue, self.ddvalue, percentage))
+            percentage = np.abs(self._dvalue / self.value) * 100
+        print('Result\t %3.8e +/- %3.8e +/- %3.8e (%3.3f%%)' % (self.value, self._dvalue, self.ddvalue, percentage))
         if len(self.e_names) > 1:
             print(' Ensemble errors:')
-        for e_name in self.e_names:
+        for e_name in self.mc_names:
             if len(self.e_names) > 1:
                 print('', e_name, '\t %3.8e +/- %3.8e' % (self.e_dvalue[e_name], self.e_ddvalue[e_name]))
             if self.tau_exp[e_name] > 0:
                 print(' t_int\t %3.8e +/- %3.8e tau_exp = %3.2f, N_sigma = %1.0i' % (self.e_tauint[e_name], self.e_dtauint[e_name], self.tau_exp[e_name], self.N_sigma[e_name]))
             else:
                 print(' t_int\t %3.8e +/- %3.8e S = %3.2f' % (self.e_tauint[e_name], self.e_dtauint[e_name], self.S[e_name]))
+        for e_name in self.cov_names:
+            print('', e_name, '\t %3.8e' % (self.e_dvalue[e_name]))
         if ens_content is True:
             if len(self.e_names) == 1:
                 print(self.N, 'samples in', len(self.e_names), 'ensemble:')
@@ -380,6 +410,7 @@ class Obs:
                 print(self.N, 'samples in', len(self.e_names), 'ensembles:')
             my_string_list = []
             for key, value in sorted(self.e_content.items()):
+                if key not in self.covobs:
                 my_string = ' ' + "\u00B7 Ensemble '" + key + "' "
                 if len(value) == 1:
                     my_string += f': {self.shape[value[0]]} configurations'
@@ -399,13 +430,11 @@ class Obs:
                     sublist.append(my_substring)

                 my_string += '\n' + '\n'.join(sublist)
+                else:
+                    my_string = ' ' + "\u00B7 Covobs '" + key + "' "
                 my_string_list.append(my_string)
             print('\n'.join(my_string_list))

-    def print(self, level=1):
-        warnings.warn("Method 'print' renamed to 'details'", DeprecationWarning)
-        self.details(level > 1)
-
     def is_zero_within_error(self, sigma=1):
         """Checks whether the observable is zero within 'sigma' standard errors.
@@ -416,11 +445,19 @@ class Obs:

         Works only properly when the gamma method was run.
         """
-        return self.is_zero() or np.abs(self.value) <= sigma * self.dvalue
+        return self.is_zero() or np.abs(self.value) <= sigma * self._dvalue

-    def is_zero(self):
-        """Checks whether the observable is zero within machine precision."""
-        return np.isclose(0.0, self.value) and all(np.allclose(0.0, delta) for delta in self.deltas.values())
+    def is_zero(self, rtol=1.e-5, atol=1.e-8):
+        """Checks whether the observable is zero within a given tolerance.
+
+        Parameters
+        ----------
+        rtol : float
+            Relative tolerance (for details see numpy documentation).
+        atol : float
+            Absolute tolerance (for details see numpy documentation).
+        """
+        return np.isclose(0.0, self.value, rtol, atol) and all(np.allclose(0.0, delta, rtol, atol) for delta in self.deltas.values()) and all(np.allclose(0.0, delta.errsq(), rtol, atol) for delta in self.covobs.values())

     def plot_tauint(self, save=None):
         """Plot integrated autocorrelation time for each ensemble.
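
A quick usage sketch of the relaxed zero check (the observable and tolerances below are made up):

import pyerrors as pe

a = pe.pseudo_Obs(1.0, 0.1, 'ens')
diff = a - a
print(diff.is_zero())            # True with the default tolerances
print(diff.is_zero(atol=1e-12))  # the same check with a tighter absolute tolerance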
@@ -433,8 +470,8 @@ class Obs:
         if not hasattr(self, 'e_dvalue'):
             raise Exception('Run the gamma method first.')

+        for e, e_name in enumerate(self.mc_names):
         fig = plt.figure()
-        for e, e_name in enumerate(self.e_names):
             plt.xlabel(r'$W$')
             plt.ylabel(r'$\tau_\mathrm{int}$')
             length = int(len(self.e_n_tauint[e_name]))
@@ -459,13 +496,20 @@ class Obs:
             plt.ylim(bottom=0.0)
             plt.draw()
             if save:
-                fig.savefig(save)
+                fig.savefig(save + "_" + str(e))

-    def plot_rho(self):
-        """Plot normalized autocorrelation function time for each ensemble."""
+    def plot_rho(self, save=None):
+        """Plot normalized autocorrelation function time for each ensemble.
+
+        Parameters
+        ----------
+        save : str
+            saves the figure to a file named 'save' if.
+        """
         if not hasattr(self, 'e_dvalue'):
             raise Exception('Run the gamma method first.')
-        for e, e_name in enumerate(self.e_names):
+        for e, e_name in enumerate(self.mc_names):
             fig = plt.figure()
             plt.xlabel('W')
             plt.ylabel('rho')
             length = int(len(self.e_drho[e_name]))
@@ -482,12 +526,14 @@ class Obs:
             plt.plot([-0.5, xmax], [0, 0], 'k--', lw=1)
             plt.xlim(-0.5, xmax)
             plt.draw()
+            if save:
+                fig.savefig(save + "_" + str(e))

     def plot_rep_dist(self):
         """Plot replica distribution for each ensemble with more than one replicum."""
         if not hasattr(self, 'e_dvalue'):
             raise Exception('Run the gamma method first.')
-        for e, e_name in enumerate(self.e_names):
+        for e, e_name in enumerate(self.mc_names):
             if len(self.e_content[e_name]) == 1:
                 print('No replica distribution for a single replicum (', e_name, ')')
                 continue
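
Both plotting methods now emit one figure per Monte Carlo ensemble; a minimal sketch (the ensemble name and file prefix are invented):

import numpy as np
import pyerrors as pe

o = pe.Obs([np.random.normal(1.0, 0.1, 500)], ['ens'])
o.gamma_method()             # required before any of the plotting methods
o.plot_rho(save='rho_plot')  # saves one file per ensemble, suffixed with its index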
@@ -513,22 +559,28 @@ class Obs:
         expand : bool
             show expanded history for irregular Monte Carlo chains (default: True).
         """
-        for e, e_name in enumerate(self.e_names):
+        for e, e_name in enumerate(self.mc_names):
             plt.figure()
             r_length = []
             tmp = []
+            tmp_expanded = []
             for r, r_name in enumerate(self.e_content[e_name]):
-                if expand:
-                    tmp.append(_expand_deltas(self.deltas[r_name], list(self.idl[r_name]), self.shape[r_name]) + self.r_values[r_name])
-                else:
-                    tmp.append(self.deltas[r_name] + self.r_values[r_name])
+                tmp.append(self.deltas[r_name] + self.r_values[r_name])
+                if expand:
+                    tmp_expanded.append(_expand_deltas(self.deltas[r_name], list(self.idl[r_name]), self.shape[r_name]) + self.r_values[r_name])
+                    r_length.append(len(tmp_expanded[-1]))
+                else:
+                    r_length.append(len(tmp[-1]))
             e_N = np.sum(r_length)
             x = np.arange(e_N)
-            y = np.concatenate(tmp, axis=0)
+            y_test = np.concatenate(tmp, axis=0)
+            if expand:
+                y = np.concatenate(tmp_expanded, axis=0)
+            else:
+                y = y_test
             plt.errorbar(x, y, fmt='.', markersize=3)
             plt.xlim(-0.5, e_N - 0.5)
-            plt.title(e_name)
+            plt.title(e_name + f'\nskew: {skew(y_test):.3f} (p={skewtest(y_test).pvalue:.3f}), kurtosis: {kurtosis(y_test):.3f} (p={kurtosistest(y_test).pvalue:.3f})')
             plt.draw()

     def plot_piechart(self):
@@ -536,10 +588,10 @@ class Obs:
         ensemble to the error and returns a dictionary containing the fractions."""
         if not hasattr(self, 'e_dvalue'):
             raise Exception('Run the gamma method first.')
-        if self.dvalue == 0.0:
+        if self._dvalue == 0.0:
             raise Exception('Error is 0.0')
         labels = self.e_names
-        sizes = [i ** 2 for i in list(self.e_dvalue.values())] / self.dvalue ** 2
+        sizes = [self.e_dvalue[name] ** 2 for name in labels] / self._dvalue ** 2
         fig1, ax1 = plt.subplots()
         ax1.pie(sizes, labels=labels, startangle=90, normalize=True)
         ax1.axis('equal')
@@ -547,22 +599,32 @@ class Obs:

         return dict(zip(self.e_names, sizes))

-    def dump(self, name, **kwargs):
-        """Dump the Obs to a pickle file 'name'.
+    def dump(self, filename, datatype="json.gz", **kwargs):
+        """Dump the Obs to a file 'name' of chosen format.

         Parameters
         ----------
-        name : str
+        filename : str
             name of the file to be saved.
+        datatype : str
+            Format of the exported file. Supported formats include
+            "json.gz" and "pickle"
         path : str
             specifies a custom path for the file (default '.')
         """
         if 'path' in kwargs:
-            file_name = kwargs.get('path') + '/' + name + '.p'
+            file_name = kwargs.get('path') + '/' + filename
         else:
-            file_name = name + '.p'
-        with open(file_name, 'wb') as fb:
+            file_name = filename
+
+        if datatype == "json.gz":
+            from .input.json import dump_to_json
+            dump_to_json([self], file_name)
+        elif datatype == "pickle":
+            with open(file_name + '.p', 'wb') as fb:
                 pickle.dump(self, fb)
+        else:
+            raise Exception("Unknown datatype " + str(datatype))

     def export_jackknife(self):
         """Export jackknife samples from the Obs
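
A short sketch of the reworked dump (file names are illustrative; the json backend is assumed to append its own .json.gz suffix):

import pyerrors as pe

o = pe.pseudo_Obs(0.52, 0.02, 'ens')
o.dump('my_obs')                     # json.gz is the new default datatype
o.dump('my_obs', datatype="pickle")  # previous behaviour, writes my_obs.p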
@@ -584,9 +646,9 @@ class Obs:
         name = self.names[0]
         full_data = self.deltas[name] + self.r_values[name]
         n = full_data.size
-        mean = np.mean(full_data)
+        mean = self.value
         tmp_jacks = np.zeros(n + 1)
-        tmp_jacks[0] = self.value
+        tmp_jacks[0] = mean
         tmp_jacks[1:] = (n * mean - full_data) / (n - 1)
         return tmp_jacks
@@ -597,15 +659,15 @@ class Obs:
         return 'Obs[' + str(self) + ']'

     def __str__(self):
-        if self.dvalue == 0.0:
+        if self._dvalue == 0.0:
             return str(self.value)
-        fexp = np.floor(np.log10(self.dvalue))
+        fexp = np.floor(np.log10(self._dvalue))
         if fexp < 0.0:
-            return '{:{form}}({:2.0f})'.format(self.value, self.dvalue * 10 ** (-fexp + 1), form='.' + str(-int(fexp) + 1) + 'f')
+            return '{:{form}}({:2.0f})'.format(self.value, self._dvalue * 10 ** (-fexp + 1), form='.' + str(-int(fexp) + 1) + 'f')
         elif fexp == 0.0:
-            return '{:.1f}({:1.1f})'.format(self.value, self.dvalue)
+            return '{:.1f}({:1.1f})'.format(self.value, self._dvalue)
         else:
-            return '{:.0f}({:2.0f})'.format(self.value, self.dvalue)
+            return '{:.0f}({:2.0f})'.format(self.value, self._dvalue)

     # Overload comparisons
     def __lt__(self, other):
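
The branches above choose a format from the error's order of magnitude; a small sketch:

import pyerrors as pe

o = pe.pseudo_Obs(0.4837, 0.023, 'ens')
o.gamma_method()
print(o)  # value with the error in parentheses, e.g. 0.484(23)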
@@ -759,9 +821,6 @@ class Obs:
     def arctanh(self):
         return derived_observable(lambda x: anp.arctanh(x[0]), [self])

-    def sinc(self):
-        return derived_observable(lambda x: anp.sinc(x[0]), [self])
-

 class CObs:
     """Class for a complex valued observable."""
@@ -907,7 +966,7 @@ def _merge_idx(idl):
         g = groupby(idl)
         if next(g, True) and not next(g, False):
             return idl[0]
-    except:
+    except Exception:
         pass

     if np.all([type(idx) is range for idx in idl]):
@@ -976,7 +1035,7 @@ def _filter_zeroes(deltas, idx, eps=Obs.filter_eps):
     return deltas, idx


-def derived_observable(func, data, **kwargs):
+def derived_observable(func, data, array_mode=False, **kwargs):
     """Construct a derived Obs according to func(data, **kwargs) using automatic differentiation.

     Parameters
@@ -1009,32 +1068,27 @@ def derived_observable(func, data, **kwargs):
     raveled_data = data.ravel()

     # Workaround for matrix operations containing non Obs data
-    for i_data in raveled_data:
-        if isinstance(i_data, Obs):
-            first_name = i_data.names[0]
-            first_shape = i_data.shape[first_name]
-            first_idl = i_data.idl[first_name]
-            break
-
     if not all(isinstance(x, Obs) for x in raveled_data):
         for i in range(len(raveled_data)):
             if isinstance(raveled_data[i], (int, float)):
-                raveled_data[i] = Obs([raveled_data[i] + np.zeros(first_shape)], [first_name], idl=[first_idl])
+                raveled_data[i] = cov_Obs(raveled_data[i], 0.0, "###dummy_covobs###")
+
+    allcov = {}
+    for o in raveled_data:
+        for name in o.cov_names:
+            if name in allcov:
+                if not np.allclose(allcov[name], o.covobs[name].cov):
+                    raise Exception('Inconsistent covariance matrices for %s!' % (name))
+            else:
+                allcov[name] = o.covobs[name].cov

     n_obs = len(raveled_data)
     new_names = sorted(set([y for x in [o.names for o in raveled_data] for y in x]))
+    new_cov_names = sorted(set([y for x in [o.cov_names for o in raveled_data] for y in x]))
+    new_sample_names = sorted(set(new_names) - set(new_cov_names))

-    is_merged = {name: (len(list(filter(lambda o: o.is_merged.get(name, False) is True, raveled_data))) > 0) for name in new_names}
+    is_merged = {name: (len(list(filter(lambda o: o.is_merged.get(name, False) is True, raveled_data))) > 0) for name in new_sample_names}
     reweighted = len(list(filter(lambda o: o.reweighted is True, raveled_data))) > 0
-    new_idl_d = {}
-    for name in new_names:
-        idl = []
-        for i_data in raveled_data:
-            tmp = i_data.idl.get(name)
-            if tmp is not None:
-                idl.append(tmp)
-        new_idl_d[name] = _merge_idx(idl)
-        if not is_merged[name]:
-            is_merged[name] = (1 != len(set([len(idx) for idx in [*idl, new_idl_d[name]]])))

     if data.ndim == 1:
         values = np.array([o.value for o in data])
@@ -1043,21 +1097,24 @@ def derived_observable(func, data, **kwargs):

     new_values = func(values, **kwargs)

-    multi = 0
-    if isinstance(new_values, np.ndarray):
-        multi = 1
+    multi = int(isinstance(new_values, np.ndarray))

     new_r_values = {}
-    for name in new_names:
+    new_idl_d = {}
+    for name in new_sample_names:
+        idl = []
         tmp_values = np.zeros(n_obs)
         for i, item in enumerate(raveled_data):
-            tmp = item.r_values.get(name)
-            if tmp is None:
-                tmp = item.value
-            tmp_values[i] = tmp
+            tmp_values[i] = item.r_values.get(name, item.value)
+            tmp_idl = item.idl.get(name)
+            if tmp_idl is not None:
+                idl.append(tmp_idl)
         if multi > 0:
             tmp_values = np.array(tmp_values).reshape(data.shape)
         new_r_values[name] = func(tmp_values, **kwargs)
+        new_idl_d[name] = _merge_idx(idl)
+        if not is_merged[name]:
+            is_merged[name] = (1 != len(set([len(idx) for idx in [*idl, new_idl_d[name]]])))

     if 'man_grad' in kwargs:
         deriv = np.asarray(kwargs.get('man_grad'))
@@ -1090,16 +1147,57 @@ def derived_observable(func, data, **kwargs):

     final_result = np.zeros(new_values.shape, dtype=object)

+    if array_mode is True:
+
+        class _Zero_grad():
+            def __init__(self, N):
+                self.grad = np.zeros((N, 1))
+
+        new_covobs_lengths = dict(set([y for x in [[(n, o.covobs[n].N) for n in o.cov_names] for o in raveled_data] for y in x]))
+        d_extracted = {}
+        g_extracted = {}
+        for name in new_sample_names:
+            d_extracted[name] = []
+            ens_length = len(new_idl_d[name])
+            for i_dat, dat in enumerate(data):
+                d_extracted[name].append(np.array([_expand_deltas_for_merge(o.deltas.get(name, np.zeros(ens_length)), o.idl.get(name, new_idl_d[name]), o.shape.get(name, ens_length), new_idl_d[name]) for o in dat.reshape(np.prod(dat.shape))]).reshape(dat.shape + (ens_length, )))
+        for name in new_cov_names:
+            g_extracted[name] = []
+            zero_grad = _Zero_grad(new_covobs_lengths[name])
+            for i_dat, dat in enumerate(data):
+                g_extracted[name].append(np.array([o.covobs.get(name, zero_grad).grad for o in dat.reshape(np.prod(dat.shape))]).reshape(dat.shape + (new_covobs_lengths[name], 1)))
+
     for i_val, new_val in np.ndenumerate(new_values):
         new_deltas = {}
+        new_grad = {}
+        if array_mode is True:
+            for name in new_sample_names:
+                ens_length = d_extracted[name][0].shape[-1]
+                new_deltas[name] = np.zeros(ens_length)
+                for i_dat, dat in enumerate(d_extracted[name]):
+                    new_deltas[name] += np.tensordot(deriv[i_val + (i_dat, )], dat)
+            for name in new_cov_names:
+                new_grad[name] = 0
+                for i_dat, dat in enumerate(g_extracted[name]):
+                    new_grad[name] += np.tensordot(deriv[i_val + (i_dat, )], dat)
+        else:
             for j_obs, obs in np.ndenumerate(data):
                 for name in obs.names:
+                    if name in obs.cov_names:
+                        new_grad[name] = new_grad.get(name, 0) + deriv[i_val + j_obs] * obs.covobs[name].grad
+                    else:
                         new_deltas[name] = new_deltas.get(name, 0) + deriv[i_val + j_obs] * _expand_deltas_for_merge(obs.deltas[name], obs.idl[name], obs.shape[name], new_idl_d[name])

+        new_covobs = {name: Covobs(0, allcov[name], name, grad=new_grad[name]) for name in new_grad}
+
+        if not set(new_covobs.keys()).isdisjoint(new_deltas.keys()):
+            raise Exception('The same name has been used for deltas and covobs!')
         new_samples = []
         new_means = []
         new_idl = []
+        new_names_obs = []
         for name in new_names:
+            if name not in new_covobs:
             if is_merged[name]:
                 filtered_deltas, filtered_idl_d = _filter_zeroes(new_deltas[name], new_idl_d[name])
             else:
@@ -1109,7 +1207,11 @@ def derived_observable(func, data, **kwargs):
             new_samples.append(filtered_deltas)
             new_idl.append(filtered_idl_d)
             new_means.append(new_r_values[name][i_val])
-        final_result[i_val] = Obs(new_samples, new_names, means=new_means, idl=new_idl)
+            new_names_obs.append(name)
+        final_result[i_val] = Obs(new_samples, new_names_obs, means=new_means, idl=new_idl)
+        for name in new_covobs:
+            final_result[i_val].names.append(name)
+        final_result[i_val]._covobs = new_covobs
         final_result[i_val]._value = new_val
         final_result[i_val].is_merged = is_merged
         final_result[i_val].reweighted = reweighted
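
derived_observable is the backbone of all overloaded operations; a direct-call sketch with toy observables:

import pyerrors as pe

a = pe.pseudo_Obs(2.0, 0.1, 'ens')
b = pe.pseudo_Obs(1.0, 0.05, 'ens')
ratio = pe.derived_observable(lambda x, **kwargs: x[0] / x[1], [a, b])
ratio.gamma_method()  # error of a / b with the correlation between a and b propagated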
@@ -1172,22 +1274,24 @@ def reweight(weight, obs, **kwargs):
     """
     result = []
     for i in range(len(obs)):
-        if sorted(weight.names) != sorted(obs[i].names):
+        if len(obs[i].cov_names):
+            raise Exception('Error: Not possible to reweight an Obs that contains covobs!')
+        if not set(obs[i].names).issubset(weight.names):
             raise Exception('Error: Ensembles do not fit')
-        for name in weight.names:
+        for name in obs[i].names:
             if not set(obs[i].idl[name]).issubset(weight.idl[name]):
                 raise Exception('obs[%d] has to be defined on a subset of the configs in weight.idl[%s]!' % (i, name))
         new_samples = []
         w_deltas = {}
-        for name in sorted(weight.names):
+        for name in sorted(obs[i].names):
             w_deltas[name] = _reduce_deltas(weight.deltas[name], weight.idl[name], obs[i].idl[name])
             new_samples.append((w_deltas[name] + weight.r_values[name]) * (obs[i].deltas[name] + obs[i].r_values[name]))
-        tmp_obs = Obs(new_samples, sorted(weight.names), idl=[obs[i].idl[name] for name in sorted(weight.names)])
+        tmp_obs = Obs(new_samples, sorted(obs[i].names), idl=[obs[i].idl[name] for name in sorted(obs[i].names)])

         if kwargs.get('all_configs'):
             new_weight = weight
         else:
-            new_weight = Obs([w_deltas[name] + weight.r_values[name] for name in sorted(weight.names)], sorted(weight.names), idl=[obs[i].idl[name] for name in sorted(weight.names)])
+            new_weight = Obs([w_deltas[name] + weight.r_values[name] for name in sorted(obs[i].names)], sorted(obs[i].names), idl=[obs[i].idl[name] for name in sorted(obs[i].names)])

         result.append(derived_observable(lambda x, **kwargs: x[0] / x[1], [tmp_obs, new_weight], **kwargs))
         result[-1].reweighted = True
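
A minimal reweighting sketch (the ensemble name and sample sizes are invented):

import numpy as np
import pyerrors as pe

w = pe.Obs([np.random.normal(1.0, 0.01, 500)], ['ens'])  # reweighting factors
o = pe.Obs([np.random.normal(0.5, 0.10, 500)], ['ens'])
o_rw = pe.reweight(w, [o])[0]
print(o_rw.reweighted)  # True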
@@ -1213,6 +1317,8 @@ def correlate(obs_a, obs_b):

     if sorted(obs_a.names) != sorted(obs_b.names):
         raise Exception('Ensembles do not fit')
+    if len(obs_a.cov_names) or len(obs_b.cov_names):
+        raise Exception('Error: Not possible to correlate Obs that contain covobs!')
     for name in obs_a.names:
         if obs_a.shape[name] != obs_b.shape[name]:
             raise Exception('Shapes of ensemble', name, 'do not fit')
@@ -1243,71 +1349,7 @@ def covariance(obs1, obs2, correlation=False, **kwargs):
     The gamma method has to be applied first to both observables.

     If abs(covariance(obs1, obs2)) > obs1.dvalue * obs2.dvalue, the covariance
     is constrained to the maximum value in order to make sure that covariance
     matrices are positive semidefinite.

     Parameters
     ----------
     obs1 : Obs
         First Obs
     obs2 : Obs
         Second Obs
     correlation : bool
         if true the correlation instead of the covariance is
         returned (default False)
     """
-    if set(obs1.names).isdisjoint(set(obs2.names)):
-        return 0.
-
-    for name in sorted(set(obs1.names + obs2.names)):
-        if (obs1.shape.get(name) != obs2.shape.get(name)) and (obs1.shape.get(name) is not None) and (obs2.shape.get(name) is not None):
-            raise Exception('Shapes of ensemble', name, 'do not fit')
-        if (1 != len(set([len(idx) for idx in [obs1.idl[name], obs2.idl[name], _merge_idx([obs1.idl[name], obs2.idl[name]])]]))):
-            raise Exception('Shapes of ensemble', name, 'do not fit')
-
-    if not hasattr(obs1, 'e_names') or not hasattr(obs2, 'e_names'):
-        raise Exception('The gamma method has to be applied to both Obs first.')
-
-    dvalue = 0
-
-    for e_name in obs1.e_names:
-
-        if e_name not in obs2.e_names:
-            continue
-
-        gamma = 0
-        r_length = []
-        for r_name in obs1.e_content[e_name]:
-            if r_name not in obs2.e_content[e_name]:
-                continue
-
-            r_length.append(len(obs1.deltas[r_name]))
-
-            gamma += np.sum(obs1.deltas[r_name] * obs2.deltas[r_name])
-
-        e_N = np.sum(r_length)
-
-        tau_combined = (obs1.e_tauint[e_name] + obs2.e_tauint[e_name]) / 2
-        dvalue += gamma / e_N * (1 + 1 / e_N) / e_N * 2 * tau_combined
-
-    if np.abs(dvalue / obs1.dvalue / obs2.dvalue) > 1.0:
-        dvalue = np.sign(dvalue) * obs1.dvalue * obs2.dvalue
-
-    if correlation:
-        dvalue = dvalue / obs1.dvalue / obs2.dvalue
-
-    return dvalue


 def covariance2(obs1, obs2, correlation=False, **kwargs):
     """Alternative implementation of the covariance of two observables.

     covariance(obs, obs) is equal to obs.dvalue ** 2
     The gamma method has to be applied first to both observables.

     If abs(covariance(obs1, obs2)) > obs1.dvalue * obs2.dvalue, the covariance
-    is constrained to the maximum value in order to make sure that covariance
-    matrices are positive semidefinite.
+    is constrained to the maximum value.

     Keyword arguments
     -----------------
@@ -1352,7 +1394,7 @@ def covariance2(obs1, obs2, correlation=False, **kwargs):
     if set(obs1.names).isdisjoint(set(obs2.names)):
         return 0.

-    if not hasattr(obs1, 'e_names') or not hasattr(obs2, 'e_names'):
+    if not hasattr(obs1, 'e_dvalue') or not hasattr(obs2, 'e_dvalue'):
         raise Exception('The gamma method has to be applied to both Obs first.')

     dvalue = 0
@@ -1361,9 +1403,9 @@ def covariance2(obs1, obs2, correlation=False, **kwargs):
     e_n_tauint = {}
     e_rho = {}

-    for e_name in obs1.e_names:
+    for e_name in obs1.mc_names:

-        if e_name not in obs2.e_names:
+        if e_name not in obs2.mc_names:
             continue

         idl_d = {}
@@ -1408,12 +1450,19 @@ def covariance2(obs1, obs2, correlation=False, **kwargs):
         # Make sure no entry of tauint is smaller than 0.5
         e_n_tauint[e_name][e_n_tauint[e_name] < 0.5] = 0.500000000001

-        window = max(obs1.e_windowsize[e_name], obs2.e_windowsize[e_name])
+        window = min(obs1.e_windowsize[e_name], obs2.e_windowsize[e_name])
         # Bias correction hep-lat/0306017 eq. (49)
         e_dvalue[e_name] = 2 * (e_n_tauint[e_name][window] + obs1.tau_exp[e_name] * np.abs(e_rho[e_name][window + 1])) * (1 + (2 * window + 1) / e_N) * e_gamma[e_name][0] / e_N

         dvalue += e_dvalue[e_name]

+    for e_name in obs1.cov_names:
+
+        if e_name not in obs2.cov_names:
+            continue
+
+        dvalue += float(np.dot(np.transpose(obs1.covobs[e_name].grad), np.dot(obs1.covobs[e_name].cov, obs2.covobs[e_name].grad)))
+
     if np.abs(dvalue / obs1.dvalue / obs2.dvalue) > 1.0:
         dvalue = np.sign(dvalue) * obs1.dvalue * obs2.dvalue
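
A covariance sketch with two correlated toy observables (the gamma method has to run on both first):

import pyerrors as pe

a = pe.pseudo_Obs(1.0, 0.10, 'ens')
b = a + pe.pseudo_Obs(0.0, 0.05, 'ens')
a.gamma_method()
b.gamma_method()
print(pe.covariance(a, b))                    # covariance estimate
print(pe.covariance(a, b, correlation=True))  # normalized to a correlation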
@@ -1423,70 +1472,6 @@ def covariance2(obs1, obs2, correlation=False, **kwargs):
     return dvalue


-def covariance3(obs1, obs2, correlation=False, **kwargs):
-    """Another alternative implementation of the covariance of two observables.
-
-    covariance2(obs, obs) is equal to obs.dvalue ** 2
-    Currently only works if ensembles are identical.
-    The gamma method has to be applied first to both observables.
-
-    If abs(covariance2(obs1, obs2)) > obs1.dvalue * obs2.dvalue, the covariance
-    is constrained to the maximum value in order to make sure that covariance
-    matrices are positive semidefinite.
-
-    Keyword arguments
-    -----------------
-    correlation -- if true the correlation instead of the covariance is
-                   returned (default False)
-    plot -- if true, the integrated autocorrelation time for each ensemble is
-            plotted.
-    """
-
-    for name in sorted(set(obs1.names + obs2.names)):
-        if (obs1.shape.get(name) != obs2.shape.get(name)) and (obs1.shape.get(name) is not None) and (obs2.shape.get(name) is not None):
-            raise Exception('Shapes of ensemble', name, 'do not fit')
-        if (1 != len(set([len(idx) for idx in [obs1.idl[name], obs2.idl[name], _merge_idx([obs1.idl[name], obs2.idl[name]])]]))):
-            raise Exception('Shapes of ensemble', name, 'do not fit')
-
-    if not hasattr(obs1, 'e_names') or not hasattr(obs2, 'e_names'):
-        raise Exception('The gamma method has to be applied to both Obs first.')
-
-    tau_exp = []
-    S = []
-    for e_name in sorted(set(obs1.e_names + obs2.e_names)):
-        t_1 = obs1.tau_exp.get(e_name)
-        t_2 = obs2.tau_exp.get(e_name)
-        if t_1 is None:
-            t_1 = 0
-        if t_2 is None:
-            t_2 = 0
-        tau_exp.append(max(t_1, t_2))
-        S_1 = obs1.S.get(e_name)
-        S_2 = obs2.S.get(e_name)
-        if S_1 is None:
-            S_1 = Obs.S_global
-        if S_2 is None:
-            S_2 = Obs.S_global
-        S.append(max(S_1, S_2))
-
-    check_obs = obs1 + obs2
-    check_obs.gamma_method(tau_exp=tau_exp, S=S)
-
-    if kwargs.get('plot'):
-        check_obs.plot_tauint()
-        check_obs.plot_rho()
-
-    cov = (check_obs.dvalue ** 2 - obs1.dvalue ** 2 - obs2.dvalue ** 2) / 2
-
-    if np.abs(cov / obs1.dvalue / obs2.dvalue) > 1.0:
-        cov = np.sign(cov) * obs1.dvalue * obs2.dvalue
-
-    if correlation:
-        cov = cov / obs1.dvalue / obs2.dvalue
-
-    return cov
-
-
 def pseudo_Obs(value, dvalue, name, samples=1000):
     """Generate a pseudo Obs with given value, dvalue and name

@@ -1519,39 +1504,7 @@ def pseudo_Obs(value, dvalue, name, samples=1000):
     return res


-def dump_object(obj, name, **kwargs):
-    """Dump object into pickle file.
-
-    Parameters
-    ----------
-    obj : object
-        object to be saved in the pickle file
-    name : str
-        name of the file
-    path : str
-        specifies a custom path for the file (default '.')
-    """
-    if 'path' in kwargs:
-        file_name = kwargs.get('path') + '/' + name + '.p'
-    else:
-        file_name = name + '.p'
-    with open(file_name, 'wb') as fb:
-        pickle.dump(obj, fb)
-
-
-def load_object(path):
-    """Load object from pickle file.
-
-    Parameters
-    ----------
-    path : str
-        path to the file
-    """
-    with open(path, 'rb') as file:
-        return pickle.load(file)
-
-
-def import_jackknife(jacks, name):
+def import_jackknife(jacks, name, idl=None):
     """Imports jackknife samples and returns an Obs

     Parameters
@@ -1565,7 +1518,7 @@ def import_jackknife(jacks, name):
     length = len(jacks) - 1
     prj = (np.ones((length, length)) - (length - 1) * np.identity(length))
     samples = jacks[1:] @ prj
-    new_obs = Obs([samples], [name])
+    new_obs = Obs([samples], [name], idl=idl)
     new_obs._value = jacks[0]
     return new_obs
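
export_jackknife and the now idl-aware import_jackknife invert each other; a round-trip sketch on a made-up ensemble:

import numpy as np
import pyerrors as pe

obs = pe.Obs([np.random.normal(1.0, 0.1, 100)], ['ens'])
jacks = obs.export_jackknife()           # jacks[0] is the mean, jacks[1:] the jackknife samples
rec = pe.import_jackknife(jacks, 'ens')  # reconstructs the Obs from the samples
print(np.isclose(obs.value, rec.value))  # True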
@@ -1583,6 +1536,8 @@ def merge_obs(list_of_obs):
     replist = [item for obs in list_of_obs for item in obs.names]
     if (len(replist) == len(set(replist))) is False:
         raise Exception('list_of_obs contains duplicate replica: %s' % (str(replist)))
+    if any([len(o.cov_names) for o in list_of_obs]):
+        raise Exception('Not possible to merge data that contains covobs!')
     new_dict = {}
     idl_dict = {}
     for o in list_of_obs:
@@ -1595,3 +1550,46 @@ def merge_obs(list_of_obs):
     o.is_merged = {name: np.any([oi.is_merged.get(name, False) for oi in list_of_obs]) for name in o.names}
     o.reweighted = np.max([oi.reweighted for oi in list_of_obs])
     return o
+
+
+def cov_Obs(means, cov, name, grad=None):
+    """Create an Obs based on mean(s) and a covariance matrix
+
+    Parameters
+    ----------
+    mean : list of floats or float
+        N mean value(s) of the new Obs
+    cov : list or array
+        2d (NxN) Covariance matrix, 1d diagonal entries or 0d covariance
+    name : str
+        identifier for the covariance matrix
+    grad : list or array
+        Gradient of the Covobs wrt. the means belonging to cov.
+    """
+
+    def covobs_to_obs(co):
+        """Make an Obs out of a Covobs
+
+        Parameters
+        ----------
+        co : Covobs
+            Covobs to be embedded into the Obs
+        """
+        o = Obs([], [])
+        o._value = co.value
+        o.names.append(co.name)
+        o._covobs[co.name] = co
+        o._dvalue = np.sqrt(co.errsq())
+        return o
+
+    ol = []
+    if isinstance(means, (float, int)):
+        means = [means]
+
+    for i in range(len(means)):
+        ol.append(covobs_to_obs(Covobs(means[i], cov, name, pos=i, grad=grad)))
+    if ol[0].covobs[name].N != len(means):
+        raise Exception('You have to provide %d mean values!' % (ol[0].N))
+    if len(ol) == 1:
+        return ol[0]
+    return ol
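
cov_Obs wraps external (co)variance information in Covobs objects; a minimal sketch with invented numbers:

import pyerrors as pe

z = pe.cov_Obs(1.11, 0.02 ** 2, 'Z_factor')  # one mean value plus its variance
z.gamma_method()
print(z)  # the error comes directly from the supplied covariance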

@@ -1,6 +1,7 @@
+import numpy as np
 import scipy.optimize
 from autograd import jacobian
-from .obs import derived_observable, pseudo_Obs
+from .obs import derived_observable


 def find_root(d, func, guess=1.0, **kwargs):
@@ -33,4 +34,5 @@ def find_root(d, func, guess=1.0, **kwargs):
     da = jacobian(lambda u, v: func(v, u))(d.value, root[0])
     deriv = - da / dx

-    return derived_observable(lambda x, **kwargs: x[0], [pseudo_Obs(root, 0.0, d.names[0], d.shape[d.names[0]]), d], man_grad=[0, deriv])
+    res = derived_observable(lambda x, **kwargs: (x[0] + np.finfo(np.float64).eps) / (d.value + np.finfo(np.float64).eps) * root[0], [d], man_grad=[deriv])
+    return res
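
find_root propagates the uncertainty of d through the implicit definition of the root; a sketch (assuming the function is reachable as pyerrors.roots.find_root):

import pyerrors as pe

d = pe.pseudo_Obs(1.6, 0.1, 'ens')

def func(x, d):
    return x ** 2 - d  # solve x ** 2 = d

root = pe.roots.find_root(d, func)  # an Obs carrying the propagated error of d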

@@ -1 +1 @@
-__version__ = "2.0.0"
+__version__ = "2.0.0+dev"

@@ -1,4 +0,0 @@
-[pytest]
-filterwarnings =
-    ignore::RuntimeWarning:autograd.*:
-    ignore::RuntimeWarning:numdifftools.*:

4
setup.py
@@ -3,11 +3,11 @@
 from setuptools import setup, find_packages

 setup(name='pyerrors',
-      version='2.0.0',
+      version='2.0.0+dev',
       description='Error analysis for lattice QCD',
       author='Fabian Joswig',
       author_email='fabian.joswig@ed.ac.uk',
       packages=find_packages(),
       python_requires='>=3.6.0',
-      install_requires=['numpy>=1.16', 'autograd>=1.2', 'numdifftools', 'matplotlib>=3.3', 'scipy', 'iminuit<2', 'h5py']
+      install_requires=['numpy>=1.16', 'autograd @ git+https://github.com/HIPS/autograd.git', 'numdifftools', 'matplotlib>=3.3', 'scipy', 'iminuit>=2', 'h5py']
       )
@@ -31,6 +31,13 @@ def test_function_overloading():
     assert np.isclose(con[0].dvalue, t2.dvalue)
     assert np.allclose(con[0].deltas['t'], t2.deltas['t'])

+    np.arcsin(corr_a)
+    np.arccos(corr_a)
+    np.arctan(corr_a)
+    np.arcsinh(corr_a)
+    np.arccosh(corr_a + 1.1)
+    np.arctanh(corr_a)
+

 def test_modify_correlator():
     corr_content = []
@@ -47,16 +54,20 @@ def test_modify_correlator():
     corr.roll(np.random.randint(100))
     corr.deriv(symmetric=True)
     corr.deriv(symmetric=False)
+    corr.deriv().deriv()
     corr.second_deriv()
+    corr.second_deriv().second_deriv()


 def test_m_eff():
-    my_corr = pe.correlators.Corr([pe.pseudo_Obs(10, 0.1, 't'), pe.pseudo_Obs(9, 0.05, 't'), pe.pseudo_Obs(8, 0.1, 't'), pe.pseudo_Obs(7, 0.05, 't')])
+    my_corr = pe.correlators.Corr([pe.pseudo_Obs(10, 0.1, 't'), pe.pseudo_Obs(9, 0.05, 't'), pe.pseudo_Obs(9, 0.1, 't'), pe.pseudo_Obs(10, 0.05, 't')])
     my_corr.m_eff('log')
     my_corr.m_eff('cosh')
-    my_corr.m_eff('sinh')
     my_corr.m_eff('arccosh')

+    with pytest.warns(RuntimeWarning):
+        my_corr.m_eff('sinh')
+

 def test_reweighting():
     my_corr = pe.correlators.Corr([pe.pseudo_Obs(10, 0.1, 't'), pe.pseudo_Obs(0, 0.05, 't')])
@@ -79,6 +90,52 @@ def test_T_symmetry():
     T_symmetric = my_corr.T_symmetry(my_corr)


+def test_fit_correlator():
+    my_corr = pe.correlators.Corr([pe.pseudo_Obs(1.01324, 0.05, 't'), pe.pseudo_Obs(2.042345, 0.0004, 't')])
+
+    def f(a, x):
+        y = a[0] + a[1] * x
+        return y
+
+    fit_res = my_corr.fit(f)
+    assert fit_res[0] == my_corr[0]
+    assert fit_res[1] == my_corr[1] - my_corr[0]
+
+
+def test_plateau():
+    my_corr = pe.correlators.Corr([pe.pseudo_Obs(1.01324, 0.05, 't'), pe.pseudo_Obs(1.042345, 0.008, 't')])
+
+    my_corr.plateau([0, 1], method="fit")
+    my_corr.plateau([0, 1], method="mean")
+    with pytest.raises(Exception):
+        my_corr.plateau()
+
+
+def test_padded_correlator():
+    my_list = [pe.Obs([np.random.normal(1.0, 0.1, 100)], ['ens1']) for o in range(8)]
+    my_corr = pe.Corr(my_list, padding=[7, 3])
+    my_corr.reweighted
+    [o for o in my_corr]
+
+
+def test_corr_exceptions():
+    obs_a = pe.Obs([np.random.normal(0.1, 0.1, 100)], ['test'])
+    obs_b = pe.Obs([np.random.normal(0.1, 0.1, 99)], ['test'])
+    with pytest.raises(Exception):
+        pe.Corr([obs_a, obs_b])
+
+    obs_a = pe.Obs([np.random.normal(0.1, 0.1, 100)], ['test'])
+    obs_b = pe.Obs([np.random.normal(0.1, 0.1, 100)], ['test'], idl=[range(1, 200, 2)])
+    with pytest.raises(Exception):
+        pe.Corr([obs_a, obs_b])
+
+    obs_a = pe.Obs([np.random.normal(0.1, 0.1, 100)], ['test'])
+    obs_b = pe.Obs([np.random.normal(0.1, 0.1, 100)], ['test2'])
+    with pytest.raises(Exception):
+        pe.Corr([obs_a, obs_b])
+
+
 def test_utility():
     corr_content = []
     for t in range(8):
@@ -90,10 +147,20 @@ def test_utility():
     corr.print([2, 4])
     corr.show()

-    corr.dump('test_dump')
+    corr.dump('test_dump', datatype="pickle", path='.')
+    corr.dump('test_dump', datatype="pickle")
     new_corr = pe.load_object('test_dump.p')
     os.remove('test_dump.p')
     for o_a, o_b in zip(corr.content, new_corr.content):
         assert np.isclose(o_a[0].value, o_b[0].value)
         assert np.isclose(o_a[0].dvalue, o_b[0].dvalue)
         assert np.allclose(o_a[0].deltas['t'], o_b[0].deltas['t'])
+
+    corr.dump('test_dump', datatype="json.gz", path='.')
+    corr.dump('test_dump', datatype="json.gz")
+    new_corr = pe.input.json.load_json('test_dump')
+    os.remove('test_dump.json.gz')
+    for o_a, o_b in zip(corr.content, new_corr.content):
+        assert np.isclose(o_a[0].value, o_b[0].value)
+        assert np.isclose(o_a[0].dvalue, o_b[0].dvalue)
+        assert np.allclose(o_a[0].deltas['t'], o_b[0].deltas['t'])

94
tests/covobs_test.py
Normal file
@@ -0,0 +1,94 @@
+import autograd.numpy as np
+import pyerrors as pe
+import pytest
+
+np.random.seed(0)
+
+
+def test_covobs():
+    val = 1.123124
+    cov = .243423
+    name = 'Covariance'
+    co = pe.cov_Obs(val, cov, name)
+    co.gamma_method()
+    co.details()
+    assert (co.dvalue == np.sqrt(cov))
+    assert (co.value == val)
+
+    do = 2 * co
+    assert (do.covobs[name].grad[0] == 2)
+
+    do = co * co
+    assert (do.covobs[name].grad[0] == 2 * val)
+    assert np.array_equal(do.covobs[name].cov, co.covobs[name].cov)
+
+    pi = [16.7457, -19.0475]
+    cov = [[3.49591, -6.07560], [-6.07560, 10.5834]]
+
+    cl = pe.cov_Obs(pi, cov, 'rAP')
+    pl = pe.misc.gen_correlated_data(pi, np.asarray(cov), 'rAPpseudo')
+
+    def rAP(p, g0sq):
+        return -0.0010666 * g0sq * (1 + np.exp(p[0] + p[1] / g0sq))
+
+    for g0sq in [1, 1.5, 1.8]:
+        oc = rAP(cl, g0sq)
+        oc.gamma_method()
+        op = rAP(pl, g0sq)
+        op.gamma_method()
+        assert(np.isclose(oc.value, op.value, rtol=1e-14, atol=1e-14))
+
+    [o.gamma_method() for o in cl]
+    assert(pe.covariance(cl[0], cl[1]) == cov[0][1])
+    assert(pe.covariance(cl[0], cl[1]) == cov[1][0])
+
+    do = cl[0] * cl[1]
+    assert(np.array_equal(do.covobs['rAP'].grad, np.transpose([pi[1], pi[0]]).reshape(2, 1)))
+
+
+def test_covobs_overloading():
+    covobs = pe.cov_Obs([0.5, 0.5], np.array([[0.02, 0.02], [0.02, 0.02]]), 'test')
+    assert (covobs[0] / covobs[1]) == 1
+    assert (covobs[0] - covobs[1]) == 0
+
+    my_obs = pe.pseudo_Obs(2.3, 0.2, 'obs')
+
+    assert (my_obs * covobs[0] / covobs[1]) == my_obs
+
+    covobs = pe.cov_Obs(0.0, 0.3, 'test')
+    assert not covobs.is_zero()
+
+
+def test_covobs_name_collision():
+    covobs = pe.cov_Obs(0.5, 0.002, 'test')
+    my_obs = pe.pseudo_Obs(2.3, 0.2, 'test')
+    with pytest.raises(Exception):
+        summed_obs = my_obs + covobs
+    covobs2 = pe.cov_Obs(0.3, 0.001, 'test')
+    with pytest.raises(Exception):
+        summed_obs = covobs + covobs2
+
+
+def test_covobs_replica_separator():
+    with pytest.raises(Exception):
+        covobs = pe.cov_Obs(0.5, 0.002, 'test|r2')
+
+
+def test_covobs_init():
+    covobs = pe.cov_Obs(0.5, 0.002, 'test')
+    covobs = pe.cov_Obs([1, 2], [0.1, 0.2], 'test')
+    covobs = pe.cov_Obs([1, 2], np.array([0.1, 0.2]), 'test')
+    covobs = pe.cov_Obs([1, 2], [[0.1, 0.2], [0.1, 0.2]], 'test')
+    covobs = pe.cov_Obs([1, 2], np.array([[0.1, 0.2], [0.1, 0.2]]), 'test')
+
+
+def test_covobs_exceptions():
+    with pytest.raises(Exception):
+        covobs = pe.cov_Obs(0.1, [[0.1, 0.2], [0.1, 0.2]], 'test')
+    with pytest.raises(Exception):
+        covobs = pe.cov_Obs(0.1, np.array([[0.1, 0.2], [0.1, 0.2]]), 'test')
+    with pytest.raises(Exception):
+        covobs = pe.cov_Obs([0.5, 0.1], np.array([[2, 1, 3], [1, 2, 3]]), 'test')
+    with pytest.raises(Exception):
+        covobs = pe.cov_Obs([0.5, 0.1], np.random.random((2, 2, 2)), 'test')
@@ -10,3 +10,54 @@ def test_gamma_matrices():
         assert np.allclose(matrix @ matrix, np.identity(4))
         assert np.allclose(matrix, matrix.T.conj())
     assert np.allclose(pe.dirac.gamma5, pe.dirac.gamma[0] @ pe.dirac.gamma[1] @ pe.dirac.gamma[2] @ pe.dirac.gamma[3])
+
+
+def test_grid_dirac():
+    for gamma in ['Identity',
+                  'Gamma5',
+                  'GammaX',
+                  'GammaY',
+                  'GammaZ',
+                  'GammaT',
+                  'GammaXGamma5',
+                  'GammaYGamma5',
+                  'GammaZGamma5',
+                  'GammaTGamma5',
+                  'SigmaXT',
+                  'SigmaXY',
+                  'SigmaXZ',
+                  'SigmaYT',
+                  'SigmaYZ',
+                  'SigmaZT']:
+        pe.dirac.Grid_gamma(gamma)
+    with pytest.raises(Exception):
+        pe.dirac.Grid_gamma('Not a gamma matrix')
+
+
+def test_epsilon_tensor():
+    check = {(1, 2, 3): 1.0,
+             (3, 1, 2): 1.0,
+             (2, 3, 1): 1.0,
+             (1, 1, 1): 0.0,
+             (3, 2, 1): -1.0,
+             (1, 3, 2): -1.0,
+             (1, 1, 3): 0.0}
+    for key, value in check.items():
+        assert pe.dirac.epsilon_tensor(*key) == value
+    with pytest.raises(Exception):
+        pe.dirac.epsilon_tensor(0, 1, 3)
+
+
+def test_epsilon_tensor_rank4():
+    check = {(1, 4, 3, 2): -1.0,
+             (1, 2, 3, 4): 1.0,
+             (2, 1, 3, 4): -1.0,
+             (4, 3, 2, 1): 1.0,
+             (3, 2, 4, 3): 0.0,
+             (0, 1, 2, 3): 1.0,
+             (1, 1, 1, 1): 0.0,
+             (1, 2, 3, 1): 0.0}
+    for key, value in check.items():
+        assert pe.dirac.epsilon_tensor_rank4(*key) == value
+    with pytest.raises(Exception):
+        pe.dirac.epsilon_tensor_rank4(0, 1, 3, 4)
@ -10,6 +10,26 @@ import pytest
|
|||
np.random.seed(0)
|
||||
|
||||
|
||||
def test_fit_lin():
|
||||
x = [0, 2]
|
||||
y = [pe.pseudo_Obs(0, 0.1, 'ensemble'),
|
||||
pe.pseudo_Obs(2, 0.1, 'ensemble')]
|
||||
|
||||
res = pe.fits.fit_lin(x, y)
|
||||
|
||||
assert res[0] == y[0]
|
||||
assert res[1] == (y[1] - y[0]) / (x[1] - x[0])
|
||||
|
||||
x = y = [pe.pseudo_Obs(0, 0.1, 'ensemble'),
|
||||
pe.pseudo_Obs(2, 0.1, 'ensemble')]
|
||||
|
||||
res = pe.fits.fit_lin(x, y)
|
||||
|
||||
m = (y[1] - y[0]) / (x[1] - x[0])
|
||||
assert res[0] == y[1] - x[1] * m
|
||||
assert res[1] == m
|
||||
|
||||
|
||||
def test_least_squares():
|
||||
dim = 10 + int(30 * np.random.rand())
|
||||
x = np.arange(dim)
|
||||
|
@ -29,9 +49,13 @@ def test_least_squares():
|
|||
y = a[0] * np.exp(-a[1] * x)
|
||||
return y
|
||||
|
||||
out = pe.least_squares(x, oy, func)
|
||||
out = pe.least_squares(x, oy, func, expected_chisquare=True, resplot=True, qqplot=True)
|
||||
beta = out.fit_parameters
|
||||
|
||||
str(out)
|
||||
repr(out)
|
||||
len(out)
|
||||
|
||||
for i in range(2):
|
||||
beta[i].gamma_method(S=1.0)
|
||||
assert math.isclose(beta[i].value, popt[i], abs_tol=1e-5)
|
||||
|
@ -46,6 +70,21 @@ def test_least_squares():
|
|||
assert((out.fit_parameters[0] - beta[0]).is_zero())
|
||||
assert((out.fit_parameters[1] - beta[1]).is_zero())
|
||||
|
||||
oyc = []
|
||||
for i, item in enumerate(x):
|
||||
oyc.append(pe.cov_Obs(y[i], yerr[i]**2, 'cov' + str(i)))
|
||||
|
||||
outc = pe.least_squares(x, oyc, func)
|
||||
betac = outc.fit_parameters
|
||||
|
||||
for i in range(2):
|
||||
betac[i].gamma_method(S=1.0)
|
||||
assert math.isclose(betac[i].value, popt[i], abs_tol=1e-5)
|
||||
assert math.isclose(pcov[i, i], betac[i].dvalue ** 2, abs_tol=1e-3)
|
||||
assert math.isclose(pe.covariance(betac[0], betac[1]), pcov[0, 1], abs_tol=1e-3)
|
||||
|
||||
|
||||
def test_correlated_fit():
|
||||
num_samples = 400
|
||||
N = 10
|
||||
|
||||
|
@ -54,17 +93,15 @@ def test_least_squares():
|
|||
r = np.zeros((N, N))
|
||||
for i in range(N):
|
||||
for j in range(N):
|
||||
r[i, j] = np.exp(-0.1 * np.fabs(i - j))
|
||||
r[i, j] = np.exp(-0.8 * np.fabs(i - j))
|
||||
|
||||
errl = np.sqrt([3.4, 2.5, 3.6, 2.8, 4.2, 4.7, 4.9, 5.1, 3.2, 4.2])
|
||||
errl *= 4
|
||||
for i in range(N):
|
||||
for j in range(N):
|
||||
r[i, j] *= errl[i] * errl[j]
|
||||
|
||||
c = cholesky(r, lower=True)
|
||||
y = np.dot(c, x)
|
||||
|
||||
x = np.arange(N)
|
||||
for linear in [True, False]:
|
||||
data = []
|
||||
|
@ -89,7 +126,7 @@ def test_least_squares():
|
|||
for i in range(2):
|
||||
diff = fitp[i] - fitpc[i]
|
||||
diff.gamma_method()
|
||||
assert(diff.is_zero_within_error(sigma=1.5))
|
||||
assert(diff.is_zero_within_error(sigma=5))
|
||||
|
||||
|
||||
def test_total_least_squares():
|
||||
|
@ -120,9 +157,13 @@ def test_total_least_squares():
|
|||
odr.set_job(fit_type=0, deriv=1)
|
||||
output = odr.run()
|
||||
|
||||
out = pe.total_least_squares(ox, oy, func)
|
||||
out = pe.total_least_squares(ox, oy, func, expected_chisquare=True)
|
||||
beta = out.fit_parameters
|
||||
|
||||
str(out)
|
||||
repr(out)
|
||||
len(out)
|
||||
|
||||
for i in range(2):
|
||||
beta[i].gamma_method(S=1.0)
|
||||
assert math.isclose(beta[i].value, output.beta[i], rel_tol=1e-5)
|
||||
|
@ -135,6 +176,44 @@ def test_total_least_squares():
|
|||
assert(diff / beta[0] < 1e-3 * beta[0].dvalue)
|
||||
assert((out.fit_parameters[1] - beta[1]).is_zero())
|
||||
|
||||
oxc = []
|
||||
for i, item in enumerate(x):
|
||||
oxc.append(pe.cov_Obs(x[i], xerr[i]**2, 'covx' + str(i)))
|
||||
|
||||
oyc = []
|
||||
for i, item in enumerate(x):
|
||||
oyc.append(pe.cov_Obs(y[i], yerr[i]**2, 'covy' + str(i)))
|
||||
|
||||
outc = pe.total_least_squares(oxc, oyc, func)
|
||||
betac = outc.fit_parameters
|
||||
|
||||
for i in range(2):
|
||||
betac[i].gamma_method(S=1.0)
|
||||
assert math.isclose(betac[i].value, output.beta[i], rel_tol=1e-3)
|
||||
assert math.isclose(output.cov_beta[i, i], betac[i].dvalue ** 2, rel_tol=2.5e-1), str(output.cov_beta[i, i]) + ' ' + str(betac[i].dvalue ** 2)
|
||||
assert math.isclose(pe.covariance(betac[0], betac[1]), output.cov_beta[0, 1], rel_tol=2.5e-1)
|
||||
|
||||
outc = pe.total_least_squares(oxc, oyc, func, const_par=[betac[1]])
|
||||
|
||||
diffc = outc.fit_parameters[0] - betac[0]
|
||||
assert(diffc / betac[0] < 1e-3 * betac[0].dvalue)
|
||||
assert((outc.fit_parameters[1] - betac[1]).is_zero())
|
||||
|
||||
outc = pe.total_least_squares(oxc, oy, func)
|
||||
betac = outc.fit_parameters
|
||||
|
||||
for i in range(2):
|
||||
betac[i].gamma_method(S=1.0)
|
||||
assert math.isclose(betac[i].value, output.beta[i], rel_tol=1e-3)
|
||||
assert math.isclose(output.cov_beta[i, i], betac[i].dvalue ** 2, rel_tol=2.5e-1), str(output.cov_beta[i, i]) + ' ' + str(betac[i].dvalue ** 2)
|
||||
assert math.isclose(pe.covariance(betac[0], betac[1]), output.cov_beta[0, 1], rel_tol=2.5e-1)
|
||||
|
||||
outc = pe.total_least_squares(oxc, oy, func, const_par=[betac[1]])
|
||||
|
||||
diffc = outc.fit_parameters[0] - betac[0]
|
||||
assert(diffc / betac[0] < 1e-3 * betac[0].dvalue)
|
||||
assert((outc.fit_parameters[1] - betac[1]).is_zero())
|
||||
|
||||
|
||||
def test_odr_derivatives():
|
||||
x = []
|
||||
|
@ -153,7 +232,196 @@ def test_odr_derivatives():
|
|||
out = pe.total_least_squares(x, y, func)
|
||||
fit1 = out.fit_parameters
|
||||
|
||||
with pytest.warns(DeprecationWarning):
|
||||
tfit = pe.fits.fit_general(x, y, func, base_step=0.1, step_ratio=1.1, num_steps=20)
|
||||
tfit = fit_general(x, y, func, base_step=0.1, step_ratio=1.1, num_steps=20)
|
||||
assert np.abs(np.max(np.array(list(fit1[1].deltas.values()))
|
||||
- np.array(list(tfit[1].deltas.values())))) < 10e-8
|
||||
|
||||
|
||||
def test_r_value_persistence():
|
||||
def f(a, x):
|
||||
return a[0] + a[1] * x
|
||||
|
||||
a = pe.pseudo_Obs(1.1, .1, 'a')
|
||||
assert np.isclose(a.value, a.r_values['a'])
|
||||
|
||||
a_2 = a ** 2
|
||||
assert np.isclose(a_2.value, a_2.r_values['a'])
|
||||
|
||||
b = pe.pseudo_Obs(2.1, .2, 'b')
|
||||
|
||||
y = [a, b]
|
||||
[o.gamma_method() for o in y]
|
||||
|
||||
fitp = pe.fits.least_squares([1, 2], y, f)
|
||||
|
||||
assert np.isclose(fitp[0].value, fitp[0].r_values['a'])
|
||||
assert np.isclose(fitp[0].value, fitp[0].r_values['b'])
|
||||
assert np.isclose(fitp[1].value, fitp[1].r_values['a'])
|
||||
assert np.isclose(fitp[1].value, fitp[1].r_values['b'])
|
||||
|
||||
fitp = pe.fits.total_least_squares(y, y, f)
|
||||
|
||||
assert np.isclose(fitp[0].value, fitp[0].r_values['a'])
|
||||
assert np.isclose(fitp[0].value, fitp[0].r_values['b'])
|
||||
assert np.isclose(fitp[1].value, fitp[1].r_values['a'])
|
||||
assert np.isclose(fitp[1].value, fitp[1].r_values['b'])
|
||||
|
||||
fitp = pe.fits.least_squares([1, 2], y, f, priors=y)
|
||||
|
||||
assert np.isclose(fitp[0].value, fitp[0].r_values['a'])
|
||||
assert np.isclose(fitp[0].value, fitp[0].r_values['b'])
|
||||
assert np.isclose(fitp[1].value, fitp[1].r_values['a'])
|
||||
assert np.isclose(fitp[1].value, fitp[1].r_values['b'])
|
||||
|
||||
|
||||
def test_prior_fit():
|
||||
def f(a, x):
|
||||
return a[0] + a[1] * x
|
||||
|
||||
a = pe.pseudo_Obs(0.0, 0.1, 'a')
|
||||
b = pe.pseudo_Obs(1.0, 0.2, 'a')
|
||||
|
||||
y = [a, b]
|
||||
with pytest.raises(Exception):
|
||||
fitp = pe.fits.least_squares([0, 1], 1 * np.array(y), f, priors=['0.0(8)', '1.0(8)'])
|
||||
|
||||
[o.gamma_method() for o in y]
|
||||
|
||||
fitp = pe.fits.least_squares([0, 1], y, f, priors=['0.0(8)', '1.0(8)'])
|
||||
fitp = pe.fits.least_squares([0, 1], y, f, priors=y, resplot=True, qqplot=True)
|
||||
|
||||
|
||||
def test_error_band():
|
||||
def f(a, x):
|
||||
return a[0] + a[1] * x
|
||||
|
||||
a = pe.pseudo_Obs(0.0, 0.1, 'a')
|
||||
b = pe.pseudo_Obs(1.0, 0.2, 'a')
|
||||
|
||||
x = [0, 1]
|
||||
y = [a, b]
|
||||
|
||||
fitp = pe.fits.least_squares(x, y, f)
|
||||
|
||||
with pytest.raises(Exception):
|
||||
pe.fits.error_band(x, f, fitp.fit_parameters)
|
||||
fitp.gamma_method()
|
||||
pe.fits.error_band(x, f, fitp.fit_parameters)
|
||||
|
||||
|
||||
def test_ks_test():
|
||||
def f(a, x):
|
||||
y = a[0] + a[1] * x
|
||||
return y
|
||||
|
||||
fit_res = []
|
||||
|
||||
for i in range(20):
|
||||
data = []
|
||||
for j in range(10):
|
||||
data.append(pe.pseudo_Obs(j + np.random.normal(0.0, 0.25), 0.25, 'test'))
|
||||
my_corr = pe.Corr(data)
|
||||
|
||||
fit_res.append(my_corr.fit(f, silent=True))
|
||||
|
||||
pe.fits.ks_test()
|
||||
pe.fits.ks_test(fit_res)
|
||||
|
||||
|
||||
def fit_general(x, y, func, silent=False, **kwargs):
    """Performs a non-linear fit to y = func(x) and returns a list of Obs corresponding to the fit parameters.

    Plausibility of the results should be checked. To control the numerical differentiation
    the kwargs of numdifftools.step_generators.MaxStepGenerator can be used.

    func has to be of the form

    def func(a, x):
        y = a[0] + a[1] * x + a[2] * np.sinh(x)
        return y

    y has to be a list of Obs; the dvalues of the Obs are used as yerror for the fit.
    x can either be a list of floats, in which case no xerror is assumed, or
    a list of Obs, where the dvalues of the Obs are used as xerror for the fit.

    Keyword arguments
    -----------------
    silent -- If True, all output to the console is omitted (default False).
    initial_guess -- Can provide an initial guess for the input parameters.
                     Relevant for non-linear fits with many parameters.
    """

    if not callable(func):
        raise TypeError('func has to be a function.')

    for i in range(10):
        try:
            func(np.arange(i), 0)
        except Exception:
            pass
        else:
            break
    n_parms = i
    if not silent:
        print('Fit with', n_parms, 'parameters')

    global print_output, beta0
    print_output = 1
    if 'initial_guess' in kwargs:
        beta0 = kwargs.get('initial_guess')
        if len(beta0) != n_parms:
            raise Exception('Initial guess does not have the correct length.')
    else:
        beta0 = np.arange(n_parms)

    if len(x) != len(y):
        raise Exception('x and y have to have the same length')

    if all(isinstance(n, pe.Obs) for n in x):
        obs = x + y
        x_constants = None
        xerr = [o.dvalue for o in x]
        yerr = [o.dvalue for o in y]
    elif all(isinstance(n, float) or isinstance(n, int) for n in x) or isinstance(x, np.ndarray):
        obs = y
        x_constants = x
        xerr = None
        yerr = [o.dvalue for o in y]
    else:
        raise Exception('Unsupported types for x')

    def do_the_fit(obs, **kwargs):

        global print_output, beta0

        func = kwargs.get('function')
        yerr = kwargs.get('yerr')
        length = len(yerr)

        xerr = kwargs.get('xerr')

        if length == len(obs):
            assert 'x_constants' in kwargs
            data = RealData(kwargs.get('x_constants'), obs, sy=yerr)
            fit_type = 2
        elif length == len(obs) // 2:
            data = RealData(obs[:length], obs[length:], sx=xerr, sy=yerr)
            fit_type = 0
        else:
            raise Exception('x and y do not fit together.')

        model = Model(func)

        odr = ODR(data, model, beta0, partol=np.finfo(np.float64).eps)
        odr.set_job(fit_type=fit_type, deriv=1)
        output = odr.run()
        if print_output and not silent:
            print(*output.stopreason)
            print('chisquare/d.o.f.:', output.res_var)
            print_output = 0
        beta0 = output.beta
        return output.beta[kwargs.get('n')]

    res = []
    for n in range(n_parms):
        res.append(pe.derived_observable(do_the_fit, obs, function=func, xerr=xerr, yerr=yerr, x_constants=x_constants, num_grad=True, n=n, **kwargs))
    return res

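For orientation, a minimal usage sketch of this legacy helper; the model function, the data and the ensemble label 'ens' are illustrative, and the scipy.odr machinery used inside fit_general is assumed to be imported at the top of this file:

import numpy as np
import pyerrors as pe

def linear(a, x):
    return a[0] + a[1] * x

# y carries the errors as Obs; plain numbers for x mean no x-error is assumed
y = [pe.pseudo_Obs(0.9 + 0.5 * t, 0.1, 'ens') for t in range(4)]
fit_parameters = fit_general(list(range(4)), y, linear, silent=True, initial_guess=[1.0, 0.5])
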
tests/io_test.py (new file, 126 lines)

@@ -0,0 +1,126 @@
import os
import gzip
import numpy as np
import pyerrors as pe
import pyerrors.input.json as jsonio


def test_jsonio():
    o = pe.pseudo_Obs(1.0, .2, 'one')
    o2 = pe.pseudo_Obs(0.5, .1, 'two|r1')
    o3 = pe.pseudo_Obs(0.5, .1, 'two|r2')
    o4 = pe.merge_obs([o2, o3])
    otag = 'This has been merged!'
    o4.tag = otag
    do = o - .2 * o4
    co1 = pe.cov_Obs(1., .123, 'cov1')
    co3 = pe.cov_Obs(4., .1 ** 2, 'cov3')
    do *= co1 / co3
    do.tag = {'A': 2}

    o5 = pe.pseudo_Obs(0.8, .1, 'two|r2')
    co2 = pe.cov_Obs([1, 2], [[.12, .004], [.004, .02]], 'cov2')
    o5 /= co2[0]
    o3 /= co2[1]
    o5.tag = 2 * otag
    testl = [o3, o5]

    arr = np.array([o3, o5])
    mat = np.array([[pe.pseudo_Obs(1.0, .1, 'mat'), pe.pseudo_Obs(0.3, .1, 'mat')], [pe.pseudo_Obs(0.2, .1, 'mat'), pe.pseudo_Obs(2.0, .4, 'mat')]])
    mat[0][1].tag = ['This', 'is', 2, None]
    mat[1][0].tag = '{testt}'
    mat[1][1].tag = '[tag]'

    tt1 = pe.Obs([np.random.rand(100)], ['t|r1'], idl=[range(2, 202, 2)])
    tt2 = pe.Obs([np.random.rand(100)], ['t|r2'], idl=[range(2, 202, 2)])
    tt3 = pe.Obs([np.random.rand(102)], ['qe'])

    tt = tt1 + tt2 + tt3

    tt.tag = 'Test Obs: Ä'

    ol = [o4, do, testl, mat, arr, np.array([o]), np.array([tt, tt]), [tt, tt], co1, co2, np.array(co2), co1 / co2[0]]
    fname = 'test_rw'

    jsonio.dump_to_json(ol, fname, indent=1, description='[I am a tricky description]')

    rl = jsonio.load_json(fname)

    os.remove(fname + '.json.gz')

    for o, r in zip(ol, rl):
        assert np.all(o == r)

    for i in range(len(ol)):
        if isinstance(ol[i], pe.Obs):
            o = ol[i] - rl[i]
            assert o.is_zero()
            assert ol[i].tag == rl[i].tag
        or1 = np.ravel(ol[i])
        or2 = np.ravel(rl[i])
        for j in range(len(or1)):
            o = or1[j] - or2[j]
            assert o.is_zero()

    description = {'I': {'Am': {'a': 'nested dictionary!'}}}
    jsonio.dump_to_json(ol, fname, indent=0, gz=False, description=description)

    rl = jsonio.load_json(fname, gz=False, full_output=True)

    os.remove(fname + '.json')

    for o, r in zip(ol, rl['obsdata']):
        assert np.all(o == r)

    assert description == rl['description']

def test_json_string_reconstruction():
    my_obs = pe.Obs([np.random.rand(100)], ['name'])

    json_string = pe.input.json.create_json_string(my_obs)
    reconstructed_obs1 = pe.input.json.import_json_string(json_string)
    assert my_obs == reconstructed_obs1

    compressed_string = gzip.compress(json_string.encode('utf-8'))

    reconstructed_string = gzip.decompress(compressed_string).decode('utf-8')
    reconstructed_obs2 = pe.input.json.import_json_string(reconstructed_string)

    assert reconstructed_string == json_string
    assert my_obs == reconstructed_obs2

def test_json_corr_io():
    my_list = [pe.Obs([np.random.normal(1.0, 0.1, 100)], ['ens1']) for o in range(8)]
    rw_list = pe.reweight(pe.Obs([np.random.normal(1.0, 0.1, 100)], ['ens1']), my_list)

    for obs_list in [my_list, rw_list]:
        for tag in [None, "test"]:
            obs_list[3].tag = tag
            for fp in [0, 2]:
                for bp in [0, 7]:
                    for corr_tag in [None, 'my_Corr_tag']:
                        my_corr = pe.Corr(obs_list, padding=[fp, bp])
                        my_corr.tag = corr_tag
                        pe.input.json.dump_to_json(my_corr, 'corr')
                        recover = pe.input.json.load_json('corr')
                        os.remove('corr.json.gz')
                        assert np.all([o.is_zero() for o in [x for x in (my_corr - recover) if x is not None]])
                        assert my_corr.tag == recover.tag
                        assert my_corr.reweighted == recover.reweighted

def test_json_corr_2d_io():
    obs_list = [np.array([[pe.pseudo_Obs(1.0 + i, 0.1 * i, 'test'), pe.pseudo_Obs(0.0, 0.1 * i, 'test')], [pe.pseudo_Obs(0.0, 0.1 * i, 'test'), pe.pseudo_Obs(1.0 + i, 0.1 * i, 'test')]]) for i in range(4)]

    for tag in [None, "test"]:
        obs_list[3][0, 1].tag = tag
        for padding in [0, 1]:
            my_corr = pe.Corr(obs_list, padding=[padding, padding])
            my_corr.tag = tag
            pe.input.json.dump_to_json(my_corr, 'corr')
            recover = pe.input.json.load_json('corr')
            os.remove('corr.json.gz')
            assert np.all([np.all([o.is_zero() for o in q]) for q in [x.ravel() for x in (my_corr - recover) if x is not None]])
            assert my_corr.tag == recover.tag
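
All four tests above exercise the same dump/load round trip; a minimal sketch of the pattern (the file name my_file is arbitrary, gzipped JSON is the default):

obs = pe.pseudo_Obs(1.0, 0.1, 'ens')
pe.input.json.dump_to_json([obs], 'my_file', description='example')  # writes my_file.json.gz
recovered = pe.input.json.load_json('my_file')
assert obs == recovered[0]
os.remove('my_file.json.gz')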

@@ -29,22 +29,23 @@ def get_complex_matrix(dimension):

 def test_matmul():
-    for dim in [4, 8]:
+    for dim in [4, 6]:
+        for const in [1, pe.cov_Obs([1.0, 1.0], [[0.001, 0.0001], [0.0001, 0.002]], 'norm')[1]]:
             my_list = []
-            length = 1000 + np.random.randint(200)
+            length = 100 + np.random.randint(200)
             for i in range(dim ** 2):
                 my_list.append(pe.Obs([np.random.rand(length), np.random.rand(length + 1)], ['t1', 't2']))
-            my_array = np.array(my_list).reshape((dim, dim))
+            my_array = const * np.array(my_list).reshape((dim, dim))
             tt = pe.linalg.matmul(my_array, my_array) - my_array @ my_array
             for t, e in np.ndenumerate(tt):
                 assert e.is_zero(), t

             my_list = []
-            length = 1000 + np.random.randint(200)
+            length = 100 + np.random.randint(200)
             for i in range(dim ** 2):
                 my_list.append(pe.CObs(pe.Obs([np.random.rand(length), np.random.rand(length + 1)], ['t1', 't2']),
                                        pe.Obs([np.random.rand(length), np.random.rand(length + 1)], ['t1', 't2'])))
-            my_array = np.array(my_list).reshape((dim, dim))
+            my_array = np.array(my_list).reshape((dim, dim)) * const
             tt = pe.linalg.matmul(my_array, my_array) - my_array @ my_array
             for t, e in np.ndenumerate(tt):
                 assert e.is_zero(), t
@@ -55,28 +56,104 @@ def test_jack_matmul():

     check1 = pe.linalg.jack_matmul(tt, tt) - pe.linalg.matmul(tt, tt)
     [o.gamma_method() for o in check1.ravel()]
     assert np.all([o.is_zero_within_error(0.1) for o in check1.ravel()])
+    assert np.all([o.dvalue < 0.001 for o in check1.ravel()])
+    trace1 = np.trace(check1)
+    trace1.gamma_method()
+    assert trace1.dvalue < 0.001

-    tt2 = get_complex_matrix(8)
-    check2 = pe.linalg.jack_matmul(tt2, tt2) - pe.linalg.matmul(tt2, tt2)
+    tr = np.random.rand(8, 8)
+    check2 = pe.linalg.jack_matmul(tt, tr) - pe.linalg.matmul(tt, tr)
     [o.gamma_method() for o in check2.ravel()]
-    assert np.all([o.real.is_zero_within_error(0.1) for o in check2.ravel()])
-    assert np.all([o.imag.is_zero_within_error(0.1) for o in check2.ravel()])
+    assert np.all([o.is_zero_within_error(0.1) for o in check2.ravel()])
+    assert np.all([o.dvalue < 0.001 for o in check2.ravel()])
     trace2 = np.trace(check2)
     trace2.gamma_method()
-    assert trace2.real.dvalue < 0.001
-    assert trace2.imag.dvalue < 0.001
+    assert trace2.dvalue < 0.001
+
+    tt2 = get_complex_matrix(8)
+    check3 = pe.linalg.jack_matmul(tt2, tt2) - pe.linalg.matmul(tt2, tt2)
+    [o.gamma_method() for o in check3.ravel()]
+    assert np.all([o.real.is_zero_within_error(0.1) for o in check3.ravel()])
+    assert np.all([o.imag.is_zero_within_error(0.1) for o in check3.ravel()])
+    assert np.all([o.real.dvalue < 0.001 for o in check3.ravel()])
+    assert np.all([o.imag.dvalue < 0.001 for o in check3.ravel()])
+    trace3 = np.trace(check3)
+    trace3.gamma_method()
+    assert trace3.real.dvalue < 0.001
+    assert trace3.imag.dvalue < 0.001
+
+    tr2 = np.random.rand(8, 8) + 1j * np.random.rand(8, 8)
+    check4 = pe.linalg.jack_matmul(tt2, tr2) - pe.linalg.matmul(tt2, tr2)
+    [o.gamma_method() for o in check4.ravel()]
+    assert np.all([o.real.is_zero_within_error(0.1) for o in check4.ravel()])
+    assert np.all([o.imag.is_zero_within_error(0.1) for o in check4.ravel()])
+    assert np.all([o.real.dvalue < 0.001 for o in check4.ravel()])
+    assert np.all([o.imag.dvalue < 0.001 for o in check4.ravel()])
+    trace4 = np.trace(check4)
+    trace4.gamma_method()
+    assert trace4.real.dvalue < 0.001
+    assert trace4.imag.dvalue < 0.001
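
A note on the pattern above: jack_matmul works on a jackknife approximation of the data, intended as a faster alternative for large matrices, so agreement with the exact matmul is only required within (loose) errors rather than exactly. A minimal sketch of that check, reusing the get_real_matrix helper defined earlier in this file:

mat = get_real_matrix(4)
diff = pe.linalg.jack_matmul(mat, mat) - pe.linalg.matmul(mat, mat)
for entry in diff.ravel():
    entry.gamma_method()
    assert entry.is_zero_within_error(0.1)  # zero within errors, not identically zero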
+
+
+def test_einsum():
+
+    def _perform_real_check(arr):
+        [o.gamma_method() for o in arr]
+        assert np.all([o.is_zero_within_error(0.001) for o in arr])
+        assert np.all([o.dvalue < 0.001 for o in arr])
+
+    def _perform_complex_check(arr):
+        [o.gamma_method() for o in arr]
+        assert np.all([o.real.is_zero_within_error(0.001) for o in arr])
+        assert np.all([o.real.dvalue < 0.001 for o in arr])
+        assert np.all([o.imag.is_zero_within_error(0.001) for o in arr])
+        assert np.all([o.imag.dvalue < 0.001 for o in arr])
+
+    tt = [get_real_matrix(4), get_real_matrix(3)]
+    q = np.tensordot(tt[0], tt[1], 0)
+    c1 = tt[1] @ q
+    c2 = pe.linalg.einsum('ij,abjd->abid', tt[1], q)
+    check1 = c1 - c2
+    _perform_real_check(check1.ravel())
+    check2 = np.trace(tt[0]) - pe.linalg.einsum('ii', tt[0])
+    _perform_real_check([check2])
+    check3 = np.trace(tt[1]) - pe.linalg.einsum('ii', tt[1])
+    _perform_real_check([check3])
+
+    tt = [get_real_matrix(4), np.random.random((3, 3))]
+    q = np.tensordot(tt[0], tt[1], 0)
+    c1 = tt[1] @ q
+    c2 = pe.linalg.einsum('ij,abjd->abid', tt[1], q)
+    check1 = c1 - c2
+    _perform_real_check(check1.ravel())
+
+    tt = [get_complex_matrix(4), get_complex_matrix(3)]
+    q = np.tensordot(tt[0], tt[1], 0)
+    c1 = tt[1] @ q
+    c2 = pe.linalg.einsum('ij,abjd->abid', tt[1], q)
+    check1 = c1 - c2
+    _perform_complex_check(check1.ravel())
+    check2 = np.trace(tt[0]) - pe.linalg.einsum('ii', tt[0])
+    _perform_complex_check([check2])
+    check3 = np.trace(tt[1]) - pe.linalg.einsum('ii', tt[1])
+    _perform_complex_check([check3])
+
+    tt = [get_complex_matrix(4), np.random.random((3, 3))]
+    q = np.tensordot(tt[0], tt[1], 0)
+    c1 = tt[1] @ q
+    c2 = pe.linalg.einsum('ij,abjd->abid', tt[1], q)
+    check1 = c1 - c2
+    _perform_complex_check(check1.ravel())
+
+
 def test_multi_dot():
-    for dim in [4, 8]:
+    for dim in [4, 6]:
         my_list = []
         length = 1000 + np.random.randint(200)
         for i in range(dim ** 2):
             my_list.append(pe.Obs([np.random.rand(length), np.random.rand(length + 1)], ['t1', 't2']))
-        my_array = np.array(my_list).reshape((dim, dim))
+        my_array = pe.cov_Obs(1.0, 0.002, 'cov') * np.array(my_list).reshape((dim, dim))
         tt = pe.linalg.matmul(my_array, my_array, my_array, my_array) - my_array @ my_array @ my_array @ my_array
         for t, e in np.ndenumerate(tt):
             assert e.is_zero(), t
@@ -86,12 +163,25 @@ def test_multi_dot():

     for i in range(dim ** 2):
         my_list.append(pe.CObs(pe.Obs([np.random.rand(length), np.random.rand(length + 1)], ['t1', 't2']),
                                pe.Obs([np.random.rand(length), np.random.rand(length + 1)], ['t1', 't2'])))
-    my_array = np.array(my_list).reshape((dim, dim))
+    my_array = np.array(my_list).reshape((dim, dim)) * pe.cov_Obs(1.0, 0.002, 'cov')
     tt = pe.linalg.matmul(my_array, my_array, my_array, my_array) - my_array @ my_array @ my_array @ my_array
     for t, e in np.ndenumerate(tt):
         assert e.is_zero(), t


+def test_jack_multi_dot():
+    for dim in [2, 4, 8]:
+        my_array = get_real_matrix(dim)
+
+        tt = pe.linalg.jack_matmul(my_array, my_array, my_array) - pe.linalg.matmul(my_array, my_array, my_array)
+
+        for t, e in np.ndenumerate(tt):
+            e.gamma_method()
+            assert e.is_zero_within_error(0.01)
+            assert e.is_zero(atol=1e-1), t
+            assert np.isclose(e.value, 0.0)
+
+
 def test_matmul_irregular_histories():
     dim = 2
     length = 500
@@ -99,13 +189,13 @@ def test_matmul_irregular_histories():

     standard_array = []
     for i in range(dim ** 2):
         standard_array.append(pe.Obs([np.random.normal(1.1, 0.2, length)], ['ens1']))
-    standard_matrix = np.array(standard_array).reshape((dim, dim))
+    standard_matrix = np.array(standard_array).reshape((dim, dim)) * pe.cov_Obs(1.0, 0.002, 'cov') * pe.pseudo_Obs(0.1, 0.002, 'qr')

     for idl in [range(1, 501, 2), range(250, 273), [2, 8, 19, 20, 78]]:
         irregular_array = []
         for i in range(dim ** 2):
             irregular_array.append(pe.Obs([np.random.normal(1.1, 0.2, len(idl))], ['ens1'], idl=[idl]))
-        irregular_matrix = np.array(irregular_array).reshape((dim, dim))
+        irregular_matrix = np.array(irregular_array).reshape((dim, dim)) * pe.cov_Obs([1.0, 1.0], [[0.001, 0.0001], [0.0001, 0.002]], 'norm')[0]

         t1 = standard_matrix @ irregular_matrix
         t2 = pe.linalg.matmul(standard_matrix, irregular_matrix)
@@ -123,7 +213,7 @@ def test_irregular_matrix_inverse():

     irregular_array = []
     for i in range(dim ** 2):
         irregular_array.append(pe.Obs([np.random.normal(1.1, 0.2, len(idl)), np.random.normal(0.25, 0.1, 10)], ['ens1', 'ens2'], idl=[idl, range(1, 11)]))
-    irregular_matrix = np.array(irregular_array).reshape((dim, dim))
+    irregular_matrix = np.array(irregular_array).reshape((dim, dim)) * pe.cov_Obs(1.0, 0.002, 'cov') * pe.pseudo_Obs(1.0, 0.002, 'ens2|r23')

     invertible_irregular_matrix = np.identity(dim) + irregular_matrix @ irregular_matrix.T
@@ -154,8 +244,8 @@ def test_complex_matrix_inverse():

     base_matrix = np.empty((dimension, dimension), dtype=object)
     matrix = np.empty((dimension, dimension), dtype=complex)
     for (n, m), entry in np.ndenumerate(base_matrix):
-        exponent_real = np.random.normal(3, 5)
-        exponent_imag = np.random.normal(3, 5)
+        exponent_real = np.random.normal(2, 3)
+        exponent_imag = np.random.normal(2, 3)
         base_matrix[n, m] = pe.CObs(pe.pseudo_Obs(2 + 10 ** exponent_real, 10 ** (exponent_real - 1), 't'),
                                     pe.pseudo_Obs(2 + 10 ** exponent_imag, 10 ** (exponent_imag - 1), 't'))
@@ -210,6 +300,10 @@ def test_matrix_functions():

     for j in range(dim):
         assert tmp[j].is_zero()

+    # Check eig function
+    e2 = pe.linalg.eig(sym)
+    assert np.all(np.sort(e) == np.sort(e2))
+
     # Check svd
     u, v, vh = pe.linalg.svd(sym)
     diff = sym - u @ np.diag(v) @ vh
@@ -217,6 +311,11 @@ def test_matrix_functions():

     for (i, j), entry in np.ndenumerate(diff):
         assert entry.is_zero()

+    # Check determinant
+    assert pe.linalg.det(np.diag(np.diag(matrix))) == np.prod(np.diag(matrix))
+
+    pe.linalg.pinv(matrix[:, :3])
+

 def test_complex_matrix_operations():
     dimension = 4

tests/mpm_test.py (new file, 14 lines)

@@ -0,0 +1,14 @@
import numpy as np
import pyerrors as pe
import pytest

np.random.seed(0)


def test_mpm():
    corr_content = []
    for t in range(8):
        f = 0.8 * np.exp(-0.4 * t)
        corr_content.append(pe.pseudo_Obs(np.random.normal(f, 1e-2 * f), 1e-2 * f, 't'))

    res = pe.mpm.matrix_pencil_method(corr_content)
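
Since the synthetic correlator above is built from 0.8 * np.exp(-0.4 * t), the leading extracted energy should come out compatible with 0.4. A hedged sketch of how the result could be inspected, assuming matrix_pencil_method returns one Obs per extracted state:

res[0].gamma_method()
print(res[0])  # expected to be compatible with the input energy 0.4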
@@ -9,14 +9,64 @@ import pytest

 np.random.seed(0)


+def test_Obs_exceptions():
+    with pytest.raises(Exception):
+        pe.Obs([np.random.rand(10)], ['1', '2'])
+    with pytest.raises(Exception):
+        pe.Obs([np.random.rand(10)], ['1'], idl=[])
+    with pytest.raises(Exception):
+        pe.Obs([np.random.rand(10), np.random.rand(10)], ['1', '1'])
+    with pytest.raises(Exception):
+        pe.Obs([np.random.rand(10), np.random.rand(10)], ['1', 1])
+    with pytest.raises(Exception):
+        pe.Obs([np.random.rand(10)], [1])
+    with pytest.raises(Exception):
+        pe.Obs([np.random.rand(4)], ['name'])
+    with pytest.raises(Exception):
+        pe.Obs([np.random.rand(5)], ['1'], idl=[[5, 3, 2, 4, 1]])
+    with pytest.raises(Exception):
+        pe.Obs([np.random.rand(5)], ['1'], idl=['t'])
+    with pytest.raises(Exception):
+        pe.Obs([np.random.rand(5)], ['1'], idl=[range(1, 8)])
+
+    my_obs = pe.Obs([np.random.rand(6)], ['name'])
+    my_obs._value = 0.0
+    my_obs.details()
+    with pytest.raises(Exception):
+        my_obs.plot_tauint()
+    with pytest.raises(Exception):
+        my_obs.plot_rho()
+    with pytest.raises(Exception):
+        my_obs.plot_rep_dist()
+    with pytest.raises(Exception):
+        my_obs.plot_piechart()
+    with pytest.raises(Exception):
+        my_obs.gamma_method(S='2.3')
+    with pytest.raises(Exception):
+        my_obs.gamma_method(tau_exp=2.3)
+    my_obs.gamma_method()
+    my_obs.details()
+    my_obs.plot_rep_dist()
+
+    my_obs += pe.Obs([np.random.rand(6)], ['name2|r1'], idl=[[1, 3, 4, 5, 6, 7]])
+    my_obs += pe.Obs([np.random.rand(6)], ['name2|r2'])
+    my_obs.gamma_method()
+    my_obs.details()
+
+
 def test_dump():
     value = np.random.normal(5, 10)
     dvalue = np.abs(np.random.normal(0, 1))
     test_obs = pe.pseudo_Obs(value, dvalue, 't')
-    test_obs.dump('test_dump')
+    test_obs.dump('test_dump', datatype="pickle", path=".")
+    test_obs.dump('test_dump', datatype="pickle")
     new_obs = pe.load_object('test_dump.p')
     os.remove('test_dump.p')
     assert test_obs.deltas['t'].all() == new_obs.deltas['t'].all()
+    assert test_obs == new_obs
+    test_obs.dump('test_dump', datatype="json.gz", path=".")
+    test_obs.dump('test_dump', datatype="json.gz")
+    new_obs = pe.input.json.load_json("test_dump")
+    os.remove('test_dump.json.gz')
+    assert test_obs == new_obs


 def test_comparison():
@@ -61,6 +111,12 @@ def test_function_overloading():

     assert np.sqrt(b ** 2) == b
     assert np.sqrt(b) ** 2 == b

+    np.arcsin(1 / b)
+    np.arccos(1 / b)
+    np.arctan(1 / b)
+    np.arctanh(1 / b)
+    np.sinc(1 / b)
+

 def test_overloading_vectorization():
     a = np.random.randint(1, 100, 10)
@@ -309,8 +365,10 @@ def test_overloaded_functions():


 def test_utils():
-    zero_pseudo_obs = pe.pseudo_Obs(1.0, 0.0, 'null')
     my_obs = pe.pseudo_Obs(1.0, 0.5, 't|r01')
+    my_obs += pe.pseudo_Obs(1.0, 0.5, 't|r02')
+    str(my_obs)
     for tau_exp in [0, 5]:
         my_obs.gamma_method(tau_exp=tau_exp)
     my_obs.tag = "Test description"
@@ -325,6 +383,8 @@ def test_utils():

     my_obs.plot_piechart()
     assert my_obs > (my_obs - 1)
     assert my_obs < (my_obs + 1)
+    float(my_obs)
+    str(my_obs)


 def test_cobs():
@@ -372,6 +432,23 @@ def test_reweighting():

     r_obs2 = r_obs[0] * my_obs
     assert r_obs2.reweighted

+    my_irregular_obs = pe.Obs([np.random.rand(500)], ['t'], idl=[range(1, 1001, 2)])
+    assert not my_irregular_obs.reweighted
+    r_obs = pe.reweight(my_obs, [my_irregular_obs], all_configs=True)
+    r_obs = pe.reweight(my_obs, [my_irregular_obs], all_configs=False)
+    r_obs = pe.reweight(my_obs, [my_obs])
+    assert r_obs[0].reweighted
+    r_obs2 = r_obs[0] * my_obs
+    assert r_obs2.reweighted
+    my_covobs = pe.cov_Obs(1.0, 0.003, 'cov')
+    with pytest.raises(Exception):
+        pe.reweight(my_obs, [my_covobs])
+    my_obs2 = pe.Obs([np.random.rand(1000)], ['t2'])
+    with pytest.raises(Exception):
+        pe.reweight(my_obs, [my_obs + my_obs2])
+    with pytest.raises(Exception):
+        pe.reweight(my_irregular_obs, [my_obs])
+

 def test_merge_obs():
     my_obs1 = pe.Obs([np.random.rand(100)], ['t'])
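
For orientation between the two hunks: pe.reweight returns a list of reweighted observables and flags them via the reweighted attribute, which is what the added assertions above pin down. A minimal sketch (history length and labels illustrative):

w_factor = pe.Obs([np.random.rand(1000)], ['t'])   # reweighting factor
obs = pe.Obs([np.random.rand(1000)], ['t'])        # observable on the same ensemble
r_obs = pe.reweight(w_factor, [obs])
assert r_obs[0].reweighted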
@@ -379,6 +456,22 @@ def test_merge_obs():

     merged = pe.merge_obs([my_obs1, my_obs2])
     diff = merged - my_obs2 - my_obs1
     assert diff == -(my_obs1.value + my_obs2.value) / 2
+    with pytest.raises(Exception):
+        pe.merge_obs([my_obs1, my_obs1])
+    my_covobs = pe.cov_Obs(1.0, 0.003, 'cov')
+    with pytest.raises(Exception):
+        pe.merge_obs([my_obs1, my_covobs])
+
+
+def test_merge_obs_r_values():
+    a1 = pe.pseudo_Obs(1.1, .1, 'a|1')
+    a2 = pe.pseudo_Obs(1.2, .1, 'a|2')
+    a = pe.merge_obs([a1, a2])
+
+    assert np.isclose(a.r_values['a|1'], a1.value)
+    assert np.isclose(a.r_values['a|2'], a2.value)
+    assert np.isclose(a.value, np.mean([a1.value, a2.value]))
+

 def test_correlate():
@@ -401,6 +494,17 @@ def test_correlate():

     corr3 = pe.correlate(my_obs5, my_obs6)
     assert my_obs5.idl == corr3.idl

+    my_new_obs = pe.Obs([np.random.rand(100)], ['q3'])
+    with pytest.raises(Exception):
+        pe.correlate(my_obs1, my_new_obs)
+    my_covobs = pe.cov_Obs(1.0, 0.003, 'cov')
+    with pytest.raises(Exception):
+        pe.correlate(my_covobs, my_covobs)
+    r_obs = pe.reweight(my_obs1, [my_obs1])[0]
+    with pytest.warns(RuntimeWarning):
+        pe.correlate(r_obs, r_obs)
+

 def test_irregular_error_propagation():
     obs_list = [pe.Obs([np.random.rand(100)], ['t']),
@@ -409,6 +513,7 @@ def test_irregular_error_propagation():

                 pe.Obs([np.random.rand(6)], ['t'], idl=[[4, 18, 27, 29, 57, 80]]),
                 pe.Obs([np.random.rand(50)], ['t'], idl=[list(range(1, 26)) + list(range(50, 100, 2))])]
     for obs1 in obs_list:
+        obs1.details()
         for obs2 in obs_list:
             assert obs1 == (obs1 / obs2) * obs2
             assert obs1 == (obs1 * obs2) / obs2
@@ -487,7 +592,7 @@ def test_gamma_method_irregular():

     assert (ae.e_tauint['a'] + 4 * ae.e_dtauint['a'] > ao.e_tauint['a'])


-def test_covariance2_symmetry():
+def test_covariance_symmetry():
     value1 = np.random.normal(5, 10)
     dvalue1 = np.abs(np.random.normal(0, 1))
     test_obs1 = pe.pseudo_Obs(value1, dvalue1, 't')
@@ -496,8 +601,8 @@ def test_covariance2_symmetry():

     dvalue2 = np.abs(np.random.normal(0, 1))
     test_obs2 = pe.pseudo_Obs(value2, dvalue2, 't')
     test_obs2.gamma_method()
-    cov_ab = pe.covariance2(test_obs1, test_obs2)
-    cov_ba = pe.covariance2(test_obs2, test_obs1)
+    cov_ab = pe.covariance(test_obs1, test_obs2)
+    cov_ba = pe.covariance(test_obs2, test_obs1)
     assert np.abs(cov_ab - cov_ba) <= 10 * np.finfo(np.float64).eps
     assert np.abs(cov_ab) < test_obs1.dvalue * test_obs2.dvalue * (1 + 10 * np.finfo(np.float64).eps)
@@ -510,12 +615,18 @@ def test_covariance2_symmetry():

     idx = [i + 1 for i in range(len(configs)) if configs[i] == 1]
     a = pe.Obs([zero_arr], ['t'], idl=[idx])
     a.gamma_method()
-    assert np.isclose(a.dvalue**2, pe.covariance2(a, a), atol=100, rtol=1e-4)
+    assert np.isclose(a.dvalue**2, pe.covariance(a, a), atol=100, rtol=1e-4)

-    cov_ab = pe.covariance2(test_obs1, a)
-    cov_ba = pe.covariance2(a, test_obs1)
+    cov_ab = pe.covariance(test_obs1, a)
+    cov_ba = pe.covariance(a, test_obs1)
     assert np.abs(cov_ab - cov_ba) <= 10 * np.finfo(np.float64).eps
-    assert np.abs(cov_ab) < test_obs1.dvalue * test_obs2.dvalue * (1 + 10 * np.finfo(np.float64).eps)
+    assert np.abs(cov_ab) < test_obs1.dvalue * a.dvalue * (1 + 10 * np.finfo(np.float64).eps)
+
+
+def test_empty_obs():
+    o = pe.Obs([np.random.rand(100)], ['test'])
+    q = o + pe.Obs([], [])
+    assert q == o


 def test_jackknife():
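
The assertions above encode two general properties of the covariance estimator: symmetry under exchange of its arguments and a Cauchy-Schwarz type bound, |cov(a, b)| <= dvalue(a) * dvalue(b). A minimal sketch (ensemble label illustrative):

a = pe.pseudo_Obs(1.0, 0.1, 'ens')
b = a + pe.pseudo_Obs(0.5, 0.2, 'ens')
a.gamma_method()
b.gamma_method()
assert abs(pe.covariance(a, b) - pe.covariance(b, a)) <= 10 * np.finfo(np.float64).eps
assert abs(pe.covariance(a, b)) < a.dvalue * b.dvalue * (1 + 10 * np.finfo(np.float64).eps)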
@@ -542,4 +653,3 @@ def test_import_jackknife():

     my_jacks = my_obs.export_jackknife()
     reconstructed_obs = pe.import_jackknife(my_jacks, 'test')
     assert my_obs == reconstructed_obs
-
@@ -15,5 +15,18 @@ def test_root_linear():

     my_root = pe.roots.find_root(my_obs, root_function)

     assert np.isclose(my_root.value, value)
+    assert np.isclose(my_root.value, my_root.r_values['t'])
+    difference = my_obs - my_root
+    assert difference.is_zero()
+
+
+def test_root_linear_idl():
+
+    def root_function(x, d):
+        return x - d
+
+    my_obs = pe.Obs([np.random.rand(50)], ['t'], idl=[range(20, 120, 2)])
+    my_root = pe.roots.find_root(my_obs, root_function)
+
+    difference = my_obs - my_root
+    assert difference.is_zero()