Merge pull request #6 from fjosw/feature/v2.0

Feature/v2.0
Fabian Joswig 2021-10-19 10:03:36 +01:00 committed by GitHub
commit 3d8887cd65
23 changed files with 1624 additions and 1440 deletions

.github/workflows/CI.yml (new file, 35 additions)

@@ -0,0 +1,35 @@
name: CI
on:
push:
branches:
- master
- develop
pull_request:
jobs:
pytest:
runs-on: ubuntu-latest
strategy:
fail-fast: true
matrix:
python-version: ["3.6", "3.7", "3.8", "3.9"]
steps:
- name: Checkout source
uses: actions/checkout@v2
- name: Setup python
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Install
run: |
python -m pip install --upgrade pip
pip install .
pip install pytest
pip install pytest-cov
- name: Run tests
run: pytest --cov=pyerrors -v
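The same test step can be reproduced locally; a minimal sketch that simply mirrors the install and test commands of the workflow above:
```bash
python -m pip install --upgrade pip
pip install . pytest pytest-cov
pytest --cov=pyerrors -v
```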

.github/workflows/flake8.yml (new file, 26 additions)

@@ -0,0 +1,26 @@
name: flake8 Lint
on:
push:
branches:
- master
- develop
pull_request:
jobs:
flake8-lint:
runs-on: ubuntu-latest
name: Lint
steps:
- name: Check out source repository
uses: actions/checkout@v2
- name: Set up Python environment
uses: actions/setup-python@v1
with:
python-version: "3.8"
- name: flake8 Lint
uses: py-actions/flake8@v1
with:
ignore: "E501,E722"
exclude: "__init__.py, input/__init__.py"
path: "pyerrors"
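The lint step can likewise be run locally with the flake8 command line; a sketch assuming the action's options map one-to-one onto CLI flags:
```bash
pip install flake8
flake8 --ignore=E501,E722 --exclude=__init__.py,input/__init__.py pyerrors
```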

.gitignore (2 additions)

@@ -1,6 +1,8 @@
__pycache__
*.pyc
.ipynb_*
.coverage
.cache
examples/B1k2_pcac_plateau.p
examples/Untitled.*
core.*

CHANGELOG.md

@@ -2,16 +2,32 @@
All notable changes to this project will be documented in this file.
## [2.0.0] - 2021-??-??
### Added
- `CObs` class added which can handle complex valued Markov chain Monte Carlo data and the corresponding error propagation
- Matrix to matrix operations like the matrix inverse now also work for complex matrices and matrices containing entries that are not `Obs` but `float` or `int`
- `Obs` objects now have methods `is_zero` and `is_zero_within_error`
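A minimal sketch of how the new `CObs` class and the `is_zero_within_error` method might be used (ensemble name and Monte Carlo samples are invented for illustration):
```python
import numpy as np
import pyerrors as pe

# invented samples on a hypothetical ensemble 'ens_A'
re_part = pe.Obs([np.random.normal(1.0, 0.1, 500)], ['ens_A'])
im_part = pe.Obs([np.random.normal(0.0, 0.1, 500)], ['ens_A'])

my_cobs = pe.CObs(re_part, im_part)  # complex valued observable
squared = my_cobs * my_cobs          # error propagation works as for real Obs

im_part.gamma_method()               # the error has to be estimated first
print(im_part.is_zero_within_error())
```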
### Changed
- Additional attributes can no longer be added to existing `Obs`. This makes it no longer possible to import `Obs` created with previous versions of pyerrors
- The default value for `Corr.prange` is now `None`
- The `input` module was restructured to contain one submodule per data source
### Deprecated
- The function `plot_corrs` was deprecated as all its functionality is now contained within `Corr.show`
- The kwarg `bias_correction` in `derived_observable` was removed
- Obs no longer have an attribute `e_Q`
## [1.1.0] - 2021-10-11
### Added
- `Corr` class added
- roots module added which can find the roots of a function that depends on Monte Carlo data via pyerrors Obs
- input/hadrons module added which can read hdf5 files written by [Hadrons](https://github.com/aportelli/Hadrons)
- read_rwms can now read reweighting factors in the format used by openQCD-2.0
- `roots` module added which can find the roots of a function that depends on Monte Carlo data via pyerrors `Obs`
- `input/hadrons` module added which can read hdf5 files written by [Hadrons](https://github.com/aportelli/Hadrons)
- `read_rwms` can now read reweighting factors in the format used by openQCD-2.0
## [1.0.1] - 2020-11-03
### Fixed
- Bug in pyerrors.covariance fixed that appeared when working with several
- Bug in `pyerrors.covariance` fixed that appeared when working with several
replica of different length.
## [1.0.0] - 2020-10-13

README.md

@@ -1,13 +1,12 @@
[![flake8 Lint](https://github.com/fjosw/pyerrors/actions/workflows/flake8.yml/badge.svg)](https://github.com/fjosw/pyerrors/actions/workflows/flake8.yml) [![CI](https://github.com/fjosw/pyerrors/actions/workflows/CI.yml/badge.svg)](https://github.com/fjosw/pyerrors/actions/workflows/CI.yml) [![](https://img.shields.io/badge/python-3.6+-blue.svg)](https://www.python.org/downloads/)
# pyerrors
pyerrors is a python package for error computation and propagation of Markov Chain Monte Carlo data.
It is based on the gamma method [arXiv:hep-lat/0306017](https://arxiv.org/abs/hep-lat/0306017). Some of its features are:
* automatic differentiation as suggested in [arXiv:1809.01289](https://arxiv.org/abs/1809.01289) (partly based on the [autograd](https://github.com/HIPS/autograd) package)
* the treatment of slow modes in the simulation as suggested in [arXiv:1009.5228](https://arxiv.org/abs/1009.5228)
* multi ensemble analyses
* non-linear fits with y-errors and exact linear error propagation based on automatic differentiation as introduced in [arXiv:1809.01289](https://arxiv.org/abs/1809.01289)
* non-linear fits with x- and y-errors and exact linear error propagation based on automatic differentiation
* matrix valued operations and their error propagation based on automatic differentiation (cholesky decomposition, calculation of eigenvalues and eigenvectors, singular value decomposition...)
* implementation of the matrix-pencil-method [IEEE Trans. Acoust. 38, 814-824 (1990)](https://ieeexplore.ieee.org/document/56027) for the extraction of energy levels, especially suited for noisy data and excited states
`pyerrors` is a python package for error computation and propagation of Markov chain Monte Carlo data.
It is based on the **gamma method** [arXiv:hep-lat/0306017](https://arxiv.org/abs/hep-lat/0306017). Some of its features are:
* **automatic differentiation** as suggested in [arXiv:1809.01289](https://arxiv.org/abs/1809.01289) (partly based on the [autograd](https://github.com/HIPS/autograd) package)
* **treatment of slow modes** in the simulation as suggested in [arXiv:1009.5228](https://arxiv.org/abs/1009.5228)
* coherent **error propagation** for data from **different Markov chains**
* **non-linear fits with x- and y-errors** and exact linear error propagation based on automatic differentiation as introduced in [arXiv:1809.01289](https://arxiv.org/abs/1809.01289)
* **real and complex matrix operations** and their error propagation based on automatic differentiation (Cholesky decomposition, calculation of eigenvalues and eigenvectors, singular value decomposition...)
There exist similar implementations of gamma method error analysis suites in
- [Fortran](https://gitlab.ift.uam-csic.es/alberto/aderrors)
@@ -15,16 +14,9 @@ There exist similar implementations of gamma method error analysis suites in
- [Python 3](https://github.com/mbruno46/pyobs)
## Installation
pyerrors requires python versions >= 3.5.0
Install the package for the local user:
To install the current `develop` version of `pyerrors` run
```bash
pip install . --user
```
Run tests to verify the installation:
```bash
pytest .
pip install git+https://github.com/fjosw/pyerrors.git
```
## Usage
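A hedged sketch of the basic workflow (sample data and ensemble name are invented for illustration):
```python
import numpy as np
import pyerrors as pe

samples = np.random.normal(1.0, 0.1, 1000)  # invented Monte Carlo history
my_obs = pe.Obs([samples], ['ensemble_A'])  # one replicum of ensemble 'ensemble_A'

my_obs.gamma_method()                       # gamma-method error estimation
print(my_obs)                               # value and estimated error
```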

pyerrors/correlators.py

@@ -1,15 +1,13 @@
import warnings
import numpy as np
import autograd.numpy as anp
#from scipy.special.orthogonal import _IntegerType
from .pyerrors import *
import matplotlib.pyplot as plt
import scipy.linalg
from .pyerrors import Obs, dump_object
from .fits import standard_fit
from .linalg import *
from .linalg import eigh, mat_mat_op
from .roots import find_root
from matplotlib import pyplot as plt
from matplotlib.ticker import NullFormatter # useful for `logit` scale
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg
#import PySimpleGUI as sg
import matplotlib
class Corr:
"""The class for a correlator (time dependent sequence of pe.Obs).
@@ -24,49 +22,46 @@ class Corr:
"""
def __init__(self, data_input, padding_front=0, padding_back=0,prange=None):
#All data_input should be a list of things at different timeslices. This needs to be verified
def __init__(self, data_input, padding_front=0, padding_back=0, prange=None):
# All data_input should be a list of things at different timeslices. This needs to be verified
if not (isinstance(data_input, list)):
if not isinstance(data_input, list):
raise TypeError('Corr.__init__ expects a list of timeslices.')
# data_input can have multiple shapes. The simplest one is a list of Obs.
#We check, if this is the case
# We check if this is the case
if all([isinstance(item, Obs) for item in data_input]):
self.content=[np.asarray([item]) for item in data_input]
#Wrapping the Obs in an array ensures that the data structure is consistent with smearing matrices.
self.N = 1 # number of smearings
self.content = [np.asarray([item]) for item in data_input]
# Wrapping the Obs in an array ensures that the data structure is consistent with smearing matrices.
self.N = 1 # number of smearings
#data_input in the form [np.array(Obs,NxN)]
elif all([isinstance(item,np.ndarray) or item==None for item in data_input]) and any([isinstance(item,np.ndarray)for item in data_input]):
# data_input in the form [np.array(Obs,NxN)]
elif all([isinstance(item, np.ndarray) or item is None for item in data_input]) and any([isinstance(item, np.ndarray) for item in data_input]):
self.content = data_input
noNull=[a for a in self.content if not (a is None)] #To check if the matrices are correct for all undefined elements
noNull = [a for a in self.content if not (a is None)] # To check if the matrices are correct for all undefined elements
self.N = noNull[0].shape[0]
# The checks are now identical to the case above
if self.N > 1 and noNull[0].shape[0] != noNull[0].shape[1]:
raise Exception("Smearing matrices are not NxN")
if (not all([item.shape == noNull[0].shape for item in noNull])):
if (not all([item.shape == noNull[0].shape for item in noNull])):
raise Exception("Items in data_input are not of identical shape." + str(noNull))
else: # In case its a list of something else.
raise Exception ("data_input contains item of wrong type")
else: # In case it's a list of something else.
raise Exception("data_input contains item of wrong type")
self.tag = None
#We now apply some padding to our list. In case that our list represents a correlator of length T but is not defined at every value.
#An undefined timeslice is represented by the None object
# We now apply some padding to our list, in case it represents a correlator of length T that is not defined at every timeslice.
# An undefined timeslice is represented by the None object
self.content = [None] * padding_front + self.content + [None] * padding_back
self.T = len(self.content) #for convenience: will be used a lot
self.T = len(self.content) # for convenience: will be used a lot
#The attribute "prange" [start,end] marks a range of two timeslices.
#This is useful for keeping track of plateaus and fitranges.
#The prange can be inherited from other Corrs, if the operation should not alter a chosen range eg. multiplication with a constant.
if not prange is None:
self.prange=prange
# The attribute "prange" [start, end] marks a range of two timeslices.
# This is useful for keeping track of plateaus and fit ranges.
# The prange can be inherited from other Corrs if the operation should not alter a chosen range, e.g. multiplication with a constant.
self.prange = prange
self.gamma_method()
def gamma_method(self):
for item in self.content:
if not(item is None):
@@ -75,66 +70,69 @@ class Corr:
else:
for i in range(self.N):
for j in range(self.N):
item[i,j].gamma_method()
item[i, j].gamma_method()
#We need to project the Correlator with a Vector to get a single value at each timeslice.
#The method can use one or two vectors.
#If two are specified it returns v1@G@v2 (the order might be very important.)
#By default it will return the lowest source, which usually means unsmeared-unsmeared (0,0), but it does not have to
# We need to project the Correlator with a Vector to get a single value at each timeslice.
# The method can use one or two vectors.
# If two are specified it returns v1@G@v2 (the order might be very important.)
# By default it will return the lowest source, which usually means unsmeared-unsmeared (0,0), but it does not have to
def projected(self, vector_l=None, vector_r=None):
if self.N == 1:
raise Exception("Trying to project a Corr, that already has N=1.")
#This Exception is in no way necessary. One could just return self
#But there is no scenario, where a user would want that to happen and the error message might be more informative.
# This Exception is in no way necessary. One could just return self
# But there is no scenario, where a user would want that to happen and the error message might be more informative.
self.gamma_method()
if vector_l is None:
vector_l,vector_r=np.asarray([1.] + (self.N - 1) * [0.]),np.asarray([1.] + (self.N - 1) * [0.])
vector_l, vector_r = np.asarray([1.] + (self.N - 1) * [0.]), np.asarray([1.] + (self.N - 1) * [0.])
elif(vector_r is None):
vector_r=vector_l
vector_r = vector_l
if not vector_l.shape == vector_r.shape == (self.N,):
raise Exception("Vectors are of wrong shape!")
#We always normalize before projecting! But we only raise a warning, when it is clear, they where not meant to be normalized.
if (not(0.95 < vector_r@vector_r < 1.05)) or (not(0.95 < vector_l@vector_l < 1.05)):
# We always normalize before projecting! But we only raise a warning when it is clear they were not meant to be normalized.
if (not (0.95 < vector_r @ vector_r < 1.05)) or (not (0.95 < vector_l @ vector_l < 1.05)):
print("Vectors are normalized before projection!")
vector_l,vector_r = vector_l / np.sqrt((vector_l@vector_l)), vector_r / np.sqrt(vector_r@vector_r)
vector_l, vector_r = vector_l / np.sqrt((vector_l @ vector_l)), vector_r / np.sqrt(vector_r @ vector_r)
newcontent = [None if (item is None) else np.asarray([vector_l.T@item@vector_r]) for item in self.content]
newcontent = [None if (item is None) else np.asarray([vector_l.T @ item @ vector_r]) for item in self.content]
return Corr(newcontent)
def sum(self):
return np.sqrt(self.N)*self.projected(np.ones(self.N))
#For purposes of debugging and verification, one might want to see a single smearing level. smearing will return a Corr at the specified i,j. where both are integers 0<=i,j<N.
return np.sqrt(self.N) * self.projected(np.ones(self.N))
# For purposes of debugging and verification, one might want to see a single smearing level. smearing will return a Corr at the specified i, j, where both are integers 0 <= i, j < N.
def smearing(self, i, j):
if self.N == 1:
raise Exception("Trying to pick smearing from projected Corr")
newcontent=[None if(item is None) else item[i, j] for item in self.content]
newcontent = [None if(item is None) else item[i, j] for item in self.content]
return Corr(newcontent)
#Obs and Matplotlib do not play nicely
#We often want to retrieve x,y,y_err as lists to pass them to something like pyplot.errorbar
# Obs and Matplotlib do not play nicely
# We often want to retrieve x,y,y_err as lists to pass them to something like pyplot.errorbar
def plottable(self):
if self.N != 1:
raise Exception("Can only make Corr[N=1] plottable") #We could also autoproject to the groundstate or expect vectors, but this is supposed to be a super simple function.
x_list = [x for x in range(self.T) if (not self.content[x] is None)]
y_list = [y[0].value for y in self.content if (not y is None)]
y_err_list = [y[0].dvalue for y in self.content if (not y is None)]
raise Exception("Can only make Corr[N=1] plottable") # We could also autoproject to the groundstate or expect vectors, but this is supposed to be a super simple function.
x_list = [x for x in range(self.T) if not self.content[x] is None]
y_list = [y[0].value for y in self.content if y is not None]
y_err_list = [y[0].dvalue for y in self.content if y is not None]
return x_list, y_list, y_err_list
#symmetric returns a Corr, that has been symmetrized.
#A symmetry checker is still to be implemented
#The method will not delete any redundant timeslices (Bad for memory, Great for convenience)
# symmetric returns a Corr, that has been symmetrized.
# A symmetry checker is still to be implemented
# The method will not delete any redundant timeslices (Bad for memory, Great for convenience)
def symmetric(self):
if self.T%2 != 0:
if self.T % 2 != 0:
raise Exception("Can not symmetrize odd T")
if np.argmax(np.abs(self.content)) != 0:
warnings.warn("Correlator does not seem to be symmetric around x0=0.", RuntimeWarning)
newcontent = [self.content[0]]
for t in range(1, self.T):
if (self.content[t] is None) or (self.content[self.T - t] is None):
@@ -143,15 +141,17 @@ class Corr:
newcontent.append(0.5 * (self.content[t] + self.content[self.T - t]))
if(all([x is None for x in newcontent])):
raise Exception("Corr could not be symmetrized: No redundant values")
return Corr(newcontent, prange=self.prange if hasattr(self,"prange") else None)
return Corr(newcontent, prange=self.prange)
def anti_symmetric(self):
if self.T%2 != 0:
if self.T % 2 != 0:
raise Exception("Can not symmetrize odd T")
newcontent=[self.content[0]]
if not all([o.is_zero_within_error() for o in self.content[0]]):
warnings.warn("Correlator does not seem to be anti-symmetric around x0=0.", RuntimeWarning)
newcontent = [self.content[0]]
for t in range(1, self.T):
if (self.content[t] is None) or (self.content[self.T - t] is None):
newcontent.append(None)
@@ -159,20 +159,18 @@ class Corr:
newcontent.append(0.5 * (self.content[t] - self.content[self.T - t]))
if(all([x is None for x in newcontent])):
raise Exception("Corr could not be symmetrized: No redundant values")
return Corr(newcontent,prange= self.prange if hasattr(self,"prange") else None)
return Corr(newcontent, prange=self.prange)
#This method will symmetrice the matrices and therefore make them positive definit.
# This method will symmetrize the matrices and therefore make them positive definite.
def smearing_symmetric(self):
if self.N > 1:
transposed = [None if (G is None) else G.T for G in self.content]
return 0.5 * (Corr(transposed)+self)
return 0.5 * (Corr(transposed) + self)
if self.N == 1:
raise Exception("Trying to symmetrize a smearing matrix, that already has N=1.")
#We also include a simple GEVP method based on Scipy.linalg
def GEVP(self, t0, ts,state=1):
# We also include a simple GEVP method based on Scipy.linalg
def GEVP(self, t0, ts, state=1):
if (self.content[t0] is None) or (self.content[ts] is None):
raise Exception("Corr not defined at t0/ts")
G0, Gt = np.empty([self.N, self.N], dtype="double"), np.empty([self.N, self.N], dtype="double")
@@ -181,39 +179,35 @@ class Corr:
G0[i, j] = self.content[t0][i, j].value
Gt[i, j] = self.content[ts][i, j].value
sp_val,sp_vec = scipy.linalg.eig(Gt, G0)
sp_vec=sp_vec[:,np.argsort(sp_val)[-state]] #we only want the eigenvector belonging to the selected state
sp_vec = sp_vec/np.sqrt(sp_vec@sp_vec)
sp_val, sp_vec = scipy.linalg.eig(Gt, G0)
sp_vec = sp_vec[:, np.argsort(sp_val)[-state]] # We only want the eigenvector belonging to the selected state
sp_vec = sp_vec / np.sqrt(sp_vec @ sp_vec)
return sp_vec
def Eigenvalue(self,t0,state=1):
G=self.smearing_symmetric()
G0=G.content[t0]
def Eigenvalue(self, t0, state=1):
G = self.smearing_symmetric()
G0 = G.content[t0]
L = mat_mat_op(anp.linalg.cholesky, G0)
Li = mat_mat_op(anp.linalg.inv, L)
LT=L.T
LTi=mat_mat_op(anp.linalg.inv, LT)
newcontent=[]
LT = L.T
LTi = mat_mat_op(anp.linalg.inv, LT)
newcontent = []
for t in range(self.T):
Gt=G.content[t]
M=Li@Gt@LTi
Gt = G.content[t]
M = Li @ Gt @ LTi
eigenvalues = eigh(M)[0]
#print(eigenvalues)
eigenvalue=eigenvalues[-state]
eigenvalue = eigenvalues[-state]
newcontent.append(eigenvalue)
return Corr(newcontent)
def roll(self, dt):
return Corr(list(np.roll(np.array(self.content, dtype=object), dt)))
def deriv(self, symmetric=True): #Defaults to symmetric derivative
def deriv(self, symmetric=True): # Defaults to symmetric derivative
if not symmetric:
newcontent = []
for t in range(self.T - 1):
if (self.content[t] is None) or (self.content[t+1] is None):
if (self.content[t] is None) or (self.content[t + 1] is None):
newcontent.append(None)
else:
newcontent.append(self.content[t + 1] - self.content[t])
@@ -222,8 +216,8 @@ class Corr:
return Corr(newcontent, padding_back=1)
if symmetric:
newcontent = []
for t in range(1, self.T-1):
if (self.content[t-1] is None) or (self.content[t+1] is None):
for t in range(1, self.T - 1):
if (self.content[t - 1] is None) or (self.content[t + 1] is None):
newcontent.append(None)
else:
newcontent.append(0.5 * (self.content[t + 1] - self.content[t - 1]))
@@ -231,11 +225,10 @@ class Corr:
raise Exception('Derivative is undefined at all timeslices')
return Corr(newcontent, padding_back=1, padding_front=1)
def second_deriv(self):
newcontent = []
for t in range(1, self.T-1):
if (self.content[t-1] is None) or (self.content[t+1] is None):
for t in range(1, self.T - 1):
if (self.content[t - 1] is None) or (self.content[t + 1] is None):
newcontent.append(None)
else:
newcontent.append((self.content[t + 1] - 2 * self.content[t] + self.content[t - 1]))
@@ -243,19 +236,20 @@ class Corr:
raise Exception("Derivative is undefined at all timeslices")
return Corr(newcontent, padding_back=1, padding_front=1)
def m_eff(self, variant='log', guess=1.0):
"""Returns the effective mass of the correlator as correlator object
Parameters
----------
variant -- log: uses the standard effective mass log(C(t) / C(t+1))
periodic : Solves C(t) / C(t+1) = cosh(m * (t - T/2)) / cosh(m * (t + 1 - T/2)) for m. See, e.g., arXiv:1205.5380
cosh : Use periodicity of the correlator by solving C(t) / C(t+1) = cosh(m * (t - T/2)) / cosh(m * (t + 1 - T/2)) for m.
sinh : Use anti-periodicity of the correlator by solving C(t) / C(t+1) = sinh(m * (t - T/2)) / sinh(m * (t + 1 - T/2)) for m.
See, e.g., arXiv:1205.5380
guess -- guess for the root finder, only relevant for the root variant
"""
if self.N != 1:
raise Exception('Correlator must be projected before getting m_eff')
if variant is 'log':
if variant == 'log':
newcontent = []
for t in range(self.T - 1):
if (self.content[t] is None) or (self.content[t + 1] is None):
@@ -267,91 +261,98 @@ class Corr:
return np.log(Corr(newcontent, padding_back=1))
elif variant is 'periodic':
elif variant in ['periodic', 'cosh', 'sinh']:
if variant in ['periodic', 'cosh']:
func = anp.cosh
else:
func = anp.sinh
def root_function(x, d):
return func(x * (t - self.T / 2)) / func(x * (t + 1 - self.T / 2)) - d
newcontent = []
for t in range(self.T - 1):
if (self.content[t] is None) or (self.content[t + 1] is None):
newcontent.append(None)
# Fill the two timeslices in the middle of the lattice with their predecessors
elif variant == 'sinh' and t in [self.T / 2, self.T / 2 - 1]:
newcontent.append(newcontent[-1])
else:
func = lambda x, d : anp.cosh(x * (t - self.T / 2)) / anp.cosh(x * (t + 1 - self.T / 2)) - d
newcontent.append(np.abs(find_root(self.content[t][0] / self.content[t + 1][0], func, guess=guess)))
newcontent.append(np.abs(find_root(self.content[t][0] / self.content[t + 1][0], root_function, guess=guess)))
if(all([x is None for x in newcontent])):
raise Exception('m_eff is undefined at all timeslices')
return Corr(newcontent, padding_back=1)
elif variant is 'arccosh':
newcontent=[]
for t in range(1,self.T-1):
if (self.content[t] is None) or (self.content[t+1] is None)or (self.content[t-1] is None):
elif variant == 'arccosh':
newcontent = []
for t in range(1, self.T - 1):
if (self.content[t] is None) or (self.content[t + 1] is None) or (self.content[t - 1] is None):
newcontent.append(None)
else:
newcontent.append((self.content[t+1]+self.content[t-1])/(2*self.content[t]))
newcontent.append((self.content[t + 1] + self.content[t - 1]) / (2 * self.content[t]))
if(all([x is None for x in newcontent])):
raise Exception("m_eff is undefined at all timeslices")
return np.arccosh(Corr(newcontent,padding_back=1,padding_front=1))
return np.arccosh(Corr(newcontent, padding_back=1, padding_front=1))
else:
raise Exception('Unknown variant.')
#We want to apply a pe.standard_fit directly to the Corr using an arbitrary function and range.
# We want to apply a pe.standard_fit directly to the Corr using an arbitrary function and range.
def fit(self, function, fitrange=None, silent=False, **kwargs):
if self.N != 1:
raise Exception("Correlator must be projected before fitting")
#The default behaviour is:
#1 use explicit fitrange
# if none is provided, use the range of the corr
# if this is also not set, use the whole length of the corr (This could come with a warning!)
# The default behaviour is:
# 1 use explicit fitrange
# if none is provided, use the range of the corr
# if this is also not set, use the whole length of the corr (This could come with a warning!)
if fitrange is None:
if hasattr(self,"prange"):
fitrange=self.prange
if self.prange:
fitrange = self.prange
else:
fitrange=[0, self.T]
fitrange = [0, self.T]
xs = [x for x in range(fitrange[0], fitrange[1]) if not self.content[x] is None]
ys = [self.content[x][0] for x in range(fitrange[0], fitrange[1]) if not self.content[x] is None]
result = standard_fit(xs, ys, function, silent=silent, **kwargs)
if isinstance(result, list):
[item.gamma_method() for item in result if isinstance(item,Obs)]
[item.gamma_method() for item in result if isinstance(item, Obs)]
elif isinstance(result, dict):
[item.gamma_method() for item in result['fit_parameters'] if isinstance(item,Obs)]
[item.gamma_method() for item in result['fit_parameters'] if isinstance(item, Obs)]
else:
raise Exception('Unexpected fit result.')
return result
#we want to quickly get a plateau
def plateau(self, plateau_range=None, method="fit"):
if not plateau_range:
if hasattr(self,"prange"):
plateau_range=self.prange
if self.prange:
plateau_range = self.prange
else:
raise Exception("no plateau range provided")
if self.N != 1:
raise Exception("Correlator must be projected before getting a plateau.")
if(all([self.content[t] is None for t in range(plateau_range[0], plateau_range[1])])):
raise Exception("plateau is undefined at all timeslices in plateaurange.")
raise Exception("plateau is undefined at all timeslices in plateaurange.")
if method == "fit":
def const_func(a, t):
return a[0] # At some point pe.standard fit had an issue with single parameter fits. Being careful does not hurt
return self.fit(const_func,plateau_range)[0]
elif method in ["avg","average","mean"]:
returnvalue= np.mean([item[0] for item in self.content[plateau_range[0]:plateau_range[1]+1] if not item is None])
return a[0] # At some point pe.standard_fit had an issue with single parameter fits. Being careful does not hurt
return self.fit(const_func, plateau_range)[0]
elif method in ["avg", "average", "mean"]:
returnvalue = np.mean([item[0] for item in self.content[plateau_range[0]:plateau_range[1] + 1] if item is not None])
returnvalue.gamma_method()
return returnvalue
else:
raise Exception("Unsupported plateau method: " + method)
def set_prange(self, prange):
if not len(prange)==2:
if not len(prange) == 2:
raise Exception("prange must be a list or array with two values")
if not ((isinstance(prange[0],int)) and (isinstance(prange[1],int))):
if not ((isinstance(prange[0], int)) and (isinstance(prange[1], int))):
raise Exception("Start and end point must be integers")
if not (0<=prange[0]<=self.T and 0<=prange[1]<=self.T and prange[0]<prange[1] ):
if not (0 <= prange[0] <= self.T and 0 <= prange[1] <= self.T and prange[0] < prange[1]):
raise Exception("Start and end point must define a range in the interval 0,T")
self.prange = prange
@@ -368,15 +369,15 @@ class Corr:
logscale -- Sets y-axis to logscale
save -- path to file in which the figure should be saved
"""
if self.N!=1:
if self.N != 1:
raise Exception("Correlator must be projected before plotting")
if x_range is None:
if x_range is None:
x_range = [0, self.T]
fig = plt.figure()
ax1 = fig.add_subplot(111)
x,y,y_err=self.plottable()
x, y, y_err = self.plottable()
ax1.errorbar(x, y, y_err, label=self.tag)
if logscale:
ax1.set_yscale('log')
@@ -384,8 +385,8 @@ class Corr:
# we generate ylim instead of using autoscaling.
if y_range is None:
try:
y_min=min([(x[0].value - x[0].dvalue) for x in self.content[x_range[0]:x_range[1]] if (x is not None) and x[0].dvalue < 2 * np.abs(x[0].value)])
y_max=max([(x[0].value + x[0].dvalue) for x in self.content[x_range[0]:x_range[1]] if (x is not None) and x[0].dvalue < 2 * np.abs(x[0].value)])
y_min = min([(x[0].value - x[0].dvalue) for x in self.content[x_range[0]: x_range[1]] if (x is not None) and x[0].dvalue < 2 * np.abs(x[0].value)])
y_max = max([(x[0].value + x[0].dvalue) for x in self.content[x_range[0]: x_range[1]] if (x is not None) and x[0].dvalue < 2 * np.abs(x[0].value)])
ax1.set_ylim([y_min - 0.1 * (y_max - y_min), y_max + 0.1 * (y_max - y_min)])
except:
pass
@@ -394,26 +395,26 @@ class Corr:
if comp:
if isinstance(comp, Corr) or isinstance(comp, list):
for corr in comp if isinstance(comp, list) else [comp]:
x,y,y_err=corr.plottable()
x, y, y_err = corr.plottable()
plt.errorbar(x, y, y_err, label=corr.tag, mfc=plt.rcParams['axes.facecolor'])
else:
raise Exception('comp must be a correlator or a list of correlators.')
if plateau:
if isinstance(plateau, Obs):
ax1.axhline(y=plateau.value, linewidth=2, color=plt.rcParams['text.color'], alpha=0.6, marker=',', ls='--', label=plateau.__repr__()[4:-1])
ax1.axhline(y=plateau.value, linewidth=2, color=plt.rcParams['text.color'], alpha=0.6, marker=',', ls='--', label=str(plateau))
ax1.axhspan(plateau.value - plateau.dvalue, plateau.value + plateau.dvalue, alpha=0.25, color=plt.rcParams['text.color'], ls='-')
else:
raise Exception('plateau must be an Obs')
if hasattr(self,"prange"):
if self.prange:
ax1.axvline(self.prange[0], 0, 1, ls='-', marker=',')
ax1.axvline(self.prange[1], 0, 1, ls='-', marker=',')
if fit_res:
x_samples = np.arange(x_range[0], x_range[1] + 1, 0.05)
ax1.plot(x_samples,
fit_res['fit_function']([o.value for o in fit_res['fit_parameters']], x_samples)
, ls='-', marker=',', lw=2)
fit_res['fit_function']([o.value for o in fit_res['fit_parameters']], x_samples),
ls='-', marker=',', lw=2)
ax1.set_xlabel(r'$x_0 / a$')
if ylabel:
@@ -422,19 +423,19 @@ class Corr:
handles, labels = ax1.get_legend_handles_labels()
if labels:
legend = ax1.legend()
ax1.legend()
plt.draw()
if save:
if isinstance(save, str):
fig.savefig(save)
else:
raise Exception('safe has to be a string.')
raise Exception("Safe has to be a string.")
return
def dump(self,filename):
dump_object(self,filename)
def dump(self, filename):
dump_object(self, filename)
return
def print(self, range=[0, None]):
@@ -450,262 +451,193 @@ class Corr:
else:
content_string += str(i + range[0])
for element in sub_corr:
content_string += '\t' + element.__repr__()[4:-1]
content_string += '\t' + str(element)
content_string += '\n'
return content_string
def __str__(self):
return self.__repr__()
#return ("Corr[T="+str(self.T)+" , N="+str(self.N)+" , content="+str([o[0] for o in [o for o in self.content]])+"]")
#We define the basic operations, that can be performed with correlators.
#While */+- get defined here, they only work for Corr*Obs and not Obs*Corr.
#This is because Obs*Corr checks Obs.__mul__ first and does not catch an exception.
#One could try and tell Obs to check if the y in __mul__ is a Corr and
# We define the basic operations, that can be performed with correlators.
# While */+- get defined here, they only work for Corr*Obs and not Obs*Corr.
# This is because Obs*Corr checks Obs.__mul__ first and does not catch an exception.
# One could try and tell Obs to check if the y in __mul__ is a Corr and return NotImplemented, so that Corr.__rmul__ takes over.
def __add__(self, y):
if isinstance(y, Corr):
if ((self.N!=y.N) or (self.T!=y.T) ):
if ((self.N != y.N) or (self.T != y.T)):
raise Exception("Addition of Corrs with different shape")
newcontent=[]
newcontent = []
for t in range(self.T):
if (self.content[t] is None) or (y.content[t] is None):
newcontent.append(None)
else:
newcontent.append(self.content[t]+y.content[t])
newcontent.append(self.content[t] + y.content[t])
return Corr(newcontent)
elif isinstance(y, Obs) or isinstance(y, int) or isinstance(y,float):
newcontent=[]
elif isinstance(y, Obs) or isinstance(y, int) or isinstance(y, float):
newcontent = []
for t in range(self.T):
if (self.content[t] is None):
newcontent.append(None)
else:
newcontent.append(self.content[t]+y)
return Corr(newcontent,prange= self.prange if hasattr(self,"prange") else None)
newcontent.append(self.content[t] + y)
return Corr(newcontent, prange=self.prange)
else:
raise TypeError("Corr + wrong type")
def __mul__(self,y):
if isinstance(y,Corr):
if not((self.N==1 or y.N==1 or self.N==y.N) and self.T==y.T):
def __mul__(self, y):
if isinstance(y, Corr):
if not((self.N == 1 or y.N == 1 or self.N == y.N) and self.T == y.T):
raise Exception("Multiplication of Corr object requires N=N or N=1 and T=T")
newcontent=[]
newcontent = []
for t in range(self.T):
if (self.content[t] is None) or (y.content[t] is None):
newcontent.append(None)
else:
newcontent.append(self.content[t]*y.content[t])
newcontent.append(self.content[t] * y.content[t])
return Corr(newcontent)
elif isinstance(y, Obs) or isinstance(y, int) or isinstance(y,float):
newcontent=[]
elif isinstance(y, Obs) or isinstance(y, int) or isinstance(y, float):
newcontent = []
for t in range(self.T):
if (self.content[t] is None):
newcontent.append(None)
else:
newcontent.append(self.content[t]*y)
return Corr(newcontent,prange= self.prange if hasattr(self,"prange") else None)
newcontent.append(self.content[t] * y)
return Corr(newcontent, prange=self.prange)
else:
raise TypeError("Corr * wrong type")
def __truediv__(self,y):
if isinstance(y,Corr):
if not((self.N==1 or y.N==1 or self.N==y.N) and self.T==y.T):
def __truediv__(self, y):
if isinstance(y, Corr):
if not((self.N == 1 or y.N == 1 or self.N == y.N) and self.T == y.T):
raise Exception("Multiplication of Corr object requires N=N or N=1 and T=T")
newcontent=[]
newcontent = []
for t in range(self.T):
if (self.content[t] is None) or (y.content[t] is None):
newcontent.append(None)
else:
newcontent.append(self.content[t]/y.content[t])
#Here we set the entire timeslice to undefined, if one of the smearings has encountered an division by zero.
#While this might throw away perfectly good values in other smearings, we will never have to check, if all values in our matrix are defined
newcontent.append(self.content[t] / y.content[t])
# Here we set the entire timeslice to undefined if one of the smearings has encountered a division by zero.
# While this might throw away perfectly good values in other smearings, we never have to check whether all values in our matrix are defined
for t in range(self.T):
if newcontent[t] is None:
continue
if np.isnan(np.sum(newcontent[t]).value):
newcontent[t]=None
newcontent[t] = None
if all([item is None for item in newcontent]):
raise Exception("Division returns completely undefined correlator")
return Corr(newcontent)
elif isinstance(y, Obs):
if y.value==0:
raise Exception("Division by Zero will return undefined correlator")
newcontent=[]
if y.value == 0:
raise Exception('Division by zero will return undefined correlator')
newcontent = []
for t in range(self.T):
if (self.content[t] is None):
newcontent.append(None)
else:
newcontent.append(self.content[t]/y)
if hasattr(self,"prange"):
newrange=self.prange
else:
newrange=None
return Corr(newcontent,prange=newrange)
newcontent.append(self.content[t] / y)
return Corr(newcontent, prange=self.prange)
elif isinstance(y, int) or isinstance(y,float):
if y==0:
raise Exception("Division by Zero will return undefined correlator")
newcontent=[]
elif isinstance(y, int) or isinstance(y, float):
if y == 0:
raise Exception('Division by zero will return undefined correlator')
newcontent = []
for t in range(self.T):
if (self.content[t] is None):
newcontent.append(None)
else:
newcontent.append(self.content[t]/y)
return Corr(newcontent,prange= self.prange if hasattr(self,"prange") else None)
newcontent.append(self.content[t] / y)
return Corr(newcontent, prange=self.prange)
else:
raise TypeError("Corr / wrong type")
raise TypeError('Corr / wrong type')
def __neg__(self):
newcontent=[None if (item is None) else -1.*item for item in self.content]
if hasattr(self,"prange"):
newrange=self.prange
else:
newrange=None
return Corr(newcontent,prange=newrange)
newcontent = [None if (item is None) else -1. * item for item in self.content]
return Corr(newcontent, prange=self.prange)
def __sub__(self,y):
return self +(-y)
def __sub__(self, y):
return self + (-y)
def __pow__(self, y):
if isinstance(y, Obs) or isinstance(y,int) or isinstance(y,float):
newcontent=[None if (item is None) else item**y for item in self.content]
return Corr(newcontent,prange= self.prange if hasattr(self,"prange") else None)
if isinstance(y, Obs) or isinstance(y, int) or isinstance(y, float):
newcontent = [None if (item is None) else item**y for item in self.content]
return Corr(newcontent, prange=self.prange)
else:
raise TypeError("type of exponent not supported")
raise TypeError('Type of exponent not supported')
def __abs__(self):
newcontent=[None if (item is None) else np.abs(item) for item in self.content]
return Corr(newcontent,prange= self.prange if hasattr(self,"prange") else None)
newcontent = [None if (item is None) else np.abs(item) for item in self.content]
return Corr(newcontent, prange=self.prange)
#The numpy functions:
# The numpy functions:
def sqrt(self):
return self**0.5
def log(self):
newcontent=[None if (item is None) else np.log(item) for item in self.content]
return Corr(newcontent,prange= self.prange if hasattr(self,"prange") else None)
newcontent = [None if (item is None) else np.log(item) for item in self.content]
return Corr(newcontent, prange=self.prange)
def exp(self):
newcontent=[None if (item is None) else np.exp(item) for item in self.content]
return Corr(newcontent,prange= self.prange if hasattr(self,"prange") else None)
newcontent = [None if (item is None) else np.exp(item) for item in self.content]
return Corr(newcontent, prange=self.prange)
def _apply_func_to_corr(self, func):
newcontent = [None if (item is None) else func(item) for item in self.content]
for t in range(self.T):
if newcontent[t] is None:
continue
if np.isnan(np.sum(newcontent[t]).value):
newcontent[t] = None
if all([item is None for item in newcontent]):
raise Exception('Operation returns undefined correlator')
return Corr(newcontent)
def sin(self):
newcontent=[None if (item is None) else np.sin(item) for item in self.content]
return Corr(newcontent)
return self._apply_func_to_corr(np.sin)
def cos(self):
newcontent=[None if (item is None) else np.cos(item) for item in self.content]
return Corr(newcontent)
return self._apply_func_to_corr(np.cos)
def tan(self):
newcontent=[None if (item is None) else np.tan(item) for item in self.content]
for t in range(self.T):
if newcontent[t] is None:
continue
if np.isnan(np.sum(newcontent[t]).value):
newcontent[t]=None
if all([item is None for item in newcontent]):
raise Exception("Operation returns completely undefined correlator")
return Corr(newcontent)
return self._apply_func_to_corr(np.tan)
def sinh(self):
newcontent=[None if (item is None) else np.sinh(item) for item in self.content]
return Corr(newcontent)
return self._apply_func_to_corr(np.sinh)
def cosh(self):
newcontent=[None if (item is None) else np.cosh(item) for item in self.content]
return Corr(newcontent)
return self._apply_func_to_corr(np.cosh)
def tanh(self):
newcontent=[None if (item is None) else np.tanh(item) for item in self.content]
for t in range(self.T):
if newcontent[t] is None:
continue
if np.isnan(np.sum(newcontent[t]).value):
newcontent[t]=None
if all([item is None for item in newcontent]):
raise Exception("Operation returns completely undefined correlator")
return Corr(newcontent)
return self._apply_func_to_corr(np.tanh)
def arcsin(self):
newcontent=[None if (item is None) else np.arcsin(item) for item in self.content]
for t in range(self.T):
if newcontent[t] is None:
continue
if np.isnan(np.sum(newcontent[t]).value):
newcontent[t]=None
if all([item is None for item in newcontent]):
raise Exception("Operation returns completely undefined correlator")
return Corr(newcontent)
return self._apply_func_to_corr(np.arcsin)
def arccos(self):
newcontent=[None if (item is None) else np.arccos(item) for item in self.content]
for t in range(self.T):
if newcontent[t] is None:
continue
if np.isnan(np.sum(newcontent[t]).value):
newcontent[t]=None
if all([item is None for item in newcontent]):
raise Exception("Operation returns completely undefined correlator")
return Corr(newcontent)
return self._apply_func_to_corr(np.arccos)
def arctan(self):
newcontent=[None if (item is None) else np.arctan(item) for item in self.content]
for t in range(self.T):
if newcontent[t] is None:
continue
if np.isnan(np.sum(newcontent[t]).value):
newcontent[t]=None
if all([item is None for item in newcontent]):
raise Exception("Operation returns completely undefined correlator")
return Corr(newcontent)
return self._apply_func_to_corr(np.arctan)
def arcsinh(self):
newcontent=[None if (item is None) else np.arcsinh(item) for item in self.content]
for t in range(self.T):
if newcontent[t] is None:
continue
if np.isnan(np.sum(newcontent[t]).value):
newcontent[t]=None
if all([item is None for item in newcontent]):
raise Exception("Operation returns completely undefined correlator")
return Corr(newcontent)
return self._apply_func_to_corr(np.arcsinh)
def arccosh(self):
newcontent=[None if (item is None) else np.arccosh(item) for item in self.content]
for t in range(self.T):
if newcontent[t] is None:
continue
if np.isnan(np.sum(newcontent[t]).value):
newcontent[t]=None
if all([item is None for item in newcontent]):
raise Exception("Operation returns completely undefined correlator")
return Corr(newcontent)
return self._apply_func_to_corr(np.arccosh)
def arctanh(self):
newcontent=[None if (item is None) else np.arctanh(item) for item in self.content]
for t in range(self.T):
if newcontent[t] is None:
continue
if np.isnan(np.sum(newcontent[t]).value):
newcontent[t]=None
if all([item is None for item in newcontent]):
raise Exception("Operation returns completely undefined correlator")
return Corr(newcontent)
return self._apply_func_to_corr(np.arctanh)
# Right hand side operations (require tweak in main module to work)
def __rsub__(self, y):
return -self + y
#right hand side operations (require tweak in main module to work)
def __rsub__(self,y):
return -self+y
def __rmul__(self, y):
return self * y
def __radd__(self,y):
def __radd__(self, y):
return self + y
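To make the refactored `Corr` interface concrete, a hedged usage sketch (the correlator data is invented; the method names are taken from the diff above):
```python
import numpy as np
import pyerrors as pe

# toy correlator: one invented Obs per timeslice on a hypothetical ensemble
T = 16
data = [pe.Obs([10 * np.exp(-0.3 * t) + np.random.normal(0, 0.05, 200)], ['ens_A'])
        for t in range(T)]
corr = pe.Corr(data)               # N=1, no smearing matrix

corr.set_prange([4, 10])           # plateau/fit range, validated as in set_prange above
m_eff = corr.m_eff(variant='log')  # effective-mass correlator
plateau = m_eff.plateau([4, 10], method="avg")
```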

pyerrors/fits.py

@@ -1,6 +1,7 @@
#!/usr/bin/env python
# coding: utf-8
import warnings
import numpy as np
import autograd.numpy as anp
import scipy.optimize
@@ -146,7 +147,7 @@ def standard_fit(x, y, func, silent=False, **kwargs):
result_dict['chisquare/expected_chisquare'] = chisquare / expected_chisquare
if not silent:
print('chisquare/expected_chisquare:',
result_dict['chisquare/expected_chisquare'])
result_dict['chisquare/expected_chisquare'])
hess_inv = np.linalg.pinv(jacobian(jacobian(chisqfunc))(fit_result.x))
@@ -256,7 +257,7 @@ def odr_fit(x, y, func, silent=False, **kwargs):
data = RealData(x_f, y_f, sx=dx_f, sy=dy_f)
model = Model(func)
odr = ODR(data, model, x0, partol=np.finfo(np.float).eps)
odr = ODR(data, model, x0, partol=np.finfo(np.float64).eps)
odr.set_job(fit_type=0, deriv=1)
output = odr.run()
@@ -300,12 +301,12 @@ def odr_fit(x, y, func, silent=False, **kwargs):
P_phi = A @ np.linalg.inv(A.T @ A) @ A.T
expected_chisquare = np.trace((np.identity(P_phi.shape[0]) - P_phi) @ W @ cov @ W)
if expected_chisquare <= 0.0:
print('Warning, negative expected_chisquare.')
warnings.warn("Negative expected_chisquare.", RuntimeWarning)
expected_chisquare = np.abs(expected_chisquare)
result_dict['chisquare/expected_chisquare'] = odr_chisquare(np.concatenate((output.beta, output.xplus.ravel()))) / expected_chisquare
if not silent:
print('chisquare/expected_chisquare:',
result_dict['chisquare/expected_chisquare'])
result_dict['chisquare/expected_chisquare'])
hess_inv = np.linalg.pinv(jacobian(jacobian(odr_chisquare))(np.concatenate((output.beta, output.xplus.ravel()))))
@@ -378,7 +379,7 @@ def prior_fit(x, y, func, priors, silent=False, **kwargs):
result_dict['fit_function'] = func
if Obs.e_tag_global < 4:
print('WARNING: e_tag_global is smaller than 4, this can cause problems when calculating errors from fits with priors')
warnings.warn("e_tag_global is smaller than 4, this can cause problems when calculating errors from fits with priors", RuntimeWarning)
x = np.asarray(x)
@@ -610,10 +611,8 @@ def covariance_matrix(y):
def error_band(x, func, beta):
"""Returns the error band for an array of sample values x, for given fit function func with optimized parameters beta."""
cov = covariance_matrix(beta)
if np.any(np.abs(cov - cov.T) > 1000 * np.finfo(np.float).eps):
print('Warning, Covariance matrix is not symmetric within floating point precision')
print('cov - cov.T:')
print(cov - cov.T)
if np.any(np.abs(cov - cov.T) > 1000 * np.finfo(np.float64).eps):
warnings.warn("Covariance matrix is not symmetric within floating point precision", RuntimeWarning)
deriv = []
for i, item in enumerate(x):
@@ -630,7 +629,6 @@ def error_band(x, func, beta):
def fit_general(x, y, func, silent=False, **kwargs):
"""Performs a non-linear fit to y = func(x) and returns a list of Obs corresponding to the fit parameters.
WARNING: In the current version the fits are performed with numerical derivatives.
Plausibility of the results should be checked. To control the numerical differentiation
the kwargs of numdifftools.step_generators.MaxStepGenerator can be used.
@@ -651,9 +649,7 @@ def fit_general(x, y, func, silent=False, **kwargs):
with many parameters.
"""
if not silent:
print('WARNING: This function is deprecated and will be removed in future versions.')
print('New fit functions with exact error propagation are now available as alternative.')
warnings.warn("New fit functions with exact error propagation are now available as alternative.", DeprecationWarning)
if not callable(func):
raise TypeError('func has to be a function.')
@@ -716,7 +712,7 @@ def fit_general(x, y, func, silent=False, **kwargs):
model = Model(func)
odr = ODR(data, model, beta0, partol=np.finfo(np.float).eps)
odr = ODR(data, model, beta0, partol=np.finfo(np.float64).eps)
odr.set_job(fit_type=fit_type, deriv=1)
output = odr.run()
if print_output and not silent:
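A hedged sketch of a typical `standard_fit` call (data and fit model are invented; the fit function has to be written with `autograd.numpy` so that the exact derivatives used above are available):
```python
import autograd.numpy as anp  # thinly wrapped numpy, as used throughout fits.py
import numpy as np
import pyerrors as pe
from pyerrors.fits import standard_fit

def func(a, x):
    # invented exponential model; a holds the fit parameters
    return a[0] * anp.exp(-a[1] * x)

x = np.arange(1, 8)
y = [pe.Obs([2.0 * np.exp(-0.4 * t) + np.random.normal(0, 0.02, 300)], ['ens_A'])
     for t in x]
for o in y:
    o.gamma_method()  # the weighted fit needs the estimated errors

fit_parameters = standard_fit(x, y, func)  # list of Obs, one per fit parameter
```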

pyerrors/input/__init__.py

@@ -1,3 +1,5 @@
from .input import *
from . import bdio
from . import hadrons
from . import sfcf
from . import openQCD
from . import misc
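After this restructuring each data source lives in its own submodule; a hedged sketch of the resulting import paths (file paths and arguments below are placeholders):
```python
import pyerrors.input as pei

# HDF5 files written by Hadrons
corr_data = pei.hadrons.read_meson_hd5('/path/to/data', 'filestem', 'ens_id')

# reweighting factors in openQCD format (location of read_rwms assumed from the restructuring)
rwms = pei.openQCD.read_rwms('/path/to/data', 'prefix')
```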

pyerrors/input/bdio.py

@@ -65,7 +65,7 @@ def read_ADerrors(file_path, bdio_path='./libbdio.so', **kwargs):
print('Reading of bdio file started')
while 1 > 0:
record = bdio_seek_record(fbdio)
bdio_seek_record(fbdio)
ruinfo = bdio_get_ruinfo(fbdio)
if ruinfo == 7:
@@ -75,13 +75,13 @@ def read_ADerrors(file_path, bdio_path='./libbdio.so', **kwargs):
if ruinfo < 0:
# EOF reached
break
rlen = bdio_get_rlen(fbdio)
bdio_get_rlen(fbdio)
def read_c_double():
d_buf = ctypes.c_double
pd_buf = d_buf()
ppd_buf = ctypes.c_void_p(ctypes.addressof(pd_buf))
iread = bdio_read_f64(ppd_buf, ctypes.c_size_t(8), ctypes.c_void_p(fbdio))
bdio_read_f64(ppd_buf, ctypes.c_size_t(8), ctypes.c_void_p(fbdio))
return pd_buf.value
mean = read_c_double()
@@ -91,7 +91,7 @@ def read_ADerrors(file_path, bdio_path='./libbdio.so', **kwargs):
d_buf = ctypes.c_size_t
pd_buf = d_buf()
ppd_buf = ctypes.c_void_p(ctypes.addressof(pd_buf))
iread = bdio_read_int32(ppd_buf, ctypes.c_size_t(4), ctypes.c_void_p(fbdio))
bdio_read_int32(ppd_buf, ctypes.c_size_t(4), ctypes.c_void_p(fbdio))
return pd_buf.value
neid = read_c_size_t()
@@ -137,7 +137,7 @@ def read_ADerrors(file_path, bdio_path='./libbdio.so', **kwargs):
d_buf = ctypes.c_double * np.sum(ndata)
pd_buf = d_buf()
ppd_buf = ctypes.c_void_p(ctypes.addressof(pd_buf))
iread = bdio_read_f64(ppd_buf, ctypes.c_size_t(8 * np.sum(ndata)), ctypes.c_void_p(fbdio))
bdio_read_f64(ppd_buf, ctypes.c_size_t(8 * np.sum(ndata)), ctypes.c_void_p(fbdio))
delta = pd_buf[:]
samples = np.split(np.asarray(delta) + mean, np.cumsum([a for su in vrep for a in su])[:-1])
@@ -212,8 +212,7 @@ def write_ADerrors(obs_list, file_path, bdio_path='./libbdio.so', **kwargs):
fbdio = bdio_open(ctypes.c_char_p(b_path), ctypes.c_char_p(b_write), b_form)
for obs in obs_list:
mean = obs.value
# mean = obs.value
neid = len(obs.e_names)
vrep = [[obs.shape[o] for o in sl] for sl in list(obs.e_content.values())]
vrep_write = [item for sublist in vrep for item in sublist]
@@ -251,12 +250,12 @@ def write_ADerrors(obs_list, file_path, bdio_path='./libbdio.so', **kwargs):
def write_c_double(double):
pd_buf = ctypes.c_double(double)
ppd_buf = ctypes.c_void_p(ctypes.addressof(pd_buf))
iwrite = bdio_write_f64(ppd_buf, ctypes.c_size_t(8), ctypes.c_void_p(fbdio))
bdio_write_f64(ppd_buf, ctypes.c_size_t(8), ctypes.c_void_p(fbdio))
def write_c_size_t(int32):
pd_buf = ctypes.c_size_t(int32)
ppd_buf = ctypes.c_void_p(ctypes.addressof(pd_buf))
iwrite = bdio_write_int32(ppd_buf, ctypes.c_size_t(4), ctypes.c_void_p(fbdio))
bdio_write_int32(ppd_buf, ctypes.c_size_t(4), ctypes.c_void_p(fbdio))
write_c_double(obs.value)
write_c_size_t(neid)
@@ -365,7 +364,7 @@ def read_mesons(file_path, bdio_path='./libbdio.so', **kwargs):
print('Reading of bdio file started')
while 1 > 0:
record = bdio_seek_record(fbdio)
bdio_seek_record(fbdio)
ruinfo = bdio_get_ruinfo(fbdio)
if ruinfo < 0:
# EOF reached
@@ -530,7 +529,7 @@ def read_dSdm(file_path, bdio_path='./libbdio.so', **kwargs):
corr_type = [] # Contains correlator data type (important for reading out numerical data)
corr_props = [] # Contains propagator types (Component of corr_kappa)
d0 = 0 # tvals
d1 = 0 # nnoise
# d1 = 0 # nnoise
prop_kappa = [] # Contains propagator kappas (Component of corr_kappa)
# Check noise type for multiple replica?
cnfg_no = -1
@@ -541,7 +540,7 @@ def read_dSdm(file_path, bdio_path='./libbdio.so', **kwargs):
print('Reading of bdio file started')
while 1 > 0:
record = bdio_seek_record(fbdio)
bdio_seek_record(fbdio)
ruinfo = bdio_get_ruinfo(fbdio)
if ruinfo < 0:
# EOF reached
@@ -613,7 +612,6 @@ def read_dSdm(file_path, bdio_path='./libbdio.so', **kwargs):
print('Number of configurations: ', cnfg_no + 1)
corr_kappa = [] # Contains kappa values for both propagators of given correlation function
corr_source = []
for item in corr_props:
corr_kappa.append(float(prop_kappa[int(item)]))

pyerrors/input/hadrons.py

@@ -32,8 +32,10 @@ def read_meson_hd5(path, filestem, ens_id, meson='meson_0', tree='meson'):
if not files:
raise Exception('No files starting with', filestem, 'in folder', path)
def get_cnfg_number(n):
return int(n[len(filestem) + 1:-3])
# Sort according to configuration number
get_cnfg_number = lambda x: int(x[len(filestem) + 1:-3])
files.sort(key=get_cnfg_number)
# Check that configurations are evenly spaced

pyerrors/input.py (deleted file)

@@ -1,733 +0,0 @@
#!/usr/bin/env python
# coding: utf-8
import sys
import os
import fnmatch
import re
import struct
import autograd.numpy as np # Thinly-wrapped numpy
from ..pyerrors import Obs
from ..fits import fit_lin
def read_sfcf(path, prefix, name, **kwargs):
"""Read sfcf C format from given folder structure.
Keyword arguments
-----------------
im -- if True, read imaginary instead of real part of the correlation function.
single -- if True, read a boundary-to-boundary correlation function with a single value
b2b -- if True, read a time-dependent boundary-to-boundary correlation function
names -- Alternative labeling for replicas/ensembles. Has to have the appropriate length
"""
if kwargs.get('im'):
im = 1
part = 'imaginary'
else:
im = 0
part = 'real'
if kwargs.get('single'):
b2b = 1
single = 1
else:
b2b = 0
single = 0
if kwargs.get('b2b'):
b2b = 1
read = 0
T = 0
start = 0
ls = []
for (dirpath, dirnames, filenames) in os.walk(path):
ls.extend(dirnames)
break
if not ls:
print('Error, directory not found')
sys.exit()
for exc in ls:
if fnmatch.fnmatch(exc, prefix + '*'):
ls = list(set(ls) - set(exc))
if len(ls) > 1:
ls.sort(key=lambda x: int(re.findall(r'\d+', x[len(prefix):])[0]))
replica = len(ls)
print('Read', part, 'part of', name, 'from', prefix, ',', replica, 'replica')
if 'names' in kwargs:
new_names = kwargs.get('names')
if len(new_names) != replica:
raise Exception('Names does not have the required length', replica)
else:
new_names = ls
print(replica, 'replica')
for i, item in enumerate(ls):
print(item)
sub_ls = []
for (dirpath, dirnames, filenames) in os.walk(path+'/'+item):
sub_ls.extend(dirnames)
break
for exc in sub_ls:
if fnmatch.fnmatch(exc, 'cfg*'):
sub_ls = list(set(sub_ls) - set(exc))
sub_ls.sort(key=lambda x: int(x[3:]))
no_cfg = len(sub_ls)
print(no_cfg, 'configurations')
if i == 0:
with open(path + '/' + item + '/' + sub_ls[0] + '/' + name) as fp:
for k, line in enumerate(fp):
if read == 1 and not line.strip() and k > start + 1:
break
if read == 1 and k >= start:
T += 1
if '[correlator]' in line:
read = 1
start = k + 7 + b2b
T -= b2b
deltas = []
for j in range(T):
deltas.append([])
sublength = len(sub_ls)
for j in range(T):
deltas[j].append(np.zeros(sublength))
for cnfg, subitem in enumerate(sub_ls):
with open(path + '/' + item + '/' + subitem + '/'+name) as fp:
for k, line in enumerate(fp):
if(k >= start and k < start + T):
floats = list(map(float, line.split()))
deltas[k-start][i][cnfg] = floats[1 + im - single]
result = []
for t in range(T):
result.append(Obs(deltas[t], new_names))
return result
def read_sfcf_c(path, prefix, name, quarks='.*', noffset=0, wf=0, wf2=0, **kwargs):
"""Read sfcf c format from given folder structure.
Arguments
-----------------
quarks -- Label of the quarks used in the sfcf input file
noffset -- Offset of the source (only relevant when wavefunctions are used)
wf -- ID of wave function
wf2 -- ID of the second wavefunction (only relevant for boundary-to-boundary correlation functions)
Keyword arguments
-----------------
im -- if True, read imaginary instead of real part of the correlation function.
b2b -- if True, read a time-dependent boundary-to-boundary correlation function
names -- Alternative labeling for replicas/ensembles. Has to have the appropriate length
"""
if kwargs.get('im'):
im = 1
part = 'imaginary'
else:
im = 0
part = 'real'
if kwargs.get('b2b'):
b2b = 1
else:
b2b = 0
read = 0
T = 0
start = 0
ls = []
for (dirpath, dirnames, filenames) in os.walk(path):
ls.extend(dirnames)
break
if not ls:
print('Error, directory not found')
sys.exit()
# Exclude folders with different names
for exc in ls:
if not fnmatch.fnmatch(exc, prefix+'*'):
ls = list(set(ls) - set([exc]))
if len(ls) > 1:
ls.sort(key=lambda x: int(re.findall(r'\d+', x[len(prefix):])[0])) # New version, to cope with ids, etc.
replica = len(ls)
if 'names' in kwargs:
new_names = kwargs.get('names')
if len(new_names) != replica:
raise Exception('Names does not have the required length', replica)
else:
new_names = ls
print('Read', part, 'part of', name, 'from', prefix[:-1], ',', replica, 'replica')
for i, item in enumerate(ls):
sub_ls = []
for (dirpath, dirnames, filenames) in os.walk(path+'/'+item):
sub_ls.extend(filenames)
break
for exc in sub_ls:
if not fnmatch.fnmatch(exc, prefix+'*'):
sub_ls = list(set(sub_ls) - set([exc]))
sub_ls.sort(key=lambda x: int(re.findall(r'\d+', x)[-1]))
first_cfg = int(re.findall(r'\d+', sub_ls[0])[-1])
last_cfg = len(sub_ls) + first_cfg - 1
for cfg in range(1, len(sub_ls)):
if int(re.findall(r'\d+', sub_ls[cfg])[-1]) != first_cfg + cfg:
last_cfg = cfg + first_cfg - 1
break
no_cfg = last_cfg - first_cfg + 1
print(item, ':', no_cfg, 'evenly spaced configurations (', first_cfg, '-', last_cfg, ') ,', len(sub_ls) - no_cfg, 'configs omitted\n')
if i == 0:
pattern = 'name ' + name + '\nquarks ' + quarks + '\noffset ' + str(noffset) + '\nwf ' + str(wf)
if b2b:
pattern += '\nwf_2 ' + str(wf2)
with open(path+'/'+item+'/'+sub_ls[0], 'r') as file:
content = file.read()
match = re.search(pattern, content)
if match:
start_read = content.count('\n', 0, match.start()) + 5 + b2b
end_match = re.search('\n\s*\n', content[match.start():])
T = content[match.start():].count('\n', 0, end_match.start()) - 4 - b2b
assert T > 0
print(T, 'entries, starting to read in line', start_read)
else:
raise Exception('Correlator with pattern\n' + pattern + '\nnot found.')
deltas = []
for j in range(T):
deltas.append([])
sublength = no_cfg
for j in range(T):
deltas[j].append(np.zeros(sublength))
for cfg in range(no_cfg):
with open(path+'/'+item+'/'+sub_ls[cfg]) as fp:
for k, line in enumerate(fp):
if k == start_read - 5 - b2b:
if line.strip() != 'name ' + name:
raise Exception('Wrong format', sub_ls[cfg])
if(k >= start_read and k < start_read + T):
floats = list(map(float, line.split()))
deltas[k-start_read][i][cfg] = floats[-2:][im]
result = []
for t in range(T):
result.append(Obs(deltas[t], new_names))
return result
def read_qtop(path, prefix, **kwargs):
"""Read qtop format from given folder structure.
Keyword arguments
-----------------
target -- specifies the topological sector to be reweighted to (default 0)
full -- if true read the charge instead of the reweighting factor.
"""
if 'target' in kwargs:
target = kwargs.get('target')
else:
target = 0
if kwargs.get('full'):
full = 1
else:
full = 0
ls = []
for (dirpath, dirnames, filenames) in os.walk(path):
ls.extend(filenames)
break
if not ls:
print('Error, directory not found')
sys.exit()
# Exclude files with different names
for exc in ls:
if not fnmatch.fnmatch(exc, prefix+'*'):
ls = list(set(ls) - set([exc]))
if len(ls) > 1:
ls.sort(key=lambda x: int(re.findall(r'\d+', x[len(prefix):])[0])) # New version, to cope with ids, etc.
replica = len(ls)
print('Read Q_top from', prefix[:-1], ',', replica, 'replica')
deltas = []
for rep in range(replica):
tmp = []
with open(path+'/'+ls[rep]) as fp:
for k, line in enumerate(fp):
floats = list(map(float, line.split()))
if full == 1:
tmp.append(floats[1])
else:
if int(floats[1]) == target:
tmp.append(1.0)
else:
tmp.append(0.0)
deltas.append(np.array(tmp))
result = Obs(deltas, [(w.split('.'))[0] for w in ls])
return result
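The indicator samples built here make the reweighting explicit: the mean of the projector is the probability of the target sector, and projecting an observable O to fixed topological charge amounts to <O * delta_{Q, target}> / <delta_{Q, target}>. A minimal plain-numpy sketch of the projector logic, with hypothetical charge data:

import numpy as np

charges = np.array([0., 1., 0., -1., 0., 0., 2., 0.])  # hypothetical Q per config
target = 0
projector = (charges.astype(int) == target).astype(float)
print(projector.mean())  # fraction of configurations in the target sector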
def parse_array_openQCD2(d, n, size, wa, quadrupel=False):
arr = []
if d == 2:
tot = 0
for i in range(n[d-1]-1):
if quadrupel:
tmp = wa[tot:n[d-1]]
tmp2 = []
for i in range(len(tmp)):
if i % 2 == 0:
tmp2.append(tmp[i])
arr.append(tmp2)
else:
arr.append(np.asarray(wa[tot:n[d-1]]))
return arr
# mimic the read_array routine of openQCD-2.0.
# fp is the opened file handle
# returns the dict array
# at this point we only parse a 2d array
# d = 2
# n = [nfct[irw], 2*nsrc[irw]]
def read_array_openQCD2(fp):
t = fp.read(4)
d = struct.unpack('i', t)[0]
t = fp.read(4*d)
n = struct.unpack('%di' % (d), t)
t = fp.read(4)
size = struct.unpack('i', t)[0]
if size == 4:
types = 'i'
elif size == 8:
types = 'd'
elif size == 16:
types = 'dd'
else:
raise Exception('Type not known!')
m = n[0]
for i in range(1,d):
m *= n[i]
t = fp.read(m*size)
tmp = struct.unpack('%d%s' % (m, types), t)
arr = parse_array_openQCD2(d, n, size, tmp, quadrupel=True)
return {'d': d, 'n': n, 'size': size, 'arr': arr}
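A round-trip sketch of the record layout parsed above, building a hypothetical in-memory record with struct.pack (not data from an actual run):

import io
import struct

# d = 2, n = (2, 4), size = 8 (doubles), followed by 2 * 4 data values
buf = struct.pack('i', 2) + struct.pack('2i', 2, 4) + struct.pack('i', 8) + struct.pack('8d', *range(8))
record = read_array_openQCD2(io.BytesIO(buf))
print(record['d'], record['n'], record['size'])  # 2 (2, 4) 8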
def read_rwms(path, prefix, names=None, **kwargs):
"""Read rwms format from given folder structure. Returns a list of length nrw
Keyword arguments
-----------------
new_format -- if True, the array of the associated numbers of Hasenbusch factors is extracted (v>=openQCD1.6)
r_start -- list which contains the first config to be read for each replicum
r_stop -- list which contains the last config to be read for each replicum
oqcd_ver_string -- openQCD version
postfix -- postfix of the file to read, e.g. '.ms1' for openQCD-files
"""
# openQCD versions implemented in this method
known_oqcd_versions = ['1.4', '1.6', '2.0']
if 'oqcd_ver_string' in kwargs:
ver_str = kwargs.get('oqcd_ver_string')
if not (ver_str in known_oqcd_versions):
raise Exception('Unknown openQCD version defined!')
else:  # Set defaults for openQCD version to 1.4, emulating the old behaviour of this method
# Deprecate this kwarg in version 2.0.
if kwargs.get('new_format'):
ver_str = '1.6'
else:
ver_str = '1.4'
print("Working with openQCD version " + ver_str)
if 'postfix' in kwargs:
postfix = kwargs.get('postfix')
else:
postfix = ''
ls = []
for (dirpath, dirnames, filenames) in os.walk(path):
ls.extend(filenames)
break
if not ls:
print('Error, directory not found')
sys.exit()
# Exclude files with different names
for exc in ls:
if not fnmatch.fnmatch(exc, prefix + '*'+postfix+'.dat'):
ls = list(set(ls) - set([exc]))
if len(ls) > 1:
ls.sort(key=lambda x: int(re.findall(r'\d+', x[len(prefix):])[0]))
replica = len(ls)
if 'r_start' in kwargs:
r_start = kwargs.get('r_start')
if len(r_start) != replica:
raise Exception('r_start does not match number of replicas')
# Adjust Configuration numbering to python index
r_start = [o - 1 if o else None for o in r_start]
else:
r_start = [None] * replica
if 'r_stop' in kwargs:
r_stop = kwargs.get('r_stop')
if len(r_stop) != replica:
raise Exception('r_stop does not match number of replicas')
else:
r_stop = [None] * replica
print('Read reweighting factors from', prefix[:-1], ',', replica, 'replica', end='')
print_err = 0
if 'print_err' in kwargs:
print_err = 1
print()
deltas = []
for rep in range(replica):
tmp_array = []
with open(path+ '/' + ls[rep], 'rb') as fp:
#header
t = fp.read(4) # number of reweighting factors
if rep == 0:
nrw = struct.unpack('i', t)[0]
if ver_str == '2.0':
nrw = int(nrw/2)
for k in range(nrw):
deltas.append([])
else:
if ((nrw != struct.unpack('i', t)[0] and (not ver_str == '2.0')) or (nrw != struct.unpack('i', t)[0]/2 and ver_str == '2.0')):# little weird if-clause due to the /2 operation needed.
print('Error: different number of reweighting factors for replicum', rep)
sys.exit()
for k in range(nrw):
tmp_array.append([])
# This block is necessary for openQCD1.6 and openQCD2.0 ms1 files
nfct = []
if ver_str in ['1.6','2.0']:
for i in range(nrw):
t = fp.read(4)
nfct.append(struct.unpack('i', t)[0])
#print('nfct: ', nfct) # Hasenbusch factor, 1 for rat reweighting
else:
for i in range(nrw):
nfct.append(1)
nsrc = []
for i in range(nrw):
t = fp.read(4)
nsrc.append(struct.unpack('i', t)[0])
if (ver_str == '2.0'):
if not struct.unpack('i', fp.read(4))[0] == 0:
print('Error: unexpected non-zero entry in the ms1 file header!')
#body
while True:
t = fp.read(4)
if len(t) < 4:
break
if print_err:
config_no = struct.unpack('i', t)
for i in range(nrw):
if ver_str == '2.0':
tmpd = read_array_openQCD2(fp)
tmpd = read_array_openQCD2(fp)
tmp_rw = tmpd['arr']
tmp_nfct = 1.0
for j in range(tmpd['n'][0]):
tmp_nfct *= np.mean(np.exp(-np.asarray(tmp_rw[j])))
if print_err:
print(config_no, i, j, np.mean(np.exp(-np.asarray(tmp_rw[j]))), np.std(np.exp(-np.asarray(tmp_rw[j]))))
print('Sources:', np.exp(-np.asarray(tmp_rw[j])))
print('Partial factor:', tmp_nfct)
elif ver_str == '1.6' or ver_str == '1.4':
tmp_nfct = 1.0
for j in range(nfct[i]):
t = fp.read(8 * nsrc[i])
t = fp.read(8 * nsrc[i])
tmp_rw = struct.unpack('d' * nsrc[i], t)
tmp_nfct *= np.mean(np.exp(-np.asarray(tmp_rw)))
if print_err:
print(config_no, i, j, np.mean(np.exp(-np.asarray(tmp_rw))), np.std(np.exp(-np.asarray(tmp_rw))))
print('Sources:', np.exp(-np.asarray(tmp_rw)))
print('Partial factor:', tmp_nfct)
tmp_array[i].append(tmp_nfct)
for k in range(nrw):
deltas[k].append(tmp_array[k][r_start[rep]:r_stop[rep]])
print(',', nrw, 'reweighting factors with', nsrc, 'sources')
result = []
for t in range(nrw):
if names is None:
result.append(Obs(deltas[t], [w.split(".")[0] for w in ls]))
else:
print(names)
result.append(Obs(deltas[t], names))
return result
def read_pbp(path, prefix, **kwargs):
"""Read pbp format from given folder structure. Returns a list of length nrw
Keyword arguments
-----------------
r_start -- list which contains the first config to be read for each replicum
r_stop -- list which contains the last config to be read for each replicum
"""
extract_nfct = 1
ls = []
for (dirpath, dirnames, filenames) in os.walk(path):
ls.extend(filenames)
break
if not ls:
print('Error, directory not found')
sys.exit()
# Exclude files with different names
for exc in ls:
if not fnmatch.fnmatch(exc, prefix + '*.dat'):
ls = list(set(ls) - set([exc]))
if len(ls) > 1:
ls.sort(key=lambda x: int(re.findall(r'\d+', x[len(prefix):])[0]))
replica = len(ls)
if 'r_start' in kwargs:
r_start = kwargs.get('r_start')
if len(r_start) != replica:
raise Exception('r_start does not match number of replicas')
# Adjust Configuration numbering to python index
r_start = [o - 1 if o else None for o in r_start]
else:
r_start = [None] * replica
if 'r_stop' in kwargs:
r_stop = kwargs.get('r_stop')
if len(r_stop) != replica:
raise Exception('r_stop does not match number of replicas')
else:
r_stop = [None] * replica
print(r'Read <bar{psi}\psi> from', prefix[:-1], ',', replica, 'replica', end='')
print_err = 0
if 'print_err' in kwargs:
print_err = 1
print()
deltas = []
for rep in range(replica):
tmp_array = []
with open(path+ '/' + ls[rep], 'rb') as fp:
#header
t = fp.read(4) # number of reweighting factors
if rep == 0:
nrw = struct.unpack('i', t)[0]
for k in range(nrw):
deltas.append([])
else:
if nrw != struct.unpack('i', t)[0]:
print('Error: different number of reweighting factors for replicum', rep)
sys.exit()
for k in range(nrw):
tmp_array.append([])
# This block is necessary for openQCD1.6 ms1 files
nfct = []
if extract_nfct == 1:
for i in range(nrw):
t = fp.read(4)
nfct.append(struct.unpack('i', t)[0])
print('nfct: ', nfct) # Hasenbusch factor, 1 for rat reweighting
else:
for i in range(nrw):
nfct.append(1)
nsrc = []
for i in range(nrw):
t = fp.read(4)
nsrc.append(struct.unpack('i', t)[0])
#body
while True:
t = fp.read(4)
if len(t) < 4:
break
if print_err:
config_no = struct.unpack('i', t)
for i in range(nrw):
tmp_nfct = 1.0
for j in range(nfct[i]):
t = fp.read(8 * nsrc[i])
t = fp.read(8 * nsrc[i])
tmp_rw = struct.unpack('d' * nsrc[i], t)
tmp_nfct *= np.mean(np.asarray(tmp_rw))
if print_err:
print(config_no, i, j, np.mean(np.asarray(tmp_rw)), np.std(np.asarray(tmp_rw)))
print('Sources:', np.asarray(tmp_rw))
print('Partial factor:', tmp_nfct)
tmp_array[i].append(tmp_nfct)
for k in range(nrw):
deltas[k].append(tmp_array[k][r_start[rep]:r_stop[rep]])
print(',', nrw, r'<bar{psi}\psi> with', nsrc, 'sources')
result = []
for t in range(nrw):
result.append(Obs(deltas[t], [(w.split('.'))[0] for w in ls]))
return result
def extract_t0(path, prefix, dtr_read, xmin, spatial_extent, fit_range=5, **kwargs):
"""Extract t0 from given .ms.dat files. Returns t0 as Obs.
It is assumed that all boundary effects have sufficiently decayed at x0=xmin.
The data around the zero crossing of t^2<E> - 0.3 is fitted with a linear function
from which the exact root is extracted.
Only works with openQCD v 1.2.
Parameters
----------
path -- Path to .ms.dat files
prefix -- Ensemble prefix
dtr_read -- Determines how many trajectories should be skipped when reading the ms.dat files.
Corresponds to dtr_cnfg / dtr_ms in the openQCD input file.
xmin -- First timeslice where the boundary effects have sufficiently decayed.
spatial_extent -- spatial extent of the lattice, required for normalization.
fit_range -- Number of data points left and right of the zero crossing to be included in the linear fit. (Default: 5)
Keyword arguments
-----------------
r_start -- list which contains the first config to be read for each replicum.
r_stop -- list which contains the last config to be read for each replicum.
plaquette -- If true extract the plaquette estimate of t0 instead.
"""
ls = []
for (dirpath, dirnames, filenames) in os.walk(path):
ls.extend(filenames)
break
if not ls:
print('Error, directory not found')
sys.exit()
# Exclude files with different names
for exc in ls:
if not fnmatch.fnmatch(exc, prefix + '*.ms.dat'):
ls = list(set(ls) - set([exc]))
if len(ls) > 1:
ls.sort(key=lambda x: int(re.findall(r'\d+', x[len(prefix):])[0]))
replica = len(ls)
if 'r_start' in kwargs:
r_start = kwargs.get('r_start')
if len(r_start) != replica:
raise Exception('r_start does not match number of replicas')
# Adjust Configuration numbering to python index
r_start = [o - 1 if o else None for o in r_start]
else:
r_start = [None] * replica
if 'r_stop' in kwargs:
r_stop = kwargs.get('r_stop')
if len(r_stop) != replica:
raise Exception('r_stop does not match number of replicas')
else:
r_stop = [None] * replica
print('Extract t0 from', prefix, ',', replica, 'replica')
Ysum = []
for rep in range(replica):
with open(path + '/' + ls[rep], 'rb') as fp:
# Read header
t = fp.read(12)
header = struct.unpack('iii', t)
if rep == 0:
dn = header[0]
nn = header[1]
tmax = header[2]
elif dn != header[0] or nn != header[1] or tmax != header[2]:
raise Exception('Replica parameters do not match.')
t = fp.read(8)
if rep == 0:
eps = struct.unpack('d', t)[0]
print('Step size:', eps, ', Maximal t value:', dn * (nn) * eps)
elif eps != struct.unpack('d', t)[0]:
raise Exception('Values for eps do not match among replica.')
Ysl = []
# Read body
while True:
t = fp.read(4)
if len(t) < 4:
break
nc = struct.unpack('i', t)[0]
t = fp.read(8 * tmax * (nn + 1))
if kwargs.get('plaquette'):
if nc % dtr_read == 0:
Ysl.append(struct.unpack('d' * tmax * (nn + 1), t))
t = fp.read(8 * tmax * (nn + 1))
if not kwargs.get('plaquette'):
if nc % dtr_read == 0:
Ysl.append(struct.unpack('d' * tmax * (nn + 1), t))
t = fp.read(8 * tmax * (nn + 1))
Ysum.append([])
for i, item in enumerate(Ysl):
Ysum[-1].append([np.mean(item[current + xmin:current + tmax - xmin]) for current in range(0, len(item), tmax)])
t2E_dict = {}
for n in range(nn + 1):
samples = []
for nrep, rep in enumerate(Ysum):
samples.append([])
for cnfg in rep:
samples[-1].append(cnfg[n])
samples[-1] = samples[-1][r_start[nrep]:r_stop[nrep]]
new_obs = Obs(samples, [(w.split('.'))[0] for w in ls])
t2E_dict[n * dn * eps] = (n * dn * eps) ** 2 * new_obs / (spatial_extent ** 3) - 0.3
zero_crossing = np.argmax(np.array([o.value for o in t2E_dict.values()]) > 0.0)
x = list(t2E_dict.keys())[zero_crossing - fit_range: zero_crossing + fit_range]
y = list(t2E_dict.values())[zero_crossing - fit_range: zero_crossing + fit_range]
[o.gamma_method() for o in y]
fit_result = fit_lin(x, y)
return -fit_result[0] / fit_result[1]

126
pyerrors/input/misc.py Normal file
View file

@@ -0,0 +1,126 @@
#!/usr/bin/env python
# coding: utf-8
import os
import fnmatch
import re
import struct
import numpy as np
from ..pyerrors import Obs
def read_pbp(path, prefix, **kwargs):
"""Read pbp format from given folder structure. Returns a list of length nrw
Keyword arguments
-----------------
r_start -- list which contains the first config to be read for each replicum
r_stop -- list which contains the last config to be read for each replicum
"""
extract_nfct = 1
ls = []
for (dirpath, dirnames, filenames) in os.walk(path):
ls.extend(filenames)
break
if not ls:
raise Exception('Error, directory not found')
# Exclude files with different names
for exc in ls:
if not fnmatch.fnmatch(exc, prefix + '*.dat'):
ls = list(set(ls) - set([exc]))
if len(ls) > 1:
ls.sort(key=lambda x: int(re.findall(r'\d+', x[len(prefix):])[0]))
replica = len(ls)
if 'r_start' in kwargs:
r_start = kwargs.get('r_start')
if len(r_start) != replica:
raise Exception('r_start does not match number of replicas')
# Adjust Configuration numbering to python index
r_start = [o - 1 if o else None for o in r_start]
else:
r_start = [None] * replica
if 'r_stop' in kwargs:
r_stop = kwargs.get('r_stop')
if len(r_stop) != replica:
raise Exception('r_stop does not match number of replicas')
else:
r_stop = [None] * replica
print(r'Read <bar{psi}\psi> from', prefix[:-1], ',', replica, 'replica', end='')
print_err = 0
if 'print_err' in kwargs:
print_err = 1
print()
deltas = []
for rep in range(replica):
tmp_array = []
with open(path + '/' + ls[rep], 'rb') as fp:
# header
t = fp.read(4) # number of reweighting factors
if rep == 0:
nrw = struct.unpack('i', t)[0]
for k in range(nrw):
deltas.append([])
else:
if nrw != struct.unpack('i', t)[0]:
raise Exception('Error: different number of reweighting factors for replicum', rep)
for k in range(nrw):
tmp_array.append([])
# This block is necessary for openQCD1.6 ms1 files
nfct = []
if extract_nfct == 1:
for i in range(nrw):
t = fp.read(4)
nfct.append(struct.unpack('i', t)[0])
print('nfct: ', nfct) # Hasenbusch factor, 1 for rat reweighting
else:
for i in range(nrw):
nfct.append(1)
nsrc = []
for i in range(nrw):
t = fp.read(4)
nsrc.append(struct.unpack('i', t)[0])
# body
while True:
t = fp.read(4)
if len(t) < 4:
break
if print_err:
config_no = struct.unpack('i', t)
for i in range(nrw):
tmp_nfct = 1.0
for j in range(nfct[i]):
t = fp.read(8 * nsrc[i])
t = fp.read(8 * nsrc[i])
tmp_rw = struct.unpack('d' * nsrc[i], t)
tmp_nfct *= np.mean(np.asarray(tmp_rw))
if print_err:
print(config_no, i, j, np.mean(np.asarray(tmp_rw)), np.std(np.asarray(tmp_rw)))
print('Sources:', np.asarray(tmp_rw))
print('Partial factor:', tmp_nfct)
tmp_array[i].append(tmp_nfct)
for k in range(nrw):
deltas[k].append(tmp_array[k][r_start[rep]:r_stop[rep]])
print(',', nrw, r'<bar{psi}\psi> with', nsrc, 'sources')
result = []
for t in range(nrw):
result.append(Obs(deltas[t], [(w.split('.'))[0] for w in ls]))
return result
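A minimal usage sketch, assuming this module is importable as pyerrors.input.misc; the path and prefix are placeholders:

import pyerrors as pe

pbp = pe.input.misc.read_pbp('./pbp_data', 'ensembleA')  # one Obs per factor
pbp[0].gamma_method()
print(pbp[0])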

330
pyerrors/input/openQCD.py Normal file
View file

@@ -0,0 +1,330 @@
#!/usr/bin/env python
# coding: utf-8
import os
import fnmatch
import re
import struct
import numpy as np
from ..pyerrors import Obs
from ..fits import fit_lin
def read_rwms(path, prefix, version='2.0', names=None, **kwargs):
"""Read rwms format from given folder structure. Returns a list of length nrw
Parameters
----------
version -- version of openQCD, default 2.0
Keyword arguments
-----------------
r_start -- list which contains the first config to be read for each replicum
r_stop -- list which contains the last config to be read for each replicum
postfix -- postfix of the file to read, e.g. '.ms1' for openQCD-files
"""
known_oqcd_versions = ['1.4', '1.6', '2.0']
if not (version in known_oqcd_versions):
raise Exception('Unknown openQCD version defined!')
print("Working with openQCD version " + version)
if 'postfix' in kwargs:
postfix = kwargs.get('postfix')
else:
postfix = ''
ls = []
for (dirpath, dirnames, filenames) in os.walk(path):
ls.extend(filenames)
break
if not ls:
raise Exception('Error, directory not found')
# Exclude files with different names
for exc in ls:
if not fnmatch.fnmatch(exc, prefix + '*' + postfix + '.dat'):
ls = list(set(ls) - set([exc]))
if len(ls) > 1:
ls.sort(key=lambda x: int(re.findall(r'\d+', x[len(prefix):])[0]))
replica = len(ls)
if 'r_start' in kwargs:
r_start = kwargs.get('r_start')
if len(r_start) != replica:
raise Exception('r_start does not match number of replicas')
# Adjust Configuration numbering to python index
r_start = [o - 1 if o else None for o in r_start]
else:
r_start = [None] * replica
if 'r_stop' in kwargs:
r_stop = kwargs.get('r_stop')
if len(r_stop) != replica:
raise Exception('r_stop does not match number of replicas')
else:
r_stop = [None] * replica
print('Read reweighting factors from', prefix[:-1], ',', replica, 'replica', end='')
print_err = 0
if 'print_err' in kwargs:
print_err = 1
print()
deltas = []
for rep in range(replica):
tmp_array = []
with open(path + '/' + ls[rep], 'rb') as fp:
# header
t = fp.read(4) # number of reweighting factors
if rep == 0:
nrw = struct.unpack('i', t)[0]
if version == '2.0':
nrw = int(nrw / 2)
for k in range(nrw):
deltas.append([])
else:
if ((nrw != struct.unpack('i', t)[0] and (not version == '2.0')) or (nrw != struct.unpack('i', t)[0] / 2 and version == '2.0')): # little weird if-clause due to the /2 operation needed.
raise Exception('Error: different number of reweighting factors for replicum', rep)
for k in range(nrw):
tmp_array.append([])
# This block is necessary for openQCD1.6 and openQCD2.0 ms1 files
nfct = []
if version in ['1.6', '2.0']:
for i in range(nrw):
t = fp.read(4)
nfct.append(struct.unpack('i', t)[0])
# print('nfct: ', nfct) # Hasenbusch factor, 1 for rat reweighting
else:
for i in range(nrw):
nfct.append(1)
nsrc = []
for i in range(nrw):
t = fp.read(4)
nsrc.append(struct.unpack('i', t)[0])
if version == '2.0':
if not struct.unpack('i', fp.read(4))[0] == 0:
print('Error: unexpected non-zero entry in the ms1 file header!')
# body
while True:
t = fp.read(4)
if len(t) < 4:
break
if print_err:
config_no = struct.unpack('i', t)
for i in range(nrw):
if version == '2.0':
tmpd = _read_array_openQCD2(fp)
tmpd = _read_array_openQCD2(fp)
tmp_rw = tmpd['arr']
tmp_nfct = 1.0
for j in range(tmpd['n'][0]):
tmp_nfct *= np.mean(np.exp(-np.asarray(tmp_rw[j])))
if print_err:
print(config_no, i, j, np.mean(np.exp(-np.asarray(tmp_rw[j]))), np.std(np.exp(-np.asarray(tmp_rw[j]))))
print('Sources:', np.exp(-np.asarray(tmp_rw[j])))
print('Partial factor:', tmp_nfct)
elif version == '1.6' or version == '1.4':
tmp_nfct = 1.0
for j in range(nfct[i]):
t = fp.read(8 * nsrc[i])
t = fp.read(8 * nsrc[i])
tmp_rw = struct.unpack('d' * nsrc[i], t)
tmp_nfct *= np.mean(np.exp(-np.asarray(tmp_rw)))
if print_err:
print(config_no, i, j, np.mean(np.exp(-np.asarray(tmp_rw))), np.std(np.exp(-np.asarray(tmp_rw))))
print('Sources:', np.exp(-np.asarray(tmp_rw)))
print('Partial factor:', tmp_nfct)
tmp_array[i].append(tmp_nfct)
for k in range(nrw):
deltas[k].append(tmp_array[k][r_start[rep]:r_stop[rep]])
print(',', nrw, 'reweighting factors with', nsrc, 'sources')
result = []
for t in range(nrw):
if names is None:
result.append(Obs(deltas[t], [w.split(".")[0] for w in ls]))
else:
print(names)
result.append(Obs(deltas[t], names))
return result
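A minimal usage sketch, assuming this module is importable as pyerrors.input.openQCD; the path, prefix and replica ranges are placeholders for a run with two replica:

import pyerrors as pe

# Read openQCD-2.0 reweighting factors, skipping the first ten configurations of each replicum
rw = pe.input.openQCD.read_rwms('./ms1_data', 'ensembleA', version='2.0', r_start=[11, 11], r_stop=[None, None])
rw[0].gamma_method()
print(rw[0])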
def extract_t0(path, prefix, dtr_read, xmin, spatial_extent, fit_range=5, **kwargs):
"""Extract t0 from given .ms.dat files. Returns t0 as Obs.
It is assumed that all boundary effects have sufficiently decayed at x0=xmin.
The data around the zero crossing of t^2<E> - 0.3 is fitted with a linear function
from which the exact root is extracted.
Only works with openQCD v 1.2.
Parameters
----------
path -- Path to .ms.dat files
prefix -- Ensemble prefix
dtr_read -- Determines how many trajectories should be skipped when reading the ms.dat files.
Corresponds to dtr_cnfg / dtr_ms in the openQCD input file.
xmin -- First timeslice where the boundary effects have sufficiently decayed.
spatial_extent -- spatial extent of the lattice, required for normalization.
fit_range -- Number of data points left and right of the zero crossing to be included in the linear fit. (Default: 5)
Keyword arguments
-----------------
r_start -- list which contains the first config to be read for each replicum.
r_stop -- list which contains the last config to be read for each replicum.
plaquette -- If true extract the plaquette estimate of t0 instead.
"""
ls = []
for (dirpath, dirnames, filenames) in os.walk(path):
ls.extend(filenames)
break
if not ls:
raise Exception('Error, directory not found')
# Exclude files with different names
for exc in ls:
if not fnmatch.fnmatch(exc, prefix + '*.ms.dat'):
ls = list(set(ls) - set([exc]))
if len(ls) > 1:
ls.sort(key=lambda x: int(re.findall(r'\d+', x[len(prefix):])[0]))
replica = len(ls)
if 'r_start' in kwargs:
r_start = kwargs.get('r_start')
if len(r_start) != replica:
raise Exception('r_start does not match number of replicas')
# Adjust Configuration numbering to python index
r_start = [o - 1 if o else None for o in r_start]
else:
r_start = [None] * replica
if 'r_stop' in kwargs:
r_stop = kwargs.get('r_stop')
if len(r_stop) != replica:
raise Exception('r_stop does not match number of replicas')
else:
r_stop = [None] * replica
print('Extract t0 from', prefix, ',', replica, 'replica')
Ysum = []
for rep in range(replica):
with open(path + '/' + ls[rep], 'rb') as fp:
# Read header
t = fp.read(12)
header = struct.unpack('iii', t)
if rep == 0:
dn = header[0]
nn = header[1]
tmax = header[2]
elif dn != header[0] or nn != header[1] or tmax != header[2]:
raise Exception('Replica parameters do not match.')
t = fp.read(8)
if rep == 0:
eps = struct.unpack('d', t)[0]
print('Step size:', eps, ', Maximal t value:', dn * (nn) * eps)
elif eps != struct.unpack('d', t)[0]:
raise Exception('Values for eps do not match among replica.')
Ysl = []
# Read body
while True:
t = fp.read(4)
if len(t) < 4:
break
nc = struct.unpack('i', t)[0]
t = fp.read(8 * tmax * (nn + 1))
if kwargs.get('plaquette'):
if nc % dtr_read == 0:
Ysl.append(struct.unpack('d' * tmax * (nn + 1), t))
t = fp.read(8 * tmax * (nn + 1))
if not kwargs.get('plaquette'):
if nc % dtr_read == 0:
Ysl.append(struct.unpack('d' * tmax * (nn + 1), t))
t = fp.read(8 * tmax * (nn + 1))
Ysum.append([])
for i, item in enumerate(Ysl):
Ysum[-1].append([np.mean(item[current + xmin:current + tmax - xmin]) for current in range(0, len(item), tmax)])
t2E_dict = {}
for n in range(nn + 1):
samples = []
for nrep, rep in enumerate(Ysum):
samples.append([])
for cnfg in rep:
samples[-1].append(cnfg[n])
samples[-1] = samples[-1][r_start[nrep]:r_stop[nrep]]
new_obs = Obs(samples, [(w.split('.'))[0] for w in ls])
t2E_dict[n * dn * eps] = (n * dn * eps) ** 2 * new_obs / (spatial_extent ** 3) - 0.3
zero_crossing = np.argmax(np.array([o.value for o in t2E_dict.values()]) > 0.0)
x = list(t2E_dict.keys())[zero_crossing - fit_range: zero_crossing + fit_range]
y = list(t2E_dict.values())[zero_crossing - fit_range: zero_crossing + fit_range]
[o.gamma_method() for o in y]
fit_result = fit_lin(x, y)
return -fit_result[0] / fit_result[1]
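The final line is just the root of the fitted line: with t^2<E> - 0.3 ~ a + b*t near the crossing, t0 solves a + b*t0 = 0, i.e. t0 = -a/b. A plain-numpy toy check of that formula with synthetic points (no flow data involved):

import numpy as np

t = np.array([1.8, 1.9, 2.0, 2.1, 2.2])
y = 0.05 * (t - 2.03)  # synthetic t^2<E> - 0.3, crossing zero at t = 2.03
b, a = np.polyfit(t, y, 1)  # slope, intercept
print(-a / b)  # ~2.03, the same -fit_result[0] / fit_result[1] as above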
def _parse_array_openQCD2(d, n, size, wa, quadrupel=False):
arr = []
if d == 2:
tot = 0
for i in range(n[d - 1] - 1):
if quadrupel:
tmp = wa[tot:n[d - 1]]
tmp2 = []
for i in range(len(tmp)):
if i % 2 == 0:
tmp2.append(tmp[i])
arr.append(tmp2)
else:
arr.append(np.asarray(wa[tot:n[d - 1]]))
return arr
# mimic the read_array routine of openQCD-2.0.
# fp is the opened file handle
# returns the dict array
# at this point we only parse a 2d array
# d = 2
# n = [nfct[irw], 2*nsrc[irw]]
def _read_array_openQCD2(fp):
t = fp.read(4)
d = struct.unpack('i', t)[0]
t = fp.read(4 * d)
n = struct.unpack('%di' % (d), t)
t = fp.read(4)
size = struct.unpack('i', t)[0]
if size == 4:
types = 'i'
elif size == 8:
types = 'd'
elif size == 16:
types = 'dd'
else:
raise Exception('Type not known!')
m = n[0]
for i in range(1, d):
m *= n[i]
t = fp.read(m * size)
tmp = struct.unpack('%d%s' % (m, types), t)
arr = _parse_array_openQCD2(d, n, size, tmp, quadrupel=True)
return {'d': d, 'n': n, 'size': size, 'arr': arr}

276
pyerrors/input/sfcf.py Normal file
View file

@@ -0,0 +1,276 @@
#!/usr/bin/env python
# coding: utf-8
import os
import fnmatch
import re
import numpy as np
from ..pyerrors import Obs
def read_sfcf(path, prefix, name, **kwargs):
"""Read sfcf C format from given folder structure.
Keyword arguments
-----------------
im -- if True, read imaginary instead of real part of the correlation function.
single -- if True, read a boundary-to-boundary correlation function with a single value
b2b -- if True, read a time-dependent boundary-to-boundary correlation function
names -- Alternative labeling for replicas/ensembles. Has to have the appropriate length
"""
if kwargs.get('im'):
im = 1
part = 'imaginary'
else:
im = 0
part = 'real'
if kwargs.get('single'):
b2b = 1
single = 1
else:
b2b = 0
single = 0
if kwargs.get('b2b'):
b2b = 1
read = 0
T = 0
start = 0
ls = []
for (dirpath, dirnames, filenames) in os.walk(path):
ls.extend(dirnames)
break
if not ls:
raise Exception('Error, directory not found')
# Exclude folders with different names
for exc in ls:
if not fnmatch.fnmatch(exc, prefix + '*'):
ls = list(set(ls) - set([exc]))
if len(ls) > 1:
ls.sort(key=lambda x: int(re.findall(r'\d+', x[len(prefix):])[0]))
replica = len(ls)
print('Read', part, 'part of', name, 'from', prefix, ',', replica, 'replica')
if 'names' in kwargs:
new_names = kwargs.get('names')
if len(new_names) != replica:
raise Exception('Names does not have the required length', replica)
else:
new_names = ls
print(replica, 'replica')
for i, item in enumerate(ls):
print(item)
sub_ls = []
for (dirpath, dirnames, filenames) in os.walk(path + '/' + item):
sub_ls.extend(dirnames)
break
# Keep only the cfg* subdirectories
for exc in sub_ls:
if not fnmatch.fnmatch(exc, 'cfg*'):
sub_ls = list(set(sub_ls) - set([exc]))
sub_ls.sort(key=lambda x: int(x[3:]))
no_cfg = len(sub_ls)
print(no_cfg, 'configurations')
if i == 0:
with open(path + '/' + item + '/' + sub_ls[0] + '/' + name) as fp:
for k, line in enumerate(fp):
if read == 1 and not line.strip() and k > start + 1:
break
if read == 1 and k >= start:
T += 1
if '[correlator]' in line:
read = 1
start = k + 7 + b2b
T -= b2b
deltas = []
for j in range(T):
deltas.append([])
sublength = len(sub_ls)
for j in range(T):
deltas[j].append(np.zeros(sublength))
for cnfg, subitem in enumerate(sub_ls):
with open(path + '/' + item + '/' + subitem + '/' + name) as fp:
for k, line in enumerate(fp):
if k >= start and k < start + T:
floats = list(map(float, line.split()))
deltas[k - start][i][cnfg] = floats[1 + im - single]
result = []
for t in range(T):
result.append(Obs(deltas[t], new_names))
return result
def read_sfcf_c(path, prefix, name, quarks='.*', noffset=0, wf=0, wf2=0, **kwargs):
"""Read sfcf c format from given folder structure.
Parameters
----------
quarks -- Label of the quarks used in the sfcf input file
noffset -- Offset of the source (only relevant when wavefunctions are used)
wf -- ID of wave function
wf2 -- ID of the second wavefunction (only relevant for boundary-to-boundary correlation functions)
Keyword arguments
-----------------
im -- if True, read imaginary instead of real part of the correlation function.
b2b -- if True, read a time-dependent boundary-to-boundary correlation function
names -- Alternative labeling for replicas/ensembles. Has to have the appropriate length
"""
if kwargs.get('im'):
im = 1
part = 'imaginary'
else:
im = 0
part = 'real'
if kwargs.get('b2b'):
b2b = 1
else:
b2b = 0
T = 0
ls = []
for (dirpath, dirnames, filenames) in os.walk(path):
ls.extend(dirnames)
break
if not ls:
raise Exception('Error, directory not found')
# Exclude folders with different names
for exc in ls:
if not fnmatch.fnmatch(exc, prefix + '*'):
ls = list(set(ls) - set([exc]))
if len(ls) > 1:
ls.sort(key=lambda x: int(re.findall(r'\d+', x[len(prefix):])[0])) # New version, to cope with ids, etc.
replica = len(ls)
if 'names' in kwargs:
new_names = kwargs.get('names')
if len(new_names) != replica:
raise Exception('Names does not have the required length', replica)
else:
new_names = ls
print('Read', part, 'part of', name, 'from', prefix[:-1], ',', replica, 'replica')
for i, item in enumerate(ls):
sub_ls = []
for (dirpath, dirnames, filenames) in os.walk(path + '/' + item):
sub_ls.extend(filenames)
break
for exc in sub_ls:
if not fnmatch.fnmatch(exc, prefix + '*'):
sub_ls = list(set(sub_ls) - set([exc]))
sub_ls.sort(key=lambda x: int(re.findall(r'\d+', x)[-1]))
first_cfg = int(re.findall(r'\d+', sub_ls[0])[-1])
last_cfg = len(sub_ls) + first_cfg - 1
for cfg in range(1, len(sub_ls)):
if int(re.findall(r'\d+', sub_ls[cfg])[-1]) != first_cfg + cfg:
last_cfg = cfg + first_cfg - 1
break
no_cfg = last_cfg - first_cfg + 1
print(item, ':', no_cfg, 'evenly spaced configurations (', first_cfg, '-', last_cfg, ') ,', len(sub_ls) - no_cfg, 'configs omitted\n')
if i == 0:
pattern = 'name ' + name + '\nquarks ' + quarks + '\noffset ' + str(noffset) + '\nwf ' + str(wf)
if b2b:
pattern += '\nwf_2 ' + str(wf2)
with open(path + '/' + item + '/' + sub_ls[0], 'r') as file:
content = file.read()
match = re.search(pattern, content)
if match:
start_read = content.count('\n', 0, match.start()) + 5 + b2b
end_match = re.search(r'\n\s*\n', content[match.start():])
T = content[match.start():].count('\n', 0, end_match.start()) - 4 - b2b
assert T > 0
print(T, 'entries, starting to read in line', start_read)
else:
raise Exception('Correlator with pattern\n' + pattern + '\nnot found.')
deltas = []
for j in range(T):
deltas.append([])
sublength = no_cfg
for j in range(T):
deltas[j].append(np.zeros(sublength))
for cfg in range(no_cfg):
with open(path + '/' + item + '/' + sub_ls[cfg]) as fp:
for k, line in enumerate(fp):
if k == start_read - 5 - b2b:
if line.strip() != 'name ' + name:
raise Exception('Wrong format', sub_ls[cfg])
if k >= start_read and k < start_read + T:
floats = list(map(float, line.split()))
deltas[k - start_read][i][cfg] = floats[-2:][im]
result = []
for t in range(T):
result.append(Obs(deltas[t], new_names))
return result
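A minimal usage sketch, assuming this module is importable as pyerrors.input.sfcf; the path, prefix, correlator name and quark label are placeholders:

import pyerrors as pe

f_A = pe.input.sfcf.read_sfcf_c('./data', 'ensembleA_r', 'f_A', quarks='lquark lquark', noffset=0, wf=0)
f_A[3].gamma_method()  # one Obs per timeslice
print(f_A[3])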
def read_qtop(path, prefix, **kwargs):
"""Read qtop format from given folder structure.
Keyword arguments
-----------------
target -- specifies the topological sector to be reweighted to (default 0)
full -- if True, read the charge instead of the reweighting factor.
"""
if 'target' in kwargs:
target = kwargs.get('target')
else:
target = 0
if kwargs.get('full'):
full = 1
else:
full = 0
ls = []
for (dirpath, dirnames, filenames) in os.walk(path):
ls.extend(filenames)
break
if not ls:
raise Exception('Error, directory not found')
# Exclude files with different names
for exc in ls:
if not fnmatch.fnmatch(exc, prefix + '*'):
ls = list(set(ls) - set([exc]))
if len(ls) > 1:
ls.sort(key=lambda x: int(re.findall(r'\d+', x[len(prefix):])[0])) # New version, to cope with ids, etc.
replica = len(ls)
print('Read Q_top from', prefix[:-1], ',', replica, 'replica')
deltas = []
for rep in range(replica):
tmp = []
with open(path + '/' + ls[rep]) as fp:
for k, line in enumerate(fp):
floats = list(map(float, line.split()))
if full == 1:
tmp.append(floats[1])
else:
if int(floats[1]) == target:
tmp.append(1.0)
else:
tmp.append(0.0)
deltas.append(np.array(tmp))
result = Obs(deltas, [(w.split('.'))[0] for w in ls])
return result

View file

@@ -3,7 +3,7 @@
import numpy as np
import autograd.numpy as anp # Thinly-wrapped numpy
from .pyerrors import derived_observable
from .pyerrors import derived_observable, CObs
# This code block is directly taken from the current master branch of autograd and remains
@@ -12,11 +12,14 @@ from functools import partial
from autograd.extend import defvjp
_dot = partial(anp.einsum, '...ij,...jk->...ik')
# batched diag
_diag = lambda a: anp.eye(a.shape[-1]) * a
def _diag(a):
return anp.eye(a.shape[-1]) * a
# batched diagonal, similar to matrix_diag in tensorflow
def _matrix_diag(a):
reps = anp.array(a.shape)
reps[:-1] = 1
@@ -81,9 +84,30 @@ def scalar_mat_op(op, obs, **kwargs):
def mat_mat_op(op, obs, **kwargs):
"""Computes the matrix to matrix operation op to a given matrix of Obs."""
if kwargs.get('num_grad') is True:
return _num_diff_mat_mat_op(op, obs, **kwargs)
return derived_observable(lambda x, **kwargs: op(x), obs)
# Use real representation to calculate matrix operations for complex matrices
if isinstance(obs.ravel()[0], CObs):
A = np.empty_like(obs)
B = np.empty_like(obs)
for (n, m), entry in np.ndenumerate(obs):
if hasattr(entry, 'real') and hasattr(entry, 'imag'):
A[n, m] = entry.real
B[n, m] = entry.imag
else:
A[n, m] = entry
B[n, m] = 0.0
big_matrix = np.block([[A, -B], [B, A]])
if kwargs.get('num_grad') is True:
op_big_matrix = _num_diff_mat_mat_op(op, big_matrix, **kwargs)
else:
op_big_matrix = derived_observable(lambda x, **kwargs: op(x), big_matrix)
dim = op_big_matrix.shape[0]
op_A = op_big_matrix[0: dim // 2, 0: dim // 2]
op_B = op_big_matrix[dim // 2:, 0: dim // 2]
return (1 + 0j) * op_A + 1j * op_B
else:
if kwargs.get('num_grad') is True:
return _num_diff_mat_mat_op(op, obs, **kwargs)
return derived_observable(lambda x, **kwargs: op(x), obs)
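The 2N x 2N real representation used above relies on the identity that, for invertible A + iB, the blocks of [[A, -B], [B, A]]^-1 are the real and imaginary parts of (A + iB)^-1. A quick plain-numpy check of this identity with random matrices (no Obs involved):

import numpy as np

rng = np.random.default_rng(0)
A, B = rng.normal(size=(2, 4, 4))  # two random real 4x4 matrices
inv_big = np.linalg.inv(np.block([[A, -B], [B, A]]))
op_A, op_B = inv_big[:4, :4], inv_big[4:, :4]
assert np.allclose(op_A + 1j * op_B, np.linalg.inv(A + 1j * B))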
def eigh(obs, **kwargs):

View file

@@ -56,6 +56,7 @@ def ks_test(obs=None):
else:
obs_list = obs
# TODO: Rework to apply to Q-values of all fits in memory
Qs = []
for obs_i in obs_list:
for ens in obs_i.e_names:
@@ -71,7 +72,7 @@ def ks_test(obs=None):
plt.ylabel('Cumulative probability')
plt.title(str(bins) + ' Q values')
n = np.arange(1, bins + 1) / np.float(bins)
n = np.arange(1, bins + 1) / np.float64(bins)
Xs = np.sort(Qs)
plt.step(Xs, n)
diffs = n - Xs

View file

@@ -1,13 +1,13 @@
#!/usr/bin/env python
# coding: utf-8
import warnings
import pickle
import numpy as np
import autograd.numpy as anp # Thinly-wrapped numpy
from autograd import jacobian
import matplotlib.pyplot as plt
import numdifftools as nd
import scipy.special
class Obs:
@@ -34,6 +34,11 @@ class Obs:
ensemble.
N_sigma_global -- Standard value for N_sigma (default 1.0)
"""
__slots__ = ['names', 'shape', 'r_values', 'deltas', 'N', 'value', 'dvalue',
'ddvalue', 'reweighted', 'S', 'tau_exp', 'N_sigma', 'e_names',
'e_content', 'e_dvalue', 'e_ddvalue', 'e_tauint', 'e_dtauint',
'e_windowsize', 'e_rho', 'e_drho', 'e_n_tauint', 'e_n_dtauint',
'tag']
e_tag_global = 0
S_global = 2.0
@@ -85,12 +90,13 @@ class Obs:
self.e_tauint = {}
self.e_dtauint = {}
self.e_windowsize = {}
self.e_Q = {}
self.e_rho = {}
self.e_drho = {}
self.e_n_tauint = {}
self.e_n_dtauint = {}
self.tag = None
def gamma_method(self, **kwargs):
"""Calculate the error and related properties of the Obs.
@@ -297,19 +303,6 @@ class Obs:
self.e_windowsize[e_name] = n
break
if len(self.e_content[e_name]) > 1 and self.e_dvalue[e_name] > np.finfo(np.float).eps:
e_mean = 0
for r_name in self.e_content[e_name]:
e_mean += self.shape[r_name] * self.r_values[r_name]
e_mean /= e_N
xi2 = 0
for r_name in self.e_content[e_name]:
xi2 += self.shape[r_name] * (self.r_values[r_name] - e_mean) ** 2
xi2 /= self.e_dvalue[e_name] ** 2 * e_N
self.e_Q[e_name] = 1 - scipy.special.gammainc((len(self.e_content[e_name]) - 1.0) / 2.0, xi2 / 2.0)
else:
self.e_Q[e_name] = None
self.dvalue += self.e_dvalue[e_name] ** 2
self.ddvalue += (self.e_dvalue[e_name] * self.e_ddvalue[e_name]) ** 2
@@ -340,6 +333,12 @@ class Obs:
for e_name in self.e_names:
print(e_name, ':', self.e_content[e_name])
def is_zero_within_error(self):
return np.abs(self.value) <= self.dvalue
def is_zero(self):
return np.isclose(0.0, self.value) and all(np.allclose(0.0, delta) for delta in self.deltas.values())
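# A minimal sketch of the two new checks, assuming an Obs o built via
# pe.pseudo_Obs(0.01, 0.1, 'ens') with o.gamma_method() applied:
# o.is_zero_within_error()  # True, since |0.01| <= o.dvalue
# (o - o).is_zero()  # True, value and all deltas vanish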
def plot_tauint(self, save=None):
"""Plot integrated autocorrelation time for each ensemble."""
if not self.e_names:
@@ -414,8 +413,8 @@ class Obs:
for r, r_name in enumerate(self.e_content[e_name]):
arr[r] = (self.r_values[r_name] - sub_r_mean) / (self.e_dvalue[e_name] * np.sqrt(e_N / self.shape[r_name] - 1))
plt.hist(arr, rwidth=0.8, bins=len(self.e_content[e_name]))
plt.title('Replica distribution' + e_name + ' (mean=0, var=1), Q=' + str(np.around(self.e_Q[e_name], decimals=2)))
plt.show()
plt.title('Replica distribution ' + e_name + ' (mean=0, var=1)')
plt.draw()
def plot_history(self):
"""Plot derived Monte Carlo history for each ensemble."""
@@ -436,7 +435,7 @@ class Obs:
plt.errorbar(x, y, fmt='.', markersize=3)
plt.xlim(-0.5, e_N - 0.5)
plt.title(e_name)
plt.show()
plt.draw()
def plot_piechart(self):
"""Plot piechart which shows the fractional contribution of each
@@ -450,7 +449,7 @@ class Obs:
fig1, ax1 = plt.subplots()
ax1.pie(sizes, labels=labels, startangle=90, normalize=True)
ax1.axis('equal')
plt.show()
plt.draw()
return dict(zip(self.e_names, sizes))
@@ -468,24 +467,42 @@ class Obs:
with open(file_name, 'wb') as fb:
pickle.dump(self, fb)
def __float__(self):
return float(self.value)
def __repr__(self):
return 'Obs[' + str(self) + ']'
def __str__(self):
if self.dvalue == 0.0:
return 'Obs[' + str(self.value) + ']'
return str(self.value)
fexp = np.floor(np.log10(self.dvalue))
if fexp < 0.0:
return 'Obs[{:{form}}({:2.0f})]'.format(self.value, self.dvalue * 10 ** (-fexp + 1), form='.' + str(-int(fexp) + 1) + 'f')
return '{:{form}}({:2.0f})'.format(self.value, self.dvalue * 10 ** (-fexp + 1), form='.' + str(-int(fexp) + 1) + 'f')
elif fexp == 0.0:
return 'Obs[{:.1f}({:1.1f})]'.format(self.value, self.dvalue)
return '{:.1f}({:1.1f})'.format(self.value, self.dvalue)
else:
return 'Obs[{:.0f}({:2.0f})]'.format(self.value, self.dvalue)
return '{:.0f}({:2.0f})'.format(self.value, self.dvalue)
# Overload comparisons
def __lt__(self, other):
return self.value < other
def __le__(self, other):
return self.value <= other
def __gt__(self, other):
return self.value > other
def __ge__(self, other):
return self.value >= other
def __eq__(self, other):
return (self - other).is_zero()
def __ne__(self, other):
return not (self - other).is_zero()
# Overload math operations
def __add__(self, y):
if isinstance(y, Obs):
@@ -507,9 +524,10 @@ class Obs:
else:
if isinstance(y, np.ndarray):
return np.array([self * o for o in y])
elif isinstance(y, complex):
return CObs(self * y.real, self * y.imag)
elif y.__class__.__name__ == 'Corr':
return NotImplemented
else:
return derived_observable(lambda x, **kwargs: x[0] * y, [self], man_grad=[y])
@@ -622,6 +640,86 @@ class Obs:
return derived_observable(lambda x: anp.sinc(x[0]), [self])
class CObs:
"""Class for a complex valued observable."""
__slots__ = ['real', 'imag', 'tag']
def __init__(self, real, imag=0.0):
self.real = real
self.imag = imag
self.tag = None
def gamma_method(self, **kwargs):
if isinstance(self.real, Obs):
self.real.gamma_method(**kwargs)
if isinstance(self.imag, Obs):
self.imag.gamma_method(**kwargs)
def is_zero(self):
return self.real == 0.0 and self.imag == 0.0
def conjugate(self):
return CObs(self.real, -self.imag)
def __add__(self, other):
if hasattr(other, 'real') and hasattr(other, 'imag'):
return CObs(self.real + other.real,
self.imag + other.imag)
else:
return CObs(self.real + other, self.imag)
def __radd__(self, y):
return self + y
def __sub__(self, other):
if hasattr(other, 'real') and hasattr(other, 'imag'):
return CObs(self.real - other.real, self.imag - other.imag)
else:
return CObs(self.real - other, self.imag)
def __rsub__(self, other):
return -1 * (self - other)
def __mul__(self, other):
if all(isinstance(i, Obs) for i in [self.real, self.imag, other.real, other.imag]):
return CObs(derived_observable(lambda x, **kwargs: x[0] * x[1] - x[2] * x[3],
[self.real, other.real, self.imag, other.imag],
man_grad=[other.real.value, self.real.value, -other.imag.value, -self.imag.value]),
derived_observable(lambda x, **kwargs: x[2] * x[1] + x[0] * x[3],
[self.real, other.real, self.imag, other.imag],
man_grad=[other.imag.value, self.imag.value, other.real.value, self.real.value]))
elif hasattr(other, 'real') and hasattr(other, 'imag'):
return CObs(self.real * other.real - self.imag * other.imag,
self.imag * other.real + self.real * other.imag)
else:
return CObs(self.real * other, self.imag * other)
def __rmul__(self, other):
return self * other
def __truediv__(self, other):
if hasattr(other, 'real') and hasattr(other, 'imag'):
r = other.real ** 2 + other.imag ** 2
return CObs((self.real * other.real + self.imag * other.imag) / r, (self.imag * other.real - self.real * other.imag) / r)
else:
return CObs(self.real / other, self.imag / other)
def __abs__(self):
return np.sqrt(self.real**2 + self.imag**2)
def __neg__(self):
return -1 * self
def __eq__(self, other):
return self.real == other.real and self.imag == other.imag
def __str__(self):
return '(' + str(self.real) + int(self.imag >= 0.0) * '+' + str(self.imag) + 'j)'
def __repr__(self):
return 'CObs[' + str(self) + ']'
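A minimal usage sketch of CObs, assuming pyerrors is imported as pe and using pseudo observables as stand-ins for real data:

import pyerrors as pe

z1 = pe.CObs(pe.pseudo_Obs(1.0, 0.1, 'ens'), pe.pseudo_Obs(0.5, 0.05, 'ens'))
z2 = pe.CObs(pe.pseudo_Obs(2.0, 0.1, 'ens'), pe.pseudo_Obs(-0.3, 0.05, 'ens'))
prod = z1 * z2  # complex product with error propagation via manual gradients
prod.gamma_method()
print(prod)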
def derived_observable(func, data, **kwargs):
"""Construct a derived Obs according to func(data, **kwargs) using automatic differentiation.
@@ -641,9 +739,6 @@ def derived_observable(func, data, **kwargs):
man_grad -- manually supply a list or an array which contains the jacobian
of func. Use cautiously, supplying the wrong derivative will
not be intercepted.
bias_correction -- if True, the bias correction specified in
hep-lat/0306017 eq. (19) is performed, not recommended.
(Only applicable for more than 1 replicum)
Notes
-----
@@ -656,9 +751,19 @@ def derived_observable(func, data, **kwargs):
data = np.asarray(data)
raveled_data = data.ravel()
# Workaround for matrix operations containing non Obs data
for i_data in raveled_data:
if isinstance(i_data, Obs):
first_name = i_data.names[0]
first_shape = i_data.shape[first_name]
break
for i in range(len(raveled_data)):
if isinstance(raveled_data[i], (int, float)):
raveled_data[i] = Obs([raveled_data[i] + np.zeros(first_shape)], [first_name])
n_obs = len(raveled_data)
new_names = sorted(set([y for x in [o.names for o in raveled_data] for y in x]))
replicas = len(new_names)
new_shape = {}
for i_data in raveled_data:
@@ -670,8 +775,10 @@ def derived_observable(func, data, **kwargs):
else:
if new_shape[name] != tmp:
raise Exception('Shapes of ensemble', name, 'do not match.')
values = np.vectorize(lambda x: x.value)(data)
if data.ndim == 1:
values = np.array([o.value for o in data])
else:
values = np.vectorize(lambda x: x.value)(data)
new_values = func(values, **kwargs)
@@ -733,12 +840,7 @@ def derived_observable(func, data, **kwargs):
new_samples.append(new_deltas[name] + new_r_values[name][i_val])
final_result[i_val] = Obs(new_samples, new_names)
# Bias correction
if replicas > 1 and kwargs.get('bias_correction'):
final_result[i_val].value = (replicas * new_val - final_result[i_val].value) / (replicas - 1)
else:
final_result[i_val].value = new_val
final_result[i_val].value = new_val
if multi == 0:
final_result = final_result.item()
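Since derived_observable is the workhorse behind all overloaded operations, a minimal direct usage sketch (pseudo observables as stand-ins, autograd.numpy for the function body):

import pyerrors as pe
import autograd.numpy as anp

a = pe.pseudo_Obs(2.0, 0.1, 'ens')
b = pe.pseudo_Obs(1.0, 0.05, 'ens')
ratio = pe.derived_observable(lambda x, **kwargs: anp.log(x[0] / x[1]), [a, b])
ratio.gamma_method()
print(ratio)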
@@ -781,9 +883,9 @@ def correlate(obs_a, obs_b):
raise Exception('Shapes of ensemble', name, 'do not fit')
if obs_a.reweighted == 1:
print('Warning: The first observable is already reweighted.')
warnings.warn("The first observable is already reweighted.", RuntimeWarning)
if obs_b.reweighted == 1:
print('Warning: The second observable is already reweighted.')
warnings.warn("The second observable is already reweighted.", RuntimeWarning)
new_samples = []
for name in sorted(obs_a.names):
@@ -1065,122 +1167,6 @@ def load_object(path):
return pickle.load(file)
def plot_corrs(observables, **kwargs):
"""Plot lists of Obs.
Parameters
----------
observables -- list of lists of Obs, where the nth entry is considered to be the
correlation function
at x0=n e.g. [[f_A_0,f_A_1],[f_P_0,f_P_1]] or [f_A,f_P], where f_A and f_P are lists of Obs.
Keyword arguments
-----------------
xrange -- list of two values, determining the range of the x-axis e.g. [4, 8]
yrange -- list of two values, determining the range of the y-axis e.g. [0.2, 1.1]
prange -- list of two values, visualizing the width of the plateau e.g. [10, 15]
reference -- float valued variable which is shown as horizontal line for reference
plateau -- Obs which is shown as horizontal line with errorbar for reference
shift -- shift x by given value
label -- list of labels, has to have the same length as observables
exp -- plot exponential from fitting routine
"""
if 'shift' in kwargs:
shift = kwargs.get('shift')
else:
shift = 0
if 'label' in kwargs:
label = kwargs.get('label')
if len(label) != len(observables):
raise Exception('label has to be a list with exactly one entry per entry of observables.')
else:
label = []
for j in range(len(observables)):
label.append(str(j + 1))
f = plt.figure()
for j in range(len(observables)):
T = len(observables[j])
x = np.arange(T) + shift
y = np.zeros(T)
y_err = np.zeros(T)
for i in range(T):
y[i] = observables[j][i].value
y_err[i] = observables[j][i].dvalue
plt.errorbar(x, y, yerr=y_err, ls='none', fmt='o', capsize=3,
markersize=5, lw=1, label=label[j])
if kwargs.get('logscale'):
plt.yscale('log')
if 'xrange' in kwargs:
xrange = kwargs.get('xrange')
plt.xlim(xrange[0], xrange[1])
visible_y = y[int(xrange[0] + 0.5):int(xrange[1] + 0.5)]
visible_y_err = y_err[int(xrange[0] + 0.5):int(xrange[1] + 0.5)]
y_start = np.min(visible_y - visible_y_err)
y_stop = np.max(visible_y + visible_y_err)
span = y_stop - y_start
if np.isfinite(y_start) and np.isfinite(y_stop):
plt.ylim(y_start - 0.1 * span, y_stop + 0.1 * span)
if 'yrange' in kwargs:
yrange = kwargs.get('yrange')
plt.ylim(yrange[0], yrange[1])
if 'reference' in kwargs:
y_value = kwargs.get('reference')
plt.axhline(y=y_value, linewidth=2, color='k', alpha=0.25)
if 'prange' in kwargs:
prange = kwargs.get('prange')
plt.axvline(x=prange[0] - 0.5, ls='--', c='k', lw=1, alpha=0.5, marker=',')
plt.axvline(x=prange[1] + 0.5, ls='--', c='k', lw=1, alpha=0.5, marker=',')
if 'plateau' in kwargs:
plateau = kwargs.get('plateau')
if isinstance(plateau, Obs):
plt.axhline(y=plateau.value, linewidth=2, color='k', alpha=0.6, label='Plateau', marker=',', ls='--')
plt.axhspan(plateau.value - plateau.dvalue, plateau.value + plateau.dvalue, alpha=0.25, color='k')
elif isinstance(plateau, list):
for i in range(len(plateau)):
plt.axhline(y=plateau[i].value, linewidth=2, color='C' + str(i), alpha=0.6, label='Plateau' + str(i + 1), marker=',', ls='--')
plt.axhspan(plateau[i].value - plateau[i].dvalue, plateau[i].value + plateau[i].dvalue,
color='C' + str(i), alpha=0.25)
else:
raise Exception('Improper input for plateau.')
if kwargs.get('exp'):
fit_result = kwargs.get('exp')
y_fit = fit_result[1].value * np.exp(-fit_result[0].value * x)
plt.plot(x, y_fit, color='k')
if not (fit_result[0].e_names == {} and fit_result[1].e_names == {}):
y_fit_err = np.sqrt((y_fit * fit_result[0].dvalue) ** 2 + 2 * covariance(fit_result[0], fit_result[1]) * y_fit * np.exp(-fit_result[0].value * x) + (np.exp(-fit_result[0].value * x) * fit_result[1].dvalue) ** 2)
plt.fill_between(x, y_fit + y_fit_err, y_fit - y_fit_err, color='k', alpha=0.1)
plt.xlabel('$x_0/a$')
if 'ylabel' in kwargs:
plt.ylabel(kwargs.get('ylabel'))
if 'save' in kwargs:
lgd = plt.legend(loc=0)
else:
lgd = plt.legend(bbox_to_anchor=(1.04, 1), loc='upper left')
plt.show()
if 'save' in kwargs:
save = kwargs.get('save')
if not isinstance(save, str):
raise Exception('save has to be a string.')
f.savefig(save + '.pdf', bbox_extra_artists=(lgd,), bbox_inches='tight')
def merge_obs(list_of_obs):
"""Combine all observables in list_of_obs into one new observable

View file

@@ -8,6 +8,6 @@ setup(name='pyerrors',
author='Fabian Joswig',
author_email='fabian.joswig@ed.ac.uk',
packages=find_packages(),
python_requires='>=3.5.0',
install_requires=['numpy>=1.16', 'autograd>=1.2', 'numdifftools', 'matplotlib', 'scipy', 'iminuit<2']
python_requires='>=3.6.0',
install_requires=['numpy>=1.16', 'autograd>=1.2', 'numdifftools', 'matplotlib>=3.3', 'scipy', 'iminuit<2']
)

74
tests/test_correlators.py Normal file
View file

@@ -0,0 +1,74 @@
import os
import numpy as np
import pyerrors as pe
import pytest
np.random.seed(0)
def test_function_overloading():
corr_content_a = []
corr_content_b = []
for t in range(24):
corr_content_a.append(pe.pseudo_Obs(np.random.normal(1e-10, 1e-8), 1e-4, 't'))
corr_content_b.append(pe.pseudo_Obs(np.random.normal(1e8, 1e10), 1e7, 't'))
corr_a = pe.correlators.Corr(corr_content_a)
corr_b = pe.correlators.Corr(corr_content_b)
fs = [lambda x: x[0] + x[1], lambda x: x[1] + x[0], lambda x: x[0] - x[1], lambda x: x[1] - x[0],
lambda x: x[0] * x[1], lambda x: x[1] * x[0], lambda x: x[0] / x[1], lambda x: x[1] / x[0],
lambda x: np.exp(x[0]), lambda x: np.sin(x[0]), lambda x: np.cos(x[0]), lambda x: np.tan(x[0]),
lambda x: np.log(x[0] + 0.1), lambda x: np.sqrt(np.abs(x[0])),
lambda x: np.sinh(x[0]), lambda x: np.cosh(x[0]), lambda x: np.tanh(x[0])]
for i, f in enumerate(fs):
t1 = f([corr_a, corr_b])
for o_a, o_b, con in zip(corr_content_a, corr_content_b, t1.content):
t2 = f([o_a, o_b])
t2.gamma_method()
assert np.isclose(con[0].value, t2.value)
assert np.isclose(con[0].dvalue, t2.dvalue)
assert np.allclose(con[0].deltas['t'], t2.deltas['t'])
def test_modify_correlator():
corr_content = []
for t in range(24):
exponent = np.random.normal(3, 5)
corr_content.append(pe.pseudo_Obs(2 + 10 ** exponent, 10 ** (exponent - 1), 't'))
corr = pe.correlators.Corr(corr_content)
with pytest.warns(RuntimeWarning):
corr.symmetric()
with pytest.warns(RuntimeWarning):
corr.anti_symmetric()
corr.roll(np.random.randint(100))
corr.deriv(symmetric=True)
corr.deriv(symmetric=False)
corr.second_deriv()
def test_m_eff():
my_corr = pe.correlators.Corr([pe.pseudo_Obs(10, 0.1, 't'), pe.pseudo_Obs(9, 0.05, 't'), pe.pseudo_Obs(8, 0.1, 't'), pe.pseudo_Obs(7, 0.05, 't')])
my_corr.m_eff('log')
my_corr.m_eff('cosh')
my_corr.m_eff('sinh')
def test_utility():
corr_content = []
for t in range(8):
exponent = np.random.normal(3, 5)
corr_content.append(pe.pseudo_Obs(2 + 10 ** exponent, 10 ** (exponent - 1), 't'))
corr = pe.correlators.Corr(corr_content)
corr.print()
corr.print([2, 4])
corr.show()
corr.dump('test_dump')
new_corr = pe.load_object('test_dump.p')
os.remove('test_dump.p')
for o_a, o_b in zip(corr.content, new_corr.content):
assert np.isclose(o_a[0].value, o_b[0].value)
assert np.isclose(o_a[0].dvalue, o_b[0].dvalue)
assert np.allclose(o_a[0].deltas['t'], o_b[0].deltas['t'])

103
tests/test_fits.py Normal file
View file

@@ -0,0 +1,103 @@
import autograd.numpy as np
import math
import scipy.optimize
from scipy.odr import ODR, Model, RealData
import pyerrors as pe
import pytest
np.random.seed(0)
def test_standard_fit():
dim = 10 + int(30 * np.random.rand())
x = np.arange(dim)
y = 2 * np.exp(-0.06 * x) + np.random.normal(0.0, 0.15, dim)
yerr = 0.1 + 0.1 * np.random.rand(dim)
oy = []
for i, item in enumerate(x):
oy.append(pe.pseudo_Obs(y[i], yerr[i], str(i)))
def f(x, a, b):
return a * np.exp(-b * x)
popt, pcov = scipy.optimize.curve_fit(f, x, y, sigma=[o.dvalue for o in oy], absolute_sigma=True)
def func(a, x):
y = a[0] * np.exp(-a[1] * x)
return y
beta = pe.fits.standard_fit(x, oy, func)
pe.Obs.e_tag_global = 5
for i in range(2):
beta[i].gamma_method(e_tag=5, S=1.0)
assert math.isclose(beta[i].value, popt[i], abs_tol=1e-5)
assert math.isclose(pcov[i, i], beta[i].dvalue ** 2, abs_tol=1e-3)
assert math.isclose(pe.covariance(beta[0], beta[1]), pcov[0, 1], abs_tol=1e-3)
pe.Obs.e_tag_global = 0
chi2_pyerrors = np.sum(((f(x, *[o.value for o in beta]) - y) / yerr) ** 2) / (len(x) - 2)
chi2_scipy = np.sum(((f(x, *popt) - y) / yerr) ** 2) / (len(x) - 2)
assert math.isclose(chi2_pyerrors, chi2_scipy, abs_tol=1e-10)
def test_odr_fit():
dim = 10 + int(30 * np.random.rand())
x = np.arange(dim) + np.random.normal(0.0, 0.15, dim)
xerr = 0.1 + 0.1 * np.random.rand(dim)
y = 2 * np.exp(-0.06 * x) + np.random.normal(0.0, 0.15, dim)
yerr = 0.1 + 0.1 * np.random.rand(dim)
ox = []
for i, item in enumerate(x):
ox.append(pe.pseudo_Obs(x[i], xerr[i], str(i)))
oy = []
for i, item in enumerate(x):
oy.append(pe.pseudo_Obs(y[i], yerr[i], str(i)))
def f(x, a, b):
return a * np.exp(-b * x)
def func(a, x):
y = a[0] * np.exp(-a[1] * x)
return y
data = RealData([o.value for o in ox], [o.value for o in oy], sx=[o.dvalue for o in ox], sy=[o.dvalue for o in oy])
model = Model(func)
odr = ODR(data, model, [0, 0], partol=np.finfo(np.float64).eps)
odr.set_job(fit_type=0, deriv=1)
output = odr.run()
beta = pe.fits.odr_fit(ox, oy, func)
pe.Obs.e_tag_global = 5
for i in range(2):
beta[i].gamma_method(e_tag=5, S=1.0)
assert math.isclose(beta[i].value, output.beta[i], rel_tol=1e-5)
assert math.isclose(output.cov_beta[i, i], beta[i].dvalue ** 2, rel_tol=2.5e-1), str(output.cov_beta[i, i]) + ' ' + str(beta[i].dvalue ** 2)
assert math.isclose(pe.covariance(beta[0], beta[1]), output.cov_beta[0, 1], rel_tol=2.5e-1)
pe.Obs.e_tag_global = 0
def test_odr_derivatives():
x = []
y = []
x_err = 0.01
y_err = 0.01
for n in np.arange(1, 9, 2):
loc_xvalue = n + np.random.normal(0.0, x_err)
x.append(pe.pseudo_Obs(loc_xvalue, x_err, str(n)))
y.append(pe.pseudo_Obs((lambda x: x ** 2 - 1)(loc_xvalue) +
np.random.normal(0.0, y_err), y_err, str(n)))
def func(a, x):
return a[0] + a[1] * x ** 2
fit1 = pe.fits.odr_fit(x, y, func)
tfit = pe.fits.fit_general(x, y, func, base_step=0.1, step_ratio=1.1, num_steps=20)
assert np.abs(np.max(np.array(list(fit1[1].deltas.values()))
- np.array(list(tfit[1].deltas.values())))) < 10e-8

85
tests/test_linalg.py Normal file
View file

@@ -0,0 +1,85 @@
import autograd.numpy as np
import math
import pyerrors as pe
import pytest
np.random.seed(0)
def test_matrix_inverse():
content = []
for t in range(9):
exponent = np.random.normal(3, 5)
content.append(pe.pseudo_Obs(2 + 10 ** exponent, 10 ** (exponent - 1), 't'))
content.append(1.0) # Add 1.0 as a float
matrix = np.diag(content)
inverse_matrix = pe.linalg.mat_mat_op(np.linalg.inv, matrix)
assert all([o.is_zero() for o in np.diag(matrix) * np.diag(inverse_matrix) - 1])
def test_complex_matrix_inverse():
dimension = 6
base_matrix = np.empty((dimension, dimension), dtype=object)
matrix = np.empty((dimension, dimension), dtype=complex)
for (n, m), entry in np.ndenumerate(base_matrix):
exponent_real = np.random.normal(3, 5)
exponent_imag = np.random.normal(3, 5)
base_matrix[n, m] = pe.CObs(pe.pseudo_Obs(2 + 10 ** exponent_real, 10 ** (exponent_real - 1), 't'),
pe.pseudo_Obs(2 + 10 ** exponent_imag, 10 ** (exponent_imag - 1), 't'))
# Construct invertible matrix
obs_matrix = np.identity(dimension) + base_matrix @ base_matrix.T
for (n, m), entry in np.ndenumerate(obs_matrix):
matrix[n, m] = entry.real.value + 1j * entry.imag.value
inverse_matrix = np.linalg.inv(matrix)
inverse_obs_matrix = pe.linalg.mat_mat_op(np.linalg.inv, obs_matrix)
for (n, m), entry in np.ndenumerate(inverse_matrix):
assert np.isclose(inverse_matrix[n, m].real, inverse_obs_matrix[n, m].real.value)
assert np.isclose(inverse_matrix[n, m].imag, inverse_obs_matrix[n, m].imag.value)
def test_matrix_functions():
dim = 3 + int(4 * np.random.rand())
matrix = []
for i in range(dim):
row = []
for j in range(dim):
row.append(pe.pseudo_Obs(np.random.rand(), 0.2 + 0.1 * np.random.rand(), 'e1'))
matrix.append(row)
matrix = np.array(matrix) @ np.identity(dim)
# Check inverse of matrix
inv = pe.linalg.mat_mat_op(np.linalg.inv, matrix)
check_inv = matrix @ inv
for (i, j), entry in np.ndenumerate(check_inv):
entry.gamma_method()
if i == j:
assert math.isclose(entry.value, 1.0, abs_tol=1e-9), 'value ' + str(i) + ',' + str(j) + ' ' + str(entry.value)
else:
assert math.isclose(entry.value, 0.0, abs_tol=1e-9), 'value ' + str(i) + ',' + str(j) + ' ' + str(entry.value)
assert math.isclose(entry.dvalue, 0.0, abs_tol=1e-9), 'dvalue ' + str(i) + ',' + str(j) + ' ' + str(entry.dvalue)
# Check Cholesky decomposition
sym = np.dot(matrix, matrix.T)
cholesky = pe.linalg.mat_mat_op(np.linalg.cholesky, sym)
check = cholesky @ cholesky.T
for (i, j), entry in np.ndenumerate(check):
diff = entry - sym[i, j]
diff.gamma_method()
assert math.isclose(diff.value, 0.0, abs_tol=1e-9), 'value ' + str(i) + ',' + str(j)
assert math.isclose(diff.dvalue, 0.0, abs_tol=1e-9), 'dvalue ' + str(i) + ',' + str(j)
# Check eigh
e, v = pe.linalg.eigh(sym)
for i in range(dim):
tmp = sym @ v[:, i] - v[:, i] * e[i]
for j in range(dim):
tmp[j].gamma_method()
assert math.isclose(tmp[j].value, 0.0, abs_tol=1e-9), 'value ' + str(i) + ',' + str(j)
assert math.isclose(tmp[j].dvalue, 0.0, abs_tol=1e-9), 'dvalue ' + str(i) + ',' + str(j)

View file

@@ -1,17 +1,13 @@
import sys
sys.path.append('..')
import autograd.numpy as np
import os
import random
import math
import string
import copy
import scipy.optimize
from scipy.odr import ODR, Model, Data, RealData
import pyerrors as pe
import pytest
test_iterations = 100
np.random.seed(0)
def test_dump():
value = np.random.normal(5, 10)
@ -30,29 +26,46 @@ def test_comparison():
test_obs2 = pe.pseudo_Obs(value2, 0.1, 't')
assert (value1 > value2) == (test_obs1 > test_obs2)
assert (value1 < value2) == (test_obs1 < test_obs2)
assert (value1 >= value2) == (test_obs1 >= test_obs2)
assert (value1 <= value2) == (test_obs1 <= test_obs2)
assert test_obs1 >= test_obs1
assert test_obs2 <= test_obs2
assert test_obs1 == test_obs1
assert test_obs2 == test_obs2
assert test_obs1 != test_obs2
assert test_obs2 != test_obs1
def test_man_grad():
a = pe.pseudo_Obs(17,2.9,'e1')
b = pe.pseudo_Obs(4,0.8,'e1')
def test_function_overloading():
a = pe.pseudo_Obs(17, 2.9, 'e1')
b = pe.pseudo_Obs(4, 0.8, 'e1')
fs = [lambda x: x[0] + x[1], lambda x: x[1] + x[0], lambda x: x[0] - x[1], lambda x: x[1] - x[0],
lambda x: x[0] * x[1], lambda x: x[1] * x[0], lambda x: x[0] / x[1], lambda x: x[1] / x[0],
lambda x: np.exp(x[0]), lambda x: np.sin(x[0]), lambda x: np.cos(x[0]), lambda x: np.tan(x[0]),
lambda x: np.log(x[0]), lambda x: np.sqrt(x[0]),
lambda x: np.log(x[0]), lambda x: np.sqrt(np.abs(x[0])),
lambda x: np.sinh(x[0]), lambda x: np.cosh(x[0]), lambda x: np.tanh(x[0])]
for i, f in enumerate(fs):
t1 = f([a,b])
t2 = pe.derived_observable(f, [a,b])
t1 = f([a, b])
t2 = pe.derived_observable(f, [a, b])
c = t2 - t1
assert c.value == 0.0, str(i)
assert np.all(np.abs(c.deltas['e1']) < 1e-14), str(i)
def test_overloading_vectorization():
a = np.array([5, 4, 8])
b = pe.pseudo_Obs(4,0.8,'e1')
a = np.random.randint(1, 100, 10)
b = pe.pseudo_Obs(4, 0.8, 't')
assert [o.value for o in a * b] == [o.value for o in b * a]
assert [o.value for o in a + b] == [o.value for o in b + a]
assert [o.value for o in a - b] == [-1 * o.value for o in b - a]
assert [o.value for o in a / b] == [o.value for o in [p / b for p in a]]
assert [o.value for o in b / a] == [o.value for o in [b / p for p in a]]
a = np.random.normal(0.0, 1e10, 10)
b = pe.pseudo_Obs(4, 0.8, 't')
assert [o.value for o in a * b] == [o.value for o in b * a]
assert [o.value for o in a + b] == [o.value for o in b + a]
@ -61,130 +74,29 @@ def test_overloading_vectorization():
assert [o.value for o in b / a] == [o.value for o in [b / p for p in a]]
@pytest.mark.parametrize("n", np.arange(test_iterations // 10))
def test_covariance_is_variance(n):
def test_covariance_is_variance():
value = np.random.normal(5, 10)
dvalue = np.abs(np.random.normal(0, 1))
test_obs = pe.pseudo_Obs(value, dvalue, 't')
test_obs.gamma_method()
assert np.abs(test_obs.dvalue ** 2 - pe.covariance(test_obs, test_obs)) <= 10 * np.finfo(np.float).eps
assert np.abs(test_obs.dvalue ** 2 - pe.covariance(test_obs, test_obs)) <= 10 * np.finfo(np.float64).eps
test_obs = test_obs + pe.pseudo_Obs(value, dvalue, 'q', 200)
test_obs.gamma_method(e_tag=0)
assert np.abs(test_obs.dvalue ** 2 - pe.covariance(test_obs, test_obs)) <= 10 * np.finfo(np.float).eps
assert np.abs(test_obs.dvalue ** 2 - pe.covariance(test_obs, test_obs)) <= 10 * np.finfo(np.float64).eps
@pytest.mark.parametrize("n", np.arange(test_iterations // 10))
def test_fft(n):
def test_fft():
value = np.random.normal(5, 100)
dvalue = np.abs(np.random.normal(0, 5))
test_obs1 = pe.pseudo_Obs(value, dvalue, 't', int(500 + 1000 * np.random.rand()))
test_obs2 = copy.deepcopy(test_obs1)
test_obs1.gamma_method()
test_obs2.gamma_method(fft=False)
assert max(np.abs(test_obs1.e_rho[''] - test_obs2.e_rho[''])) <= 10 * np.finfo(np.float).eps
assert np.abs(test_obs1.dvalue - test_obs2.dvalue) <= 10 * max(test_obs1.dvalue, test_obs2.dvalue) * np.finfo(np.float).eps
assert max(np.abs(test_obs1.e_rho[''] - test_obs2.e_rho[''])) <= 10 * np.finfo(np.float64).eps
assert np.abs(test_obs1.dvalue - test_obs2.dvalue) <= 10 * max(test_obs1.dvalue, test_obs2.dvalue) * np.finfo(np.float64).eps
@pytest.mark.parametrize('n', np.arange(test_iterations // 10))
def test_standard_fit(n):
dim = 10 + int(30 * np.random.rand())
x = np.arange(dim)
y = 2 * np.exp(-0.06 * x) + np.random.normal(0.0, 0.15, dim)
yerr = 0.1 + 0.1 * np.random.rand(dim)
oy = []
for i, item in enumerate(x):
oy.append(pe.pseudo_Obs(y[i], yerr[i], str(i)))
def f(x, a, b):
return a * np.exp(-b * x)
popt, pcov = scipy.optimize.curve_fit(f, x, y, sigma=[o.dvalue for o in oy], absolute_sigma=True)
def func(a, x):
y = a[0] * np.exp(-a[1] * x)
return y
beta = pe.fits.standard_fit(x, oy, func)
pe.Obs.e_tag_global = 5
for i in range(2):
beta[i].gamma_method(e_tag=5, S=1.0)
assert math.isclose(beta[i].value, popt[i], abs_tol=1e-5)
assert math.isclose(pcov[i, i], beta[i].dvalue ** 2, abs_tol=1e-3)
assert math.isclose(pe.covariance(beta[0], beta[1]), pcov[0, 1], abs_tol=1e-3)
pe.Obs.e_tag_global = 0
chi2_pyerrors = np.sum(((f(x, *[o.value for o in beta]) - y) / yerr) ** 2) / (len(x) - 2)
chi2_scipy = np.sum(((f(x, *popt) - y) / yerr) ** 2) / (len(x) - 2)
assert math.isclose(chi2_pyerrors, chi2_scipy, abs_tol=1e-10)
@pytest.mark.parametrize('n', np.arange(test_iterations // 10))
def test_odr_fit(n):
dim = 10 + int(30 * np.random.rand())
x = np.arange(dim) + np.random.normal(0.0, 0.15, dim)
xerr = 0.1 + 0.1 * np.random.rand(dim)
y = 2 * np.exp(-0.06 * x) + np.random.normal(0.0, 0.15, dim)
yerr = 0.1 + 0.1 * np.random.rand(dim)
ox = []
for i, item in enumerate(x):
ox.append(pe.pseudo_Obs(x[i], xerr[i], str(i)))
oy = []
for i, item in enumerate(x):
oy.append(pe.pseudo_Obs(y[i], yerr[i], str(i)))
def f(x, a, b):
return a * np.exp(-b * x)
def func(a, x):
y = a[0] * np.exp(-a[1] * x)
return y
data = RealData([o.value for o in ox], [o.value for o in oy], sx=[o.dvalue for o in ox], sy=[o.dvalue for o in oy])
model = Model(func)
odr = ODR(data, model, [0,0], partol=np.finfo(np.float).eps)
odr.set_job(fit_type=0, deriv=1)
output = odr.run()
beta = pe.fits.odr_fit(ox, oy, func)
pe.Obs.e_tag_global = 5
for i in range(2):
beta[i].gamma_method(e_tag=5, S=1.0)
assert math.isclose(beta[i].value, output.beta[i], rel_tol=1e-5)
assert math.isclose(output.cov_beta[i,i], beta[i].dvalue**2, rel_tol=2.5e-1), str(output.cov_beta[i,i]) + ' ' + str(beta[i].dvalue**2)
assert math.isclose(pe.covariance(beta[0], beta[1]), output.cov_beta[0,1], rel_tol=2.5e-1)
pe.Obs.e_tag_global = 0
@pytest.mark.parametrize('n', np.arange(test_iterations // 10))
def test_odr_derivatives(n):
x = []
y = []
x_err = 0.01
y_err = 0.01
for n in np.arange(1, 9, 2):
loc_xvalue = n + np.random.normal(0.0, x_err)
x.append(pe.pseudo_Obs(loc_xvalue, x_err, str(n)))
y.append(pe.pseudo_Obs((lambda x: x ** 2 - 1)(loc_xvalue) +
np.random.normal(0.0, y_err), y_err, str(n)))
def func(a, x):
return a[0] + a[1] * x ** 2
fit1 = pe.fits.odr_fit(x, y, func)
tfit = pe.fits.fit_general(x, y, func, base_step=0.1, step_ratio=1.1, num_steps=20)
assert np.abs(np.max(np.array(list(fit1[1].deltas.values()))
- np.array(list(tfit[1].deltas.values())))) < 10e-8
@pytest.mark.parametrize('n', np.arange(test_iterations))
def test_covariance_symmetry(n):
def test_covariance_symmetry():
value1 = np.random.normal(5, 10)
dvalue1 = np.abs(np.random.normal(0, 1))
test_obs1 = pe.pseudo_Obs(value1, dvalue1, 't')
@ -195,12 +107,11 @@ def test_covariance_symmetry(n):
test_obs2.gamma_method()
cov_ab = pe.covariance(test_obs1, test_obs2)
cov_ba = pe.covariance(test_obs2, test_obs1)
assert np.abs(cov_ab - cov_ba) <= 10 * np.finfo(np.float).eps
assert np.abs(cov_ab) < test_obs1.dvalue * test_obs2.dvalue * (1 + 10 * np.finfo(np.float).eps)
assert np.abs(cov_ab - cov_ba) <= 10 * np.finfo(np.float64).eps
assert np.abs(cov_ab) < test_obs1.dvalue * test_obs2.dvalue * (1 + 10 * np.finfo(np.float64).eps)
@pytest.mark.parametrize('n', np.arange(test_iterations))
def test_gamma_method(n):
def test_gamma_method():
# Construct pseudo Obs with random shape
value = np.random.normal(5, 10)
dvalue = np.abs(np.random.normal(0, 1))
@ -213,28 +124,7 @@ def test_gamma_method(n):
assert abs(test_obs.dvalue - dvalue) < 1e-10 * dvalue
@pytest.mark.parametrize('n', np.arange(test_iterations))
def test_overloading(n):
# Construct pseudo Obs with random shape
obs_list = []
for i in range(5):
value = np.abs(np.random.normal(5, 2)) + 2.0
dvalue = np.abs(np.random.normal(0, 0.1)) + 1e-5
obs_list.append(pe.pseudo_Obs(value, dvalue, 't', 2000))
# Test if the error is processed correctly
def f(x):
return x[0] * x[1] + np.sin(x[2]) * np.exp(x[3] / x[1] / x[0]) - np.sqrt(2) / np.cosh(x[4] / x[0])
o_obs = f(obs_list)
d_obs = pe.derived_observable(f, obs_list)
assert np.max(np.abs((o_obs.deltas['t'] - d_obs.deltas['t']) / o_obs.deltas['t'])) < 1e-7, str(obs_list)
assert np.abs((o_obs.value - d_obs.value) / o_obs.value) < 1e-10
@pytest.mark.parametrize('n', np.arange(test_iterations))
def test_derived_observables(n):
def test_derived_observables():
# Construct pseudo Obs with random shape
test_obs = pe.pseudo_Obs(2, 0.1 * (1 + np.random.rand()), 't', int(1000 * (1 + np.random.rand())))
@ -245,20 +135,19 @@ def test_derived_observables(n):
d_Obs_fd.gamma_method()
assert d_Obs_ad.value == d_Obs_fd.value
assert np.abs(4.0 * np.sin(4.0) - d_Obs_ad.value) < 1000 * np.finfo(np.float).eps * np.abs(d_Obs_ad.value)
assert np.abs(d_Obs_ad.dvalue-d_Obs_fd.dvalue) < 1000 * np.finfo(np.float).eps * d_Obs_ad.dvalue
assert np.abs(4.0 * np.sin(4.0) - d_Obs_ad.value) < 1000 * np.finfo(np.float64).eps * np.abs(d_Obs_ad.value)
assert np.abs(d_Obs_ad.dvalue-d_Obs_fd.dvalue) < 1000 * np.finfo(np.float64).eps * d_Obs_ad.dvalue
i_am_one = pe.derived_observable(lambda x, **kwargs: x[0] / x[1], [d_Obs_ad, d_Obs_ad])
i_am_one.gamma_method(e_tag=1)
assert i_am_one.value == 1.0
assert i_am_one.dvalue < 2 * np.finfo(np.float).eps
assert i_am_one.e_dvalue['t'] <= 2 * np.finfo(np.float).eps
assert i_am_one.e_ddvalue['t'] <= 2 * np.finfo(np.float).eps
assert i_am_one.dvalue < 2 * np.finfo(np.float64).eps
assert i_am_one.e_dvalue['t'] <= 2 * np.finfo(np.float64).eps
assert i_am_one.e_ddvalue['t'] <= 2 * np.finfo(np.float64).eps
@pytest.mark.parametrize('n', np.arange(test_iterations // 10))
def test_multi_ens_system(n):
def test_multi_ens_system():
names = []
for i in range(100 + int(np.random.rand() * 50)):
tmp_string = ''
@ -276,8 +165,7 @@ def test_multi_ens_system(n):
assert sorted(x for y in sorted(new_obs.e_content.values()) for x in y) == sorted(new_obs.names)
@pytest.mark.parametrize('n', np.arange(test_iterations))
def test_overloaded_functions(n):
def test_overloaded_functions():
funcs = [np.exp, np.log, np.sin, np.cos, np.tan, np.sinh, np.cosh, np.arcsinh, np.arccosh]
deriv = [np.exp, lambda x: 1 / x, np.cos, lambda x: -np.sin(x), lambda x: 1 / np.cos(x) ** 2, np.cosh, np.sinh, lambda x: 1 / np.sqrt(x ** 2 + 1), lambda x: 1 / np.sqrt(x ** 2 - 1)]
val = 3 + 0.5 * np.random.rand()
@ -292,48 +180,52 @@ def test_overloaded_functions(n):
assert np.abs((ad_obs.value - item(val)) / ad_obs.value) < 1e-10, item.__name__
assert np.abs(ad_obs.dvalue - dval * np.abs(deriv[i](val))) < 1e-6, item.__name__
def test_utils():
my_obs = pe.pseudo_Obs(1.0, 0.5, 't')
my_obs.print(0)
my_obs.print(1)
my_obs.print(2)
assert not my_obs.is_zero_within_error()
my_obs.plot_tauint()
my_obs.plot_rho()
my_obs.plot_rep_dist()
my_obs.plot_history()
my_obs.plot_piechart()
assert my_obs > (my_obs - 1)
assert my_obs < (my_obs + 1)
@pytest.mark.parametrize('n', np.arange(test_iterations // 10))
def test_matrix_functions(n):
dim = 3 + int(4 * np.random.rand())
print(dim)
matrix = []
for i in range(dim):
row = []
for j in range(dim):
row.append(pe.pseudo_Obs(np.random.rand(), 0.2 + 0.1 * np.random.rand(), 'e1'))
matrix.append(row)
matrix = np.array(matrix) @ np.identity(dim)
def test_cobs():
obs1 = pe.pseudo_Obs(1.0, 0.1, 't')
obs2 = pe.pseudo_Obs(-0.2, 0.03, 't')
# Check inverse of matrix
inv = pe.linalg.mat_mat_op(np.linalg.inv, matrix)
check_inv = matrix @ inv
my_cobs = pe.CObs(obs1, obs2)
assert not (my_cobs + my_cobs.conjugate()).real.is_zero()
assert (my_cobs + my_cobs.conjugate()).imag.is_zero()
assert (my_cobs - my_cobs.conjugate()).real.is_zero()
assert not (my_cobs - my_cobs.conjugate()).imag.is_zero()
np.abs(my_cobs)
for (i, j), entry in np.ndenumerate(check_inv):
entry.gamma_method()
if(i == j):
assert math.isclose(entry.value, 1.0, abs_tol=1e-9), 'value ' + str(i) + ',' + str(j) + ' ' + str(entry.value)
else:
assert math.isclose(entry.value, 0.0, abs_tol=1e-9), 'value ' + str(i) + ',' + str(j) + ' ' + str(entry.value)
assert math.isclose(entry.dvalue, 0.0, abs_tol=1e-9), 'dvalue ' + str(i) + ',' + str(j) + ' ' + str(entry.dvalue)
assert (my_cobs * my_cobs / my_cobs - my_cobs).is_zero()
assert (my_cobs + my_cobs - 2 * my_cobs).is_zero()
# Check Cholesky decomposition
sym = np.dot(matrix, matrix.T)
cholesky = pe.linalg.mat_mat_op(np.linalg.cholesky, sym)
check = cholesky @ cholesky.T
fs = [[lambda x: x[0] + x[1], lambda x: x[1] + x[0]],
[lambda x: x[0] * x[1], lambda x: x[1] * x[0]]]
for other in [1, 1.1, (1.1-0.2j), pe.CObs(obs1), pe.CObs(obs1, obs2)]:
for funcs in fs:
ta = funcs[0]([my_cobs, other])
tb = funcs[1]([my_cobs, other])
diff = ta - tb
assert np.isclose(0.0, float(diff.real))
assert np.isclose(0.0, float(diff.imag))
assert np.allclose(0.0, diff.real.deltas['t'])
assert np.allclose(0.0, diff.imag.deltas['t'])
for (i, j), entry in np.ndenumerate(check):
diff = entry - sym[i, j]
diff.gamma_method()
assert math.isclose(diff.value, 0.0, abs_tol=1e-9), 'value ' + str(i) + ',' + str(j)
assert math.isclose(diff.dvalue, 0.0, abs_tol=1e-9), 'dvalue ' + str(i) + ',' + str(j)
# Check eigh
e, v = pe.linalg.eigh(sym)
for i in range(dim):
tmp = sym @ v[:, i] - v[:, i] * e[i]
for j in range(dim):
tmp[j].gamma_method()
assert math.isclose(tmp[j].value, 0.0, abs_tol=1e-9), 'value ' + str(i) + ',' + str(j)
assert math.isclose(tmp[j].dvalue, 0.0, abs_tol=1e-9), 'dvalue ' + str(i) + ',' + str(j)
ta = my_cobs - other
tb = other - my_cobs
diff = ta + tb
assert np.isclose(0.0, float(diff.real))
assert np.isclose(0.0, float(diff.imag))
assert np.allclose(0.0, diff.real.deltas['t'])
assert np.allclose(0.0, diff.imag.deltas['t'])
div = my_cobs / other
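
The gist of the new `CObs` test, condensed (a minimal sketch; the values mirror the ones used above): complex observables support mixed arithmetic with floats, complex numbers and other `CObs`, and identical fluctuations cancel exactly in algebraic identities.

import numpy as np
import pyerrors as pe

my_cobs = pe.CObs(pe.pseudo_Obs(1.0, 0.1, 't'),
                  pe.pseudo_Obs(-0.2, 0.03, 't'))

# Conjugation flips only the imaginary part
assert (my_cobs + my_cobs.conjugate()).imag.is_zero()

# Fluctuations cancel exactly, not just within errors
diff = my_cobs * my_cobs / my_cobs - my_cobs
assert diff.is_zero()
assert np.allclose(0.0, diff.real.deltas['t'])

# Mixed arithmetic with plain numbers works as well
prod = (1.1 - 0.2j) * my_cobs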

19
tests/test_roots.py Normal file
View file

@ -0,0 +1,19 @@
import numpy as np
import pyerrors as pe
import pytest
np.random.seed(0)
def test_root_linear():
def root_function(x, d):
return x - d
value = np.random.normal(0, 100)
my_obs = pe.pseudo_Obs(value, 0.1, 't')
my_root = pe.roots.find_root(my_obs, root_function)
assert np.isclose(my_root.value, value)
difference = my_obs - my_root
assert np.allclose(0.0, difference.deltas['t'])
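
A minimal usage sketch for the new `roots` module (the quadratic root function is made up for illustration; `find_root` is assumed to propagate the fluctuations of the input `Obs` onto the root, which is what the linear test above verifies):

import numpy as np
import pyerrors as pe

def root_function(x, d):
    return x ** 2 - d  # positive root at x = sqrt(d)

my_obs = pe.pseudo_Obs(2.0, 0.1, 't')
my_root = pe.roots.find_root(my_obs, root_function)
my_root.gamma_method()  # error of the root with the input fluctuations propagated
assert np.isclose(my_root.value, np.sqrt(2.0))
print(my_root.value, '+/-', my_root.dvalue)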