Merge branch 'develop' into documentation

This commit is contained in:
fjosw 2022-07-22 09:27:21 +00:00
commit 2409bfdcf5
5 changed files with 14 additions and 24 deletions


@@ -2,10 +2,13 @@
All notable changes to this project will be documented in this file.
## [2.x.x] - 2022-xx-xx
## [2.2.0] - 2022-xx-xx
### Added
- New submodule `input.pandas` added which makes it possible to read and write pandas DataFrames containing `Obs` or `Corr` objects from and to csv files or SQLite databases.
- `hash` method for `Obs` objects added.
- `Obs.reweight` method added in analogy to `Corr.reweight` which allows for a more convenient reweighting of individual observables.
- `Corr.show` now has the additional argument `title` which allows adding a title to the figure. Figures are now saved with `bbox_inches='tight'`.
- Function for the extraction of the gradient flow coupling added (see arXiv:1607.06423 for details).
### Fixed
- `Corr.m_eff` can now deal with correlator entries which are exactly zero.
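The headline feature of this changelog is the csv/SQLite round-trip for DataFrames of observables. A minimal stdlib-only sketch of the SQLite round-trip idea — the table layout, column names and the JSON payload standing in for a serialized `Obs` are illustrative assumptions, not pyerrors' actual schema:

```python
import json
import sqlite3

# Hypothetical payloads standing in for serialized Obs objects
# (the value/dvalue layout is an assumption for illustration only).
obs = {
    "plaquette": {"value": 0.5951, "dvalue": 0.0003},
    "t0": {"value": 2.995, "dvalue": 0.008},
}

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE observables (name TEXT PRIMARY KEY, payload TEXT)")
con.executemany(
    "INSERT INTO observables VALUES (?, ?)",
    [(name, json.dumps(payload)) for name, payload in obs.items()],
)
con.commit()

# read everything back and deserialize the payloads
restored = {
    name: json.loads(payload)
    for name, payload in con.execute("SELECT name, payload FROM observables")
}
```

The same serialize-then-store pattern carries over to csv files; only the storage backend changes.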


@@ -11,7 +11,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Import pyerrors, as well as autograd wrapped numpy and matplotlib."
"Import numpy, matplotlib and pyerrors."
]
},
{
@@ -91,7 +91,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"`pyerrors` overloads all basic math operations, the user can work with these `Obs` as if they were real numbers. The proper resampling is performed in the background via automatic differentiation."
"`pyerrors` overloads all basic math operations, the user can work with these `Obs` as if they were real numbers. The proper error propagation is performed in the background via automatic differentiation."
]
},
{
@@ -179,7 +179,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"This figure shows the window size dependence of the integrated autocorrelation time. The red vertical line signals the window chosen by the automatic windowing procedure with $S=2.0$.\n",
"This figure shows the window size dependence of the integrated autocorrelation time. The vertical line signals the window chosen by the automatic windowing procedure with $S=2.0$.\n",
"Choosing a larger window size would not significantly alter $\\tau_\\text{int}$, so everything seems to be correct here."
]
},
@@ -283,7 +283,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"We can now redo the error analysis and alter the value of S or attach a tail to the autocorrelation function to take into account long range autocorrelations"
"We can now redo the error analysis and alter the value of S or attach a tail to the autocorrelation function via the parameter `tau_exp` to take into account long-range autocorrelations."
]
},
{
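The "error propagation via automatic differentiation" mentioned in this notebook boils down to forward-mode derivatives: carry df/dx alongside f through every overloaded operation and multiply the input standard deviation by |f'(x)| at the end. A toy dual-number sketch of that mechanism — this is not pyerrors' implementation, which propagates per-sample fluctuations through autograd:

```python
class Dual:
    """Toy forward-mode AD value: tracks f and df/dx together."""

    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (fg)' = f g' + g f'
        return Dual(self.val * other.val,
                    self.val * other.der + other.val * self.der)
    __rmul__ = __mul__

x = Dual(3.0, 1.0)               # seed derivative dx/dx = 1
f = x * x + 2 * x                # f(x) = x^2 + 2x, so f(3) = 15, f'(3) = 8
sigma_x = 0.1
sigma_f = abs(f.der) * sigma_x   # linear error propagation: |f'(x)| * sigma_x
```

Because every operation is overloaded, the user writes ordinary arithmetic and the derivative bookkeeping happens invisibly — the same user experience the notebook describes for `Obs`.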
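The automatic windowing procedure with parameter $S$ referred to in this notebook is Wolff's Γ-method criterion: grow the window $W$, accumulate $\tau_\text{int}(W) = \tfrac{1}{2} + \sum_{t=1}^{W} \rho(t)$, and stop at the first $W$ where $g(W) = e^{-W/\tau} - \tau/\sqrt{W N}$ turns negative, with $\tau = S / \log\big((2\tau_\text{int}+1)/(2\tau_\text{int}-1)\big)$. A self-contained pure-Python sketch on synthetic AR(1) data — function and parameter names here are ours, not the pyerrors API:

```python
import math
import random

def automatic_window(series, S=2.0):
    """Return (W, tau_int(W)) chosen by Wolff's automatic windowing."""
    n = len(series)
    mean = sum(series) / n
    dev = [x - mean for x in series]
    c0 = sum(d * d for d in dev) / n          # variance = Gamma(0)
    tint = 0.5
    for w in range(1, n // 2):
        # normalized autocorrelation function rho(w)
        rho = sum(dev[i] * dev[i + w] for i in range(n - w)) / ((n - w) * c0)
        tint += rho
        # exponential autocorrelation time implied by the current tau_int
        tau = S / math.log((2 * tint + 1) / (2 * tint - 1)) if tint > 0.5 else 1e-12
        if math.exp(-w / tau) - tau / math.sqrt(w * n) < 0:
            return w, tint
    return n // 2, tint

# synthetic AR(1) chain with known tau_int = (1 + a) / (2 * (1 - a)) ~ 2.83
random.seed(7)
a, x, series = 0.7, 0.0, []
for _ in range(20000):
    x = a * x + random.gauss(0.0, 1.0)
    series.append(x)

w_opt, tint = automatic_window(series, S=2.0)
```

This also shows why attaching a `tau_exp` tail can matter: the sum over $\rho(t)$ is truncated at $W$, so slowly decaying tails beyond the window would otherwise be missed.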


@@ -242,7 +242,7 @@
"id": "4a9d13b2",
"metadata": {},
"source": [
"We can also use the priodicity of the lattice in order to obtain the cosh effective mass"
"We can also use the periodicity of the lattice in order to obtain the cosh effective mass"
]
},
{
@@ -387,8 +387,7 @@
"source": [
"## Missing Values \n",
"\n",
"Apart from the build-in functions, there is another reason, why one should use a **Corr** instead of a list of **Obs**. \n",
"Missing values are handled for you. \n",
"Apart from the built-in functions, another benefit of using ``Corr`` objects is that they can handle missing values. \n",
"We will create a second correlator with missing values. "
]
},
@@ -459,18 +458,6 @@
"The important thing is that, whatever you do, correlators keep their length **T**. So there will never be confusion about how you count timeslices. You can also take a plateau or perform a fit, even though some values might be missing."
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "f3c4609c",
"metadata": {},
"outputs": [],
"source": [
"assert first_derivative.T == my_correlator.T == len(first_derivative.content) == len(my_correlator.content)\n",
"assert first_derivative.content[0] is None\n",
"assert first_derivative.content[-1] is None"
]
},
{
"cell_type": "markdown",
"id": "7fcbcac4",

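The periodic ("cosh") effective mass discussed in this notebook solves $C(t)/C(t+1) = \cosh\big(m(t - T/2)\big) / \cosh\big(m(t+1 - T/2)\big)$ for $m$ at each timeslice. A stand-alone bisection sketch — pyerrors' `Corr.m_eff` does this on `Obs`-valued data; our function name and interface are purely illustrative:

```python
import math

def cosh_m_eff(corr, t):
    """Solve C(t)/C(t+1) = cosh(m*(t - T/2))/cosh(m*(t+1 - T/2)) for m by bisection."""
    T = len(corr)
    ratio = corr[t] / corr[t + 1]
    f = lambda m: math.cosh(m * (t - T / 2)) / math.cosh(m * (t + 1 - T / 2)) - ratio
    lo, hi = 1e-9, 10.0          # bracket: f(lo) < 0 < f(hi) for t < T/2 - 1
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# synthetic periodic correlator with mass 0.5 on a T = 16 lattice
T, m_true = 16, 0.5
corr = [math.cosh(m_true * (t - T / 2)) for t in range(T)]
m_eff = cosh_m_eff(corr, 3)
```

Using the cosh ansatz instead of a plain logarithm is what exploits the periodicity of the lattice: the backward-propagating contribution is built into the defining equation.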

@@ -163,7 +163,7 @@ def read_DistillationContraction_hd5(path, ens_id, diagrams=["direct"], idl=None
Nt = h5file["DistillationContraction/Metadata"].attrs.get("Nt")[0]
identifier = []
for in_file in range(4):
for in_file in range(len(h5file["DistillationContraction/Metadata/DmfInputFiles"].attrs.keys()) - 1):
encoded_info = h5file["DistillationContraction/Metadata/DmfInputFiles"].attrs.get("DmfInputFiles_" + str(in_file))
full_info = encoded_info[0].decode().split("/")[-1].replace(".h5", "").split("_")
my_tuple = (full_info[0], full_info[1][1:], full_info[2], full_info[3])
@@ -174,8 +174,8 @@
for diagram in diagrams:
real_data = np.zeros(Nt)
for x0 in range(Nt):
raw_data = h5file["DistillationContraction/Correlators/" + diagram + "/" + str(x0)]
real_data += np.roll(raw_data[:]["re"].astype(np.double), -x0)
raw_data = h5file["DistillationContraction/Correlators/" + diagram + "/" + str(x0)][:]["re"].astype(np.double)
real_data += np.roll(raw_data, -x0)
real_data /= Nt
corr_data[diagram].append(real_data)
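The hunk above averages the per-source-time correlators after shifting each one with `np.roll(..., -x0)` so that the source sits at $t = 0$. A pure-Python sketch of why this recovers the translation-invariant correlator — the data here are synthetic and the exponential form of `c(t)` is just an illustration:

```python
import math

Nt = 8

def c(t):
    """Translation-invariant 'true' periodic correlator."""
    d = t % Nt
    return math.exp(-0.3 * min(d, Nt - d))

def roll(seq, shift):
    """Pure-Python equivalent of np.roll: result[i] = seq[(i - shift) % len(seq)]."""
    n = len(seq)
    return [seq[(i - shift) % n] for i in range(n)]

# data measured with the source at time x0: C_x0(t) = c(t - x0)
data = {x0: [c(t - x0) for t in range(Nt)] for x0 in range(Nt)}

real_data = [0.0] * Nt
for x0 in range(Nt):
    aligned = roll(data[x0], -x0)            # shift the source back to t = 0
    real_data = [r + a for r, a in zip(real_data, aligned)]
real_data = [r / Nt for r in real_data]      # average over all Nt source times
```

On real (noisy) data the per-source correlators only agree up to fluctuations, and the average over source times reduces the statistical error.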


@@ -25,7 +25,7 @@ setup(name='pyerrors',
license="MIT",
packages=find_packages(),
python_requires='>=3.6.0',
install_requires=['numpy>=1.16', 'autograd>=1.4', 'numdifftools', 'matplotlib>=3.3', 'scipy>=1', 'iminuit>=2', 'h5py>=3', 'lxml>=4', 'python-rapidjson>=1', 'pandas>=1.1', 'pysqlite3>=0.4'],
install_requires=['numpy>=1.19', 'autograd>=1.4', 'numdifftools', 'matplotlib>=3.3', 'scipy>=1.5', 'iminuit>=2', 'h5py>=3', 'lxml>=4', 'python-rapidjson>=1', 'pandas>=1.1', 'pysqlite3>=0.4'],
classifiers=[
'Development Status :: 5 - Production/Stable',
'Intended Audience :: Science/Research',