Documentation updated

This commit is contained in:
fjosw 2023-07-10 15:34:17 +00:00
parent 9b1f60c00e
commit 8c4ead8410
2 changed files with 383 additions and 378 deletions


@ -140,8 +140,8 @@ It is based on the gamma method <a href="https://arxiv.org/abs/hep-lat/0306017">
<p>Install the most recent release using pip and <a href="https://pypi.org/project/pyerrors/">pypi</a>:</p>
<div class="pdoc-code codehilite">
<pre><span></span><code>python<span class="w"> </span>-m<span class="w"> </span>pip<span class="w"> </span>install<span class="w"> </span>pyerrors<span class="w"> </span><span class="c1"># Fresh install</span>
python<span class="w"> </span>-m<span class="w"> </span>pip<span class="w"> </span>install<span class="w"> </span>-U<span class="w"> </span>pyerrors<span class="w"> </span><span class="c1"># Update</span>
</code></pre>
</div>
@ -156,7 +156,7 @@ conda<span class="w"> </span>update<span class="w"> </span>-c<span class="w"> </
<p>Install the current <code>develop</code> version:</p>
<div class="pdoc-code codehilite">
<pre><span></span><code>python<span class="w"> </span>-m<span class="w"> </span>pip<span class="w"> </span>install<span class="w"> </span>git+https://github.com/fjosw/pyerrors.git@develop
</code></pre>
</div>
@ -232,6 +232,8 @@ After having arrived at the derived quantity of interest the <code>gamma_method<
</code></pre>
</div>
<p>The <code>gamma_method</code> is not automatically called after every intermediate step in order to prevent computational overhead.</p>
<p>We use the following definition of the integrated autocorrelation time established in <a href="https://link.springer.com/article/10.1007/BF01022990">Madras &amp; Sokal 1988</a>
$$\tau_\mathrm{int}=\frac{1}{2}+\sum_{t=1}^{W}\rho(t)\geq \frac{1}{2}\,.$$
The window $W$ is determined via the automatic windowing procedure described in <a href="https://arxiv.org/abs/hep-lat/0306017">arXiv:hep-lat/0306017</a>.
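<p>The windowing prescription can be sketched in plain numpy. This is a simplified illustration, not the actual <code>pyerrors</code> implementation; the function name and the stand-in used for the $g(W)$ stopping criterion are our own assumptions:</p>

```python
import numpy as np

def tau_int_auto(samples, S=2.0):
    """Integrated autocorrelation time with a simplified automatic
    windowing procedure (illustration only)."""
    x = np.asarray(samples, dtype=float)
    n = len(x)
    x = x - x.mean()
    # normalized autocorrelation function rho(t) up to t = n // 2
    gamma = np.array([np.mean(x[:n - t] * x[t:]) for t in range(n // 2)])
    rho = gamma / gamma[0]
    tau = 0.5  # the 1/2 in tau_int = 1/2 + sum_t rho(t)
    for w in range(1, len(rho)):
        tau += rho[w]
        # crude stand-in for the tau entering Wolff's g(W) criterion
        tau_guess = max(0.5, S * tau)
        if np.exp(-w / tau_guess) - tau_guess / np.sqrt(w * n) < 0:
            return tau, w  # first sign change of g(W): stop summing here
    return tau, len(rho) - 1
```

<p>With these defaults an uncorrelated chain lands near the lower bound $\tau_\mathrm{int}=1/2$, while a correlated chain gives a larger value.</p>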
@ -450,7 +452,7 @@ Make sure to check the autocorrelation time with e.g. <code><a href="pyerrors/ob
<p>In many projects, auxiliary data that is not based on Monte Carlo chains enters. Examples are experimentally determined meson masses which are used to set the scale or renormalization constants. These numbers come with an error that has to be propagated through the analysis. The <code>Covobs</code> class allows one to define such quantities in <code><a href="">pyerrors</a></code>. Furthermore, external input might consist of correlated quantities. One example is the set of parameters of an interpolation formula, which are defined via mean values and a covariance matrix between all parameters. The contribution of the interpolation formula to the error of a derived quantity therefore might depend on the complete covariance matrix.</p>
<p>This concept is built into the definition of <code>Covobs</code>. In <code><a href="">pyerrors</a></code>, external input is defined by $M$ mean values, an $M\times M$ covariance matrix, where $M=1$ is permissible, and a name that uniquely identifies the covariance matrix. Below, we define the pion mass, based on its mean value and error, 134.9768(5). <strong>Note that the square of the error enters <code>cov_Obs</code></strong>, since the second argument of this function is the covariance matrix of the <code>Covobs</code>.</p>
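<p>The reason the squared error enters is that linear error propagation operates on covariance matrices: for input with means $m$ and covariance $C$, the error of $f(m)$ is $\sqrt{J C J^T}$ with the Jacobian $J$. A minimal numpy sketch (the helper name is hypothetical, not the <code>Covobs</code> API):</p>

```python
import numpy as np

def propagate_error(f, means, cov, eps=1e-6):
    """Propagate a covariance matrix through f via a numerical
    Jacobian: sigma_f = sqrt(J C J^T) (linear approximation)."""
    means = np.asarray(means, dtype=float)
    cov = np.atleast_2d(np.asarray(cov, dtype=float))
    f0 = f(means)
    # forward-difference Jacobian of f at the mean values
    jac = np.array([(f(means + eps * np.eye(len(means))[i]) - f0) / eps
                    for i in range(len(means))])
    var = float(jac @ cov @ jac)
    return f0, np.sqrt(max(var, 0.0))  # guard against tiny negative rounding

# An M=1 covariance matrix carries the *squared* error, e.g. for the
# pion mass 134.9768(5) quoted above:
val, err = propagate_error(lambda p: p[0], [134.9768], [[0.0005 ** 2]])
# err recovers the original 0.0005 uncertainty
```

<p>For $M>1$ the off-diagonal elements of $C$ matter: fully anticorrelated parameters can cancel in a sum, which a diagonal treatment would miss.</p>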
<div class="pdoc-code codehilite">
<pre><span></span><code><span class="kn">import</span> <span class="nn"><a href="pyerrors/obs.html">pyerrors.obs</a></span> <span class="k">as</span> <span class="nn">pe</span>
@ -548,13 +550,14 @@ where the Jacobian is computed for each derived quantity via automatic different
</div>
<p><code><a href="">pyerrors</a></code> also supports correlated fits which can be triggered via the parameter <code>correlated_fit=True</code>.
Details about how the required covariance matrix is estimated can be found in <code><a href="pyerrors/obs.html#covariance">pyerrors.obs.covariance</a></code>.
Direct visualizations of the performed fits can be triggered via <code>resplot=True</code> or <code>qqplot=True</code>.</p>
<p>For all available options including combined fits to multiple datasets see <code><a href="pyerrors/fits.html#least_squares">pyerrors.fits.least_squares</a></code>.</p>
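<p>Schematically, <code>correlated_fit=True</code> replaces the diagonal $\chi^2$ by $\chi^2 = r^T C^{-1} r$ with the estimated covariance matrix $C$ of the data. A sketch of just this quantity (an illustration, not the fit machinery itself):</p>

```python
import numpy as np

def chisq(residuals, cov):
    """Correlated chi^2: r^T C^{-1} r. For a diagonal covariance
    matrix this reduces to the usual sum of (r_i / sigma_i)^2."""
    r = np.asarray(residuals, dtype=float)
    return float(r @ np.linalg.solve(np.asarray(cov, dtype=float), r))

r = np.array([0.5, -1.0, 0.25])     # residuals data - model
sig = np.array([0.5, 0.5, 0.25])    # per-point errors
# with np.diag(sig ** 2) this equals the uncorrelated chi^2
```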
<h2 id="total-least-squares-fits">Total least squares fits</h2>
<p><code><a href="">pyerrors</a></code> can also fit data with errors on both the dependent and independent variables using the total least squares method, also referred to as orthogonal distance regression, as implemented in <a href="https://docs.scipy.org/doc/scipy/reference/odr.html">scipy</a>, see <code><a href="pyerrors/fits.html#least_squares">pyerrors.fits.least_squares</a></code>. The syntax is identical to the standard least squares case, the only difference being that <code>x</code> also has to be a <code>list</code> or <code>numpy.array</code> of <code>Obs</code>.</p>
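<p>What such a fit does on plain numbers can be sketched directly with scipy's ODR pack (a hedged illustration on synthetic data; no <code>Obs</code> involved):</p>

```python
import numpy as np
from scipy import odr  # orthogonal distance regression

def linear(beta, x):
    # model y = beta[0] * x + beta[1]
    return beta[0] * x + beta[1]

rng = np.random.default_rng(1)
x_true = np.linspace(0.0, 10.0, 20)
y_true = 2.0 * x_true + 1.0
# gaussian noise of size 0.05 on *both* axes
data = odr.RealData(x_true + rng.normal(0.0, 0.05, x_true.size),
                    y_true + rng.normal(0.0, 0.05, y_true.size),
                    sx=np.full(x_true.size, 0.05),
                    sy=np.full(y_true.size, 0.05))
result = odr.ODR(data, odr.Model(linear), beta0=[1.0, 0.0]).run()
# result.beta holds the fitted slope and intercept
```

<p>Unlike ordinary least squares, the distance is minimized orthogonally to the model curve, weighted by the errors on both axes.</p>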
<p>For the full API see <code><a href="pyerrors/fits.html">pyerrors.fits</a></code> for fits and <code><a href="pyerrors/roots.html">pyerrors.roots</a></code> for finding roots of functions.</p>
@ -694,8 +697,8 @@ The following entries are optional:</li>
</span><span id="L-27"><a href="#L-27"><span class="linenos"> 27</span></a>
</span><span id="L-28"><a href="#L-28"><span class="linenos"> 28</span></a><span class="sd">Install the most recent release using pip and [pypi](https://pypi.org/project/pyerrors/):</span>
</span><span id="L-29"><a href="#L-29"><span class="linenos"> 29</span></a><span class="sd">```bash</span>
</span><span id="L-30"><a href="#L-30"><span class="linenos"> 30</span></a><span class="sd">pip install pyerrors # Fresh install</span>
</span><span id="L-31"><a href="#L-31"><span class="linenos"> 31</span></a><span class="sd">pip install -U pyerrors # Update</span>
</span><span id="L-30"><a href="#L-30"><span class="linenos"> 30</span></a><span class="sd">python -m pip install pyerrors # Fresh install</span>
</span><span id="L-31"><a href="#L-31"><span class="linenos"> 31</span></a><span class="sd">python -m pip install -U pyerrors # Update</span>
</span><span id="L-32"><a href="#L-32"><span class="linenos"> 32</span></a><span class="sd">```</span>
</span><span id="L-33"><a href="#L-33"><span class="linenos"> 33</span></a><span class="sd">Install the most recent release using conda and [conda-forge](https://anaconda.org/conda-forge/pyerrors):</span>
</span><span id="L-34"><a href="#L-34"><span class="linenos"> 34</span></a><span class="sd">```bash</span>
@ -704,7 +707,7 @@ The following entries are optional:</li>
</span><span id="L-37"><a href="#L-37"><span class="linenos"> 37</span></a><span class="sd">```</span>
</span><span id="L-38"><a href="#L-38"><span class="linenos"> 38</span></a><span class="sd">Install the current `develop` version:</span>
</span><span id="L-39"><a href="#L-39"><span class="linenos"> 39</span></a><span class="sd">```bash</span>
</span><span id="L-40"><a href="#L-40"><span class="linenos"> 40</span></a><span class="sd">pip install git+https://github.com/fjosw/pyerrors.git@develop</span>
</span><span id="L-40"><a href="#L-40"><span class="linenos"> 40</span></a><span class="sd">python -m pip install git+https://github.com/fjosw/pyerrors.git@develop</span>
</span><span id="L-41"><a href="#L-41"><span class="linenos"> 41</span></a><span class="sd">```</span>
</span><span id="L-42"><a href="#L-42"><span class="linenos"> 42</span></a>
</span><span id="L-43"><a href="#L-43"><span class="linenos"> 43</span></a><span class="sd">## Basic example</span>
@ -775,383 +778,385 @@ The following entries are optional:</li>
</span><span id="L-108"><a href="#L-108"><span class="linenos">108</span></a><span class="sd">&gt; · Ensemble &#39;ensemble_name&#39; : 1000 configurations (from 1 to 1000)</span>
</span><span id="L-109"><a href="#L-109"><span class="linenos">109</span></a>
</span><span id="L-110"><a href="#L-110"><span class="linenos">110</span></a><span class="sd">```</span>
</span><span id="L-111"><a href="#L-111"><span class="linenos">111</span></a>
</span><span id="L-112"><a href="#L-112"><span class="linenos">112</span></a><span class="sd">We use the following definition of the integrated autocorrelation time established in [Madras &amp; Sokal 1988](https://link.springer.com/article/10.1007/BF01022990)</span>
</span><span id="L-113"><a href="#L-113"><span class="linenos">113</span></a><span class="sd">$$\tau_\mathrm{int}=\frac{1}{2}+\sum_{t=1}^{W}\rho(t)\geq \frac{1}{2}\,.$$</span>
</span><span id="L-114"><a href="#L-114"><span class="linenos">114</span></a><span class="sd">The window $W$ is determined via the automatic windowing procedure described in [arXiv:hep-lat/0306017](https://arxiv.org/abs/hep-lat/0306017).</span>
</span><span id="L-115"><a href="#L-115"><span class="linenos">115</span></a><span class="sd">The standard value for the parameter $S$ of this automatic windowing procedure is $S=2$. Other values for $S$ can be passed to the `gamma_method` as parameter.</span>
</span><span id="L-116"><a href="#L-116"><span class="linenos">116</span></a>
</span><span id="L-117"><a href="#L-117"><span class="linenos">117</span></a><span class="sd">```python</span>
</span><span id="L-118"><a href="#L-118"><span class="linenos">118</span></a><span class="sd">my_sum.gamma_method(S=3.0)</span>
</span><span id="L-119"><a href="#L-119"><span class="linenos">119</span></a><span class="sd">my_sum.details()</span>
</span><span id="L-120"><a href="#L-120"><span class="linenos">120</span></a><span class="sd">&gt; Result 1.70000000e+00 +/- 6.30675201e-01 +/- 1.04585650e-01 (37.099%)</span>
</span><span id="L-121"><a href="#L-121"><span class="linenos">121</span></a><span class="sd">&gt; t_int 3.29909703e+00 +/- 9.77310102e-01 S = 3.00</span>
</span><span id="L-122"><a href="#L-122"><span class="linenos">122</span></a><span class="sd">&gt; 1000 samples in 1 ensemble:</span>
</span><span id="L-123"><a href="#L-123"><span class="linenos">123</span></a><span class="sd">&gt; · Ensemble &#39;ensemble_name&#39; : 1000 configurations (from 1 to 1000)</span>
</span><span id="L-124"><a href="#L-124"><span class="linenos">124</span></a>
</span><span id="L-125"><a href="#L-125"><span class="linenos">125</span></a><span class="sd">```</span>
</span><span id="L-126"><a href="#L-126"><span class="linenos">126</span></a>
</span><span id="L-127"><a href="#L-127"><span class="linenos">127</span></a><span class="sd">The integrated autocorrelation time $\tau_\mathrm{int}$ and the autocorrelation function $\rho(W)$ can be monitored via the methods `pyerrors.obs.Obs.plot_tauint` and `pyerrors.obs.Obs.plot_rho`.</span>
</span><span id="L-128"><a href="#L-128"><span class="linenos">128</span></a>
</span><span id="L-129"><a href="#L-129"><span class="linenos">129</span></a><span class="sd">If the parameter $S$ is set to zero it is assumed that the dataset does not exhibit any autocorrelation and the window size is chosen to be zero.</span>
</span><span id="L-130"><a href="#L-130"><span class="linenos">130</span></a><span class="sd">In this case the error estimate is identical to the sample standard error.</span>
</span><span id="L-131"><a href="#L-131"><span class="linenos">131</span></a>
</span><span id="L-132"><a href="#L-132"><span class="linenos">132</span></a><span class="sd">### Exponential tails</span>
</span><span id="L-133"><a href="#L-133"><span class="linenos">133</span></a>
</span><span id="L-134"><a href="#L-134"><span class="linenos">134</span></a><span class="sd">Slow modes in the Monte Carlo history can be accounted for by attaching an exponential tail to the autocorrelation function $\rho$ as suggested in [arXiv:1009.5228](https://arxiv.org/abs/1009.5228). The longest autocorrelation time in the history, $\tau_\mathrm{exp}$, can be passed to the `gamma_method` as parameter. In this case the automatic windowing procedure is vacated and the parameter $S$ does not affect the error estimate.</span>
</span><span id="L-135"><a href="#L-135"><span class="linenos">135</span></a>
</span><span id="L-136"><a href="#L-136"><span class="linenos">136</span></a><span class="sd">```python</span>
</span><span id="L-137"><a href="#L-137"><span class="linenos">137</span></a><span class="sd">my_sum.gamma_method(tau_exp=7.2)</span>
</span><span id="L-138"><a href="#L-138"><span class="linenos">138</span></a><span class="sd">my_sum.details()</span>
</span><span id="L-139"><a href="#L-139"><span class="linenos">139</span></a><span class="sd">&gt; Result 1.70000000e+00 +/- 6.28097762e-01 +/- 5.79077524e-02 (36.947%)</span>
</span><span id="L-140"><a href="#L-140"><span class="linenos">140</span></a><span class="sd">&gt; t_int 3.27218667e+00 +/- 7.99583654e-01 tau_exp = 7.20, N_sigma = 1</span>
</span><span id="L-141"><a href="#L-141"><span class="linenos">141</span></a><span class="sd">&gt; 1000 samples in 1 ensemble:</span>
</span><span id="L-142"><a href="#L-142"><span class="linenos">142</span></a><span class="sd">&gt; · Ensemble &#39;ensemble_name&#39; : 1000 configurations (from 1 to 1000)</span>
</span><span id="L-143"><a href="#L-143"><span class="linenos">143</span></a><span class="sd">```</span>
</span><span id="L-144"><a href="#L-144"><span class="linenos">144</span></a>
</span><span id="L-145"><a href="#L-145"><span class="linenos">145</span></a><span class="sd">For the full API see `pyerrors.obs.Obs.gamma_method`.</span>
</span><span id="L-146"><a href="#L-146"><span class="linenos">146</span></a>
</span><span id="L-147"><a href="#L-147"><span class="linenos">147</span></a><span class="sd">## Multiple ensembles/replica</span>
</span><span id="L-148"><a href="#L-148"><span class="linenos">148</span></a>
</span><span id="L-149"><a href="#L-149"><span class="linenos">149</span></a><span class="sd">Error propagation for multiple ensembles (Markov chains with different simulation parameters) is handled automatically. Ensembles are uniquely identified by their `name`.</span>
</span><span id="L-150"><a href="#L-150"><span class="linenos">150</span></a>
</span><span id="L-151"><a href="#L-151"><span class="linenos">151</span></a><span class="sd">```python</span>
</span><span id="L-152"><a href="#L-152"><span class="linenos">152</span></a><span class="sd">obs1 = pe.Obs([samples1], [&#39;ensemble1&#39;])</span>
</span><span id="L-153"><a href="#L-153"><span class="linenos">153</span></a><span class="sd">obs2 = pe.Obs([samples2], [&#39;ensemble2&#39;])</span>
</span><span id="L-154"><a href="#L-154"><span class="linenos">154</span></a>
</span><span id="L-155"><a href="#L-155"><span class="linenos">155</span></a><span class="sd">my_sum = obs1 + obs2</span>
</span><span id="L-156"><a href="#L-156"><span class="linenos">156</span></a><span class="sd">my_sum.details()</span>
</span><span id="L-157"><a href="#L-157"><span class="linenos">157</span></a><span class="sd">&gt; Result 2.00697958e+00</span>
</span><span id="L-158"><a href="#L-158"><span class="linenos">158</span></a><span class="sd">&gt; 1500 samples in 2 ensembles:</span>
</span><span id="L-159"><a href="#L-159"><span class="linenos">159</span></a><span class="sd">&gt; · Ensemble &#39;ensemble1&#39; : 1000 configurations (from 1 to 1000)</span>
</span><span id="L-160"><a href="#L-160"><span class="linenos">160</span></a><span class="sd">&gt; · Ensemble &#39;ensemble2&#39; : 500 configurations (from 1 to 500)</span>
</span><span id="L-161"><a href="#L-161"><span class="linenos">161</span></a><span class="sd">```</span>
</span><span id="L-162"><a href="#L-162"><span class="linenos">162</span></a><span class="sd">Observables from the **same Monte Carlo chain** have to be initialized with the **same name** for correct error propagation. If different names were used in this case the data would be treated as statistically independent resulting in loss of relevant information and a potential over or under estimate of the statistical error.</span>
</span><span id="L-163"><a href="#L-163"><span class="linenos">163</span></a>
</span><span id="L-111"><a href="#L-111"><span class="linenos">111</span></a><span class="sd">The `gamma_method` is not automatically called after every intermediate step in order to prevent computational overhead.</span>
</span><span id="L-112"><a href="#L-112"><span class="linenos">112</span></a>
</span><span id="L-113"><a href="#L-113"><span class="linenos">113</span></a><span class="sd">We use the following definition of the integrated autocorrelation time established in [Madras &amp; Sokal 1988](https://link.springer.com/article/10.1007/BF01022990)</span>
</span><span id="L-114"><a href="#L-114"><span class="linenos">114</span></a><span class="sd">$$\tau_\mathrm{int}=\frac{1}{2}+\sum_{t=1}^{W}\rho(t)\geq \frac{1}{2}\,.$$</span>
</span><span id="L-115"><a href="#L-115"><span class="linenos">115</span></a><span class="sd">The window $W$ is determined via the automatic windowing procedure described in [arXiv:hep-lat/0306017](https://arxiv.org/abs/hep-lat/0306017).</span>
</span><span id="L-116"><a href="#L-116"><span class="linenos">116</span></a><span class="sd">The standard value for the parameter $S$ of this automatic windowing procedure is $S=2$. Other values for $S$ can be passed to the `gamma_method` as parameter.</span>
</span><span id="L-117"><a href="#L-117"><span class="linenos">117</span></a>
</span><span id="L-118"><a href="#L-118"><span class="linenos">118</span></a><span class="sd">```python</span>
</span><span id="L-119"><a href="#L-119"><span class="linenos">119</span></a><span class="sd">my_sum.gamma_method(S=3.0)</span>
</span><span id="L-120"><a href="#L-120"><span class="linenos">120</span></a><span class="sd">my_sum.details()</span>
</span><span id="L-121"><a href="#L-121"><span class="linenos">121</span></a><span class="sd">&gt; Result 1.70000000e+00 +/- 6.30675201e-01 +/- 1.04585650e-01 (37.099%)</span>
</span><span id="L-122"><a href="#L-122"><span class="linenos">122</span></a><span class="sd">&gt; t_int 3.29909703e+00 +/- 9.77310102e-01 S = 3.00</span>
</span><span id="L-123"><a href="#L-123"><span class="linenos">123</span></a><span class="sd">&gt; 1000 samples in 1 ensemble:</span>
</span><span id="L-124"><a href="#L-124"><span class="linenos">124</span></a><span class="sd">&gt; · Ensemble &#39;ensemble_name&#39; : 1000 configurations (from 1 to 1000)</span>
</span><span id="L-125"><a href="#L-125"><span class="linenos">125</span></a>
</span><span id="L-126"><a href="#L-126"><span class="linenos">126</span></a><span class="sd">```</span>
</span><span id="L-127"><a href="#L-127"><span class="linenos">127</span></a>
</span><span id="L-128"><a href="#L-128"><span class="linenos">128</span></a><span class="sd">The integrated autocorrelation time $\tau_\mathrm{int}$ and the autocorrelation function $\rho(W)$ can be monitored via the methods `pyerrors.obs.Obs.plot_tauint` and `pyerrors.obs.Obs.plot_rho`.</span>
</span><span id="L-129"><a href="#L-129"><span class="linenos">129</span></a>
</span><span id="L-130"><a href="#L-130"><span class="linenos">130</span></a><span class="sd">If the parameter $S$ is set to zero it is assumed that the dataset does not exhibit any autocorrelation and the window size is chosen to be zero.</span>
</span><span id="L-131"><a href="#L-131"><span class="linenos">131</span></a><span class="sd">In this case the error estimate is identical to the sample standard error.</span>
</span><span id="L-132"><a href="#L-132"><span class="linenos">132</span></a>
</span><span id="L-133"><a href="#L-133"><span class="linenos">133</span></a><span class="sd">### Exponential tails</span>
</span><span id="L-134"><a href="#L-134"><span class="linenos">134</span></a>
</span><span id="L-135"><a href="#L-135"><span class="linenos">135</span></a><span class="sd">Slow modes in the Monte Carlo history can be accounted for by attaching an exponential tail to the autocorrelation function $\rho$ as suggested in [arXiv:1009.5228](https://arxiv.org/abs/1009.5228). The longest autocorrelation time in the history, $\tau_\mathrm{exp}$, can be passed to the `gamma_method` as parameter. In this case the automatic windowing procedure is vacated and the parameter $S$ does not affect the error estimate.</span>
</span><span id="L-136"><a href="#L-136"><span class="linenos">136</span></a>
</span><span id="L-137"><a href="#L-137"><span class="linenos">137</span></a><span class="sd">```python</span>
</span><span id="L-138"><a href="#L-138"><span class="linenos">138</span></a><span class="sd">my_sum.gamma_method(tau_exp=7.2)</span>
</span><span id="L-139"><a href="#L-139"><span class="linenos">139</span></a><span class="sd">my_sum.details()</span>
</span><span id="L-140"><a href="#L-140"><span class="linenos">140</span></a><span class="sd">&gt; Result 1.70000000e+00 +/- 6.28097762e-01 +/- 5.79077524e-02 (36.947%)</span>
</span><span id="L-141"><a href="#L-141"><span class="linenos">141</span></a><span class="sd">&gt; t_int 3.27218667e+00 +/- 7.99583654e-01 tau_exp = 7.20, N_sigma = 1</span>
</span><span id="L-142"><a href="#L-142"><span class="linenos">142</span></a><span class="sd">&gt; 1000 samples in 1 ensemble:</span>
</span><span id="L-143"><a href="#L-143"><span class="linenos">143</span></a><span class="sd">&gt; · Ensemble &#39;ensemble_name&#39; : 1000 configurations (from 1 to 1000)</span>
</span><span id="L-144"><a href="#L-144"><span class="linenos">144</span></a><span class="sd">```</span>
</span><span id="L-145"><a href="#L-145"><span class="linenos">145</span></a>
</span><span id="L-146"><a href="#L-146"><span class="linenos">146</span></a><span class="sd">For the full API see `pyerrors.obs.Obs.gamma_method`.</span>
</span><span id="L-147"><a href="#L-147"><span class="linenos">147</span></a>
</span><span id="L-148"><a href="#L-148"><span class="linenos">148</span></a><span class="sd">## Multiple ensembles/replica</span>
</span><span id="L-149"><a href="#L-149"><span class="linenos">149</span></a>
</span><span id="L-150"><a href="#L-150"><span class="linenos">150</span></a><span class="sd">Error propagation for multiple ensembles (Markov chains with different simulation parameters) is handled automatically. Ensembles are uniquely identified by their `name`.</span>
</span><span id="L-151"><a href="#L-151"><span class="linenos">151</span></a>
</span><span id="L-152"><a href="#L-152"><span class="linenos">152</span></a><span class="sd">```python</span>
</span><span id="L-153"><a href="#L-153"><span class="linenos">153</span></a><span class="sd">obs1 = pe.Obs([samples1], [&#39;ensemble1&#39;])</span>
</span><span id="L-154"><a href="#L-154"><span class="linenos">154</span></a><span class="sd">obs2 = pe.Obs([samples2], [&#39;ensemble2&#39;])</span>
</span><span id="L-155"><a href="#L-155"><span class="linenos">155</span></a>
</span><span id="L-156"><a href="#L-156"><span class="linenos">156</span></a><span class="sd">my_sum = obs1 + obs2</span>
</span><span id="L-157"><a href="#L-157"><span class="linenos">157</span></a><span class="sd">my_sum.details()</span>
</span><span id="L-158"><a href="#L-158"><span class="linenos">158</span></a><span class="sd">&gt; Result 2.00697958e+00</span>
</span><span id="L-159"><a href="#L-159"><span class="linenos">159</span></a><span class="sd">&gt; 1500 samples in 2 ensembles:</span>
</span><span id="L-160"><a href="#L-160"><span class="linenos">160</span></a><span class="sd">&gt; · Ensemble &#39;ensemble1&#39; : 1000 configurations (from 1 to 1000)</span>
</span><span id="L-161"><a href="#L-161"><span class="linenos">161</span></a><span class="sd">&gt; · Ensemble &#39;ensemble2&#39; : 500 configurations (from 1 to 500)</span>
</span><span id="L-162"><a href="#L-162"><span class="linenos">162</span></a><span class="sd">```</span>
</span><span id="L-163"><a href="#L-163"><span class="linenos">163</span></a><span class="sd">Observables from the **same Monte Carlo chain** have to be initialized with the **same name** for correct error propagation. If different names were used in this case the data would be treated as statistically independent resulting in loss of relevant information and a potential over or under estimate of the statistical error.</span>
</span><span id="L-164"><a href="#L-164"><span class="linenos">164</span></a>
</span><span id="L-165"><a href="#L-165"><span class="linenos">165</span></a><span class="sd">`pyerrors` identifies multiple replica (independent Markov chains with identical simulation parameters) by the vertical bar `|` in the name of the data set.</span>
</span><span id="L-166"><a href="#L-166"><span class="linenos">166</span></a>
</span><span id="L-167"><a href="#L-167"><span class="linenos">167</span></a><span class="sd">```python</span>
</span><span id="L-168"><a href="#L-168"><span class="linenos">168</span></a><span class="sd">obs1 = pe.Obs([samples1], [&#39;ensemble1|r01&#39;])</span>
</span><span id="L-169"><a href="#L-169"><span class="linenos">169</span></a><span class="sd">obs2 = pe.Obs([samples2], [&#39;ensemble1|r02&#39;])</span>
</span><span id="L-170"><a href="#L-170"><span class="linenos">170</span></a>
</span><span id="L-171"><a href="#L-171"><span class="linenos">171</span></a><span class="sd">&gt; my_sum = obs1 + obs2</span>
</span><span id="L-172"><a href="#L-172"><span class="linenos">172</span></a><span class="sd">&gt; my_sum.details()</span>
</span><span id="L-173"><a href="#L-173"><span class="linenos">173</span></a><span class="sd">&gt; Result 2.00697958e+00</span>
</span><span id="L-174"><a href="#L-174"><span class="linenos">174</span></a><span class="sd">&gt; 1500 samples in 1 ensemble:</span>
</span><span id="L-175"><a href="#L-175"><span class="linenos">175</span></a><span class="sd">&gt; · Ensemble &#39;ensemble1&#39;</span>
</span><span id="L-176"><a href="#L-176"><span class="linenos">176</span></a><span class="sd">&gt; · Replicum &#39;r01&#39; : 1000 configurations (from 1 to 1000)</span>
</span><span id="L-177"><a href="#L-177"><span class="linenos">177</span></a><span class="sd">&gt; · Replicum &#39;r02&#39; : 500 configurations (from 1 to 500)</span>
</span><span id="L-178"><a href="#L-178"><span class="linenos">178</span></a><span class="sd">```</span>
</span><span id="L-179"><a href="#L-179"><span class="linenos">179</span></a>
</span><span id="L-180"><a href="#L-180"><span class="linenos">180</span></a><span class="sd">### Error estimation for multiple ensembles</span>
</span><span id="L-181"><a href="#L-181"><span class="linenos">181</span></a>
</span><span id="L-182"><a href="#L-182"><span class="linenos">182</span></a><span class="sd">In order to keep track of different error analysis parameters for different ensembles one can make use of global dictionaries as detailed in the following example.</span>
</span><span id="L-183"><a href="#L-183"><span class="linenos">183</span></a>
</span><span id="L-184"><a href="#L-184"><span class="linenos">184</span></a><span class="sd">```python</span>
</span><span id="L-185"><a href="#L-185"><span class="linenos">185</span></a><span class="sd">pe.Obs.S_dict[&#39;ensemble1&#39;] = 2.5</span>
</span><span id="L-186"><a href="#L-186"><span class="linenos">186</span></a><span class="sd">pe.Obs.tau_exp_dict[&#39;ensemble2&#39;] = 8.0</span>
</span><span id="L-187"><a href="#L-187"><span class="linenos">187</span></a><span class="sd">pe.Obs.tau_exp_dict[&#39;ensemble3&#39;] = 2.0</span>
</span><span id="L-188"><a href="#L-188"><span class="linenos">188</span></a><span class="sd">```</span>
</span><span id="L-189"><a href="#L-189"><span class="linenos">189</span></a>
</span><span id="L-190"><a href="#L-190"><span class="linenos">190</span></a><span class="sd">In case the `gamma_method` is called without any parameters it will use the values specified in the dictionaries for the respective ensembles.</span>
</span><span id="L-191"><a href="#L-191"><span class="linenos">191</span></a><span class="sd">Passing arguments to the `gamma_method` still dominates over the dictionaries.</span>
</span><span id="L-192"><a href="#L-192"><span class="linenos">192</span></a>
</span><span id="L-165"><a href="#L-165"><span class="linenos">165</span></a>
</span><span id="L-166"><a href="#L-166"><span class="linenos">166</span></a><span class="sd">`pyerrors` identifies multiple replica (independent Markov chains with identical simulation parameters) by the vertical bar `|` in the name of the data set.</span>
</span><span id="L-167"><a href="#L-167"><span class="linenos">167</span></a>
</span><span id="L-168"><a href="#L-168"><span class="linenos">168</span></a><span class="sd">```python</span>
</span><span id="L-169"><a href="#L-169"><span class="linenos">169</span></a><span class="sd">obs1 = pe.Obs([samples1], [&#39;ensemble1|r01&#39;])</span>
</span><span id="L-170"><a href="#L-170"><span class="linenos">170</span></a><span class="sd">obs2 = pe.Obs([samples2], [&#39;ensemble1|r02&#39;])</span>
</span><span id="L-171"><a href="#L-171"><span class="linenos">171</span></a>
</span><span id="L-172"><a href="#L-172"><span class="linenos">172</span></a><span class="sd">&gt; my_sum = obs1 + obs2</span>
</span><span id="L-173"><a href="#L-173"><span class="linenos">173</span></a><span class="sd">&gt; my_sum.details()</span>
</span><span id="L-174"><a href="#L-174"><span class="linenos">174</span></a><span class="sd">&gt; Result 2.00697958e+00</span>
</span><span id="L-175"><a href="#L-175"><span class="linenos">175</span></a><span class="sd">&gt; 1500 samples in 1 ensemble:</span>
</span><span id="L-176"><a href="#L-176"><span class="linenos">176</span></a><span class="sd">&gt; · Ensemble &#39;ensemble1&#39;</span>
</span><span id="L-177"><a href="#L-177"><span class="linenos">177</span></a><span class="sd">&gt; · Replicum &#39;r01&#39; : 1000 configurations (from 1 to 1000)</span>
</span><span id="L-178"><a href="#L-178"><span class="linenos">178</span></a><span class="sd">&gt; · Replicum &#39;r02&#39; : 500 configurations (from 1 to 500)</span>
</span><span id="L-179"><a href="#L-179"><span class="linenos">179</span></a><span class="sd">```</span>
</span><span id="L-180"><a href="#L-180"><span class="linenos">180</span></a>
</span><span id="L-181"><a href="#L-181"><span class="linenos">181</span></a><span class="sd">### Error estimation for multiple ensembles</span>
</span><span id="L-182"><a href="#L-182"><span class="linenos">182</span></a>
</span><span id="L-183"><a href="#L-183"><span class="linenos">183</span></a><span class="sd">In order to keep track of different error analysis parameters for different ensembles one can make use of global dictionaries as detailed in the following example.</span>
</span><span id="L-184"><a href="#L-184"><span class="linenos">184</span></a>
</span><span id="L-185"><a href="#L-185"><span class="linenos">185</span></a><span class="sd">```python</span>
</span><span id="L-186"><a href="#L-186"><span class="linenos">186</span></a><span class="sd">pe.Obs.S_dict[&#39;ensemble1&#39;] = 2.5</span>
</span><span id="L-187"><a href="#L-187"><span class="linenos">187</span></a><span class="sd">pe.Obs.tau_exp_dict[&#39;ensemble2&#39;] = 8.0</span>
</span><span id="L-188"><a href="#L-188"><span class="linenos">188</span></a><span class="sd">pe.Obs.tau_exp_dict[&#39;ensemble3&#39;] = 2.0</span>
</span><span id="L-189"><a href="#L-189"><span class="linenos">189</span></a><span class="sd">```</span>
</span><span id="L-190"><a href="#L-190"><span class="linenos">190</span></a>
</span><span id="L-191"><a href="#L-191"><span class="linenos">191</span></a><span class="sd">In case the `gamma_method` is called without any parameters it will use the values specified in the dictionaries for the respective ensembles.</span>
</span><span id="L-192"><a href="#L-192"><span class="linenos">192</span></a><span class="sd">Arguments passed explicitly to the `gamma_method` still take precedence over the dictionaries.</span>
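The resulting lookup order can be sketched in plain Python (an illustration of the precedence rules only, not the pyerrors source; `resolve_S` is a hypothetical helper and 2.0 is the documented default value of `S`):

```python
# Illustration of the precedence rules for the S parameter:
# explicit argument > per-ensemble dictionary > global default.
# resolve_S is a hypothetical helper, not part of the pyerrors API.
S_dict = {'ensemble1': 2.5}

def resolve_S(ensemble, S=None, default=2.0):
    if S is not None:                         # an explicit argument always wins
        return S
    return S_dict.get(ensemble, default)      # then the dictionary, then the default

print(resolve_S('ensemble1'))                 # 2.5 (from the dictionary)
print(resolve_S('ensemble1', S=1.5))          # 1.5 (explicit argument)
print(resolve_S('ensemble2'))                 # 2.0 (global default)
```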
</span><span id="L-193"><a href="#L-193"><span class="linenos">193</span></a>
</span><span id="L-194"><a href="#L-194"><span class="linenos">194</span></a><span class="sd">## Irregular Monte Carlo chains</span>
</span><span id="L-195"><a href="#L-195"><span class="linenos">195</span></a>
</span><span id="L-196"><a href="#L-196"><span class="linenos">196</span></a><span class="sd">`Obs` objects defined on irregular Monte Carlo chains can be initialized with the parameter `idl`.</span>
</span><span id="L-197"><a href="#L-197"><span class="linenos">197</span></a>
</span><span id="L-198"><a href="#L-198"><span class="linenos">198</span></a><span class="sd">```python</span>
</span><span id="L-199"><a href="#L-199"><span class="linenos">199</span></a><span class="sd"># Observable defined on configurations 20 to 519</span>
</span><span id="L-200"><a href="#L-200"><span class="linenos">200</span></a><span class="sd">obs1 = pe.Obs([samples1], [&#39;ensemble1&#39;], idl=[range(20, 520)])</span>
</span><span id="L-201"><a href="#L-201"><span class="linenos">201</span></a><span class="sd">obs1.details()</span>
</span><span id="L-202"><a href="#L-202"><span class="linenos">202</span></a><span class="sd">&gt; Result 9.98319881e-01</span>
</span><span id="L-203"><a href="#L-203"><span class="linenos">203</span></a><span class="sd">&gt; 500 samples in 1 ensemble:</span>
</span><span id="L-204"><a href="#L-204"><span class="linenos">204</span></a><span class="sd">&gt; · Ensemble &#39;ensemble1&#39; : 500 configurations (from 20 to 519)</span>
</span><span id="L-205"><a href="#L-205"><span class="linenos">205</span></a>
</span><span id="L-206"><a href="#L-206"><span class="linenos">206</span></a><span class="sd"># Observable defined on every second configuration between 5 and 1003</span>
</span><span id="L-207"><a href="#L-207"><span class="linenos">207</span></a><span class="sd">obs2 = pe.Obs([samples2], [&#39;ensemble1&#39;], idl=[range(5, 1005, 2)])</span>
</span><span id="L-208"><a href="#L-208"><span class="linenos">208</span></a><span class="sd">obs2.details()</span>
</span><span id="L-209"><a href="#L-209"><span class="linenos">209</span></a><span class="sd">&gt; Result 9.99100712e-01</span>
</span><span id="L-210"><a href="#L-210"><span class="linenos">210</span></a><span class="sd">&gt; 500 samples in 1 ensemble:</span>
</span><span id="L-211"><a href="#L-211"><span class="linenos">211</span></a><span class="sd">&gt; · Ensemble &#39;ensemble1&#39; : 500 configurations (from 5 to 1003 in steps of 2)</span>
</span><span id="L-212"><a href="#L-212"><span class="linenos">212</span></a>
</span><span id="L-213"><a href="#L-213"><span class="linenos">213</span></a><span class="sd"># Observable defined on configurations 2, 9, 28, 29 and 501</span>
</span><span id="L-214"><a href="#L-214"><span class="linenos">214</span></a><span class="sd">obs3 = pe.Obs([samples3], [&#39;ensemble1&#39;], idl=[[2, 9, 28, 29, 501]])</span>
</span><span id="L-215"><a href="#L-215"><span class="linenos">215</span></a><span class="sd">obs3.details()</span>
</span><span id="L-216"><a href="#L-216"><span class="linenos">216</span></a><span class="sd">&gt; Result 1.01718064e+00</span>
</span><span id="L-217"><a href="#L-217"><span class="linenos">217</span></a><span class="sd">&gt; 5 samples in 1 ensemble:</span>
</span><span id="L-218"><a href="#L-218"><span class="linenos">218</span></a><span class="sd">&gt; · Ensemble &#39;ensemble1&#39; : 5 configurations (irregular range)</span>
</span><span id="L-219"><a href="#L-219"><span class="linenos">219</span></a>
</span><span id="L-220"><a href="#L-220"><span class="linenos">220</span></a><span class="sd">```</span>
</span><span id="L-221"><a href="#L-221"><span class="linenos">221</span></a>
</span><span id="L-222"><a href="#L-222"><span class="linenos">222</span></a><span class="sd">`Obs` objects defined on regular and irregular histories of the same ensemble can be combined with each other; the correct error propagation and estimation is automatically taken care of.</span>
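Conceptually, combining two series measured on different configuration subsets requires aligning them on configuration indices first. A plain-Python sketch of such an alignment on the common configurations (illustrative toy data; pyerrors performs this bookkeeping internally and more generally):

```python
# Align two toy measurement series defined on different configuration
# lists before combining them configuration by configuration.
idl_a = list(range(1, 21))        # configurations 1..20
idl_b = list(range(1, 21, 2))     # every second configuration 1, 3, ..., 19
samples_a = [0.1 * i for i in idl_a]
samples_b = [0.2 * i for i in idl_b]

common = sorted(set(idl_a) & set(idl_b))
a_common = [samples_a[idl_a.index(c)] for c in common]
b_common = [samples_b[idl_b.index(c)] for c in common]
diffs = [a - b for a, b in zip(a_common, b_common)]
print(len(diffs))  # 10
```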
</span><span id="L-223"><a href="#L-223"><span class="linenos">223</span></a>
</span><span id="L-224"><a href="#L-224"><span class="linenos">224</span></a><span class="sd">**Warning:** Irregular Monte Carlo chains can result in odd patterns in the autocorrelation functions.</span>
</span><span id="L-225"><a href="#L-225"><span class="linenos">225</span></a><span class="sd">Make sure to check the autocorrelation time with e.g. `pyerrors.obs.Obs.plot_rho` or `pyerrors.obs.Obs.plot_tauint`.</span>
</span><span id="L-226"><a href="#L-226"><span class="linenos">226</span></a>
</span><span id="L-227"><a href="#L-227"><span class="linenos">227</span></a><span class="sd">For the full API see `pyerrors.obs.Obs`.</span>
</span><span id="L-228"><a href="#L-228"><span class="linenos">228</span></a>
</span><span id="L-229"><a href="#L-229"><span class="linenos">229</span></a><span class="sd"># Correlators</span>
</span><span id="L-230"><a href="#L-230"><span class="linenos">230</span></a><span class="sd">When one is not interested in single observables but correlation functions, `pyerrors` offers the `Corr` class which simplifies the corresponding error propagation and provides the user with a set of standard methods. In order to initialize a `Corr` object one needs to arrange the data as a list of `Obs`</span>
</span><span id="L-231"><a href="#L-231"><span class="linenos">231</span></a><span class="sd">```python</span>
</span><span id="L-232"><a href="#L-232"><span class="linenos">232</span></a><span class="sd">my_corr = pe.Corr([obs_0, obs_1, obs_2, obs_3])</span>
</span><span id="L-233"><a href="#L-233"><span class="linenos">233</span></a><span class="sd">print(my_corr)</span>
</span><span id="L-234"><a href="#L-234"><span class="linenos">234</span></a><span class="sd">&gt; x0/a Corr(x0/a)</span>
</span><span id="L-235"><a href="#L-235"><span class="linenos">235</span></a><span class="sd">&gt; ------------------</span>
</span><span id="L-236"><a href="#L-236"><span class="linenos">236</span></a><span class="sd">&gt; 0 0.7957(80)</span>
</span><span id="L-237"><a href="#L-237"><span class="linenos">237</span></a><span class="sd">&gt; 1 0.5156(51)</span>
</span><span id="L-238"><a href="#L-238"><span class="linenos">238</span></a><span class="sd">&gt; 2 0.3227(33)</span>
</span><span id="L-239"><a href="#L-239"><span class="linenos">239</span></a><span class="sd">&gt; 3 0.2041(21)</span>
</span><span id="L-240"><a href="#L-240"><span class="linenos">240</span></a><span class="sd">```</span>
</span><span id="L-241"><a href="#L-241"><span class="linenos">241</span></a><span class="sd">In case the correlation functions are not defined on the outermost timeslices, for example because of fixed boundary conditions, a padding can be introduced.</span>
</span><span id="L-242"><a href="#L-242"><span class="linenos">242</span></a><span class="sd">```python</span>
</span><span id="L-243"><a href="#L-243"><span class="linenos">243</span></a><span class="sd">my_corr = pe.Corr([obs_0, obs_1, obs_2, obs_3], padding=[1, 1])</span>
</span><span id="L-244"><a href="#L-244"><span class="linenos">244</span></a><span class="sd">print(my_corr)</span>
</span><span id="L-245"><a href="#L-245"><span class="linenos">245</span></a><span class="sd">&gt; x0/a Corr(x0/a)</span>
</span><span id="L-246"><a href="#L-246"><span class="linenos">246</span></a><span class="sd">&gt; ------------------</span>
</span><span id="L-247"><a href="#L-247"><span class="linenos">247</span></a><span class="sd">&gt; 0</span>
</span><span id="L-248"><a href="#L-248"><span class="linenos">248</span></a><span class="sd">&gt; 1 0.7957(80)</span>
</span><span id="L-249"><a href="#L-249"><span class="linenos">249</span></a><span class="sd">&gt; 2 0.5156(51)</span>
</span><span id="L-250"><a href="#L-250"><span class="linenos">250</span></a><span class="sd">&gt; 3 0.3227(33)</span>
</span><span id="L-251"><a href="#L-251"><span class="linenos">251</span></a><span class="sd">&gt; 4 0.2041(21)</span>
</span><span id="L-252"><a href="#L-252"><span class="linenos">252</span></a><span class="sd">&gt; 5</span>
</span><span id="L-253"><a href="#L-253"><span class="linenos">253</span></a><span class="sd">```</span>
</span><span id="L-254"><a href="#L-254"><span class="linenos">254</span></a><span class="sd">The individual entries of a correlator can be accessed via slicing</span>
</span><span id="L-255"><a href="#L-255"><span class="linenos">255</span></a><span class="sd">```python</span>
</span><span id="L-256"><a href="#L-256"><span class="linenos">256</span></a><span class="sd">print(my_corr[3])</span>
</span><span id="L-257"><a href="#L-257"><span class="linenos">257</span></a><span class="sd">&gt; 0.3227(33)</span>
</span><span id="L-258"><a href="#L-258"><span class="linenos">258</span></a><span class="sd">```</span>
</span><span id="L-259"><a href="#L-259"><span class="linenos">259</span></a><span class="sd">Error propagation with the `Corr` class works very similarly to that of `Obs` objects. Mathematical operations are overloaded and `Corr` objects can be combined with other `Corr` objects, `Obs` objects or real numbers and integers.</span>
</span><span id="L-260"><a href="#L-260"><span class="linenos">260</span></a><span class="sd">```python</span>
</span><span id="L-261"><a href="#L-261"><span class="linenos">261</span></a><span class="sd">my_new_corr = 0.3 * my_corr[2] * my_corr * my_corr + 12 / my_corr</span>
</span><span id="L-262"><a href="#L-262"><span class="linenos">262</span></a><span class="sd">```</span>
</span><span id="L-263"><a href="#L-263"><span class="linenos">263</span></a>
</span><span id="L-264"><a href="#L-264"><span class="linenos">264</span></a><span class="sd">`pyerrors` provides the user with a set of regularly used methods for the manipulation of correlator objects:</span>
</span><span id="L-265"><a href="#L-265"><span class="linenos">265</span></a><span class="sd">- `Corr.gamma_method` applies the gamma method to all entries of the correlator.</span>
</span><span id="L-266"><a href="#L-266"><span class="linenos">266</span></a><span class="sd">- `Corr.m_eff` to construct effective masses. Various variants for periodic and fixed temporal boundary conditions are available.</span>
</span><span id="L-267"><a href="#L-267"><span class="linenos">267</span></a><span class="sd">- `Corr.deriv` returns the first derivative of the correlator as `Corr`. Different discretizations of the numerical derivative are available.</span>
</span><span id="L-268"><a href="#L-268"><span class="linenos">268</span></a><span class="sd">- `Corr.second_deriv` returns the second derivative of the correlator as `Corr`. Different discretizations of the numerical derivative are available.</span>
</span><span id="L-269"><a href="#L-269"><span class="linenos">269</span></a><span class="sd">- `Corr.symmetric` symmetrizes parity-even correlation functions, assuming periodic boundary conditions.</span>
</span><span id="L-270"><a href="#L-270"><span class="linenos">270</span></a><span class="sd">- `Corr.anti_symmetric` anti-symmetrizes parity-odd correlation functions, assuming periodic boundary conditions.</span>
</span><span id="L-271"><a href="#L-271"><span class="linenos">271</span></a><span class="sd">- `Corr.T_symmetry` averages a correlator with its time symmetry partner, assuming fixed boundary conditions.</span>
</span><span id="L-272"><a href="#L-272"><span class="linenos">272</span></a><span class="sd">- `Corr.plateau` extracts a plateau value from the correlator in a given range.</span>
</span><span id="L-273"><a href="#L-273"><span class="linenos">273</span></a><span class="sd">- `Corr.roll` periodically shifts the correlator.</span>
</span><span id="L-274"><a href="#L-274"><span class="linenos">274</span></a><span class="sd">- `Corr.reverse` reverses the time ordering of the correlator.</span>
</span><span id="L-275"><a href="#L-275"><span class="linenos">275</span></a><span class="sd">- `Corr.correlate` constructs a disconnected correlation function from the correlator and another `Corr` or `Obs` object.</span>
</span><span id="L-276"><a href="#L-276"><span class="linenos">276</span></a><span class="sd">- `Corr.reweight` reweights the correlator.</span>
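As an illustration of the effective-mass construction, the simplest (log) estimator $m_\mathrm{eff}(x_0) = \log(C(x_0)/C(x_0+1))$ can be sketched in plain Python using the correlator values printed in the example above (a conceptual sketch only; `Corr.m_eff` provides several refined variants and full error propagation):

```python
import math

# Naive effective mass m_eff(t) = log(C(t) / C(t+1)) from the
# correlator central values shown in the example output above.
corr = [0.7957, 0.5156, 0.3227, 0.2041]

m_eff = [math.log(corr[t] / corr[t + 1]) for t in range(len(corr) - 1)]
for t, m in enumerate(m_eff):
    print(t, round(m, 4))
```

For a correlator that decays like a single exponential, all entries of `m_eff` would agree within errors.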
</span><span id="L-277"><a href="#L-277"><span class="linenos">277</span></a>
</span><span id="L-278"><a href="#L-278"><span class="linenos">278</span></a><span class="sd">`pyerrors` can also handle matrices of correlation functions and extract energy states from these matrices via a generalized eigenvalue problem (see `pyerrors.correlators.Corr.GEVP`).</span>
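The generalized eigenvalue problem underlying `Corr.GEVP` can be sketched in plain numpy (a conceptual illustration with made-up matrix entries, not the pyerrors implementation):

```python
import numpy as np

# Toy 2x2 correlator matrices; solving C(t) v = lambda C(t0) v
# separates the two states. All matrix entries are made-up numbers.
C_t0 = np.array([[1.0, 0.3],
                 [0.3, 0.8]])
C_t = np.array([[0.50, 0.10],
                [0.10, 0.30]])

# Reduce to an ordinary eigenvalue problem (adequate for this toy case)
evals = np.linalg.eigvals(np.linalg.inv(C_t0) @ C_t)
evals = np.sort(evals.real)[::-1]
energies = -np.log(evals)  # effective energies for t - t0 = 1
print(energies)
```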
</span><span id="L-279"><a href="#L-279"><span class="linenos">279</span></a>
</span><span id="L-280"><a href="#L-280"><span class="linenos">280</span></a><span class="sd">For the full API see `pyerrors.correlators.Corr`.</span>
</span><span id="L-281"><a href="#L-281"><span class="linenos">281</span></a>
</span><span id="L-282"><a href="#L-282"><span class="linenos">282</span></a><span class="sd"># Complex valued observables</span>
</span><span id="L-283"><a href="#L-283"><span class="linenos">283</span></a>
</span><span id="L-284"><a href="#L-284"><span class="linenos">284</span></a><span class="sd">`pyerrors` can handle complex valued observables via the class `pyerrors.obs.CObs`.</span>
</span><span id="L-285"><a href="#L-285"><span class="linenos">285</span></a><span class="sd">`CObs` are initialized with a real and an imaginary part which both can be `Obs` valued.</span>
</span><span id="L-286"><a href="#L-286"><span class="linenos">286</span></a>
</span><span id="L-287"><a href="#L-287"><span class="linenos">287</span></a><span class="sd">```python</span>
</span><span id="L-288"><a href="#L-288"><span class="linenos">288</span></a><span class="sd">my_real_part = pe.Obs([samples1], [&#39;ensemble1&#39;])</span>
</span><span id="L-289"><a href="#L-289"><span class="linenos">289</span></a><span class="sd">my_imag_part = pe.Obs([samples2], [&#39;ensemble1&#39;])</span>
</span><span id="L-290"><a href="#L-290"><span class="linenos">290</span></a>
</span><span id="L-291"><a href="#L-291"><span class="linenos">291</span></a><span class="sd">my_cobs = pe.CObs(my_real_part, my_imag_part)</span>
</span><span id="L-292"><a href="#L-292"><span class="linenos">292</span></a><span class="sd">my_cobs.gamma_method()</span>
</span><span id="L-293"><a href="#L-293"><span class="linenos">293</span></a><span class="sd">print(my_cobs)</span>
</span><span id="L-294"><a href="#L-294"><span class="linenos">294</span></a><span class="sd">&gt; (0.9959(91)+0.659(28)j)</span>
</span><span id="L-295"><a href="#L-295"><span class="linenos">295</span></a><span class="sd">```</span>
</span><span id="L-296"><a href="#L-296"><span class="linenos">296</span></a>
</span><span id="L-297"><a href="#L-297"><span class="linenos">297</span></a><span class="sd">Elementary mathematical operations are overloaded and samples are properly propagated as for the `Obs` class.</span>
</span><span id="L-298"><a href="#L-298"><span class="linenos">298</span></a><span class="sd">```python</span>
</span><span id="L-299"><a href="#L-299"><span class="linenos">299</span></a><span class="sd">my_derived_cobs = (my_cobs + my_cobs.conjugate()) / np.abs(my_cobs)</span>
</span><span id="L-300"><a href="#L-300"><span class="linenos">300</span></a><span class="sd">my_derived_cobs.gamma_method()</span>
</span><span id="L-301"><a href="#L-301"><span class="linenos">301</span></a><span class="sd">print(my_derived_cobs)</span>
</span><span id="L-302"><a href="#L-302"><span class="linenos">302</span></a><span class="sd">&gt; (1.668(23)+0.0j)</span>
</span><span id="L-303"><a href="#L-303"><span class="linenos">303</span></a><span class="sd">```</span>
</span><span id="L-304"><a href="#L-304"><span class="linenos">304</span></a>
</span><span id="L-305"><a href="#L-305"><span class="linenos">305</span></a><span class="sd"># The `Covobs` class</span>
</span><span id="L-306"><a href="#L-306"><span class="linenos">306</span></a><span class="sd">In many projects, auxiliary data that is not based on Monte Carlo chains enters the analysis. Examples are experimentally determined meson masses which are used to set the scale, or renormalization constants. These numbers come with an error that has to be propagated through the analysis. The `Covobs` class allows one to define such quantities in `pyerrors`. Furthermore, external input might consist of correlated quantities. An example is the set of parameters of an interpolation formula, which are defined via mean values and a covariance matrix between all parameters. The contribution of the interpolation formula to the error of a derived quantity therefore might depend on the complete covariance matrix.</span>
</span><span id="L-307"><a href="#L-307"><span class="linenos">307</span></a>
</span><span id="L-308"><a href="#L-308"><span class="linenos">308</span></a><span class="sd">This concept is built into the definition of `Covobs`. In `pyerrors`, external input is defined by $M$ mean values, an $M\times M$ covariance matrix, where $M=1$ is permissible, and a name that uniquely identifies the covariance matrix. Below, we define the pion mass, based on its mean value and error, 134.9768(5). Note that the square of the error enters `cov_Obs`, since the second argument of this function is the covariance matrix of the `Covobs`.</span>
</span><span id="L-309"><a href="#L-309"><span class="linenos">309</span></a>
</span><span id="L-310"><a href="#L-310"><span class="linenos">310</span></a><span class="sd">```python</span>
</span><span id="L-311"><a href="#L-311"><span class="linenos">311</span></a><span class="sd">import pyerrors.obs as pe</span>
</span><span id="L-312"><a href="#L-312"><span class="linenos">312</span></a>
</span><span id="L-313"><a href="#L-313"><span class="linenos">313</span></a><span class="sd">mpi = pe.cov_Obs(134.9768, 0.0005**2, &#39;pi^0 mass&#39;)</span>
</span><span id="L-314"><a href="#L-314"><span class="linenos">314</span></a><span class="sd">mpi.gamma_method()</span>
</span><span id="L-315"><a href="#L-315"><span class="linenos">315</span></a><span class="sd">mpi.details()</span>
</span><span id="L-316"><a href="#L-316"><span class="linenos">316</span></a><span class="sd">&gt; Result 1.34976800e+02 +/- 5.00000000e-04 +/- 0.00000000e+00 (0.000%)</span>
</span><span id="L-317"><a href="#L-317"><span class="linenos">317</span></a><span class="sd">&gt; pi^0 mass 5.00000000e-04</span>
</span><span id="L-318"><a href="#L-318"><span class="linenos">318</span></a><span class="sd">&gt; 0 samples in 1 ensemble:</span>
</span><span id="L-319"><a href="#L-319"><span class="linenos">319</span></a><span class="sd">&gt; · Covobs &#39;pi^0 mass&#39;</span>
</span><span id="L-320"><a href="#L-320"><span class="linenos">320</span></a><span class="sd">```</span>
</span><span id="L-321"><a href="#L-321"><span class="linenos">321</span></a><span class="sd">The resulting object `mpi` is an `Obs` that contains a `Covobs`. In the following, it may be handled as any other `Obs`. The contribution of the covariance matrix to the error of an `Obs` is determined from the $M \times M$ covariance matrix $\Sigma$ and the gradient of the `Obs` with respect to the external quantities, which is the $1\times M$ Jacobian matrix $J$, via</span>
</span><span id="L-322"><a href="#L-322"><span class="linenos">322</span></a><span class="sd">$$s = \sqrt{J^T \Sigma J}\,,$$</span>
</span><span id="L-323"><a href="#L-323"><span class="linenos">323</span></a><span class="sd">where the Jacobian is computed for each derived quantity via automatic differentiation.</span>
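Numerically, this contraction is a straightforward matrix product, which can be sketched with numpy using the covariance matrix from the `R_AP` example below and a made-up Jacobian (illustrative only; pyerrors computes $J$ automatically):

```python
import numpy as np

# Error contribution s = sqrt(J^T Sigma J) of external, correlated input.
# Sigma is the covariance matrix from the R_AP example in this section;
# the Jacobian J is a made-up gradient of some derived quantity.
Sigma = np.array([[3.49591, -6.07560],
                  [-6.07560, 10.5834]])
J = np.array([0.5, 0.2])

s = np.sqrt(J @ Sigma @ J)
print(s)
```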
</span><span id="L-324"><a href="#L-324"><span class="linenos">324</span></a>
</span><span id="L-325"><a href="#L-325"><span class="linenos">325</span></a><span class="sd">Correlated auxiliary data is defined similarly to above, e.g., via</span>
</span><span id="L-326"><a href="#L-326"><span class="linenos">326</span></a><span class="sd">```python</span>
</span><span id="L-327"><a href="#L-327"><span class="linenos">327</span></a><span class="sd">RAP = pe.cov_Obs([16.7457, -19.0475], [[3.49591, -6.07560], [-6.07560, 10.5834]], &#39;R_AP, 1906.03445, (5.3a)&#39;)</span>
</span><span id="L-328"><a href="#L-328"><span class="linenos">328</span></a><span class="sd">print(RAP)</span>
</span><span id="L-329"><a href="#L-329"><span class="linenos">329</span></a><span class="sd">&gt; [Obs[16.7(1.9)], Obs[-19.0(3.3)]]</span>
</span><span id="L-330"><a href="#L-330"><span class="linenos">330</span></a><span class="sd">```</span>
</span><span id="L-331"><a href="#L-331"><span class="linenos">331</span></a><span class="sd">where `RAP` now is a list of two `Obs` that contains the two correlated parameters.</span>
</span><span id="L-332"><a href="#L-332"><span class="linenos">332</span></a>
</span><span id="L-333"><a href="#L-333"><span class="linenos">333</span></a><span class="sd">Since the gradient of a derived observable with respect to an external covariance matrix is propagated through the entire analysis, the `Covobs` class allows one to quote the derivative of a result with respect to the external quantities. If these derivatives are published together with the result, small shifts in the definition of external quantities, e.g., the definition of the physical point, can be performed a posteriori based on the published information. This may help to compare results from different groups. The gradient of an `Obs` `o` with respect to a covariance matrix with the identifying string `k` may be accessed via</span>
</span><span id="L-334"><a href="#L-334"><span class="linenos">334</span></a><span class="sd">```python</span>
</span><span id="L-335"><a href="#L-335"><span class="linenos">335</span></a><span class="sd">o.covobs[k].grad</span>
</span><span id="L-336"><a href="#L-336"><span class="linenos">336</span></a><span class="sd">```</span>
</span><span id="L-337"><a href="#L-337"><span class="linenos">337</span></a>
</span><span id="L-338"><a href="#L-338"><span class="linenos">338</span></a><span class="sd"># Error propagation in iterative algorithms</span>
</span><span id="L-339"><a href="#L-339"><span class="linenos">339</span></a>
</span><span id="L-340"><a href="#L-340"><span class="linenos">340</span></a><span class="sd">`pyerrors` supports exact linear error propagation for iterative algorithms like various variants of non-linear least squares fits or root finding. The derivatives required for the error propagation are calculated as described in [arXiv:1809.01289](https://arxiv.org/abs/1809.01289).</span>
</span><span id="L-341"><a href="#L-341"><span class="linenos">341</span></a>
</span><span id="L-342"><a href="#L-342"><span class="linenos">342</span></a><span class="sd">## Least squares fits</span>
</span><span id="L-343"><a href="#L-343"><span class="linenos">343</span></a>
</span><span id="L-344"><a href="#L-344"><span class="linenos">344</span></a><span class="sd">Standard non-linear least-squares fits with errors on the dependent but not the independent variables can be performed with `pyerrors.fits.least_squares`. By default, the Levenberg-Marquardt algorithm implemented in [scipy](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.least_squares.html) is used as solver.</span>
</span><span id="L-345"><a href="#L-345"><span class="linenos">345</span></a>
</span><span id="L-346"><a href="#L-346"><span class="linenos">346</span></a><span class="sd">Fit functions have to be of the following form</span>
</span><span id="L-347"><a href="#L-347"><span class="linenos">347</span></a><span class="sd">```python</span>
</span><span id="L-348"><a href="#L-348"><span class="linenos">348</span></a><span class="sd">import autograd.numpy as anp</span>
</span><span id="L-349"><a href="#L-349"><span class="linenos">349</span></a>
</span><span id="L-350"><a href="#L-350"><span class="linenos">350</span></a><span class="sd">def func(a, x):</span>
</span><span id="L-351"><a href="#L-351"><span class="linenos">351</span></a><span class="sd"> return a[1] * anp.exp(-a[0] * x)</span>
</span><span id="L-352"><a href="#L-352"><span class="linenos">352</span></a><span class="sd">```</span>
</span><span id="L-353"><a href="#L-353"><span class="linenos">353</span></a><span class="sd">**It is important that numerical functions refer to `autograd.numpy` instead of `numpy` for the automatic differentiation in iterative algorithms to work properly.**</span>
</span><span id="L-354"><a href="#L-354"><span class="linenos">354</span></a>
</span><span id="L-355"><a href="#L-355"><span class="linenos">355</span></a><span class="sd">Fits can then be performed via</span>
</span><span id="L-356"><a href="#L-356"><span class="linenos">356</span></a><span class="sd">```python</span>
</span><span id="L-357"><a href="#L-357"><span class="linenos">357</span></a><span class="sd">fit_result = pe.fits.least_squares(x, y, func)</span>
</span><span id="L-358"><a href="#L-358"><span class="linenos">358</span></a><span class="sd">print(&quot;\n&quot;, fit_result)</span>
</span><span id="L-359"><a href="#L-359"><span class="linenos">359</span></a><span class="sd">&gt; Fit with 2 parameters</span>
</span><span id="L-360"><a href="#L-360"><span class="linenos">360</span></a><span class="sd">&gt; Method: Levenberg-Marquardt</span>
</span><span id="L-361"><a href="#L-361"><span class="linenos">361</span></a><span class="sd">&gt; `ftol` termination condition is satisfied.</span>
</span><span id="L-362"><a href="#L-362"><span class="linenos">362</span></a><span class="sd">&gt; chisquare/d.o.f.: 0.9593035785160936</span>
</span><span id="L-363"><a href="#L-363"><span class="linenos">363</span></a>
</span><span id="L-364"><a href="#L-364"><span class="linenos">364</span></a><span class="sd">&gt; Goodness of fit:</span>
</span><span id="L-365"><a href="#L-365"><span class="linenos">365</span></a><span class="sd">&gt; χ²/d.o.f. = 0.959304</span>
</span><span id="L-366"><a href="#L-366"><span class="linenos">366</span></a><span class="sd">&gt; p-value = 0.5673</span>
</span><span id="L-367"><a href="#L-367"><span class="linenos">367</span></a><span class="sd">&gt; Fit parameters:</span>
</span><span id="L-368"><a href="#L-368"><span class="linenos">368</span></a><span class="sd">&gt; 0 0.0548(28)</span>
</span><span id="L-369"><a href="#L-369"><span class="linenos">369</span></a><span class="sd">&gt; 1 1.933(64)</span>
</span><span id="L-370"><a href="#L-370"><span class="linenos">370</span></a><span class="sd">```</span>
</span><span id="L-371"><a href="#L-371"><span class="linenos">371</span></a><span class="sd">where `x` is a `list` or `numpy.array` of `floats` and `y` is a `list` or `numpy.array` of `Obs`.</span>
</span><span id="L-372"><a href="#L-372"><span class="linenos">372</span></a>
</span><span id="L-373"><a href="#L-373"><span class="linenos">373</span></a><span class="sd">Data stored in `Corr` objects can be fitted directly using the `Corr.fit` method.</span>
</span><span id="L-374"><a href="#L-374"><span class="linenos">374</span></a><span class="sd">```python</span>
</span><span id="L-375"><a href="#L-375"><span class="linenos">375</span></a><span class="sd">my_corr = pe.Corr(y)</span>
</span><span id="L-376"><a href="#L-376"><span class="linenos">376</span></a><span class="sd">fit_result = my_corr.fit(func, fitrange=[12, 25])</span>
</span><span id="L-377"><a href="#L-377"><span class="linenos">377</span></a><span class="sd">```</span>
</span><span id="L-378"><a href="#L-378"><span class="linenos">378</span></a><span class="sd">This can simplify working with absolute fit ranges and automatically takes care of gaps in the data.</span>
</span><span id="L-379"><a href="#L-379"><span class="linenos">379</span></a>
</span><span id="L-380"><a href="#L-380"><span class="linenos">380</span></a><span class="sd">For fit functions with multiple independent variables the fit function can be of the form</span>
</span><span id="L-381"><a href="#L-381"><span class="linenos">381</span></a>
</span><span id="L-382"><a href="#L-382"><span class="linenos">382</span></a><span class="sd">```python</span>
</span><span id="L-383"><a href="#L-383"><span class="linenos">383</span></a><span class="sd">def func(a, x):</span>
</span><span id="L-384"><a href="#L-384"><span class="linenos">384</span></a><span class="sd"> (x1, x2) = x</span>
</span><span id="L-385"><a href="#L-385"><span class="linenos">385</span></a><span class="sd"> return a[0] * x1 ** 2 + a[1] * x2</span>
</span><span id="L-386"><a href="#L-386"><span class="linenos">386</span></a><span class="sd">```</span>
</span><span id="L-387"><a href="#L-387"><span class="linenos">387</span></a>
</span><span id="L-388"><a href="#L-388"><span class="linenos">388</span></a><span class="sd">`pyerrors` also supports correlated fits which can be triggered via the parameter `correlated_fit=True`.</span>
</span><span id="L-389"><a href="#L-389"><span class="linenos">389</span></a><span class="sd">Details about how the required covariance matrix is estimated can be found in `pyerrors.obs.covariance`.</span>
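The quantity minimized in such a correlated fit is the correlated $\chi^2 = r^T C^{-1} r$, which can be sketched with toy numbers (conceptual only; pyerrors estimates the covariance matrix as documented in `pyerrors.obs.covariance`):

```python
import numpy as np

# Correlated chi^2 = r^T C^{-1} r for residuals r = model - data and
# data covariance C. All numbers here are toy values for illustration.
r = np.array([0.020, -0.010, 0.015])
C = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 0.8],
              [0.5, 0.8, 2.0]]) * 1e-4

chisq = r @ np.linalg.inv(C) @ r
print(chisq)
```

For uncorrelated data, `C` is diagonal and this expression reduces to the familiar sum of squared, error-weighted residuals.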
</span><span id="L-390"><a href="#L-390"><span class="linenos">390</span></a>
</span><span id="L-391"><a href="#L-391"><span class="linenos">391</span></a><span class="sd">Direct visualizations of the performed fits can be triggered via `resplot=True` or `qqplot=True`. For all available options see `pyerrors.fits.least_squares`.</span>
</span><span id="L-216"><a href="#L-216"><span class="linenos">216</span></a><span class="sd">obs3.details()</span>
</span><span id="L-217"><a href="#L-217"><span class="linenos">217</span></a><span class="sd">&gt; Result 1.01718064e+00</span>
</span><span id="L-218"><a href="#L-218"><span class="linenos">218</span></a><span class="sd">&gt; 5 samples in 1 ensemble:</span>
</span><span id="L-219"><a href="#L-219"><span class="linenos">219</span></a><span class="sd">&gt; · Ensemble &#39;ensemble1&#39; : 5 configurations (irregular range)</span>
</span><span id="L-220"><a href="#L-220"><span class="linenos">220</span></a>
</span><span id="L-221"><a href="#L-221"><span class="linenos">221</span></a><span class="sd">```</span>
</span><span id="L-222"><a href="#L-222"><span class="linenos">222</span></a>
</span><span id="L-223"><a href="#L-223"><span class="linenos">223</span></a><span class="sd">`Obs` objects defined on regular and irregular histories of the same ensemble can be combined with each other; the correct error propagation and estimation is taken care of automatically.</span>
</span><span id="L-224"><a href="#L-224"><span class="linenos">224</span></a>
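Internally, combining such observables requires matching them on the configurations they have in common. The index bookkeeping can be sketched in plain Python (an illustration only, not the actual `pyerrors` implementation), using the two `idl` sets from above:

```python
# idl sets of the two observables defined above
idl1 = range(20, 520)      # obs1: configurations 20 to 519
idl2 = range(5, 1005, 2)   # obs2: every second configuration from 5 to 1003

# Configurations present in both Monte Carlo histories
common = sorted(set(idl1).intersection(idl2))
print(len(common), common[0], common[-1])  # 250 shared configurations: 21 ... 519
```

Only the 250 odd-numbered configurations between 21 and 519 contribute to the correlation between the two observables.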
</span><span id="L-225"><a href="#L-225"><span class="linenos">225</span></a><span class="sd">**Warning:** Irregular Monte Carlo chains can result in odd patterns in the autocorrelation functions.</span>
</span><span id="L-226"><a href="#L-226"><span class="linenos">226</span></a><span class="sd">Make sure to check the autocorrelation time with e.g. `pyerrors.obs.Obs.plot_rho` or `pyerrors.obs.Obs.plot_tauint`.</span>
</span><span id="L-227"><a href="#L-227"><span class="linenos">227</span></a>
</span><span id="L-228"><a href="#L-228"><span class="linenos">228</span></a><span class="sd">For the full API see `pyerrors.obs.Obs`.</span>
</span><span id="L-229"><a href="#L-229"><span class="linenos">229</span></a>
</span><span id="L-230"><a href="#L-230"><span class="linenos">230</span></a><span class="sd"># Correlators</span>
</span><span id="L-231"><a href="#L-231"><span class="linenos">231</span></a><span class="sd">When one is not interested in single observables but in correlation functions, `pyerrors` offers the `Corr` class which simplifies the corresponding error propagation and provides the user with a set of standard methods. In order to initialize a `Corr` object one needs to arrange the data as a list of `Obs`</span>
</span><span id="L-232"><a href="#L-232"><span class="linenos">232</span></a><span class="sd">```python</span>
</span><span id="L-233"><a href="#L-233"><span class="linenos">233</span></a><span class="sd">my_corr = pe.Corr([obs_0, obs_1, obs_2, obs_3])</span>
</span><span id="L-234"><a href="#L-234"><span class="linenos">234</span></a><span class="sd">print(my_corr)</span>
</span><span id="L-235"><a href="#L-235"><span class="linenos">235</span></a><span class="sd">&gt; x0/a Corr(x0/a)</span>
</span><span id="L-236"><a href="#L-236"><span class="linenos">236</span></a><span class="sd">&gt; ------------------</span>
</span><span id="L-237"><a href="#L-237"><span class="linenos">237</span></a><span class="sd">&gt; 0 0.7957(80)</span>
</span><span id="L-238"><a href="#L-238"><span class="linenos">238</span></a><span class="sd">&gt; 1 0.5156(51)</span>
</span><span id="L-239"><a href="#L-239"><span class="linenos">239</span></a><span class="sd">&gt; 2 0.3227(33)</span>
</span><span id="L-240"><a href="#L-240"><span class="linenos">240</span></a><span class="sd">&gt; 3 0.2041(21)</span>
</span><span id="L-241"><a href="#L-241"><span class="linenos">241</span></a><span class="sd">```</span>
</span><span id="L-242"><a href="#L-242"><span class="linenos">242</span></a><span class="sd">In case the correlation functions are not defined on the outermost timeslices, for example because of fixed boundary conditions, a padding can be introduced.</span>
</span><span id="L-243"><a href="#L-243"><span class="linenos">243</span></a><span class="sd">```python</span>
</span><span id="L-244"><a href="#L-244"><span class="linenos">244</span></a><span class="sd">my_corr = pe.Corr([obs_0, obs_1, obs_2, obs_3], padding=[1, 1])</span>
</span><span id="L-245"><a href="#L-245"><span class="linenos">245</span></a><span class="sd">print(my_corr)</span>
</span><span id="L-246"><a href="#L-246"><span class="linenos">246</span></a><span class="sd">&gt; x0/a Corr(x0/a)</span>
</span><span id="L-247"><a href="#L-247"><span class="linenos">247</span></a><span class="sd">&gt; ------------------</span>
</span><span id="L-248"><a href="#L-248"><span class="linenos">248</span></a><span class="sd">&gt; 0</span>
</span><span id="L-249"><a href="#L-249"><span class="linenos">249</span></a><span class="sd">&gt; 1 0.7957(80)</span>
</span><span id="L-250"><a href="#L-250"><span class="linenos">250</span></a><span class="sd">&gt; 2 0.5156(51)</span>
</span><span id="L-251"><a href="#L-251"><span class="linenos">251</span></a><span class="sd">&gt; 3 0.3227(33)</span>
</span><span id="L-252"><a href="#L-252"><span class="linenos">252</span></a><span class="sd">&gt; 4 0.2041(21)</span>
</span><span id="L-253"><a href="#L-253"><span class="linenos">253</span></a><span class="sd">&gt; 5</span>
</span><span id="L-254"><a href="#L-254"><span class="linenos">254</span></a><span class="sd">```</span>
</span><span id="L-255"><a href="#L-255"><span class="linenos">255</span></a><span class="sd">The individual entries of a correlator can be accessed via slicing</span>
</span><span id="L-256"><a href="#L-256"><span class="linenos">256</span></a><span class="sd">```python</span>
</span><span id="L-257"><a href="#L-257"><span class="linenos">257</span></a><span class="sd">print(my_corr[3])</span>
</span><span id="L-258"><a href="#L-258"><span class="linenos">258</span></a><span class="sd">&gt; 0.3227(33)</span>
</span><span id="L-259"><a href="#L-259"><span class="linenos">259</span></a><span class="sd">```</span>
</span><span id="L-260"><a href="#L-260"><span class="linenos">260</span></a><span class="sd">Error propagation with the `Corr` class works very similarly to `Obs` objects. Mathematical operations are overloaded and `Corr` objects can be combined with other `Corr` objects, `Obs` objects or real numbers and integers.</span>
</span><span id="L-261"><a href="#L-261"><span class="linenos">261</span></a><span class="sd">```python</span>
</span><span id="L-262"><a href="#L-262"><span class="linenos">262</span></a><span class="sd">my_new_corr = 0.3 * my_corr[2] * my_corr * my_corr + 12 / my_corr</span>
</span><span id="L-263"><a href="#L-263"><span class="linenos">263</span></a><span class="sd">```</span>
</span><span id="L-264"><a href="#L-264"><span class="linenos">264</span></a>
</span><span id="L-265"><a href="#L-265"><span class="linenos">265</span></a><span class="sd">`pyerrors` provides the user with a set of frequently used methods for the manipulation of correlator objects:</span>
</span><span id="L-266"><a href="#L-266"><span class="linenos">266</span></a><span class="sd">- `Corr.gamma_method` applies the gamma method to all entries of the correlator.</span>
</span><span id="L-267"><a href="#L-267"><span class="linenos">267</span></a><span class="sd">- `Corr.m_eff` to construct effective masses. Various variants for periodic and fixed temporal boundary conditions are available.</span>
</span><span id="L-268"><a href="#L-268"><span class="linenos">268</span></a><span class="sd">- `Corr.deriv` returns the first derivative of the correlator as `Corr`. Different discretizations of the numerical derivative are available.</span>
</span><span id="L-269"><a href="#L-269"><span class="linenos">269</span></a><span class="sd">- `Corr.second_deriv` returns the second derivative of the correlator as `Corr`. Different discretizations of the numerical derivative are available.</span>
</span><span id="L-270"><a href="#L-270"><span class="linenos">270</span></a><span class="sd">- `Corr.symmetric` symmetrizes parity even correlation functions, assuming periodic boundary conditions.</span>
</span><span id="L-271"><a href="#L-271"><span class="linenos">271</span></a><span class="sd">- `Corr.anti_symmetric` anti-symmetrizes parity odd correlation functions, assuming periodic boundary conditions.</span>
</span><span id="L-272"><a href="#L-272"><span class="linenos">272</span></a><span class="sd">- `Corr.T_symmetry` averages a correlator with its time symmetry partner, assuming fixed boundary conditions.</span>
</span><span id="L-273"><a href="#L-273"><span class="linenos">273</span></a><span class="sd">- `Corr.plateau` extracts a plateau value from the correlator in a given range.</span>
</span><span id="L-274"><a href="#L-274"><span class="linenos">274</span></a><span class="sd">- `Corr.roll` periodically shifts the correlator.</span>
</span><span id="L-275"><a href="#L-275"><span class="linenos">275</span></a><span class="sd">- `Corr.reverse` reverses the time ordering of the correlator.</span>
</span><span id="L-276"><a href="#L-276"><span class="linenos">276</span></a><span class="sd">- `Corr.correlate` constructs a disconnected correlation function from the correlator and another `Corr` or `Obs` object.</span>
</span><span id="L-277"><a href="#L-277"><span class="linenos">277</span></a><span class="sd">- `Corr.reweight` reweights the correlator.</span>
</span><span id="L-278"><a href="#L-278"><span class="linenos">278</span></a>
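For example, the simplest (non-periodic) variant of `Corr.m_eff` amounts to the log-ratio of neighbouring timeslices, $m_\mathrm{eff}(t) = \log(C(t)/C(t+1))$. A plain-Python sketch on synthetic exponential data (a hedged illustration, not the `pyerrors` implementation, which also covers periodic boundary conditions):

```python
import math

def m_eff_log(corr):
    # Effective mass from the ratio of neighbouring timeslices:
    # m_eff(t) = log(C(t) / C(t+1))
    return [math.log(corr[t] / corr[t + 1]) for t in range(len(corr) - 1)]

mass = 0.3
corr = [1.7 * math.exp(-mass * t) for t in range(6)]  # C(t) = A * exp(-m * t)
print(m_eff_log(corr))  # each entry reproduces the input mass 0.3 up to rounding
```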
</span><span id="L-279"><a href="#L-279"><span class="linenos">279</span></a><span class="sd">`pyerrors` can also handle matrices of correlation functions and extract energy states from these matrices via a generalized eigenvalue problem (see `pyerrors.correlators.Corr.GEVP`).</span>
</span><span id="L-280"><a href="#L-280"><span class="linenos">280</span></a>
</span><span id="L-281"><a href="#L-281"><span class="linenos">281</span></a><span class="sd">For the full API see `pyerrors.correlators.Corr`.</span>
</span><span id="L-282"><a href="#L-282"><span class="linenos">282</span></a>
</span><span id="L-283"><a href="#L-283"><span class="linenos">283</span></a><span class="sd"># Complex valued observables</span>
</span><span id="L-284"><a href="#L-284"><span class="linenos">284</span></a>
</span><span id="L-285"><a href="#L-285"><span class="linenos">285</span></a><span class="sd">`pyerrors` can handle complex valued observables via the class `pyerrors.obs.CObs`.</span>
</span><span id="L-286"><a href="#L-286"><span class="linenos">286</span></a><span class="sd">`CObs` are initialized with a real and an imaginary part which both can be `Obs` valued.</span>
</span><span id="L-287"><a href="#L-287"><span class="linenos">287</span></a>
</span><span id="L-288"><a href="#L-288"><span class="linenos">288</span></a><span class="sd">```python</span>
</span><span id="L-289"><a href="#L-289"><span class="linenos">289</span></a><span class="sd">my_real_part = pe.Obs([samples1], [&#39;ensemble1&#39;])</span>
</span><span id="L-290"><a href="#L-290"><span class="linenos">290</span></a><span class="sd">my_imag_part = pe.Obs([samples2], [&#39;ensemble1&#39;])</span>
</span><span id="L-291"><a href="#L-291"><span class="linenos">291</span></a>
</span><span id="L-292"><a href="#L-292"><span class="linenos">292</span></a><span class="sd">my_cobs = pe.CObs(my_real_part, my_imag_part)</span>
</span><span id="L-293"><a href="#L-293"><span class="linenos">293</span></a><span class="sd">my_cobs.gamma_method()</span>
</span><span id="L-294"><a href="#L-294"><span class="linenos">294</span></a><span class="sd">print(my_cobs)</span>
</span><span id="L-295"><a href="#L-295"><span class="linenos">295</span></a><span class="sd">&gt; (0.9959(91)+0.659(28)j)</span>
</span><span id="L-296"><a href="#L-296"><span class="linenos">296</span></a><span class="sd">```</span>
</span><span id="L-297"><a href="#L-297"><span class="linenos">297</span></a>
</span><span id="L-298"><a href="#L-298"><span class="linenos">298</span></a><span class="sd">Elementary mathematical operations are overloaded and samples are properly propagated as for the `Obs` class.</span>
</span><span id="L-299"><a href="#L-299"><span class="linenos">299</span></a><span class="sd">```python</span>
</span><span id="L-300"><a href="#L-300"><span class="linenos">300</span></a><span class="sd">my_derived_cobs = (my_cobs + my_cobs.conjugate()) / np.abs(my_cobs)</span>
</span><span id="L-301"><a href="#L-301"><span class="linenos">301</span></a><span class="sd">my_derived_cobs.gamma_method()</span>
</span><span id="L-302"><a href="#L-302"><span class="linenos">302</span></a><span class="sd">print(my_derived_cobs)</span>
</span><span id="L-303"><a href="#L-303"><span class="linenos">303</span></a><span class="sd">&gt; (1.668(23)+0.0j)</span>
</span><span id="L-304"><a href="#L-304"><span class="linenos">304</span></a><span class="sd">```</span>
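The last result is real by construction, since $z + \bar{z} = 2\,\mathrm{Re}(z)$. A quick check with the central values as a plain complex number (not using `pyerrors`):

```python
z = complex(0.9959, 0.659)  # central values of my_cobs from above
derived = (z + z.conjugate()) / abs(z)
# z + conj(z) = 2 Re(z) is real, and dividing by the real number |z| keeps it real
print(derived)  # imaginary part is exactly zero, real part close to 1.668
```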
</span><span id="L-305"><a href="#L-305"><span class="linenos">305</span></a>
</span><span id="L-306"><a href="#L-306"><span class="linenos">306</span></a><span class="sd"># The `Covobs` class</span>
</span><span id="L-307"><a href="#L-307"><span class="linenos">307</span></a><span class="sd">In many projects, auxiliary data that is not based on Monte Carlo chains enters. Examples are experimentally determined meson masses which are used to set the scale or renormalization constants. These numbers come with an error that has to be propagated through the analysis. The `Covobs` class allows one to define such quantities in `pyerrors`. Furthermore, external input might consist of correlated quantities. An example are the parameters of an interpolation formula, which are defined via mean values and a covariance matrix between all parameters. The contribution of the interpolation formula to the error of a derived quantity therefore might depend on the complete covariance matrix.</span>
</span><span id="L-308"><a href="#L-308"><span class="linenos">308</span></a>
</span><span id="L-309"><a href="#L-309"><span class="linenos">309</span></a><span class="sd">This concept is built into the definition of `Covobs`. In `pyerrors`, external input is defined by $M$ mean values, a $M\times M$ covariance matrix, where $M=1$ is permissible, and a name that uniquely identifies the covariance matrix. Below, we define the pion mass, based on its mean value and error, 134.9768(5). **Note that the square of the error enters `cov_Obs`**, since the second argument of this function is the covariance matrix of the `Covobs`.</span>
</span><span id="L-310"><a href="#L-310"><span class="linenos">310</span></a>
</span><span id="L-311"><a href="#L-311"><span class="linenos">311</span></a><span class="sd">```python</span>
</span><span id="L-312"><a href="#L-312"><span class="linenos">312</span></a><span class="sd">import pyerrors as pe</span>
</span><span id="L-313"><a href="#L-313"><span class="linenos">313</span></a>
</span><span id="L-314"><a href="#L-314"><span class="linenos">314</span></a><span class="sd">mpi = pe.cov_Obs(134.9768, 0.0005**2, &#39;pi^0 mass&#39;)</span>
</span><span id="L-315"><a href="#L-315"><span class="linenos">315</span></a><span class="sd">mpi.gamma_method()</span>
</span><span id="L-316"><a href="#L-316"><span class="linenos">316</span></a><span class="sd">mpi.details()</span>
</span><span id="L-317"><a href="#L-317"><span class="linenos">317</span></a><span class="sd">&gt; Result 1.34976800e+02 +/- 5.00000000e-04 +/- 0.00000000e+00 (0.000%)</span>
</span><span id="L-318"><a href="#L-318"><span class="linenos">318</span></a><span class="sd">&gt; pi^0 mass 5.00000000e-04</span>
</span><span id="L-319"><a href="#L-319"><span class="linenos">319</span></a><span class="sd">&gt; 0 samples in 1 ensemble:</span>
</span><span id="L-320"><a href="#L-320"><span class="linenos">320</span></a><span class="sd">&gt; · Covobs &#39;pi^0 mass&#39;</span>
</span><span id="L-321"><a href="#L-321"><span class="linenos">321</span></a><span class="sd">```</span>
</span><span id="L-322"><a href="#L-322"><span class="linenos">322</span></a><span class="sd">The resulting object `mpi` is an `Obs` that contains a `Covobs`. In the following, it may be handled as any other `Obs`. The contribution of the covariance matrix to the error of an `Obs` is determined from the $M \times M$ covariance matrix $\Sigma$ and the gradient of the `Obs` with respect to the external quantities, which is the $1\times M$ Jacobian matrix $J$, via</span>
</span><span id="L-323"><a href="#L-323"><span class="linenos">323</span></a><span class="sd">$$s = \sqrt{J^T \Sigma J}\,,$$</span>
</span><span id="L-324"><a href="#L-324"><span class="linenos">324</span></a><span class="sd">where the Jacobian is computed for each derived quantity via automatic differentiation.</span>
</span><span id="L-325"><a href="#L-325"><span class="linenos">325</span></a>
</span><span id="L-326"><a href="#L-326"><span class="linenos">326</span></a><span class="sd">Correlated auxiliary data is defined similarly to above, e.g., via</span>
</span><span id="L-327"><a href="#L-327"><span class="linenos">327</span></a><span class="sd">```python</span>
</span><span id="L-328"><a href="#L-328"><span class="linenos">328</span></a><span class="sd">RAP = pe.cov_Obs([16.7457, -19.0475], [[3.49591, -6.07560], [-6.07560, 10.5834]], &#39;R_AP, 1906.03445, (5.3a)&#39;)</span>
</span><span id="L-329"><a href="#L-329"><span class="linenos">329</span></a><span class="sd">print(RAP)</span>
</span><span id="L-330"><a href="#L-330"><span class="linenos">330</span></a><span class="sd">&gt; [Obs[16.7(1.9)], Obs[-19.0(3.3)]]</span>
</span><span id="L-331"><a href="#L-331"><span class="linenos">331</span></a><span class="sd">```</span>
</span><span id="L-332"><a href="#L-332"><span class="linenos">332</span></a><span class="sd">where `RAP` now is a list of two `Obs` that contains the two correlated parameters.</span>
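The quoted uncertainties can be reproduced directly from the covariance matrix via $s = \sqrt{J^T \Sigma J}$. A plain-Python sketch (the Jacobians below are chosen by hand for illustration; in an actual analysis `pyerrors` computes them via automatic differentiation):

```python
import math

cov = [[3.49591, -6.07560], [-6.07560, 10.5834]]  # covariance matrix of RAP

def propagated_error(jac, cov):
    # s = sqrt(J^T Sigma J) for a 1xM Jacobian given as a list
    return math.sqrt(sum(jac[i] * cov[i][j] * jac[j]
                         for i in range(len(jac)) for j in range(len(jac))))

print(propagated_error([1, 0], cov))  # error of RAP[0] alone: ~1.87, i.e. 16.7(1.9)
print(propagated_error([1, 1], cov))  # error of RAP[0] + RAP[1]: ~1.39
```

The large negative covariance makes the error of the sum smaller than either individual error.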
</span><span id="L-333"><a href="#L-333"><span class="linenos">333</span></a>
</span><span id="L-334"><a href="#L-334"><span class="linenos">334</span></a><span class="sd">Since the gradient of a derived observable with respect to an external covariance matrix is propagated through the entire analysis, the `Covobs` class allows one to quote the derivative of a result with respect to the external quantities. If these derivatives are published together with the result, small shifts in the definition of external quantities, e.g., the definition of the physical point, can be performed a posteriori based on the published information. This may help to compare results of different groups. The gradient of an `Obs` `o` with respect to a covariance matrix with the identifying string `k` may be accessed via</span>
</span><span id="L-335"><a href="#L-335"><span class="linenos">335</span></a><span class="sd">```python</span>
</span><span id="L-336"><a href="#L-336"><span class="linenos">336</span></a><span class="sd">o.covobs[k].grad</span>
</span><span id="L-337"><a href="#L-337"><span class="linenos">337</span></a><span class="sd">```</span>
</span><span id="L-338"><a href="#L-338"><span class="linenos">338</span></a>
</span><span id="L-339"><a href="#L-339"><span class="linenos">339</span></a><span class="sd"># Error propagation in iterative algorithms</span>
</span><span id="L-340"><a href="#L-340"><span class="linenos">340</span></a>
</span><span id="L-341"><a href="#L-341"><span class="linenos">341</span></a><span class="sd">`pyerrors` supports exact linear error propagation for iterative algorithms like various variants of non-linear least squares fits or root finding. The derivatives required for the error propagation are calculated as described in [arXiv:1809.01289](https://arxiv.org/abs/1809.01289).</span>
</span><span id="L-342"><a href="#L-342"><span class="linenos">342</span></a>
</span><span id="L-343"><a href="#L-343"><span class="linenos">343</span></a><span class="sd">## Least squares fits</span>
</span><span id="L-344"><a href="#L-344"><span class="linenos">344</span></a>
</span><span id="L-345"><a href="#L-345"><span class="linenos">345</span></a><span class="sd">Standard non-linear least squares fits with errors on the dependent but not the independent variables can be performed with `pyerrors.fits.least_squares`. By default, the Levenberg-Marquardt algorithm implemented in [scipy](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.least_squares.html) is used as solver.</span>
</span><span id="L-346"><a href="#L-346"><span class="linenos">346</span></a>
</span><span id="L-347"><a href="#L-347"><span class="linenos">347</span></a><span class="sd">Fit functions have to be of the following form</span>
</span><span id="L-348"><a href="#L-348"><span class="linenos">348</span></a><span class="sd">```python</span>
</span><span id="L-349"><a href="#L-349"><span class="linenos">349</span></a><span class="sd">import autograd.numpy as anp</span>
</span><span id="L-350"><a href="#L-350"><span class="linenos">350</span></a>
</span><span id="L-351"><a href="#L-351"><span class="linenos">351</span></a><span class="sd">def func(a, x):</span>
</span><span id="L-352"><a href="#L-352"><span class="linenos">352</span></a><span class="sd"> return a[1] * anp.exp(-a[0] * x)</span>
</span><span id="L-353"><a href="#L-353"><span class="linenos">353</span></a><span class="sd">```</span>
</span><span id="L-354"><a href="#L-354"><span class="linenos">354</span></a><span class="sd">**It is important that numerical functions refer to `autograd.numpy` instead of `numpy` for the automatic differentiation in iterative algorithms to work properly.**</span>
</span><span id="L-355"><a href="#L-355"><span class="linenos">355</span></a>
</span><span id="L-356"><a href="#L-356"><span class="linenos">356</span></a><span class="sd">Fits can then be performed via</span>
</span><span id="L-357"><a href="#L-357"><span class="linenos">357</span></a><span class="sd">```python</span>
</span><span id="L-358"><a href="#L-358"><span class="linenos">358</span></a><span class="sd">fit_result = pe.fits.least_squares(x, y, func)</span>
</span><span id="L-359"><a href="#L-359"><span class="linenos">359</span></a><span class="sd">print(&quot;\n&quot;, fit_result)</span>
</span><span id="L-360"><a href="#L-360"><span class="linenos">360</span></a><span class="sd">&gt; Fit with 2 parameters</span>
</span><span id="L-361"><a href="#L-361"><span class="linenos">361</span></a><span class="sd">&gt; Method: Levenberg-Marquardt</span>
</span><span id="L-362"><a href="#L-362"><span class="linenos">362</span></a><span class="sd">&gt; `ftol` termination condition is satisfied.</span>
</span><span id="L-363"><a href="#L-363"><span class="linenos">363</span></a><span class="sd">&gt; chisquare/d.o.f.: 0.9593035785160936</span>
</span><span id="L-364"><a href="#L-364"><span class="linenos">364</span></a>
</span><span id="L-365"><a href="#L-365"><span class="linenos">365</span></a><span class="sd">&gt; Goodness of fit:</span>
</span><span id="L-366"><a href="#L-366"><span class="linenos">366</span></a><span class="sd">&gt; χ²/d.o.f. = 0.959304</span>
</span><span id="L-367"><a href="#L-367"><span class="linenos">367</span></a><span class="sd">&gt; p-value = 0.5673</span>
</span><span id="L-368"><a href="#L-368"><span class="linenos">368</span></a><span class="sd">&gt; Fit parameters:</span>
</span><span id="L-369"><a href="#L-369"><span class="linenos">369</span></a><span class="sd">&gt; 0 0.0548(28)</span>
</span><span id="L-370"><a href="#L-370"><span class="linenos">370</span></a><span class="sd">&gt; 1 1.933(64)</span>
</span><span id="L-371"><a href="#L-371"><span class="linenos">371</span></a><span class="sd">```</span>
</span><span id="L-372"><a href="#L-372"><span class="linenos">372</span></a><span class="sd">where `x` is a `list` or `numpy.array` of `floats` and `y` is a `list` or `numpy.array` of `Obs`.</span>
</span><span id="L-373"><a href="#L-373"><span class="linenos">373</span></a>
</span><span id="L-374"><a href="#L-374"><span class="linenos">374</span></a><span class="sd">Data stored in `Corr` objects can be fitted directly using the `Corr.fit` method.</span>
</span><span id="L-375"><a href="#L-375"><span class="linenos">375</span></a><span class="sd">```python</span>
</span><span id="L-376"><a href="#L-376"><span class="linenos">376</span></a><span class="sd">my_corr = pe.Corr(y)</span>
</span><span id="L-377"><a href="#L-377"><span class="linenos">377</span></a><span class="sd">fit_result = my_corr.fit(func, fitrange=[12, 25])</span>
</span><span id="L-378"><a href="#L-378"><span class="linenos">378</span></a><span class="sd">```</span>
</span><span id="L-379"><a href="#L-379"><span class="linenos">379</span></a><span class="sd">This can simplify working with absolute fit ranges and automatically takes care of gaps in the data.</span>
</span><span id="L-380"><a href="#L-380"><span class="linenos">380</span></a>
</span><span id="L-381"><a href="#L-381"><span class="linenos">381</span></a><span class="sd">For fits with multiple independent variables the fit function can be of the form</span>
</span><span id="L-382"><a href="#L-382"><span class="linenos">382</span></a>
</span><span id="L-383"><a href="#L-383"><span class="linenos">383</span></a><span class="sd">```python</span>
</span><span id="L-384"><a href="#L-384"><span class="linenos">384</span></a><span class="sd">def func(a, x):</span>
</span><span id="L-385"><a href="#L-385"><span class="linenos">385</span></a><span class="sd"> (x1, x2) = x</span>
</span><span id="L-386"><a href="#L-386"><span class="linenos">386</span></a><span class="sd"> return a[0] * x1 ** 2 + a[1] * x2</span>
</span><span id="L-387"><a href="#L-387"><span class="linenos">387</span></a><span class="sd">```</span>
</span><span id="L-388"><a href="#L-388"><span class="linenos">388</span></a>
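In this form `x` is unpacked inside the fit function, so the independent variables are passed as a pair of arrays (or, per data point, a pair of numbers). A minimal plain-Python evaluation of the function above:

```python
def func(a, x):
    (x1, x2) = x
    return a[0] * x1 ** 2 + a[1] * x2

# Evaluate at the point (x1, x2) = (4.0, 5.0) with parameters a = (2.0, 3.0)
print(func((2.0, 3.0), (4.0, 5.0)))  # 2 * 4**2 + 3 * 5 = 47.0
```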
</span><span id="L-389"><a href="#L-389"><span class="linenos">389</span></a><span class="sd">`pyerrors` also supports correlated fits which can be triggered via the parameter `correlated_fit=True`.</span>
</span><span id="L-390"><a href="#L-390"><span class="linenos">390</span></a><span class="sd">Details about how the required covariance matrix is estimated can be found in `pyerrors.obs.covariance`.</span>
</span><span id="L-391"><a href="#L-391"><span class="linenos">391</span></a><span class="sd">Direct visualizations of the performed fits can be triggered via `resplot=True` or `qqplot=True`.</span>
</span><span id="L-392"><a href="#L-392"><span class="linenos">392</span></a>
</span><span id="L-393"><a href="#L-393"><span class="linenos">393</span></a><span class="sd">For all available options, including combined fits to multiple datasets, see `pyerrors.fits.least_squares`.</span>
</span><span id="L-394"><a href="#L-394"><span class="linenos">394</span></a>
</span><span id="L-395"><a href="#L-395"><span class="linenos">395</span></a><span class="sd">## Total least squares fits</span>
</span><span id="L-396"><a href="#L-396"><span class="linenos">396</span></a><span class="sd">`pyerrors` can also fit data with errors on both the dependent and independent variables using the total least squares method, also referred to as orthogonal distance regression, as implemented in [scipy](https://docs.scipy.org/doc/scipy/reference/odr.html), see `pyerrors.fits.least_squares`. The syntax is identical to the standard least squares case, the only difference being that `x` also has to be a `list` or `numpy.array` of `Obs`.</span>
</span><span id="L-397"><a href="#L-397"><span class="linenos">397</span></a>
</span><span id="L-398"><a href="#L-398"><span class="linenos">398</span></a><span class="sd">For the full API see `pyerrors.fits` for fits and `pyerrors.roots` for finding roots of functions.</span>
</span><span id="L-399"><a href="#L-399"><span class="linenos">399</span></a>
</span><span id="L-400"><a href="#L-400"><span class="linenos">400</span></a><span class="sd"># Matrix operations</span>
</span><span id="L-401"><a href="#L-401"><span class="linenos">401</span></a><span class="sd">`pyerrors` provides wrappers for `Obs`- and `CObs`-valued matrix operations based on `numpy.linalg`. The supported functions include:</span>
</span><span id="L-402"><a href="#L-402"><span class="linenos">402</span></a><span class="sd">- `inv` for the matrix inverse.</span>
</span><span id="L-403"><a href="#L-403"><span class="linenos">403</span></a><span class="sd">- `cholesky` for the Cholesky decomposition.</span>
</span><span id="L-404"><a href="#L-404"><span class="linenos">404</span></a><span class="sd">- `det` for the matrix determinant.</span>
</span><span id="L-405"><a href="#L-405"><span class="linenos">405</span></a><span class="sd">- `eigh` for eigenvalues and eigenvectors of hermitian matrices.</span>
</span><span id="L-406"><a href="#L-406"><span class="linenos">406</span></a><span class="sd">- `eig` for eigenvalues of general matrices.</span>
</span><span id="L-407"><a href="#L-407"><span class="linenos">407</span></a><span class="sd">- `pinv` for the Moore-Penrose pseudoinverse.</span>
</span><span id="L-408"><a href="#L-408"><span class="linenos">408</span></a><span class="sd">- `svd` for the singular value decomposition.</span>
</span><span id="L-409"><a href="#L-409"><span class="linenos">409</span></a>
</span><span id="L-410"><a href="#L-410"><span class="linenos">410</span></a><span class="sd">For the full API see `pyerrors.linalg`.</span>
</span><span id="L-411"><a href="#L-411"><span class="linenos">411</span></a>
</span><span id="L-412"><a href="#L-412"><span class="linenos">412</span></a><span class="sd"># Export data</span>
</span><span id="L-413"><a href="#L-413"><span class="linenos">413</span></a>
</span><span id="L-414"><a href="#L-414"><span class="linenos">414</span></a><span class="sd">[&lt;img src=&quot;https://imgs.xkcd.com/comics/standards_2x.png&quot; width=&quot;75%&quot; height=&quot;75%&quot;&gt;](https://xkcd.com/927/)</span>
</span><span id="L-415"><a href="#L-415"><span class="linenos">415</span></a>
</span><span id="L-416"><a href="#L-416"><span class="linenos">416</span></a><span class="sd">The preferred file format for exporting data from `pyerrors` is json.gz. Files written to this format are valid JSON files that have been compressed using gzip. The structure of the content is inspired by the dobs format of the ALPHA collaboration. The aim of the format is to facilitate the storage of data in a self-contained way such that, even years after the creation of the file, it is possible to extract all necessary information:</span>
</span><span id="L-417"><a href="#L-417"><span class="linenos">417</span></a><span class="sd">- What observables are stored? Possibly: how exactly are they defined?</span>
</span><span id="L-418"><a href="#L-418"><span class="linenos">418</span></a><span class="sd">- How does each single ensemble or external quantity contribute to the error of the observable?</span>
</span><span id="L-419"><a href="#L-419"><span class="linenos">419</span></a><span class="sd">- Who wrote the file, when, and on which machine?</span>
</span><span id="L-420"><a href="#L-420"><span class="linenos">420</span></a>
</span><span id="L-421"><a href="#L-421"><span class="linenos">421</span></a><span class="sd">This can be achieved by storing all information in one single file. The export routines of `pyerrors` are written such that as much information as possible is recorded automatically, as described in the following example</span>
</span><span id="L-422"><a href="#L-422"><span class="linenos">422</span></a><span class="sd">```python</span>
</span><span id="L-423"><a href="#L-423"><span class="linenos">423</span></a><span class="sd">my_obs = pe.Obs([samples], [&quot;test_ensemble&quot;])</span>
</span><span id="L-424"><a href="#L-424"><span class="linenos">424</span></a><span class="sd">my_obs.tag = &quot;My observable&quot;</span>
</span><span id="L-425"><a href="#L-425"><span class="linenos">425</span></a>
</span><span id="L-426"><a href="#L-426"><span class="linenos">426</span></a><span class="sd">pe.input.json.dump_to_json(my_obs, &quot;test_output_file&quot;, description=&quot;This file contains a test observable&quot;)</span>
</span><span id="L-427"><a href="#L-427"><span class="linenos">427</span></a><span class="sd"># For a single observable one can equivalently use the class method dump</span>
</span><span id="L-428"><a href="#L-428"><span class="linenos">428</span></a><span class="sd">my_obs.dump(&quot;test_output_file&quot;, description=&quot;This file contains a test observable&quot;)</span>
</span><span id="L-429"><a href="#L-429"><span class="linenos">429</span></a>
</span><span id="L-430"><a href="#L-430"><span class="linenos">430</span></a><span class="sd">check = pe.input.json.load_json(&quot;test_output_file&quot;)</span>
</span><span id="L-431"><a href="#L-431"><span class="linenos">431</span></a>
</span><span id="L-432"><a href="#L-432"><span class="linenos">432</span></a><span class="sd">print(my_obs == check)</span>
</span><span id="L-433"><a href="#L-433"><span class="linenos">433</span></a><span class="sd">&gt; True</span>
</span><span id="L-434"><a href="#L-434"><span class="linenos">434</span></a><span class="sd">```</span>
</span><span id="L-435"><a href="#L-435"><span class="linenos">435</span></a><span class="sd">The format also allows one to write out the content of `Corr` objects or lists and arrays of `Obs` objects directly by passing the desired data to `pyerrors.input.json.dump_to_json`.</span>
</span><span id="L-436"><a href="#L-436"><span class="linenos">436</span></a>
</span><span id="L-437"><a href="#L-437"><span class="linenos">437</span></a><span class="sd">## json.gz format specification</span>
</span><span id="L-438"><a href="#L-438"><span class="linenos">438</span></a><span class="sd">The first entries of the file provide optional auxiliary information:</span>
</span><span id="L-439"><a href="#L-439"><span class="linenos">439</span></a><span class="sd">- `program` is a string that indicates which program was used to write the file.</span>
</span><span id="L-440"><a href="#L-440"><span class="linenos">440</span></a><span class="sd">- `version` is a string that specifies the version of the format.</span>
</span><span id="L-441"><a href="#L-441"><span class="linenos">441</span></a><span class="sd">- `who` is a string that specifies the user name of the creator of the file.</span>
</span><span id="L-442"><a href="#L-442"><span class="linenos">442</span></a><span class="sd">- `date` is a string and contains the creation date of the file.</span>
</span><span id="L-443"><a href="#L-443"><span class="linenos">443</span></a><span class="sd">- `host` is a string and contains the hostname of the machine where the file has been written.</span>
</span><span id="L-444"><a href="#L-444"><span class="linenos">444</span></a><span class="sd">- `description` contains information on the content of the file. This field is not filled automatically in `pyerrors`. The user is advised to provide information in this field that is as detailed as possible. Examples include: input files of measurements or simulations, LaTeX formulae or references to publications that specify how the observables have been computed, details on the analysis strategy, ... This field may be any valid JSON type. Strings, arrays or objects (equivalent to dicts in Python) are well suited to provide information.</span>
</span><span id="L-445"><a href="#L-445"><span class="linenos">445</span></a>
</span><span id="L-446"><a href="#L-446"><span class="linenos">446</span></a><span class="sd">The only necessary entry of the file is the field</span>
</span><span id="L-447"><a href="#L-447"><span class="linenos">447</span></a><span class="sd">- `obsdata`, an array that contains the actual data.</span>
</span><span id="L-448"><a href="#L-448"><span class="linenos">448</span></a>
</span><span id="L-449"><a href="#L-449"><span class="linenos">449</span></a><span class="sd">Each entry of the array belongs to a single structure of observables. Currently, these structures can be `Obs`, `list`, `numpy.ndarray` or `Corr`. All `Obs` inside a structure (with dimension &gt; 0) have to be defined on the same set of configurations. Different structures that are represented by entries of the array `obsdata` are treated independently. Each entry of the array `obsdata` has the following required entries:</span>
</span><span id="L-450"><a href="#L-450"><span class="linenos">450</span></a><span class="sd">- `type` is a string that specifies the type of the structure. This allows one to parse the content to the correct form after reading the file. It is always possible to interpret the content as a list of `Obs`.</span>
</span><span id="L-451"><a href="#L-451"><span class="linenos">451</span></a><span class="sd">- `value` is an array that contains the mean values of the Obs inside the structure.</span>
</span><span id="L-452"><a href="#L-452"><span class="linenos">452</span></a><span class="sd">The following entries are optional:</span>
</span><span id="L-453"><a href="#L-453"><span class="linenos">453</span></a><span class="sd">- `layout` is a string that specifies the layout of multi-dimensional structures. Examples are &quot;2, 2&quot; for a 2x2 matrix or &quot;64, 4, 4&quot; for a Corr with $T=64$ and 4x4 matrices on each time slice. &quot;1&quot; denotes a single Obs. Multi-dimensional structures are stored in row-major format (see below).</span>
</span><span id="L-454"><a href="#L-454"><span class="linenos">454</span></a><span class="sd">- `tag` is any JSON type. It contains additional information concerning the structure. The `tag` of an `Obs` in `pyerrors` is written here.</span>
</span><span id="L-455"><a href="#L-455"><span class="linenos">455</span></a><span class="sd">- `reweighted` is a bool that may be used to specify whether the `Obs` in the structure have been reweighted.</span>
</span><span id="L-456"><a href="#L-456"><span class="linenos">456</span></a><span class="sd">- `data` is an array that contains the data from MC chains. We will define it below.</span>
</span><span id="L-457"><a href="#L-457"><span class="linenos">457</span></a><span class="sd">- `cdata` is an array that contains the data from external quantities with an error (`Covobs` in `pyerrors`). We will define it below.</span>
</span><span id="L-458"><a href="#L-458"><span class="linenos">458</span></a>
</span><span id="L-459"><a href="#L-459"><span class="linenos">459</span></a><span class="sd">The array `data` contains the data from MC chains. Each entry of the array corresponds to one ensemble and contains:</span>
</span><span id="L-460"><a href="#L-460"><span class="linenos">460</span></a><span class="sd">- `id`, a string that contains the name of the ensemble.</span>
</span><span id="L-461"><a href="#L-461"><span class="linenos">461</span></a><span class="sd">- `replica`, an array that contains an entry per replica of the ensemble.</span>
</span><span id="L-462"><a href="#L-462"><span class="linenos">462</span></a>
</span><span id="L-463"><a href="#L-463"><span class="linenos">463</span></a><span class="sd">Each entry of `replica` contains:</span>
</span><span id="L-464"><a href="#L-464"><span class="linenos">464</span></a><span class="sd">- `name`, a string that contains the name of the replica.</span>
</span><span id="L-465"><a href="#L-465"><span class="linenos">465</span></a><span class="sd">- `deltas`, an array that contains the actual data.</span>
</span><span id="L-466"><a href="#L-466"><span class="linenos">466</span></a>
</span><span id="L-467"><a href="#L-467"><span class="linenos">467</span></a><span class="sd">Each entry in `deltas` corresponds to one configuration of the replica and has $1+N$ entries. The first entry is an integer that specifies the configuration number which, together with ensemble and replica name, may be used to uniquely identify the configuration on which the data has been obtained. The following $N$ entries specify the deltas of each `Obs` inside the structure, i.e., the deviation of the observable from its mean value on this configuration. Multi-dimensional structures are stored in a row-major format. For primary observables, such as correlation functions, $value + \delta_i$ matches the primary data obtained on the configuration.</span>
</span><span id="L-468"><a href="#L-468"><span class="linenos">468</span></a>
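The relation between `value`, `layout` and `deltas` can be sketched with plain numpy (the entry below is hand-written for illustration and was not produced by `pyerrors`):

```python
import numpy as np

# Hypothetical entry for a structure of N=4 observables with layout "2, 2"
value = np.array([1.0, 2.0, 3.0, 4.0])  # mean values, flattened row-major
layout = "2, 2"
deltas = [
    [1, 0.10, -0.20, 0.00, 0.05],   # configuration number followed by N deltas
    [2, -0.10, 0.20, 0.00, -0.05],
]

shape = tuple(int(d) for d in layout.split(","))
for entry in deltas:
    config_number = entry[0]
    # value + delta reproduces the primary data on this configuration
    primary = (value + np.array(entry[1:])).reshape(shape)  # row-major reshape
    print(config_number, primary.tolist())
```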
</span><span id="L-469"><a href="#L-469"><span class="linenos">469</span></a><span class="sd">The array `cdata` contains information about the contribution of auxiliary observables, represented by `Covobs` in `pyerrors`, to the total error of the observables. Each entry of the array belongs to one auxiliary covariance matrix and contains:</span>
</span><span id="L-470"><a href="#L-470"><span class="linenos">470</span></a><span class="sd">- `id`, a string that identifies the covariance matrix.</span>
</span><span id="L-471"><a href="#L-471"><span class="linenos">471</span></a><span class="sd">- `layout`, a string that defines the dimensions of the $M\times M$ covariance matrix (has to be &quot;M, M&quot; or &quot;1&quot;).</span>
</span><span id="L-472"><a href="#L-472"><span class="linenos">472</span></a><span class="sd">- `cov`, an array that contains the $M\times M$ entries of the covariance matrix, stored in row-major format.</span>
</span><span id="L-473"><a href="#L-473"><span class="linenos">473</span></a><span class="sd">- `grad`, an array that contains $N$ entries, one for each `Obs` inside the structure. Each entry is itself an array that contains the $M$ gradients of the corresponding observable with respect to the quantities associated with the diagonal entries of the covariance matrix.</span>
</span><span id="L-474"><a href="#L-474"><span class="linenos">474</span></a>
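The error contribution encoded in such an entry follows standard Gaussian error propagation: for one observable with gradient vector $g$ it is $g^T C g$, with $C$ the stored covariance matrix. A numpy sketch with hand-written numbers:

```python
import numpy as np

# Hypothetical "cdata" entry with M=2 and a single Obs (N=1)
cov = np.array([0.04, 0.01, 0.01, 0.09]).reshape(2, 2)  # row-major "2, 2" layout
grad = [[1.0, -0.5]]  # M gradients for each of the N observables

g = np.array(grad[0])
variance = g @ cov @ g  # contribution to the squared error of the observable
print(variance)
```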
</span><span id="L-475"><a href="#L-475"><span class="linenos">475</span></a><span class="sd">A JSON schema that may be used to verify the correctness of a file with respect to the format definition is stored in ./examples/json_schema.json. The schema is a self-descriptive format definition and contains an exemplary file.</span>
</span><span id="L-476"><a href="#L-476"><span class="linenos">476</span></a>
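Because json.gz files are ordinary gzip-compressed JSON, they can be written and inspected with nothing but the Python standard library. The following sketch builds a hand-written minimal file along the lines of the specification above (file name and values are illustrative only):

```python
import gzip
import json

content = {
    "program": "example script",  # optional auxiliary information
    "description": "This file contains a test observable",
    "obsdata": [{                 # the only required field
        "type": "Obs",
        "value": [1.7],
        "data": [{
            "id": "test_ensemble",
            "replica": [{"name": "test_ensemble|r01",
                         "deltas": [[1, 0.1], [2, -0.1]]}],
        }],
    }],
}

with gzip.open("minimal_example.json.gz", "wt", encoding="utf-8") as f:
    json.dump(content, f)

# Any JSON-aware tool can restore the full content, independent of pyerrors
with gzip.open("minimal_example.json.gz", "rt", encoding="utf-8") as f:
    restored = json.load(f)

print(restored["obsdata"][0]["value"])
```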
</span><span id="L-477"><a href="#L-477"><span class="linenos">477</span></a><span class="sd">Julia I/O routines for the json.gz format, compatible with [ADerrors.jl](https://gitlab.ift.uam-csic.es/alberto/aderrors.jl), can be found [here](https://github.com/fjosw/ADjson.jl).</span>
</span><span id="L-478"><a href="#L-478"><span class="linenos">478</span></a><span class="sd">&#39;&#39;&#39;</span>
</span><span id="L-479"><a href="#L-479"><span class="linenos">479</span></a><span class="kn">from</span> <span class="nn">.obs</span> <span class="kn">import</span> <span class="o">*</span>
</span><span id="L-480"><a href="#L-480"><span class="linenos">480</span></a><span class="kn">from</span> <span class="nn">.correlators</span> <span class="kn">import</span> <span class="o">*</span>
</span><span id="L-481"><a href="#L-481"><span class="linenos">481</span></a><span class="kn">from</span> <span class="nn">.fits</span> <span class="kn">import</span> <span class="o">*</span>
</span><span id="L-482"><a href="#L-482"><span class="linenos">482</span></a><span class="kn">from</span> <span class="nn">.misc</span> <span class="kn">import</span> <span class="o">*</span>
</span><span id="L-483"><a href="#L-483"><span class="linenos">483</span></a><span class="kn">from</span> <span class="nn">.</span> <span class="kn">import</span> <span class="n">dirac</span>
</span><span id="L-484"><a href="#L-484"><span class="linenos">484</span></a><span class="kn">from</span> <span class="nn">.</span> <span class="kn">import</span> <span class="nb">input</span>
</span><span id="L-485"><a href="#L-485"><span class="linenos">485</span></a><span class="kn">from</span> <span class="nn">.</span> <span class="kn">import</span> <span class="n">linalg</span>
</span><span id="L-486"><a href="#L-486"><span class="linenos">486</span></a><span class="kn">from</span> <span class="nn">.</span> <span class="kn">import</span> <span class="n">mpm</span>
</span><span id="L-487"><a href="#L-487"><span class="linenos">487</span></a><span class="kn">from</span> <span class="nn">.</span> <span class="kn">import</span> <span class="n">roots</span>
</span><span id="L-488"><a href="#L-488"><span class="linenos">488</span></a>
</span><span id="L-489"><a href="#L-489"><span class="linenos">489</span></a><span class="kn">from</span> <span class="nn">.version</span> <span class="kn">import</span> <span class="n">__version__</span>
</span></pre></div>
