Documentation updated

fjosw 2022-03-05 08:14:33 +00:00
parent e86c7cb442
commit c4d415747e
4 changed files with 62 additions and 43 deletions


@ -451,6 +451,11 @@ where the Jacobian is computed for each derived quantity via automatic different
<span class="k">return</span> <span class="n">a</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span> <span class="o">*</span> <span class="n">x1</span> <span class="o">**</span> <span class="mi">2</span> <span class="o">+</span> <span class="n">a</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span> <span class="o">*</span> <span class="n">x2</span>
</code></pre></div>
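The role of the Jacobian in propagating parameter uncertainties to a derived quantity can be sketched with plain numpy (an illustration, not the `pyerrors` machinery, which differentiates automatically via autograd; the names `func`, `jacobian` and the covariance values are hypothetical):

```python
import numpy as np

def func(a, x1, x2):
    # the fit function from the example above: a[0] * x1 ** 2 + a[1] * x2
    return a[0] * x1 ** 2 + a[1] * x2

def jacobian(a, x1, x2):
    # analytic derivatives of func with respect to a[0] and a[1]
    return np.array([x1 ** 2, x2])

a = np.array([1.5, -0.3])
x1, x2 = 2.0, 3.0

# central finite differences confirm the analytic Jacobian
eps = 1e-6
fd = np.array([(func(a + eps * e, x1, x2) - func(a - eps * e, x1, x2)) / (2 * eps)
               for e in np.eye(2)])

# linear error propagation: Var[f] = J @ Cov[a] @ J
cov_a = np.array([[0.04, 0.01], [0.01, 0.09]])  # hypothetical parameter covariance
J = jacobian(a, x1, x2)
var_f = J @ cov_a @ J
```

With automatic differentiation the analytic `jacobian` does not have to be written by hand; the finite-difference check here only serves to make the propagation step explicit.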
<p><code>pyerrors</code> also supports correlated fits which can be triggered via the parameter <code>correlated_fit=True</code>.
Details about how the required covariance matrix is estimated can be found in <code><a href="pyerrors/obs.html#covariance">pyerrors.obs.covariance</a></code>.</p>
<p>Direct visualizations of the performed fits can be triggered via <code>resplot=True</code> or <code>qqplot=True</code>. For all available options see <code><a href="pyerrors/fits.html#least_squares">pyerrors.fits.least_squares</a></code>.</p>
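What `correlated_fit=True` changes in the cost function can be sketched in plain numpy (an illustration of the chisquare definition, not the `pyerrors` implementation; the residuals, errors and correlation matrix below are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
residuals = rng.normal(size=4)            # y_i - f(x_i, a) at some parameter guess
errs = np.array([0.1, 0.2, 0.15, 0.3])    # hypothetical per-point errors

# a valid (positive-definite) correlation matrix for the sketch
corr = 0.5 ** np.abs(np.subtract.outer(np.arange(4), np.arange(4)))
cov = corr * np.outer(errs, errs)

# correlated chisquare: the full inverse covariance enters the cost function
chisq_corr = residuals @ np.linalg.inv(cov) @ residuals

# with a diagonal covariance this reduces to the usual uncorrelated sum
chisq_uncorr = np.sum((residuals / errs) ** 2)
chisq_diag = residuals @ np.linalg.inv(np.diag(errs ** 2)) @ residuals
```

When the off-diagonal correlations vanish, `chisq_diag` and `chisq_uncorr` coincide, which is why the uncorrelated fit is the special case of a diagonal covariance matrix.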
<h2 id="total-least-squares-fits">Total least squares fits</h2>
<p><code>pyerrors</code> can also fit data with errors on both the dependent and independent variables using the total least squares method, also referred to as orthogonal distance regression, as implemented in <a href="https://docs.scipy.org/doc/scipy/reference/odr.html">scipy</a>, see <code><a href="pyerrors/fits.html#total_least_squares">pyerrors.fits.total_least_squares</a></code>. The syntax is identical to the standard least squares case, the only difference being that <code>x</code> also has to be a <code>list</code> or <code>numpy.array</code> of <code>Obs</code>.</p>
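The underlying orthogonal distance regression can also be sketched with `scipy.odr` directly (a minimal example assuming scipy is available; this bypasses the `Obs` bookkeeping that `pyerrors` adds on top):

```python
import numpy as np
from scipy import odr

def linear(beta, x):
    # model y = beta[0] * x + beta[1]
    return beta[0] * x + beta[1]

x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 1.0  # noise-free data, so the fit should recover [2, 1]

# sx, sy carry the errors on the independent and dependent variables
data = odr.RealData(x, y, sx=0.1 * np.ones_like(x), sy=0.1 * np.ones_like(y))
out = odr.ODR(data, odr.Model(linear), beta0=[1.0, 0.0]).run()
```

The `sx` weights are what distinguish total least squares from an ordinary fit: the minimized distance is measured orthogonally to the model curve rather than vertically.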
@ -923,6 +928,11 @@ The following entries are optional:</li>
<span class="sd"> return a[0] * x1 ** 2 + a[1] * x2</span>
<span class="sd">```</span>
<span class="sd">`pyerrors` also supports correlated fits which can be triggered via the parameter `correlated_fit=True`.</span>
<span class="sd">Details about how the required covariance matrix is estimated can be found in `pyerrors.obs.covariance`.</span>
<span class="sd">Direct visualizations of the performed fits can be triggered via `resplot=True` or `qqplot=True`. For all available options see `pyerrors.fits.least_squares`.</span>
<span class="sd">## Total least squares fits</span>
<span class="sd">`pyerrors` can also fit data with errors on both the dependent and independent variables using the total least squares method, also referred to as orthogonal distance regression, as implemented in [scipy](https://docs.scipy.org/doc/scipy/reference/odr.html), see `pyerrors.fits.total_least_squares`. The syntax is identical to the standard least squares case, the only difference being that `x` also has to be a `list` or `numpy.array` of `Obs`.</span>


@ -187,7 +187,7 @@
<span class="sd"> ```</span>
<span class="sd"> It is important that all numpy functions refer to autograd.numpy, otherwise the differentiation</span>
<span class="sd"> will not work</span>
<span class="sd"> will not work.</span>
<span class="sd"> priors : list, optional</span>
<span class="sd"> priors has to be a list with an entry for every parameter in the fit. The entries can either be</span>
<span class="sd"> Obs (e.g. results from a previous fit) or strings containing a value and an error formatted like</span>
@ -202,18 +202,19 @@
<span class="sd"> The possible methods are the ones which can be used for scipy.optimize.minimize and</span>
<span class="sd"> migrad of iminuit. If no method is specified, Levenberg-Marquardt is used.</span>
<span class="sd"> Reliable alternatives are migrad, Powell and Nelder-Mead.</span>
<span class="sd"> resplot : bool</span>
<span class="sd"> If true, a plot which displays fit, data and residuals is generated (default False).</span>
<span class="sd"> qqplot : bool</span>
<span class="sd"> If true, a quantile-quantile plot of the fit result is generated (default False).</span>
<span class="sd"> expected_chisquare : bool</span>
<span class="sd"> If true prints the expected chisquare which is</span>
<span class="sd"> corrected by effects caused by correlated input data.</span>
<span class="sd"> This can take a while as the full correlation matrix</span>
<span class="sd"> has to be calculated (default False).</span>
<span class="sd"> correlated_fit : bool</span>
<span class="sd"> If true, use the full correlation matrix in the definition of the chisquare</span>
<span class="sd"> (only works for prior==None and when no method is given, at the moment).</span>
<span class="sd"> If True, use the full inverse covariance matrix in the definition of the chisquare cost function.</span>
<span class="sd"> For details about how the covariance matrix is estimated see `pyerrors.obs.covariance`.</span>
<span class="sd"> In practice the correlation matrix is Cholesky decomposed and inverted (instead of the covariance matrix).</span>
<span class="sd"> This procedure should be numerically more stable as the correlation matrix is typically better conditioned (Jacobi preconditioning).</span>
<span class="sd"> At the moment this option only works for `prior==None` and when no `method` is given.</span>
<span class="sd"> expected_chisquare : bool</span>
<span class="sd"> If True estimates the expected chisquare which is</span>
<span class="sd"> corrected by effects caused by correlated input data (default False).</span>
<span class="sd"> resplot : bool</span>
<span class="sd"> If True, a plot which displays fit, data and residuals is generated (default False).</span>
<span class="sd"> qqplot : bool</span>
<span class="sd"> If True, a quantile-quantile plot of the fit result is generated (default False).</span>
<span class="sd"> &#39;&#39;&#39;</span>
<span class="k">if</span> <span class="n">priors</span> <span class="ow">is</span> <span class="ow">not</span> <span class="kc">None</span><span class="p">:</span>
<span class="k">return</span> <span class="n">_prior_fit</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">y</span><span class="p">,</span> <span class="n">func</span><span class="p">,</span> <span class="n">priors</span><span class="p">,</span> <span class="n">silent</span><span class="o">=</span><span class="n">silent</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">)</span>
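The numerical strategy described in the `correlated_fit` docstring can be reproduced as a sketch with plain numpy (illustrative, not the `pyerrors` code itself): rescale the covariance to a correlation matrix (Jacobi preconditioning), Cholesky-decompose it, and whiten the error-normalized residuals; the result agrees with the direct inverse-covariance chisquare.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.normal(size=(n, n))
cov = A @ A.T + n * np.eye(n)      # hypothetical positive-definite covariance
r = rng.normal(size=n)             # residuals y - f(x, a)

errs = np.sqrt(np.diag(cov))
corr = cov / np.outer(errs, errs)  # Jacobi-preconditioned (correlation) matrix

L = np.linalg.cholesky(corr)       # corr = L @ L.T
w = np.linalg.solve(L, r / errs)   # whitened, error-normalized residuals
chisq_chol = w @ w                 # equals r @ inv(cov) @ r

chisq_direct = r @ np.linalg.inv(cov) @ r
```

Working with the correlation matrix instead of the covariance matrix tends to be better conditioned because the widely varying scales of the individual errors are divided out before the decomposition.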
@ -972,7 +973,7 @@ also accessible via indices.</li>
<span class="sd"> ```</span>
<span class="sd"> It is important that all numpy functions refer to autograd.numpy, otherwise the differentiation</span>
<span class="sd"> will not work</span>
<span class="sd"> will not work.</span>
<span class="sd"> priors : list, optional</span>
<span class="sd"> priors has to be a list with an entry for every parameter in the fit. The entries can either be</span>
<span class="sd"> Obs (e.g. results from a previous fit) or strings containing a value and an error formatted like</span>
@ -987,18 +988,19 @@ also accessible via indices.</li>
<span class="sd"> The possible methods are the ones which can be used for scipy.optimize.minimize and</span>
<span class="sd"> migrad of iminuit. If no method is specified, Levenberg-Marquardt is used.</span>
<span class="sd"> Reliable alternatives are migrad, Powell and Nelder-Mead.</span>
<span class="sd"> resplot : bool</span>
<span class="sd"> If true, a plot which displays fit, data and residuals is generated (default False).</span>
<span class="sd"> qqplot : bool</span>
<span class="sd"> If true, a quantile-quantile plot of the fit result is generated (default False).</span>
<span class="sd"> expected_chisquare : bool</span>
<span class="sd"> If true prints the expected chisquare which is</span>
<span class="sd"> corrected by effects caused by correlated input data.</span>
<span class="sd"> This can take a while as the full correlation matrix</span>
<span class="sd"> has to be calculated (default False).</span>
<span class="sd"> correlated_fit : bool</span>
<span class="sd"> If true, use the full correlation matrix in the definition of the chisquare</span>
<span class="sd"> (only works for prior==None and when no method is given, at the moment).</span>
<span class="sd"> If True, use the full inverse covariance matrix in the definition of the chisquare cost function.</span>
<span class="sd"> For details about how the covariance matrix is estimated see `pyerrors.obs.covariance`.</span>
<span class="sd"> In practice the correlation matrix is Cholesky decomposed and inverted (instead of the covariance matrix).</span>
<span class="sd"> This procedure should be numerically more stable as the correlation matrix is typically better conditioned (Jacobi preconditioning).</span>
<span class="sd"> At the moment this option only works for `prior==None` and when no `method` is given.</span>
<span class="sd"> expected_chisquare : bool</span>
<span class="sd"> If True estimates the expected chisquare which is</span>
<span class="sd"> corrected by effects caused by correlated input data (default False).</span>
<span class="sd"> resplot : bool</span>
<span class="sd"> If True, a plot which displays fit, data and residuals is generated (default False).</span>
<span class="sd"> qqplot : bool</span>
<span class="sd"> If True, a quantile-quantile plot of the fit result is generated (default False).</span>
<span class="sd"> &#39;&#39;&#39;</span>
<span class="k">if</span> <span class="n">priors</span> <span class="ow">is</span> <span class="ow">not</span> <span class="kc">None</span><span class="p">:</span>
<span class="k">return</span> <span class="n">_prior_fit</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">y</span><span class="p">,</span> <span class="n">func</span><span class="p">,</span> <span class="n">priors</span><span class="p">,</span> <span class="n">silent</span><span class="o">=</span><span class="n">silent</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">)</span>
@ -1034,7 +1036,7 @@ fit function, has to be of the form</p>
</code></pre></div>
<p>It is important that all numpy functions refer to autograd.numpy, otherwise the differentiation
will not work</p></li>
will not work.</p></li>
<li><strong>priors</strong> (list, optional):
priors has to be a list with an entry for every parameter in the fit. The entries can either be
Obs (e.g. results from a previous fit) or strings containing a value and an error formatted like
@ -1043,24 +1045,25 @@ Obs (e.g. results from a previous fit) or strings containing a value and an erro
If true all output to the console is omitted (default False).</li>
<li><strong>initial_guess</strong> (list):
can provide an initial guess for the input parameters. Relevant for
non-linear fits with many parameters.</li>
non-linear fits with many parameters.</li>
<li><strong>method</strong> (str, optional):
can be used to choose an alternative method for the minimization of chisquare.
The possible methods are the ones which can be used for scipy.optimize.minimize and
migrad of iminuit. If no method is specified, Levenberg-Marquardt is used.
Reliable alternatives are migrad, Powell and Nelder-Mead.</li>
<li><strong>resplot</strong> (bool):
If true, a plot which displays fit, data and residuals is generated (default False).</li>
<li><strong>qqplot</strong> (bool):
If true, a quantile-quantile plot of the fit result is generated (default False).</li>
<li><strong>expected_chisquare</strong> (bool):
If true prints the expected chisquare which is
corrected by effects caused by correlated input data.
This can take a while as the full correlation matrix
has to be calculated (default False).</li>
<li><strong>correlated_fit</strong> (bool):
If true, use the full correlation matrix in the definition of the chisquare
(only works for prior==None and when no method is given, at the moment).</li>
If True, use the full inverse covariance matrix in the definition of the chisquare cost function.
For details about how the covariance matrix is estimated see <code><a href="obs.html#covariance">pyerrors.obs.covariance</a></code>.
In practice the correlation matrix is Cholesky decomposed and inverted (instead of the covariance matrix).
This procedure should be numerically more stable as the correlation matrix is typically better conditioned (Jacobi preconditioning).
At the moment this option only works for <code>prior==None</code> and when no <code>method</code> is given.</li>
<li><strong>expected_chisquare</strong> (bool):
If True estimates the expected chisquare which is
corrected by effects caused by correlated input data (default False).</li>
<li><strong>resplot</strong> (bool):
If True, a plot which displays fit, data and residuals is generated (default False).</li>
<li><strong>qqplot</strong> (bool):
If True, a quantile-quantile plot of the fit result is generated (default False).</li>
</ul>
</div>


@ -1659,7 +1659,9 @@
<span class="sd"> Notes</span>
<span class="sd"> -----</span>
<span class="sd"> The covariance is estimated by calculating the correlation matrix assuming no autocorrelation and then rescaling the correlation matrix by the full errors including the previous gamma method estimate for the autocorrelation of the observables. For observables defined on a single ensemble this is equivalent to assuming that the integrated autocorrelation time of an off-diagonal element is equal to the geometric mean of the integrated autocorrelation times of the corresponding diagonal elements.</span>
<span class="sd"> The covariance is estimated by calculating the correlation matrix assuming no autocorrelation and then rescaling the correlation matrix by the full errors including the previous gamma method estimate for the autocorrelation of the observables. The covariance at windowsize 0 is guaranteed to be positive semi-definite</span>
<span class="sd"> $$v_i\Gamma_{ij}(0)v_j=\frac{1}{N}\sum_{s=1}^N\sum_{i,j}v_i\delta_i^s\delta_j^s v_j=\frac{1}{N}\sum_{s=1}^N\sum_{i}|v_i\delta_i^s|^2\geq 0\,,$$ for every $v_i\in\mathbb{R}^N$, while such an identity does not hold for larger windows/lags.</span>
<span class="sd"> For observables defined on a single ensemble our approximation is equivalent to assuming that the integrated autocorrelation time of an off-diagonal element is equal to the geometric mean of the integrated autocorrelation times of the corresponding diagonal elements.</span>
<span class="sd"> $$\tau_{\mathrm{int}, ij}=\sqrt{\tau_{\mathrm{int}, i}\times \tau_{\mathrm{int}, j}}$$</span>
<span class="sd"> This construction ensures that the estimated covariance matrix is positive semi-definite (up to numerical rounding errors).</span>
<span class="sd"> &#39;&#39;&#39;</span>
@ -4853,7 +4855,9 @@ Second observable</li>
<span class="sd"> Notes</span>
<span class="sd"> -----</span>
<span class="sd"> The covariance is estimated by calculating the correlation matrix assuming no autocorrelation and then rescaling the correlation matrix by the full errors including the previous gamma method estimate for the autocorrelation of the observables. For observables defined on a single ensemble this is equivalent to assuming that the integrated autocorrelation time of an off-diagonal element is equal to the geometric mean of the integrated autocorrelation times of the corresponding diagonal elements.</span>
<span class="sd"> The covariance is estimated by calculating the correlation matrix assuming no autocorrelation and then rescaling the correlation matrix by the full errors including the previous gamma method estimate for the autocorrelation of the observables. The covariance at windowsize 0 is guaranteed to be positive semi-definite</span>
<span class="sd"> $$v_i\Gamma_{ij}(0)v_j=\frac{1}{N}\sum_{s=1}^N\sum_{i,j}v_i\delta_i^s\delta_j^s v_j=\frac{1}{N}\sum_{s=1}^N\sum_{i}|v_i\delta_i^s|^2\geq 0\,,$$ for every $v_i\in\mathbb{R}^N$, while such an identity does not hold for larger windows/lags.</span>
<span class="sd"> For observables defined on a single ensemble our approximation is equivalent to assuming that the integrated autocorrelation time of an off-diagonal element is equal to the geometric mean of the integrated autocorrelation times of the corresponding diagonal elements.</span>
<span class="sd"> $$\tau_{\mathrm{int}, ij}=\sqrt{\tau_{\mathrm{int}, i}\times \tau_{\mathrm{int}, j}}$$</span>
<span class="sd"> This construction ensures that the estimated covariance matrix is positive semi-definite (up to numerical rounding errors).</span>
<span class="sd"> &#39;&#39;&#39;</span>
@ -4910,7 +4914,9 @@ If True the correlation instead of the covariance is returned (default False).</
<h6 id="notes">Notes</h6>
<p>The covariance is estimated by calculating the correlation matrix assuming no autocorrelation and then rescaling the correlation matrix by the full errors including the previous gamma method estimate for the autocorrelation of the observables. For observables defined on a single ensemble this is equivalent to assuming that the integrated autocorrelation time of an off-diagonal element is equal to the geometric mean of the integrated autocorrelation times of the corresponding diagonal elements.
<p>The covariance is estimated by calculating the correlation matrix assuming no autocorrelation and then rescaling the correlation matrix by the full errors including the previous gamma method estimate for the autocorrelation of the observables. The covariance at windowsize 0 is guaranteed to be positive semi-definite
$$v_i\Gamma_{ij}(0)v_j=\frac{1}{N}\sum_{s=1}^N\sum_{i,j}v_i\delta_i^s\delta_j^s v_j=\frac{1}{N}\sum_{s=1}^N\sum_{i}|v_i\delta_i^s|^2\geq 0\,,$$ for every $v_i\in\mathbb{R}^N$, while such an identity does not hold for larger windows/lags.
For observables defined on a single ensemble our approximation is equivalent to assuming that the integrated autocorrelation time of an off-diagonal element is equal to the geometric mean of the integrated autocorrelation times of the corresponding diagonal elements.
$$\tau_{\mathrm{int}, ij}=\sqrt{\tau_{\mathrm{int}, i}\times \tau_{\mathrm{int}, j}}$$
This construction ensures that the estimated covariance matrix is positive semi-definite (up to numerical rounding errors).</p>
</div>
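The positive semi-definiteness argument from the notes above can be checked numerically (a numpy sketch with made-up data): $\Gamma(0)$ is a Gram matrix of the per-configuration deviations, so its eigenvalues are non-negative, and rescaling the correlation matrix by positive full errors (here with hypothetical $\tau_\mathrm{int}$ values, so that the off-diagonal $\tau_{\mathrm{int},ij}$ is the geometric mean of the diagonal ones) preserves this property.

```python
import numpy as np

rng = np.random.default_rng(2)
N, n = 100, 4                     # N configurations, n observables
deltas = rng.normal(size=(N, n))  # per-configuration deviations delta_i^s

# Gamma_ij(0) = (1/N) sum_s delta_i^s delta_j^s  -- a Gram matrix
gamma0 = deltas.T @ deltas / N
evals0 = np.linalg.eigvalsh(gamma0)

# rescale the correlation matrix by full errors including autocorrelation:
# cov = D C D with D = diag(full_errs) > 0 stays positive semi-definite
errs = np.sqrt(np.diag(gamma0))
corr = gamma0 / np.outer(errs, errs)
taus = rng.uniform(1.0, 5.0, size=n)   # hypothetical tau_int estimates
full_errs = errs * np.sqrt(2 * taus)   # naive errors inflated by autocorrelation
cov = corr * np.outer(full_errs, full_errs)
evals_cov = np.linalg.eigvalsh(cov)
```

Because the rescaling is a congruence transformation with a positive diagonal matrix, it cannot introduce negative eigenvalues, which is exactly why the construction stays positive semi-definite up to rounding.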

File diff suppressed because one or more lines are too long