Build a searchable database for Correlator data in the pyerrors format.
# Pyerrors backlog
With this tool, we aim to make it easy to backlog correlation functions that can be read with pyerrors.
This is done in a reproducible way using datalad.
A dataset is created and automatically administered by the backlogger, in which data from different projects are held together.
Everything is catalogued by a searchable SQL database, which holds the paths to the respective measurements.
The original projects can be linked to the dataset, and the data may be imported using wrapper functions around the read methods of pyerrors.
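The core catalogue idea described above can be sketched with the standard-library `sqlite3` module: an SQL table maps measurement identifiers to file paths inside the dataset, so a search returns paths that a wrapper around the pyerrors read methods could then load. Note that the table name, column names, and paths below are illustrative placeholders, not the actual corrlib schema.

```python
import sqlite3

# Minimal sketch of a searchable catalogue: one table that maps
# correlator identifiers to the paths of their measurements.
# (Schema and example values are hypothetical.)
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE measurements (name TEXT, ensemble TEXT, path TEXT)"
)
con.execute(
    "INSERT INTO measurements VALUES (?, ?, ?)",
    ("f_P", "A654", "projects/projA/correlators/f_P.json.gz"),
)

# Searching the catalogue yields the stored path, which a wrapper
# around the pyerrors read methods could use to import the data.
row = con.execute(
    "SELECT path FROM measurements WHERE name = ? AND ensemble = ?",
    ("f_P", "A654"),
).fetchone()
print(row[0])  # → projects/projA/correlators/f_P.json.gz
```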