Build a searchable database for Correlator data in the pyerrors format.

Pyerrors backlog

With this tool, we aim to make it easy to backlog correlation functions that can be read with pyerrors. This is done in a reproducible way using datalad. In principle, a dataset is created and automatically administered by the backlogger, in which data from different projects are held together. Everything is catalogued in a searchable SQL database, which holds the paths to the respective measurements. The original projects can be linked to the dataset, and the data may be imported using wrapper functions around the read methods of pyerrors.
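
The cataloguing idea can be sketched as follows. This is a minimal illustration only: the table layout, column names, and file paths are invented for the example and do not reflect the backlogger's actual schema.

```python
import sqlite3

# Illustrative schema -- the real backlogger's database layout may differ.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE correlators (
           project   TEXT,
           ensemble  TEXT,
           name      TEXT,
           path      TEXT
       )"""
)
conn.executemany(
    "INSERT INTO correlators VALUES (?, ?, ?, ?)",
    [
        ("projA", "E250", "f_P", "projA/E250/f_P.json.gz"),
        ("projB", "E250", "f_A", "projB/E250/f_A.json.gz"),
    ],
)

# Search for all measurements on a given ensemble across projects.
rows = conn.execute(
    "SELECT project, name, path FROM correlators WHERE ensemble = ?",
    ("E250",),
).fetchall()
for project, name, path in rows:
    # In practice, each path would be handed to a pyerrors read routine
    # (e.g. pyerrors.input.json.load_json) to import the measurement.
    print(project, name, path)
```

A datalad dataset would hold the files behind these paths, so fetching a measurement and reproducing an analysis stays version-controlled.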