update TODO and README

parent 085256857d
commit 73d7687359
2 changed files with 13 additions and 0 deletions
README.md
@@ -5,3 +5,12 @@ This is done in a reproducible way using `datalad`.
In principle, a dataset is created that is automatically administered by the backlogger, in which data from different projects are held together.
Everything is catalogued by a searchable SQL database, which holds the paths to the respective measurements.
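The catalogue described above can be sketched with the standard-library `sqlite3` module. The table and column names (`record`, `project`, `path`) are illustrative assumptions, not the backlogger's actual schema:

```python
import sqlite3

# Hypothetical catalogue table: each row maps a measurement to its
# path inside the administered dataset. Schema names are assumptions.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE record (id INTEGER PRIMARY KEY, project TEXT, path TEXT)"
)
conn.execute(
    "INSERT INTO record (project, path) VALUES (?, ?)",
    ("projectA", "projects/projectA/correlators/pion.json.gz"),
)

# A search then resolves a project to the stored measurement paths.
rows = conn.execute(
    "SELECT path FROM record WHERE project = ?", ("projectA",)
).fetchall()
print(rows[0][0])
```

The database only holds paths; the measurement files themselves stay inside the datalad dataset.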
The original projects can be linked to the dataset, and the data may be imported using wrapper functions around the read methods of pyerrors.
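Such a wrapper might look as follows. `import_record` is a hypothetical name, and the actual pyerrors read method is injected as `reader` (e.g. one of the functions in `pyerrors.input`) so the sketch stays independent of any one file format:

```python
from pathlib import Path

def import_record(dataset_root, relative_path, reader):
    """Resolve a catalogued path inside the dataset and delegate
    to a pyerrors read method passed in as `reader`.
    (Illustrative sketch; not the backlogger's actual API.)"""
    full_path = Path(dataset_root) / relative_path
    return reader(str(full_path))

# Usage with a stub reader standing in for a pyerrors input function:
resolved = import_record(
    "/data/backlog", "projectA/pion.json.gz", reader=lambda p: p
)
print(resolved)
```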
We work with the following nomenclature in this project:
- Measurement
A set of Observables, including the appropriate metadata.
- Project
A series of measurements that was done by one person as part of their research.
- Record
An entry of a single Correlator in the database of the backlogger.
-
4 TODO.md
@@ -15,3 +15,7 @@
## Bugfixes
- [ ] revisit the reimport function for single files
- [ ] drop record needs to check whether any records are left in a JSON file.
## Rough Ideas
- [ ] multitable could provide a high-speed implementation of an HDF5-based format
- [ ] also implement a way to include compiled binaries in the archives.