AiiDA-exported archives for the two sample batches discussed in the paper:

- `eclab.aiida` - cycling experiments generated by the importer script from results obtained via EC-lab
- `aurora.aiida` - cycling experiments submitted in the AiiDAlab-Aurora app and executed/tracked by AiiDA

Robot outputs for the two sample batches discussed in the paper:

- `eclab_robot_output.csv` - cycled without AiiDA
- `aurora_robot_output.csv` - cycled with AiiDA

Two zipped archives as part of the EC-lab batch:

- `eclab_mpr.zip` - raw `.mpr` files produced by EC-lab
- `eclab_json.zip` - the above raw `.mpr` files post-processed into `.json` files by the same tools used by tomato

This `README.md` file
Each robot output is a `.csv` file containing information on the 36 samples of the respective batch, including compositions (e.g. electrodes, electrolyte, separator), theoretical capacities, electrode masses, and more. This information is loaded into the AiiDAlab-Aurora app and stored as digital twins used when submitting experiments through the app.
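As an illustration of how such a file might be consumed, here is a minimal sketch using the Python standard library. Note that the column names and values below (`Sample`, `Separator`, `Theoretical capacity (mAh)`, ...) are placeholders invented for this example; consult the provided `.csv` files for the actual headers.

```python
import csv
import io

# Hypothetical excerpt mimicking a robot output file; the real column
# names and values differ -- inspect the provided .csv files.
demo = """Sample,Separator,Electrolyte,Theoretical capacity (mAh)
batch-01,glass fiber,LP57,4.8
batch-02,glass fiber,LP57,4.9
"""

# csv.DictReader turns each row into a dict -- the same kind of
# per-sample record the AiiDAlab-Aurora app stores as a digital twin.
with io.StringIO(demo) as fh:
    samples = list(csv.DictReader(fh))

for row in samples:
    print(row["Sample"], row["Theoretical capacity (mAh)"])
```

For the real files, replace the `io.StringIO(demo)` context with `open("eclab_robot_output.csv")` (or the Aurora counterpart).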
The raw `.mpr` EC-lab output provided here in `eclab_mpr.zip` can be processed using `yadg`, the parsing package used by tomato (see instructions here and here). The resulting `.json` files include additional cycling metadata that has been stripped from the `.json` files provided in `eclab_json.zip`, which are otherwise identical. The stripping was done to reduce file size when the files are used in the AiiDAlab-Aurora app during analysis.
On Mac, you may need to unzip the files from the terminal with the command `unzip <archive>`.
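Alternatively, the archives can be extracted on any platform with the Python standard library. A minimal sketch (the helper name `extract` is ours, not part of any tool mentioned here):

```python
import zipfile

def extract(archive: str, dest: str) -> list[str]:
    """Extract all members of `archive` into `dest`; return member names."""
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(dest)
        return zf.namelist()

# Usage on the repository data (paths are the archive names from this repo):
# extract("eclab_mpr.zip", "eclab_mpr")
```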
For the EC-lab batch, the post-processed `.json` files can be queried directly.
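For instance, a post-processed file can be loaded with the standard `json` module. The key names below (`steps`, `time`, `Ewe`) are placeholders for illustration; inspect the actual files in `eclab_json.zip` for their real structure.

```python
import json

# Hypothetical miniature of a post-processed cycling record; the real
# .json files have their own (richer) structure.
raw = '{"steps": [{"time": [0.0, 1.0, 2.0], "Ewe": [3.2, 3.4, 3.5]}]}'
data = json.loads(raw)

# With a real file, replace the lines above with:
# with open("path/to/file.json") as fh:
#     data = json.load(fh)

first = data["steps"][0]
print(len(first["time"]), max(first["Ewe"]))
```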
To query the AiiDA datasets, you will first need to set up an AiiDA environment. You may then proceed to follow the instructions below to explore the data.
To import an archive, run

```shell
verdi archive import <archive>.aiida
```

replacing `<archive>` with your archive of interest.

We recommend exploring the data from the built-in AiiDA shell, as it provides convenient auto-completion for dot notation. For example, hitting tab after `workflow.inputs.` will yield a list of all inputs provided to the workflow.
To start, open the shell with `verdi shell` and run the following:

```python
workflows = QueryBuilder().append(WorkChainNode, filters={"label": {"like": "%<label>%"}}).all(flat=True)
```

where `<label>` is `230511` for archive 1, or `231012` for archive 2.
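The `"like"` filter uses SQL-style wildcards, where `%` matches any sequence of characters, so `%230511%` selects every workflow whose label contains that batch date. In plain Python terms the filter behaves roughly like a substring test; the label strings below are invented for illustration:

```python
# Hypothetical workflow labels -- the real labels in the archives differ.
labels = ["cycling-230511-A", "cycling-230511-B", "calibration-231012"]

def like(label: str, pattern: str = "%230511%") -> bool:
    # SQL LIKE with "%" on both ends reduces to a substring check;
    # "230511" stands in for the batch label of interest.
    return pattern.strip("%") in label

print([l for l in labels if like(l)])
```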
This will store the archive's workflows in a local variable. You can load the first (or any other) workflow with

```python
wf = workflows[0]  # replace 0 with any index smaller than the result of running `len(workflows)`
```

You can then inspect its inputs:

```python
wf.inputs.battery_sample.attributes
wf.inputs.protocols.<protocol>.attributes
wf.inputs.protocol_order.attributes
wf.inputs.control_settings.<protocol>.attributes
```

where `<protocol>` is any protocol given on tab-completion.
Exploring the outputs differs slightly between the datasets.
**`eclab_data`**

The results node was generated by the importer script and is available at

```python
wf.outputs.results.cycling.attributes
```
**`aurora_data`**

The results node is generated automatically when a workflow terminates normally. However, the available workflows in this dataset were terminated prematurely. Nevertheless, the reader may explore the results of completed protocols, including the final protocol terminated by the monitoring, via

```python
wf.called_descendants[#].outputs.results
```
where `#` is the index of the protocol of interest. Note that descendants do not necessarily follow the execution order defined in `wf.inputs.protocol_order`.
More advanced querying is possible but is outside the scope of this document. To learn more, please visit the official AiiDA documentation on querying.