# Library Structure
The library is structured as follows:
├── README.md
│
├── config
│   ├── experiments
│   │   ├── UPP_16.toml
│   │   ├── UPP_32.toml
│   │   ├── UPP_64.toml
│   │   └── UPP_all.toml
│   ├── configuration.toml
│   ├── discord.toml
│   ├── models.toml
│   ├── tflow.toml
│   ├── torch.toml
│   └── train.toml
│
├── data
│   ├── sfv03_fcci.zarr
│   └── scaler
│       ├── fcci_max_point_map.nc
│       ├── fcci_min_point_map.nc
│       ├── fcci_mean_point_map.nc
│       └── fcci_stdv_point_map.nc
│
├── digital_twin_notebooks
│   ├── img
│   ├── inference_on_test_data.ipynb
│   ├── inference_with_onnx.ipynb
│   └── model_conversion_pytorch_to_onnx.ipynb
│
├── docs
│
├── experiments
│   └── 20240311_222733
│       ├── exp_5.toml
│       ├── fabric_benchmark.csv
│       ├── last_model.onnx
│       └── last_model.pt
│
├── Fires
│   ├── _datasets
│   │   ├── dataset_zarr.py
│   │   └── torch_dataset.py
│   ├── _layers
│   │   └── unetpp.py
│   ├── _macros
│   │   └── macros.py
│   ├── _models
│   │   ├── base.py
│   │   ├── unetpp.py
│   │   └── vgg.py
│   ├── _scalers
│   │   ├── base.py
│   │   ├── minmax.py
│   │   ├── scaling_maps.py
│   │   └── standard.py
│   ├── _utilities
│   │   ├── callbacks.py
│   │   ├── cli_args_checker.py
│   │   ├── cli_args_parser.py
│   │   ├── configuration.py
│   │   ├── decorators.py
│   │   ├── logger.py
│   │   └── swin_model.py
│   ├── __init__.py
│   ├── augmentation.py
│   ├── datasets.py
│   ├── layers.py
│   ├── macros.py
│   ├── models.py
│   ├── scalers.py
│   ├── trainer.py
│   └── utils.py
│
├── main.py
└── launch.sh
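
The TOML files under config/experiments drive the individual runs. As a rough illustration of how one of them could be read, the sketch below parses a file with Python's standard tomllib; the chosen file name appears in the tree above, but how main.py actually consumes the parsed dictionary is an assumption made for illustration, not the library's documented API.

```python
# Minimal sketch: parse an experiment TOML from config/experiments.
# How Fires/main.py actually consumes the resulting dictionary is an
# assumption made for illustration only.
import tomllib  # Python >= 3.11; the external `toml` package works on older versions
from pathlib import Path

CONFIG_DIR = Path("config") / "experiments"

def load_experiment(name: str) -> dict:
    """Read <name>.toml (e.g. UPP_16) into a plain dictionary."""
    with open(CONFIG_DIR / f"{name}.toml", "rb") as fp:  # tomllib requires binary mode
        return tomllib.load(fp)

if __name__ == "__main__":
    cfg = load_experiment("UPP_16")
    print(cfg)  # inspect the parsed configuration before handing it to the trainer
```
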
| File | Type | Main function |
|---|---|---|
| config | Folder | It stores the TOML configuration files. |
| data | Folder | It stores important files (e.g., scalers, symbolic links to the dataset, etc.). |
| digital_twin_notebooks | Folder | It stores the Jupyter notebooks that carry out the Digital Twin's tasks for the Wildfires use case. |
| docs | Folder | It stores the documentation files. |
| experiments | Folder | It contains one folder per run, named with the date and time at which the experiment took place. The best model, the experiment configuration file, and the benchmark file are saved in that folder once the experiment completes. |
| Fires | Folder | The main library used to train the Machine Learning model and to run inference on the SeasFire Cube data. It stores, in an organized way, all the code that supports the main workflow. |
| main.py | File | It contains all the workflow code that must be executed. |
| launch.sh | File | It runs the experiments once executed. |
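
Each experiment folder stores the trained network twice, as a PyTorch checkpoint (last_model.pt) and as an ONNX export (last_model.onnx); the conversion step is covered by model_conversion_pytorch_to_onnx.ipynb. The sketch below shows the general shape of such an export with a stand-in model; the module, input shape, and file name are illustrative assumptions rather than the actual Fires code.

```python
# Hypothetical sketch of a PyTorch -> ONNX export; TinyNet is a stand-in for
# the trained model (the real experiments use the networks defined in Fires/_models).
import torch

class TinyNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 1, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(x)

model = TinyNet().eval()
dummy_input = torch.randn(1, 3, 64, 64)  # assumed input shape, used for tracing only
torch.onnx.export(
    model, dummy_input, "last_model.onnx",
    input_names=["input"], output_names=["output"],
)
```
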
**Warning**
Both the data and experiments directories are empty and are not included in the repository, as they contain files that are too large to be stored.
**Tip**
Before running the code, remember to download the SeasFire Cube v3 and place it in the data folder.
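
As a quick sanity check after the download, the cube and the scaling maps can be opened with xarray. The paths below follow the tree above; the commented min-max formula and the variable name in it are purely illustrative and may differ from the scalers implemented in Fires/_scalers.

```python
# Hedged sketch: verify that the SeasFire cube and the scaler maps are readable.
import xarray as xr

cube = xr.open_zarr("data/sfv03_fcci.zarr")                  # SeasFire Cube v3
vmin = xr.open_dataset("data/scaler/fcci_min_point_map.nc")  # per-pixel minima
vmax = xr.open_dataset("data/scaler/fcci_max_point_map.nc")  # per-pixel maxima

print(cube)  # list variables, coordinates, and chunking

# Per-pixel min-max scaling of a variable (name chosen only as an example):
# scaled = (cube["some_var"] - vmin["some_var"]) / (vmax["some_var"] - vmin["some_var"])
```
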