Jurrian Doornbos
RGBtoNDVIconversion
17/3/2024
Evaluating GAN results can be viewed through an application perspective, where the approach is assessed not only on direct accuracy metrics but also through a usability lens, asking the question: 'is the output of the GAN good enough for an NDVI application?'
Furthermore, the generalization of the Pix2Pix model is explored by evaluating on datasets that differ from the training dataset: a different location (Canyelles, 2023), a different year (Bodegas Terras Gauda, 2022), and different RGB sensors. The model is trained on a multispectral composite of the Red, Green and Blue bands, whilst Canyelles 2023 and Bodegas Terras Gauda 2022 were captured with true RGB sensors.
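For reference, NDVI is computed from the Near-Infrared and Red bands. A minimal sketch of the standard definition (the small epsilon guarding against division by zero is an implementation choice, not taken from the notebooks):

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Standard NDVI: (NIR - Red) / (NIR + Red), bounded to [-1, 1]."""
    return (nir - red) / (nir + red + 1e-9)  # epsilon avoids division by zero
```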
This repository supports the work in: Assessing Generative Deep Learning of RGB→NDVI in Vineyards on UAV Imagery Through Real Applications. It presents all the steps in the evaluation approach of the generated NDVI maps through Jupyter Notebook implementations.
There are 9 distinct steps to the study: creating datasets, training the GAN (Pix2Pix), generating new NDVI maps from RGB, reconstructing these back into the orthomosaic, preprocessing the datasets for further evaluation, and then the three evaluations: pixel, botrytis bunch rot, and vigor mapping. These can all be found under the notebooks folder. All the data (from Zenodo) should be placed in the data folder.
Required data inputs:
As we are dealing with two models, Pix2Pix and Pix2PixHD, the training dataset has to be created twice from the same orthomosaic: once at 256x256 resolution and once at 512x512 resolution.
This is performed in the notebook: 1_training_set_creation.ipynb
It also structures these datasets into training/evaluation/testing splits.
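A minimal sketch of the chipping idea, assuming the orthomosaic is a GeoTIFF readable by rioxarray; the function name and output layout are illustrative, and the actual implementation (including the splits) lives in 1_training_set_creation.ipynb:

```python
import rioxarray as rxr

def chip_orthomosaic(ortho_path: str, out_dir: str, size: int = 256) -> None:
    """Slice an orthomosaic into non-overlapping size x size chips
    (256 for Pix2Pix, 512 for Pix2PixHD) and save each as a GeoTIFF."""
    ortho = rxr.open_rasterio(ortho_path)  # dims: (band, y, x)
    _, height, width = ortho.shape
    for row in range(height // size):
        for col in range(width // size):
            chip = ortho.isel(
                y=slice(row * size, (row + 1) * size),
                x=slice(col * size, (col + 1) * size),
            )
            chip.rio.to_raster(f"{out_dir}/chip_{row}_{col}.tif")
```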
Additionally, the Pix2Pix models require chips of the correct size, so the testing sets (btg2022 and can2023) also need to be processed accordingly.
This is performed in the notebook: 2_testing_set_creation.ipynb
It also structures these chips so that Pix2Pix can read the files correctly.
Training the models: for Pix2Pix, the code is taken almost directly from the pytorch CycleGAN and Pix2Pix repository. For Pix2PixHD, it is adapted from the NVIDIA Pix2PixHD repository, which essentially runs the same training code and data structure.
This is covered in the training notebook: 3_p2p_training.ipynb, which requires a slightly different environment; this is explained in the notebook.
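An illustrative training invocation following the upstream pytorch CycleGAN and Pix2Pix command-line interface; the dataset path and experiment name below are placeholders, and the exact flags used in this study are set in 3_p2p_training.ipynb:
python train.py --dataroot ./data/train_256 --name rgb2ndvi_pix2pix --model pix2pix --direction AtoB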
Generating the NDVI is performed on the test datasets, using the model weights from the training process, found under the respective Pix2Pix and Pix2PixHD model_weights folders.
This is also covered in notebook: 3_p2p_training.ipynb.
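Generation with trained weights follows the upstream test script in the same illustrative fashion; again, the dataset path and experiment name are placeholders:
python test.py --dataroot ./data/test_can2023 --name rgb2ndvi_pix2pix --model pix2pix --direction AtoB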
The generated NDVI chips should be reconstructed and aligned back into their original positions in the orthomosaic. This is covered in notebook: 5_reconstructing_ortho.ipynb.
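A minimal sketch of the reconstruction idea, assuming chip filenames encode their grid position (as in the chipping sketch above); re-attaching the georeferencing from the original orthomosaic is handled in 5_reconstructing_ortho.ipynb:

```python
import glob
import os
import re

import numpy as np
from PIL import Image

def reconstruct(chip_dir: str, size: int = 256) -> np.ndarray:
    """Paste generated NDVI chips back into one mosaic array by grid position."""
    paths = sorted(glob.glob(os.path.join(chip_dir, "chip_*_*.png")))
    # Parse (row, col) from filenames like chip_12_7.png
    coords = [tuple(map(int, re.findall(r"\d+", os.path.basename(p)))) for p in paths]
    rows = 1 + max(r for r, _ in coords)
    cols = 1 + max(c for _, c in coords)
    mosaic = np.full((rows * size, cols * size), np.nan, dtype=np.float32)
    for path, (r, c) in zip(paths, coords):
        chip = np.asarray(Image.open(path).convert("F"))  # single-band float
        mosaic[r * size:(r + 1) * size, c * size:(c + 1) * size] = chip
    return mosaic
```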
The various evaluations require some additional preprocessing of the datasets: aligning the rasters, setting NULL values, and writing them into a single folder in data/preprocessed/. This is covered in notebook: 6_preprocessing_eval.ipynb.
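A minimal sketch of the alignment step using rioxarray's reproject_match; the file names are illustrative, and the exact NULL handling is defined in 6_preprocessing_eval.ipynb:

```python
import numpy as np
import rioxarray as rxr

# Illustrative paths; the real files live under data/ (see the notebook).
reference = rxr.open_rasterio("data/preprocessed/ndvi_true.tif")
generated = rxr.open_rasterio("data/preprocessed/ndvi_generated.tif")

# Snap the generated raster onto the reference grid (CRS, resolution, extent).
aligned = generated.rio.reproject_match(reference)

# Mask pixels that are invalid in either raster before comparison.
aligned = aligned.where(np.isfinite(reference) & np.isfinite(aligned))
aligned.rio.to_raster("data/preprocessed/ndvi_generated_aligned.tif")
```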
The first evaluation is at the pixel level: flattening the orthomosaics and checking for absolute accuracy, as well as noise and structural similarity between true and generated NDVI. This is covered in notebook: 7_pixel_level_eval.ipynb.
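A minimal sketch of such pixel-level metrics, assuming mean absolute error, PSNR, and SSIM as the concrete measures (scikit-image provides the latter two); the metric set and masking actually used are defined in 7_pixel_level_eval.ipynb:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def pixel_metrics(true_ndvi: np.ndarray, gen_ndvi: np.ndarray) -> dict:
    """Compare flattened true vs. generated NDVI rasters on valid pixels."""
    valid = np.isfinite(true_ndvi) & np.isfinite(gen_ndvi)
    mae = np.abs(true_ndvi[valid] - gen_ndvi[valid]).mean()
    # NDVI spans [-1, 1], so the data range is 2. NaNs are zeroed here for the
    # windowed metrics; the notebook handles masking properly.
    t, g = np.nan_to_num(true_ndvi), np.nan_to_num(gen_ndvi)
    psnr = peak_signal_noise_ratio(t, g, data_range=2)
    ssim = structural_similarity(t, g, data_range=2)
    return {"mae": float(mae), "psnr": float(psnr), "ssim": float(ssim)}
```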
Using the implementation of the botrytis-bunch-rot mapping algorithm from Ariza et al. (2023), the NDVI maps are compared in mapping out botrytis risk in the vineyard. This is covered in notebook: 8_bbr_eval.ipynb.
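The Ariza et al. (2023) algorithm itself lives in the notebook; the comparison step, reduced to the agreement between two categorical risk maps, could look roughly like the sketch below (the class count and the -1 NULL marker are illustrative assumptions):

```python
import numpy as np

def risk_agreement(true_risk: np.ndarray, gen_risk: np.ndarray,
                   n_classes: int = 3) -> np.ndarray:
    """Confusion matrix between risk maps from true vs. generated NDVI."""
    valid = (true_risk >= 0) & (gen_risk >= 0)  # assume -1 marks NULL pixels
    cm = np.zeros((n_classes, n_classes), dtype=int)
    np.add.at(cm, (true_risk[valid].astype(int), gen_risk[valid].astype(int)), 1)
    return cm

# Overall agreement is then the trace over the total: np.trace(cm) / cm.sum()
```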
The final step is evaluating the generated NDVI maps in a vigor-mapping application from Matese et al. (2018). This is covered in notebook: 9_vigor_eval.ipynb.
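A minimal sketch of a vigor-zoning step, splitting NDVI into low/medium/high vigor by terciles; the thresholds here are an illustrative stand-in, and the procedure actually used follows Matese et al. (2018) in 9_vigor_eval.ipynb:

```python
import numpy as np

def vigor_zones(ndvi: np.ndarray) -> np.ndarray:
    """Classify NDVI into 0=low, 1=medium, 2=high vigor via terciles."""
    valid = np.isfinite(ndvi)
    low, high = np.nanquantile(ndvi, [1 / 3, 2 / 3])
    zones = np.full(ndvi.shape, -1, dtype=int)  # -1 marks NULL pixels
    zones[valid] = np.digitize(ndvi[valid], [low, high])
    return zones
```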
The approach makes heavy use of uavgeo, which is built upon the work of rioxarray, geopandas, shapely, and a few more.
You can choose to install everything in a Python virtual environment or directly run a JupyterLab Docker image:
Some notebooks require a slightly different, incompatible environment, such as for running the Pix2Pix models and the BBR-heatmap creation; this is covered in the respective notebooks.
Create a new environment (optional but recommended):
conda create -n uavgeo_env python=3.10
conda activate uavgeo_env
Install the uavgeo package (for now: pip only):
pip install uavgeo
This starts a premade Jupyter environment with everything preinstalled, based on an NVIDIA Docker image for deep-learning support.
docker run --rm -it --runtime=nvidia -p 8888:8888 --gpus 1 --shm-size=5gb --network=host -v /path_to_local/dir:/home/jovyan jurrain/drone-ml:gpu-torch11.8-uavgeoformers
Use the --network=host flag if you want to run it on a different machine in the same network and access the notebook from there (it is not needed when running locally).
The -v flag mounts a local folder into the container, so that downloaded data and model weights stay in that folder, remain accessible from the PC, and persist across restarts. path_to_local/dir is the path to your working directory from which you want to access the notebook; it can be . if you have already cd'ed into it.
The --runtime=nvidia flag can be skipped when working on WSL2.