Deep Learning and Hyperspectral Imaging – AI4AGRICULTURE

Gathering data on crop status by using deep-learning analyses of hyperspectral imaging to improve spray operations.

Concept

This Flagship Innovation Experiment (FIE) aims to reduce spraying applications on crops by combining Artificial Intelligence (AI) with hyperspectral imaging, creating a decision-support model that channels the collected data into task maps. Applying the right amount at the right time requires extensive information on crop status. These data are collected by hyperspectral cameras, which capture the light reflected from leaves and so provide information about pigment concentration, cell structure, and infections. The FIE therefore relies on deep learning and analytics to construct and validate cutting-edge algorithms, producing new data and thus classifying crop images more accurately.

Generative Adversarial Networks (GANs) are essential for agricultural systems, since the window for data acquisition is typically short while data-hungry deep architectures demand large quantities of training data. The deep learning algorithms will be trained on supercomputers to allow fast training of decision algorithms, while inference will run on embedded Graphics Processing Unit (GPU) devices. The results provide the input needed to build task maps, which are ultimately used to carry out precise spraying operations.
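As a minimal illustration of the final step, per-cell classification results can be turned into a binary task map that tells the sprayer where to act. The class labels, grid layout, and values below are invented for the sketch and are not the project's actual data format:

```python
import numpy as np

# Hypothetical per-cell classifier output for a small field grid:
# 0 = soil, 1 = crop, 2 = weed (label encoding assumed for illustration).
field = np.array([
    [1, 1, 2, 0],
    [1, 2, 2, 0],
    [0, 1, 1, 2],
])

# Task map: spray only the cells classified as weed.
task_map = field == 2

print(task_map.astype(int))
print(f"Cells to spray: {task_map.sum()} of {task_map.size}")
```

A real task map would additionally carry georeferencing so each cell maps to a physical position in the field.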

Implementation 

The drone flights were performed manually; no autonomous flights have been carried out yet. In addition to the drone, cameras were mounted on a tractor boom sprayer.

The first challenge was to define the focus of our analysis and the kind of information we should provide. At the beginning of the project, it was unclear which crops and which growth stages the focus should be on. After investigating the images, we concluded that identifying weeds at the species level was too complicated and unnecessary for the project. It was therefore agreed to classify the first season's images into three classes: 'weed', 'crop', and 'soil'.
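Schematically, a trained network emits a score per class for each image region, and the predicted label is the class with the highest probability. The scores below are invented and the softmax step is shown framework-free; this is a sketch of the classification output, not the project's model:

```python
import numpy as np

CLASSES = ["soil", "crop", "weed"]  # the first-season label set

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical per-region class scores from a trained network (values invented).
scores = np.array([
    [4.0, 1.0, 0.5],   # strongly soil-like
    [0.2, 3.5, 2.9],   # crop vs weed is the hard call
    [0.1, 1.0, 4.2],   # strongly weed-like
])

probs = softmax(scores)
labels = [CLASSES[i] for i in probs.argmax(axis=1)]
print(labels)  # → ['soil', 'crop', 'weed']
```

The middle row shows why the crop/weed distinction, rather than soil detection, is the hard part of the task.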

During the second season, the idea is to split up the ‘weed’ class into 2 groups: monocotyls (grassy weeds) and dicotyls (leafy weeds). This information will be enough to decide which Plant Protection Product (PPP) or which mix of products should be used.

The second challenge was the amount of information collected. Data acquisition started late in the season and took more time than expected. A very large dataset was collected, and multiple meetings were needed to choose the best datasets for our goals. The GoPro images contained a lot of extraneous content (tractor, sky, shadows, merging crop rows) that confused the neural network and had to be removed.
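One hedged way to screen out such confusing frames automatically is a simple heuristic filter before labelling; the thresholds and the bright-blue "sky" rule below are assumptions for illustration, not the project's actual cleaning procedure:

```python
import numpy as np

def keep_frame(img, sky_thresh=200, max_sky_frac=0.2):
    """Illustrative heuristic: drop frames where too many pixels look like
    bright, blue-dominant sky, which would confuse the classifier."""
    # img: H x W x 3 uint8 RGB array.
    r = img[..., 0].astype(int)
    g = img[..., 1].astype(int)
    b = img[..., 2].astype(int)
    sky = (b > sky_thresh) & (b >= r) & (b >= g)
    return bool(sky.mean() <= max_sky_frac)

# Synthetic examples: a mostly-green "field" frame and a half-sky frame.
field = np.zeros((10, 10, 3), dtype=np.uint8)
field[..., 1] = 120                 # green canopy
half_sky = field.copy()
half_sky[:5] = [135, 206, 235]      # top half sky blue

print(keep_frame(field), keep_frame(half_sky))  # → True False
```

In practice such a filter would only pre-sort frames for human review, not replace manual dataset selection.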

Lessons Learnt 

GANs are no longer relevant for the project. The original proposal envisioned using such a generative network to produce new artificial labelled data for training. We have instead chosen simpler supervised learning techniques and redirected the effort that would have gone into GANs towards improving the efficiency with which we produce manual labels for supervised learning. This has largely taken the form of improvements to Robovision's labelling tools, and we have already generated promising results.

The construction of a hyperspectral dataset for training, testing, and validation was no longer required for disease detection, as the modified RGB camera proved sufficiently capable of capturing the near-infrared (NIR) region to detect Alternaria solani in the field.

RGB images were found to be the best option for identifying weeds within a crop: weeds are easily identified and labelled on an RGB image. For disease detection (Alternaria solani in potato), modified RGB images were found to facilitate identification and labelling of the disease by capturing the spectral NIR region.
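To illustrate how a NIR-sensitive modified RGB camera can expose diseased tissue, a simple vegetation index can be computed per pixel. The channel mapping below (red channel recording NIR, blue channel visible light) is an assumption for the sketch; the actual mapping depends on how the camera's filters were converted:

```python
import numpy as np

def pseudo_ndvi(img):
    """NDVI-like index from a modified RGB camera. Assumes (illustration
    only) that the red channel records NIR and the blue channel records
    visible light; real channel assignment is camera-specific."""
    nir = img[..., 0].astype(float)
    vis = img[..., 2].astype(float)
    return (nir - vis) / (nir + vis + 1e-9)  # small epsilon avoids 0/0

# Synthetic 2x2 patch: healthy leaf tissue reflects NIR strongly, while a
# lesioned spot reflects less NIR, lowering the index value.
patch = np.array([[[200, 80, 40], [200, 80, 40]],
                  [[ 90, 70, 60], [200, 80, 40]]], dtype=np.uint8)

idx = pseudo_ndvi(patch)
print(np.round(idx, 2))
```

Pixels with a markedly lower index than their neighbours are candidate lesion sites, which is one reason the NIR region helps with Alternaria solani detection.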