Multi-Sensor Weed Classification using Deep Feature Learning

Access & Terms of Use
open access
Copyright: Awan, Adnan Farooq
Abstract
Autonomous weed classification is crucial for effective site-specific weed management. A variety of remote sensing sensors are available to capture high-resolution multispectral and hyperspectral images for land-cover monitoring. However, different sensors produce images that vary in spatial resolution, number of spectral bands and signal-to-noise ratio (S/N). This thesis investigates the suitability of weed classification systems built with different machine learning approaches. Multi-sensor datasets are analysed by combining multi-level features, and a partial transfer learning strategy is proposed to handle the variations between sensors.

In the first study, the suitability of a weed classification system is analysed using Histogram of Oriented Gradients (HOG) and Convolutional Neural Network (CNN) methods. The experimental results show that the CNN method extracts semantic feature representations that help to classify each weed category accurately. An investigation of the number of bands shows that colour imaging alone is inadequate for accurate weed classification, and for data with differing spatial resolutions an appropriate patch size is essential when using the CNN method. A comparison of training times shows that the HOG method requires comparatively less training time.

The second study addresses weed classification on a multi-sensor dataset via a feature fusion method. The proposed method combines multi-level CNN features with superpixel-based Local Binary Pattern (LBP) features, and a Support Vector Machine (SVM) is trained on the fused features. The experimental results illustrate that the proposed feature fusion model outperforms traditional methods.

In the third study, a partial transfer learning method is analysed for multi-sensor imagery. When only limited samples are available to train a CNN model, a strategy is developed to borrow a subset of layers from a trained model and refine the current model using the limited samples. It is therefore important to examine which layers of the trained model can be transferred in each case (i.e., variations in spatial resolution, number of bands, and S/N between the reference data and the target data). Simulated multi-sensor imagery is used to investigate the individual cases, and the experimental results show the feasibility and advantages of partial transfer learning for a real multi-sensor dataset.
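For the first study, the comparison is between a hand-crafted feature pipeline and deep feature learning. The snippet below is a minimal sketch of the HOG baseline side only, assuming fixed-size grayscale patches: HOG descriptors are computed per patch and classified with a linear SVM. The array names (X_train, y_train, X_test, y_test) and the HOG parameters are illustrative placeholders, not the thesis configuration.

```python
# Hypothetical HOG + linear SVM baseline for weed patch classification.
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def hog_features(patches):
    """HOG descriptors for a batch of equally sized grayscale patches (N, H, W)."""
    return np.stack([
        hog(p, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
        for p in patches
    ])

# X_train / X_test: (N, H, W) weed patches; y_train / y_test: integer class labels.
# clf = LinearSVC().fit(hog_features(X_train), y_train)
# accuracy = clf.score(hog_features(X_test), y_test)
```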
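For the second study's feature fusion, a minimal sketch follows, assuming a VGG16 backbone as the CNN: activations are pooled from two intermediate convolutional layers, concatenated with a superpixel-averaged LBP histogram, and the fused vector is fed to an SVM. The backbone, layer indices, superpixel count and variable names are assumptions for illustration, not the exact design in the thesis.

```python
# Hypothetical fusion of multi-level CNN features with superpixel-based LBP.
import numpy as np
import torch
from torchvision.models import vgg16
from skimage.feature import local_binary_pattern
from skimage.segmentation import slic
from sklearn.svm import SVC

backbone = vgg16(weights="IMAGENET1K_V1").features.eval()

def multilevel_cnn_features(img_chw, layers=(10, 24)):
    """Global-average-pool the activations of selected conv layers."""
    feats, x = [], img_chw.unsqueeze(0)
    with torch.no_grad():
        for i, layer in enumerate(backbone):
            x = layer(x)
            if i in layers:
                feats.append(x.mean(dim=(2, 3)).squeeze(0))
    return torch.cat(feats).numpy()

def superpixel_lbp(gray, n_segments=50, P=8, R=1.0):
    """Average uniform-LBP histograms over SLIC superpixels."""
    lbp = local_binary_pattern(gray, P, R, method="uniform")
    segments = slic(gray, n_segments=n_segments, channel_axis=None)
    hists = []
    for s in np.unique(segments):
        h, _ = np.histogram(lbp[segments == s], bins=P + 2, range=(0, P + 2), density=True)
        hists.append(h)
    return np.mean(hists, axis=0)

# For each patch (hypothetical x_torch: 3xHxW float tensor, x_gray: HxW array):
#   fused = np.concatenate([multilevel_cnn_features(x_torch), superpixel_lbp(x_gray)])
# clf = SVC(kernel="rbf").fit(np.stack(fused_train), y_train)
```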
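For the third study, the sketch below illustrates the general idea of partial transfer learning in PyTorch: a subset of layers from a model trained on reference-sensor data is copied into a new model and frozen, and only the remaining layers are refined with the limited target-sensor samples. The small network and the choice to transfer only the first block are illustrative assumptions, not the architecture or layer selection used in the thesis.

```python
# Hypothetical partial transfer of trained layers between two patch CNNs.
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    def __init__(self, in_bands=3, n_classes=4):
        super().__init__()
        self.conv1 = nn.Sequential(nn.Conv2d(in_bands, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.conv2 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, n_classes))

    def forward(self, x):
        return self.head(self.conv2(self.conv1(x)))

def partial_transfer(source, target, transfer=("conv1",)):
    """Copy and freeze the named blocks; leave the remaining layers trainable."""
    for name in transfer:
        src_block, tgt_block = getattr(source, name), getattr(target, name)
        tgt_block.load_state_dict(src_block.state_dict())
        for p in tgt_block.parameters():
            p.requires_grad = False
    return target

# source_model: assumed already trained on reference-sensor imagery.
# target_model: refined on the limited labelled target-sensor patches.
source_model = PatchCNN(in_bands=3)
target_model = partial_transfer(source_model, PatchCNN(in_bands=3), transfer=("conv1",))
optimiser = torch.optim.Adam(
    [p for p in target_model.parameters() if p.requires_grad], lr=1e-3
)
```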
Author(s)
Awan, Adnan Farooq
Supervisor(s)
Jia, Xiuping
Publication Year
2020
Resource Type
Thesis
Degree Type
PhD Doctorate
Files
public versions.pdf (6.62 MB, Adobe Portable Document Format)