# Deep Learning
Using Citizen Science Data as Pre-Training for Semantic Segmentation of High-Resolution UAV Images for Natural Forests Post-Disturbance Assessment
Published in MDPI Forests journal!

Over the past few months, I contributed to the paper Using Citizen Science Data as Pre-Training for Semantic Segmentation of High-Resolution UAV Images for Natural Forests Post-Disturbance Assessment, published in the Classification of Forest Tree Species Using Remote Sensing Technologies: Latest Advances and Improvements special issue of the MDPI journal Forests. This paper proposes a novel pre-training approach for semantic segmentation of UAV imagery, in which a classifier trained on citizen science data generates over 140,000 auto-labeled images, improving model performance and achieving a higher F1 score (43.74%) than training solely on manually labeled data (41.58%). With this paper, we highlight the importance of AI for large-scale environmental monitoring of dense and vast forested areas, such as those in the province of Quebec.
Here is the abstract:
The ability to monitor forest areas after disturbances is key to ensure their regrowth. Problematic situations that are detected can then be addressed with targeted regeneration efforts. However, achieving this with automated photo interpretation is problematic, as training such systems requires large amounts of labeled data. To this effect, we leverage citizen science data (iNaturalist) to alleviate this issue. More precisely, we seek to generate pre-training data from a classifier trained on selected exemplars. This is accomplished by using a moving-window approach on carefully gathered low-altitude images with an Unmanned Aerial Vehicle (UAV), WilDReF-Q (Wild Drone Regrowth Forest—Quebec) dataset, to generate high-quality pseudo-labels. To generate accurate pseudo-labels, the predictions of our classifier for each window are integrated using a majority voting approach. Our results indicate that pre-training a semantic segmentation network on over 140,000 auto-labeled images yields an F1 score of 43.74% over 24 different classes, on a separate ground truth dataset. In comparison, using only labeled images yields a score of 32.45%, while fine-tuning the pre-trained network only yields marginal improvements (46.76%). Importantly, we demonstrate that our approach is able to benefit from more unlabeled images, opening the door for learning at scale. We also optimized the hyperparameters for pseudo-labeling, including the number of predictions assigned to each pixel in the majority voting process. Overall, this demonstrates that an auto-labeling approach can greatly reduce the development cost of plant identification in regeneration regions, based on UAV imagery.
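The moving-window pseudo-labeling with majority voting described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: `classify_window` stands in for the trained iNaturalist classifier, and the window size, stride, and class count are illustrative defaults rather than the paper's tuned hyperparameters.

```python
import numpy as np

def pseudo_label(image, classify_window, window=64, stride=32, n_classes=24):
    """Build a per-pixel pseudo-label map by majority voting over
    overlapping window predictions (sketch of the general idea)."""
    h, w = image.shape[:2]
    votes = np.zeros((h, w, n_classes), dtype=np.int64)
    for y in range(0, h - window + 1, stride):
        for x in range(0, w - window + 1, stride):
            cls = classify_window(image[y:y + window, x:x + window])
            votes[y:y + window, x:x + window, cls] += 1  # every covered pixel votes
    return votes.argmax(axis=-1)  # majority class per pixel
```

Because the stride is smaller than the window, most pixels are covered by several windows, so a single misclassified window is outvoted by its neighbors.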
Artificial Intelligence Resources
Learning resources for anyone interested in the vast domain of AI

Here is a slightly curated list of learning resources and useful links in Computer Vision, Artificial Intelligence and related topics. For resources on robotics, visit the robotics resources post. If you have other great resources to suggest, feel free to contact me.
Deep Learning
- Deep Learning (2016), book, Ian Goodfellow, Yoshua Bengio and Aaron Courville
- Deep Learning (2015), article in Nature, Yann LeCun, Yoshua Bengio & Geoffrey Hinton
- A Recipe for Training Neural Networks, a blog post by Andrej Karpathy
Models
Convolutional Neural Networks
- LeNet
- AlexNet
- GoogLeNet
  - Paper
  - Inspired the Inception architectures
- ResNet
  - Common versions of the architecture: ResNet-50, ResNet-101
  - Inspired ResNeXt
- ConvNeXt
Transformers
- CS25 - Transformers United, a Stanford lecture series on Transformers
Diffusion models
- Diffusion is spectral autoregression
- Denoising diffusion probabilistic models from first principles, a tutorial series on diffusion models in Julia
Applications
Computer Vision
- CS231n - Deep Learning for Computer Vision, a Stanford CS class on using deep learning for computer vision tasks.
Geospatial learning
torchgeo
- Blog article on
torchgeo
- Blog article on
Reinforcement Learning
- Reinforcement Learning: An Introduction, by Sutton & Barto
- David Silver Lectures, a good introductory course
- reinforcement-learning, a GitHub repo by Denny Britz to accompany Sutton & Barto's book and David Silver's course
- OpenAI Spinning Up, which gives a general overview
- Stable Baselines, an RL library developed by the DLR Institute of Robotics and Mechatronics (DLR-RM)
Applications
Legged Robotics
- RSL RL, an RL library with algorithm implementations, from the Robotic Systems Lab (RSL) of Prof. Marco Hutter, ETH Zurich
Datasets
Toy Datasets
- Iris
- Wisconsin Breast Cancer Dataset
- Wine Dataset
- Ames Housing Dataset
- MNIST
- FashionMNIST
- AutoMPG
- ImageNet
- CIFAR datasets
Object Detection
- MAN TruckScenes, the world's first public dataset for autonomous trucking
- Challenge on Hugging Face Spaces
Important papers
- Hochreiter & Schmidhuber (1997): the Long Short-Term Memory (LSTM) architecture, which addresses the vanishing and exploding gradient problems of vanilla RNNs.
- Bahdanau et al. (2014): an RNN encoder-decoder with an additive attention mechanism, introduced for neural machine translation.
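To make the last entry concrete, here is a minimal NumPy sketch of additive (Bahdanau-style) attention: each encoder state is scored against the decoder query, the scores are softmaxed, and the context vector is the resulting weighted sum. The weight names and shapes are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def additive_attention(query, keys, W_q, W_k, v):
    """Additive attention sketch.
    query: decoder state, shape (d,); keys: encoder states, shape (T, d);
    W_q, W_k: projections, shape (d, h); v: scoring vector, shape (h,)."""
    scores = np.tanh(query @ W_q + keys @ W_k) @ v   # one score per time step, (T,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                         # softmax over time steps
    context = weights @ keys                         # weighted sum of encoder states, (d,)
    return context, weights
```

The softmax weights show which input positions the model "attends to" when producing the next output token.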
Proprioception Is All You Need: Terrain Classification for Boreal Forests

My paper, Proprioception Is All You Need: Terrain Classification for Boreal Forests, will be presented at the 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2024) in Abu Dhabi, UAE. The paper presents BorealTC: a publicly available dataset containing annotated data from a wheeled UGV for various mobility-impeding terrain types typical of the boreal forest. The data were acquired in winter and spring on deep snow and silty loam, two terrains that are uncommon in urban settings.
Here is the abstract:
Recent works in field robotics highlighted the importance of resiliency against different types of terrains. Boreal forests, in particular, are home to many mobility-impeding terrains that should be considered for off-road autonomous navigation. Also, being one of the largest land biomes on Earth, boreal forests are an area where autonomous vehicles are expected to become increasingly common. In this paper, we address the issue of classifying boreal terrains by introducing BorealTC, a publicly available dataset for proprioceptive-based terrain classification (TC). Recorded with a Husky A200, our dataset contains 116 min of Inertial Measurement Unit (IMU), motor current, and wheel odometry data, focusing on typical boreal forest terrains, notably snow, ice, and silty loam. Combining our dataset with another dataset from the literature, we evaluate both a Convolutional Neural Network (CNN) and the novel state space model (SSM)-based Mamba architecture on a TC task. We show that while CNN outperforms Mamba on each separate dataset, Mamba achieves greater accuracy when trained on a combination of both. In addition, we demonstrate that Mamba’s learning capacity is greater than a CNN for increasing amounts of data. We show that the combination of two TC datasets yields a latent space that can be interpreted with the properties of the terrains. We also discuss the implications of merging datasets on classification. Our source code and dataset are publicly available online: https://github.com/norlab-ulaval/BorealTC.
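The abstract above describes classifying proprioceptive time series (IMU, motor current, wheel odometry) into terrain types. A common first step for such a pipeline is to segment the multichannel stream into fixed-length windows, each inheriting the majority terrain label of its samples. The sketch below illustrates this; the helper name, window length, and stride are illustrative assumptions, not the values used in BorealTC.

```python
import numpy as np

def make_windows(signals, labels, win=200, stride=100):
    """Segment a multichannel proprioceptive stream (shape (T, C),
    e.g. IMU axes plus motor currents) into fixed-length windows for
    terrain classification. Each window gets the majority label of
    its samples. Illustrative helper, not the BorealTC pipeline."""
    X, y = [], []
    for start in range(0, len(signals) - win + 1, stride):
        X.append(signals[start:start + win])
        y.append(np.bincount(labels[start:start + win]).argmax())  # majority label
    return np.stack(X), np.array(y)
```

The resulting `(N, win, C)` windows can then be fed to a CNN or an SSM-based model such as Mamba, as evaluated in the paper.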
Terrains
For this paper, a Clearpath Husky A200 was driven on five different terrains.