Damien LaRocque
Using Citizen Science Data as Pre-Training for Semantic Segmentation of High-Resolution UAV Images for Natural Forests Post-Disturbance Assessment
Published in MDPI Forests journal!

Over the last few months, I contributed to the paper Using Citizen Science Data as Pre-Training for Semantic Segmentation of High-Resolution UAV Images for Natural Forests Post-Disturbance Assessment, published in the Classification of Forest Tree Species Using Remote Sensing Technologies: Latest Advances and Improvements special issue of the MDPI journal Forests. The paper proposes a novel pre-training approach for semantic segmentation of UAV imagery: a classifier trained on citizen science data generates over 140,000 auto-labeled images, which improves model performance, yielding a higher F1 score (43.74%) than training solely on manually labeled data (32.45%). With this paper, we highlight the importance of AI for large-scale environmental monitoring of dense and vast forested areas, such as those in the province of Quebec.
Here is the abstract:
The ability to monitor forest areas after disturbances is key to ensure their regrowth. Problematic situations that are detected can then be addressed with targeted regeneration efforts. However, achieving this with automated photo interpretation is problematic, as training such systems requires large amounts of labeled data. To this effect, we leverage citizen science data (iNaturalist) to alleviate this issue. More precisely, we seek to generate pre-training data from a classifier trained on selected exemplars. This is accomplished by using a moving-window approach on carefully gathered low-altitude images with an Unmanned Aerial Vehicle (UAV), the WilDReF-Q (Wild Drone Regrowth Forest—Quebec) dataset, to generate high-quality pseudo-labels. To generate accurate pseudo-labels, the predictions of our classifier for each window are integrated using a majority voting approach. Our results indicate that pre-training a semantic segmentation network on over 140,000 auto-labeled images yields an F1 score of 43.74% over 24 different classes, on a separate ground truth dataset. In comparison, using only labeled images yields a score of 32.45%, while fine-tuning the pre-trained network only yields marginal improvements (46.76%). Importantly, we demonstrate that our approach is able to benefit from more unlabeled images, opening the door for learning at scale. We also optimized the hyperparameters for pseudo-labeling, including the number of predictions assigned to each pixel in the majority voting process. Overall, this demonstrates that an auto-labeling approach can greatly reduce the development cost of plant identification in regeneration regions, based on UAV imagery.
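The moving-window majority voting described in the abstract can be sketched as follows. Everything here is an illustrative toy, not the paper's implementation: `classify_window` stands in for the classifier trained on iNaturalist exemplars, and the window size, stride, class count, and one-channel test image are arbitrary assumptions.

```python
import numpy as np

def pseudo_label(image, classify_window, win=4, stride=2, n_classes=3):
    """Slide a window over `image`, classify each crop, and assign each
    pixel the majority class over all windows that cover it."""
    h, w = image.shape[:2]
    votes = np.zeros((h, w, n_classes), dtype=np.int32)
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            cls = classify_window(image[y:y + win, x:x + win])
            votes[y:y + win, x:x + win, cls] += 1  # every covered pixel votes
    return votes.argmax(axis=-1)  # per-pixel majority class

# Toy usage: a "classifier" that predicts from the window's mean intensity
img = np.zeros((8, 8))
img[:, 4:] = 1.0
labels = pseudo_label(img, lambda w: int(w.mean() > 0.5))
```

Because overlapping windows each cast a vote for all the pixels they cover, isolated misclassifications of a single window get outvoted by its neighbors, which is what makes the pseudo-labels usable for pre-training.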
Artificial Intelligence Resources
Learning resources for anyone interested in the vast domain of AI

Here is a slightly curated list of learning resources and useful links in Computer Vision, Artificial Intelligence and related topics. For resources on robotics, visit the robotics resources post. If you have other great resources to suggest, feel free to contact me.
Deep Learning
Models
Convolutional Neural Networks
- LeNet
- AlexNet
- GoogLeNet, which inspired the Inception architectures
- ResNet, with common versions ResNet-50 and ResNet-101; it inspired ResNeXt
- ConvNeXt
Transformers
- CS25 - Transformers United, a Stanford lecture series on Transformers
Diffusion models
- Diffusion is spectral autoregression
- Denoising diffusion probabilistic models from first principles, a tutorial series on diffusion models in Julia
Applications
Computer Vision
- CS231n - Deep Learning for Computer Vision, a Stanford CS class on using deep learning for computer vision tasks.
Geospatial learning
- torchgeo, a PyTorch domain library for geospatial data
- Blog articles on torchgeo
Reinforcement Learning
- Reinforcement Learning: An Introduction, by Sutton & Barto
- David Silver's lectures, a good introductory course
- reinforcement-learning, a GitHub repo by Denny Britz that accompanies Sutton & Barto's book and David Silver's course
- OpenAI Spinning Up, which gives a general overview
- Stable Baselines, an RL library developed by the DLR Institute of Robotics and Mechatronics (DLR-RM)
Applications
Legged Robotics
- RSL RL, an RL library with algorithms, from the Robotic Systems Lab (RSL), Prof. Dr. Marco Hutter, ETH Zurich
Datasets
Toy Datasets
- Iris
- Wisconsin Breast Cancer Dataset
- Wine Dataset
- Ames Housing Dataset
- MNIST
- FashionMNIST
- AutoMPG
- ImageNet
- CIFAR datasets
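Most of the tabular datasets above (Iris, Wine, the Wisconsin Breast Cancer Dataset) ship with scikit-learn, so loading them is a one-liner; the image datasets are similarly available through torchvision. For example, Iris:

```python
from sklearn.datasets import load_iris

# 150 samples, 4 features (sepal/petal length and width), 3 species
X, y = load_iris(return_X_y=True)
```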
Object Detection
- MAN TruckScenes, the world's first public dataset for autonomous trucking
- Challenge on HuggingFace Spaces
Robotics Resources
Learning resources for anyone starting in robotics

Here is a slightly curated list of learning resources and useful links in robotics. For resources on AI, visit the AI resources post. If you have other great resources to suggest, feel free to contact me.
General
Newsletters
Robot Operating System
Robot Operating System (ROS) is a commonly used open-source robotics middleware. Note that the name of the second major version contains a space: it is written ROS 2, not ROS2.
A new ROS 2 distribution is released every year on May 23rd, World Turtle Day. Each distribution targets a specific Ubuntu release. Here are the recent distributions, with their corresponding Ubuntu versions:

<figcaption class="mb-4">
Ubuntu, ROS and ROS 2 timeline
</figcaption>
Official resources:
- Website
- Documentation
- The tutorials are great for anyone getting started with ROS 2
Robots
Legged Robots
- RSL RL, a library with RL algorithms for legged robotics, from the Robotic Systems Lab (RSL), Prof. Dr. Marco Hutter, ETH Zurich
Robot Learning
- LeRobot, a low-cost robotics project by HuggingFace, for accessible end-to-end robot learning
- Robot Learning Course, course material for Marc Toussaint's and Wolfgang Hönig's Robot Learning course at TU Berlin
Simulation
- Genesis, a generative simulation tool for robotics
Datasets
Autonomous Driving
- MAN TruckScenes, the world's first public dataset for autonomous trucking
- Challenge on HuggingFace Spaces
Christmas Tree PCB

Towards the end of 2023, in preparation for the holidays, I designed and soldered about twenty small Christmas tree 🎄 PCBs to give to my relatives. During this year's holidays, I took advantage of some free time to clean up and prepare the project to publish it as an open-source hardware project on GitHub.

About
This project taught me the basics of ATtiny microcontroller programming (via UPDI), PCB art, and SMD component soldering. I hope this project inspires anyone interested in #hardware, #embedded #programming, and #electronics. If you're interested in making one for next year, I wrote down all the tips and tricks for making your own Christmas tree PCB in an instruction guide.
This project was made with the following free and open-source software:
- FreeCAD for the outline
- Inkscape for the PCB art
- KiCad for the design of the PCB
- PlatformIO for embedded programming
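For reference, a PlatformIO project flashing a modern ATtiny over UPDI looks roughly like the configuration below. The part number and upload protocol are assumptions for illustration, not necessarily what this board uses; swap in the actual MCU and programmer.

```ini
[env:attiny]
; toolchain for the megaAVR / tinyAVR 0/1/2-series parts
platform = atmelmegaavr
; hypothetical part number, replace with the actual MCU
board = ATtiny1616
framework = arduino
; flash over UPDI with a plain USB-serial adapter
upload_protocol = serialupdi
upload_speed = 115200
```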
Proprioception Is All You Need: Terrain Classification for Boreal Forests

My paper, Proprioception Is All You Need: Terrain Classification for Boreal Forests, will be presented at the 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2024) in Abu Dhabi, UAE. The paper presents BorealTC: a publicly available dataset containing annotated data from a wheeled UGV for various mobility-impeding terrain types typical of the boreal forest. The data was acquired in winter and spring on deep snow and silty loam, two terrains uncommon in urban settings.
Here is the abstract:
Recent works in field robotics highlighted the importance of resiliency against different types of terrains. Boreal forests, in particular, are home to many mobility-impeding terrains that should be considered for off-road autonomous navigation. Also, being one of the largest land biomes on Earth, boreal forests are an area where autonomous vehicles are expected to become increasingly common. In this paper, we address the issue of classifying boreal terrains by introducing BorealTC, a publicly available dataset for proprioceptive-based terrain classification (TC). Recorded with a Husky A200, our dataset contains 116 min of Inertial Measurement Unit (IMU), motor current, and wheel odometry data, focusing on typical boreal forest terrains, notably snow, ice, and silty loam. Combining our dataset with another dataset from the literature, we evaluate both a Convolutional Neural Network (CNN) and the novel state space model (SSM)-based Mamba architecture on a TC task. We show that while CNN outperforms Mamba on each separate dataset, Mamba achieves greater accuracy when trained on a combination of both. In addition, we demonstrate that Mamba’s learning capacity is greater than a CNN for increasing amounts of data. We show that the combination of two TC datasets yields a latent space that can be interpreted with the properties of the terrains. We also discuss the implications of merging datasets on classification. Our source code and dataset are publicly available online: https://github.com/norlab-ulaval/BorealTC.
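Proprioceptive terrain classification pipelines like this one typically slice the continuous sensor streams (IMU, motor current, odometry) into fixed-length windows before feeding a classifier. A minimal sketch of that windowing step, where the function name, window length, and toy data are illustrative assumptions rather than the paper's actual preprocessing:

```python
import numpy as np

def make_windows(signal, labels, win=50):
    """Slice a (T, C) multichannel sensor stream into non-overlapping
    (win, C) windows, keeping only windows with a single terrain label."""
    X, y = [], []
    for start in range(0, len(signal) - win + 1, win):
        chunk_labels = labels[start:start + win]
        if (chunk_labels == chunk_labels[0]).all():  # drop label transitions
            X.append(signal[start:start + win])
            y.append(chunk_labels[0])
    return np.stack(X), np.array(y)

# Toy stream: 200 samples, 6 IMU channels, terrain switches halfway through
sig = np.random.default_rng(0).normal(size=(200, 6))
lab = np.array([0] * 100 + [1] * 100)
X, y = make_windows(sig, lab)
```

Each resulting (win, C) window then becomes one training sample for a classifier such as a CNN or Mamba.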
Terrains
For this paper, a Clearpath Husky A200 was driven on five different terrains: