CeRVIM Seminar: Louis Petit, November 15, 2024, 10 am, PLT-2546

Friday, November 15, 2024

Intelligent field robotics for environment understanding
Louis Petit
Professor, Université de Sherbrooke

Time: 10 am
Room: PLT-2546

Abstract:
The presentation will review Dr. Petit's past work, including near-optimal path planning in large uncluttered environments, autonomous exploration of mining cavities with a tethered drone, and risk-adaptive path planning for a structure-inspection drone. Particular emphasis will be placed on how understanding field applications often enables algorithmic simplifications that make otherwise impossible missions feasible (e.g., real-time computation). Some ongoing work will also be mentioned, such as recognizing rare plants on cliffs using a drone, a driver-assistance system for off-road land and marine vehicles, and cooperation between heterogeneous robots to understand ecosystems.

Biography:
Louis Petit is an Assistant Professor in the Department of Electrical and Computer Engineering at the Université de Sherbrooke. His current research focuses on developing efficient behaviors and strategies to endow autonomous mobile robots with intelligence and environmental awareness for safety, maintenance, and conservation. Applications include exploring unknown environments for search and rescue, advanced driver assistance systems, infrastructure inspection, and spatio-temporal mapping of ecosystems to monitor endangered or invasive fauna and flora. His research involves a combination of path planning, optimization, decision-making, machine learning, computer vision, and other robotics techniques. He is a member of the Createk and IntRoLab research groups at the Interdisciplinary Institute for Technological Innovation (3IT). He was a Postdoctoral Researcher (2024) in Computer Science at the Mobile Robotics Lab at McGill University, working with David Meger and Gregory Dudek. Prior to McGill, he completed his Ph.D. in Mechanical Engineering at the Université de Sherbrooke (2023), where he worked with Alexis Lussier Desbiens. He holds a BSc and an MSc in Mechatronics Engineering from UCLouvain (2019).

CeRVIM Seminar presented by ROBIC: November 12, 2024, 1:30-2:30 pm, PLT-3370


What is intellectual property for?

Abstract: We will cover the different forms of intellectual property and their application to a concrete robotics-focused case. We will explain the usefulness of intellectual property in research and development, as well as the key moments at which to think about it. This presentation is intended for everyone and serves as an introduction to better equip you in your technological developments.

Presenters:

Gabrielle Lemire, Eng., Ph.D.: A graduate of Université Laval in mechanical engineering and an alumna of the Robotics Laboratory, Gabrielle is a technical patent advisor in the field of mechanical engineering. She specializes in drafting and prosecuting patent applications.

Henri Lajeunesse, lawyer, M.Sc.: A graduate of Université Laval and the Université de Montréal, Henri is a lawyer specializing in intellectual property and business law. He is a member of, among others, the Emerging Technologies Group and the Life Sciences Group.

CeRVIM-IID-IROS-2024 Seminars: Norlab students, Oct. 7, 2024, 2:00 p.m., PLT-3370

CeRVIM-IID-IROS-2024 Seminars: Norlab students, Université Laval
Dry-run for the IROS 2024 conference

Monday, October 7, 2024, 2:00-3:00 p.m., PLT-3370

Poster – Damien LaRocque

LaRocque, D., Guimont-Martin, W., Duclos, D.-A., Giguère, P., & Pomerleau, F. (2024). Proprioception Is All You Need: Terrain Classification for Boreal Forests. arXiv preprint arXiv:2403.16877, accepted for presentation at the 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). https://doi.org/10.48550/arXiv.2403.16877

Oral – Olivier Gamache

Gamache, O., Fortin, J.-M., Boxan, M., Vaidis, M., Pomerleau, F., & Giguère, P. (2024). Exposing the Unseen: Exposure Time Emulation for Offline Benchmarking of Vision Algorithms. arXiv preprint arXiv:2309.13139, accepted for presentation at the 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). https://doi.org/10.48550/arXiv.2309.13139

The presentations will be given in English and the slides will be in English.

CeRVIM-IID-ICRA-2024 Seminars: Norlab students, April 19, 2024

CeRVIM-IID-ICRA-2024 Seminars: Norlab students, Université Laval
Dry-run for the ICRA 2024 conference

Full Program (with presentation abstracts)

Friday, April 19, 2024, 2:00-4:00 p.m., PLT-2501

Program Summary:

Saturation-Aware Angular Velocity Estimation: Extending the Robustness of SLAM to Aggressive Motions
Simon-Pierre Deschênes, PhD student

DRIVE: Data-driven Robot Input Vector Exploration
Dominic Baril, PhD student

Field Report on a Wearable and Versatile Solution for Field Acquisition and Exploration
Olivier Gamache, PhD student

Comparing Motion Distortion Between Vehicle Field Deployments
Nicolas Samson, Master's student

The presentations will be given in English and the slides will be in English.

CeRVIM Seminar: Jean-Christophe Ruel, April 11, 2024

CeRVIM Seminar: Pose detection of entangled objects with specular surfaces for low-cost autonomous robotic grasping

Jean-Christophe Ruel
Laboratoire de robotique
Department of Mechanical Engineering, Université Laval

Thursday, April 11, 2024, 11:00 a.m., PLT-2750

Abstract
The goal of this research project is to develop a low-cost computer vision method for autonomous robotic grasping of reflective objects. More specifically, the study focuses on 6-degree-of-freedom pose estimation and the detection of objects with specular surfaces in a cluttered pile. We present hardware components, notably a camera and a synchronized multi-flash ring, as well as a contribution to the Fast Directional Chamfer Matching algorithm. The results demonstrate the feasibility of the method for detecting and estimating the pose of objects with reflective surfaces, particularly those exhibiting multiple symmetries.

The presentation will be given in French and the slides will be in French.

To obtain the Zoom meeting web link, please contact:
Annette.Schwerdtfeger@gel.ulaval.ca

CeRVIM-IID Seminar: Catherine Bouchard, March 15, 2024

CeRVIM-IID Seminar: Multiplexing fluorescence microscopy images with multi-dimensional deep networks

Catherine Bouchard
Laboratoire de Vision et Systèmes Numériques, LVSN, U. Laval
Laboratoire de Flavie Lavoie-Cardinal, FLC-Lab, U. Laval

Friday, March 15, 2024, 1:30 p.m., PLT-3904

Abstract
Studying the complex interactions between all proteins involved in biological processes requires imaging as many targets as possible simultaneously. Fluorescence lifetime imaging (FLIM) measures the delay between the excitation and the emission of each photon to help discern its emitter, yielding a multi-color image from a single acquisition. The algorithms developed for this task are generally applied pixel by pixel, using one dimension (the distribution of the measured time delays), and therefore do not exploit a valuable source of information that spreads across multiple pixels: the spatial organization of the proteins. We developed a method that exploits a multi-dimensional deep neural network processing all dimensions of the image (temporal and spatial) simultaneously to better assign an emitter to each photon in FLIM images. This method proves more accurate than pixel-by-pixel methods in photon-limited cases, such as super-resolution imaging of living cells, because it uses the spatial features of the image together with the time information. It can additionally serve as an unsupervised denoising method, further enhancing its performance on noisy images. The method can be trained on partially simulated images and applied to real acquisitions, enabling its use in the many experimental cases where training datasets cannot be acquired.

The presentation will be given in English and the slides will be in English.

CeRVIM Seminar: Akshaya Athwale, February 16, 2024

CeRVIM Seminar: DarSwin: Distortion-Aware Radial Swin Transformers for Wide Angle Image Recognition
https://openaccess.thecvf.com/content/ICCV2023/papers/Athwale_DarSwin_Distortion_Aware_Radial_Swin_Transformer_ICCV_2023_paper.pdf

Akshaya Athwale
Laboratoire de Vision et Systèmes Numériques, LVSN
Department of Electrical and Computer Engineering, U. Laval

Friday, February 16, 2024, 11:00 a.m., PLT-3370

Abstract
Wide-angle lenses are commonly used in perception tasks requiring a large field of view. Unfortunately, these lenses produce significant distortions, making conventional models that ignore the distortion effects unable to adapt to wide-angle images. In this research, we present a novel transformer-based model that automatically adapts to the distortion produced by wide-angle lenses. Our proposed image encoder architecture, dubbed DarSwin, leverages the physical characteristics of such lenses analytically defined by the radial distortion profile. In contrast to conventional transformer-based architectures, DarSwin comprises a radial patch partitioning, a distortion-based sampling technique for creating token embeddings, and an angular position encoding for radial patch merging. Compared to other baselines, DarSwin achieves the best results on different datasets with significant gains when trained on bounded levels of distortions (very low, low, medium, and high) and tested on all, including out-of-distribution distortions. While the base DarSwin architecture requires knowledge of the radial distortion profile, we show it can be combined with a self-calibration network that estimates such a profile from the input image itself, resulting in a completely uncalibrated pipeline. Finally, we also present DarSwin-Unet, which extends DarSwin to an encoder-decoder architecture suitable for pixel-level tasks. We demonstrate its performance on depth estimation and show through extensive experiments that DarSwin-Unet can perform zero-shot adaptation to unseen distortions of different wide-angle lenses.

The presentation will be given in English and the slides will be in English.

CeRVIM Seminar: Prof. Javier Vazquez-Corral, Barcelona, July 4, 2023, 2:30 pm, PLT-3370

CeRVIM Seminar

Convolutional neural networks and visual illusions: Can they fool each other?

Javier Vazquez-Corral
Associate Professor, Autonomous University of Barcelona
Researcher, Computer Vision Center, Barcelona

Tuesday, July 4, 2023, 2:30 p.m., PLT-3370

Abstract
Visual illusions teach us that what we see is not always what is represented in the physical world. Their special nature makes them a fascinating tool to test and validate any new vision model proposed. In general, current vision models are based on the concatenation of linear and non-linear operations. The similarity of this structure with the operations present in Convolutional Neural Networks (CNNs) has motivated us to study two research questions:
– Are CNNs trained for low-level visual tasks deceived by visual illusions? If so, one way to obtain CNNs that better replicate human behaviour might be to aim for them to better replicate visual illusions.
– Can we use current deep learning architectures to generate new visual illusions that trick humans?

References
[1] A. Gomez-Villa, A. Martin, J. Vazquez-Corral, and M. Bertalmío, “Convolutional neural networks can be deceived by visual illusions,” CVPR 2019.
[2] A. Gomez-Villa, A. Martín, J. Vazquez-Corral, M. Bertalmío, and J. Malo, “Color illusions also deceive CNNs for low-level vision tasks: Analysis and implications,” Vision Research 176, 156–174.
[3] A. Gomez-Villa, A. Martín, J. Vazquez-Corral, M. Bertalmío, and J. Malo, “On the synthesis of visual illusions using deep generative models,” Journal of Vision 22(8), 2022.

Biography
Javier Vazquez-Corral is an Associate Professor at the Autonomous University of Barcelona and a researcher at the Computer Vision Center. Prior to that, he held post-doctoral positions both at the Universitat Pompeu Fabra in Barcelona and at the University of East Anglia in Norwich, United Kingdom. His main research interest is computational colour, in which he has developed novel approaches to solve different problems ranging from colour constancy to colour stabilization, colour characterization, colour gamut mapping, high dynamic range imaging, image dehazing, image denoising, and vision colour properties such as unique hue prediction and colour naming.

The presentation will be given in English and the slides will be in English.

CeRVIM Seminar: Andréanne Deschênes, April 28, 2023

CeRVIM Seminar: Simultaneous fluorophore discrimination and resolution improvement of super-resolution images using fluorescence lifetime

Andréanne Deschênes
CERVO Research Centre, Université Laval

Friday, April 28, 2023, 11 a.m., PLT-3370

Abstract
To study interactions between neuronal proteins with fluorescence microscopy, simultaneous observation of multiple biological markers is required. SPLIT-STED, an approach exploiting fluorescence lifetime analysis, was developed to improve the spatial resolution of STimulated Emission Depletion (STED) microscopy. We developed an analysis using a linear combination of components in phasor space to multiplex SPLIT-STED and apply it to separate two spectrally indistinguishable fluorophores per imaging channel. We quantify and characterize the performance of our algorithm on simulated images constructed from real single-staining images. This allows us to perform simultaneous resolution improvement and colocalization analysis of multiple protein species in live and fixed neuronal cultures.

The presentation will be given in English and the slides will be in English.