CeRVIM-IID-ICRA-2024 Seminars: Norlab students, April 19, 2024

CeRVIM-IID-ICRA-2024 Seminars: Norlab students, Université Laval
Dry-run for the ICRA 2024 conference

Full Program (with presentation abstracts)

Friday, April 19, 2024, 2:00-4:00 p.m., PLT-2501

Program Summary:

Saturation-Aware Angular Velocity Estimation: Extending the Robustness of SLAM to Aggressive Motions
Simon-Pierre Deschênes, Ph.D. student

DRIVE: Data-driven Robot Input Vector Exploration
Dominic Baril, Ph.D. student

Field Report on a Wearable and Versatile Solution for Field Acquisition and Exploration
Olivier Gamache, Ph.D. student

Comparing Motion Distortion Between Vehicle Field Deployments
Nicolas Samson, M.Sc. student

The presentations will be given in English and the slides will be in English.

CeRVIM Seminar: Jean-Christophe Ruel, April 11, 2024

CeRVIM Seminar: Pose detection of entangled objects with specular surfaces for low-cost autonomous robotic grasping

Jean-Christophe Ruel
Laboratoire de robotique
Dép. de génie mécanique, Université Laval

Thursday, April 11, 2024, 11:00 a.m., PLT-2750

Abstract
The objective of this research project is to develop a low-cost computer vision method for the autonomous robotic grasping of reflective objects. More specifically, the study focuses on 6-degree-of-freedom pose estimation and the detection of objects with specular surfaces within a cluttered pile. We present hardware components, notably a camera with a synchronized multi-flash ring, as well as a contribution to the Fast Directional Chamfer Matching algorithm. The results highlight the feasibility of the method for detecting and estimating the pose of objects with reflective surfaces, particularly those exhibiting multiple symmetries.
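
To make the matching step concrete, here is a minimal sketch of chamfer-style template scoring, the idea underlying Fast Directional Chamfer Matching; the function and variable names are illustrative, not the speaker's implementation.

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def chamfer_score(edge_map: np.ndarray, template_pts: np.ndarray) -> float:
        """Mean distance from template edge points to the nearest image edge.

        edge_map: boolean (H, W) array, True where an edge was detected.
        template_pts: (N, 2) integer array of (row, col) template edge points,
                      already transformed into the candidate pose.
        """
        # Distance transform: each pixel holds its distance to the closest edge.
        dist = distance_transform_edt(~edge_map)
        return float(dist[template_pts[:, 0], template_pts[:, 1]].mean())

The directional variant splits edges into orientation channels and computes one distance transform per channel, so a template edge only matches image edges with a similar direction; the lowest-scoring candidate pose wins.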

The presentation will be given in French and the slides will be in French.

To obtain the Zoom meeting web link, please contact:
Annette.Schwerdtfeger@gel.ulaval.ca

CeRVIM-IID Seminar: Catherine Bouchard, March 15, 2024

CeRVIM-IID Seminar: Multiplexing fluorescence microscopy images with multi-dimensional deep networks

Catherine Bouchard
Laboratoire de Vision et Systèmes Numériques, LVSN, U. Laval
Laboratoire de Flavie Lavoie-Cardinal, FLC-Lab, U. Laval

Friday, March 15, 2024, 1:30 p.m., PLT-3904

Abstract
Studying the complex interactions between all proteins involved in biological processes requires imaging as many targets simultaneously as possible. Fluorescence lifetime imaging (FLIM) measures the delay between the excitation and the emission of each photon to help discern its emitter, yielding a multi-color image from a single acquisition. The algorithms developed for this task are generally applied pixel by pixel, using a single dimension (the distribution of the measured time delays), and therefore do not exploit a valuable source of information that spreads across multiple pixels: the spatial organization of the proteins. We developed a method based on a multi-dimensional deep neural network that processes all dimensions of the image (temporal and spatial) simultaneously to better assign an emitter to each photon in FLIM images. This method proves more accurate than pixel-by-pixel methods in photon-limited settings, such as super-resolution imaging of living cells, because it combines the spatial features of the image with the time information. It can additionally serve as an unsupervised denoising method, further enhancing its performance on images with low signal-to-noise ratios. The method can be trained on partially simulated images and applied to real acquisitions, enabling its use in the many experimental cases where training datasets cannot be acquired.
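
As a rough illustration of the idea (not the authors' actual architecture), the sketch below treats a FLIM acquisition as a (time, height, width) volume and lets 3D convolutions exploit spatial context together with the photon-arrival-time histogram; all names and layer choices are assumptions.

    import torch
    import torch.nn as nn

    class FlimUnmixer(nn.Module):
        """Toy 3D-convolutional network assigning emitter fractions per pixel."""
        def __init__(self, n_emitters: int = 2, n_time_bins: int = 64):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 16, kernel_size=3, padding=1),   # mixes t, y and x
                nn.ReLU(),
                nn.Conv3d(16, 16, kernel_size=3, padding=1),
                nn.ReLU(),
            )
            # Fold the time axis into channels, keep the spatial axes.
            self.head = nn.Conv2d(16 * n_time_bins, n_emitters, kernel_size=1)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, 1, T, H, W) photon counts per time bin and pixel.
            f = self.features(x)                # (B, 16, T, H, W)
            b, c, t, h, w = f.shape
            f = f.reshape(b, c * t, h, w)
            return self.head(f).softmax(dim=1)  # per-pixel emitter fractions

    # probs = FlimUnmixer()(torch.rand(1, 1, 64, 32, 32))  # -> (1, 2, 32, 32)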

The presentation will be given in English and the slides will be in English.

CeRVIM Seminar: Akshaya Athwale, February 16, 2024

CeRVIM Seminar: DarSwin: Distortion-Aware Radial Swin Transformers for Wide Angle Image Recognition
https://openaccess.thecvf.com/content/ICCV2023/papers/Athwale_DarSwin_Distortion_Aware_Radial_Swin_Transformer_ICCV_2023_paper.pdf

Akshaya Athwale
Laboratoire de Vision et Systèmes Numériques, LVSN
Dép. de génie électrique et de génie informatique, U. Laval

Friday, February 16, 2024, 11:00 a.m., PLT-3370

Abstract
Wide-angle lenses are commonly used in perception tasks requiring a large field of view. Unfortunately, these lenses produce significant distortions, making conventional models that ignore the distortion effects unable to adapt to wide-angle images. In this research, we present a novel transformer-based model that automatically adapts to the distortion produced by wide-angle lenses. Our proposed image encoder architecture, dubbed DarSwin, leverages the physical characteristics of such lenses analytically defined by the radial distortion profile. In contrast to conventional transformer-based architectures, DarSwin comprises a radial patch partitioning, a distortion-based sampling technique for creating token embeddings, and an angular position encoding for radial patch merging. Compared to other baselines, DarSwin achieves the best results on different datasets with significant gains when trained on bounded levels of distortions (very low, low, medium, and high) and tested on all, including out-of-distribution distortions. While the base DarSwin architecture requires knowledge of the radial distortion profile, we show it can be combined with a self-calibration network that estimates such a profile from the input image itself, resulting in a completely uncalibrated pipeline. Finally, we also present DarSwin-Unet, which extends DarSwin to an encoder-decoder architecture suitable for pixel-level tasks. We demonstrate its performance on depth estimation and show through extensive experiments that DarSwin-Unet can perform zero-shot adaptation to unseen distortions of different wide-angle lenses.
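
A toy sketch of the sampling idea may help: instead of square patches, points are sampled along azimuthal wedges, with radial positions spaced according to the lens's radial distortion profile. The profile argument and all names below are illustrative assumptions, not DarSwin's actual code.

    import numpy as np

    def radial_sample_grid(n_wedges: int, n_radial: int, r_max: float,
                           profile=np.tan):
        """Return (n_wedges * n_radial, 2) xy sample points on the image plane.

        profile maps the incidence angle theta to an image radius; np.tan is
        the undistorted pinhole case, while a fisheye profile would compress
        large angles. A real implementation would invert the profile
        numerically; here we invert the default pinhole case for simplicity.
        """
        thetas = np.linspace(0.0, np.arctan(r_max), n_radial + 1)[1:]
        radii = profile(thetas)   # equal steps in angle, distortion-aware radii
        phis = np.linspace(0.0, 2 * np.pi, n_wedges, endpoint=False)
        phi, r = np.meshgrid(phis, radii, indexing="ij")
        return np.stack([r * np.cos(phi), r * np.sin(phi)], -1).reshape(-1, 2)

Tokens for each wedge can then be built by interpolating the image at these points, so patches cover comparable solid angles regardless of distortion.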

The presentation will be given in English and the slides will be in English.

CeRVIM Seminar: Prof. Javier Vazquez-Corral, Barcelona, July 4, 2023, 2:30 p.m., PLT-3370

CeRVIM Seminar

Convolutional neural networks and visual illusions: Can they fool each other?

Javier Vazquez-Corral
Associate Professor, Autonomous University of Barcelona
Researcher, Computer Vision Center, Barcelona

Tuesday, July 4, 2023, 2:30 p.m., PLT-3370

Abstract
Visual illusions teach us that what we see is not always what is represented in the physical world. Their special nature makes them a fascinating tool to test and validate any new vision model proposed. In general, current vision models are based on the concatenation of linear and non-linear operations. The similarity of this structure with the operations present in Convolutional Neural Networks (CNNs) has motivated us to study two research questions:
– Are CNNs trained for low-level visual tasks deceived by visual illusions? If so, a way to obtain CNNs that better replicate human behaviour might be to aim for them to better replicate visual illusions.
– Can we use current deep learning architectures to generate new visual illusions that trick humans?
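
The structural parallel motivating these questions is easy to see in code: a CNN for a low-level task is literally a concatenation of linear operations (convolutions) and non-linearities. A minimal, purely illustrative PyTorch example:

    import torch.nn as nn

    # e.g. a tiny image-to-image network (such as a denoiser): linear and
    # non-linear stages alternate, just as in classical vision models.
    low_level_model = nn.Sequential(
        nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 3, kernel_size=3, padding=1),
    )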

References
[1] A. Gomez-Villa, A. Martin, J. Vazquez-Corral and M. Bertalmío, “Convolutional neural networks can be deceived by visual illusions,” CVPR 2019.
[2] A. Gomez-Villa, A. Martín, J. Vazquez-Corral, M. Bertalmío and J. Malo, “Color illusions also deceive CNNs for low-level vision tasks: Analysis and implications,” Vision Research 176, 156-174, 2020.
[3] A. Gomez-Villa, A. Martín, J. Vazquez-Corral, M. Bertalmío and J. Malo, “On the synthesis of visual illusions using deep generative models,” Journal of Vision 22(8), 2022.

Biography
Javier Vazquez-Corral is an Associate Professor at the Autonomous University of Barcelona and a researcher at the Computer Vision Center. Prior to that, he held post-doctoral positions both at the Universitat Pompeu Fabra in Barcelona and at the University of East Anglia in Norwich, United Kingdom. His main research interest is computational colour, in which he has developed novel approaches to solve different problems ranging from colour constancy to colour stabilization, colour characterization, colour gamut mapping, high dynamic range imaging, image dehazing, image denoising, and vision colour properties such as unique hue prediction and colour naming.

The presentation will be given in English and the slides will be in English.

CeRVIM Seminar: Andréanne Deschênes, April 28, 2023

CeRVIM Seminar: Simultaneous fluorophore discrimination and resolution improvement of super-resolution images using fluorescence lifetime

Andréanne Deschênes
CERVO Research Centre, Université Laval

Friday, April 28, 2023, 11:00 a.m., PLT-3370

Abstract
To study the interactions between neuronal proteins with fluorescence microscopy, simultaneous observation of multiple biological markers is required. SPLIT-STED, an approach exploiting the analysis of fluorescence lifetime, was developed to improve the spatial resolution of STimulated Emission Depletion (STED) microscopy. We developed an analysis using linear combinations of components in phasor space to multiplex SPLIT-STED and apply it to separate two spectrally indistinguishable fluorophores per imaging channel. We quantify and characterize the performance of our algorithm on simulated images constructed from real single-staining images. This allows us to perform simultaneous resolution improvement and colocalization analysis of multiple protein species in live and fixed neuronal cultures.
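
For readers unfamiliar with phasor analysis, here is a minimal numpy sketch of the underlying idea: each pixel's lifetime decay maps to a 2D phasor, and a mixture of two fluorophores lies on the segment between the pure-component phasors, so fractions follow from a linear projection. The harmonic choice and all names are illustrative assumptions, not the authors' code.

    import numpy as np

    def phasor(decay: np.ndarray, harmonic: int = 1) -> np.ndarray:
        """Map a (T,) photon-arrival histogram to phasor coordinates (g, s)."""
        w = 2 * np.pi * harmonic * np.arange(decay.size) / decay.size
        return np.array([(decay * np.cos(w)).sum(),
                         (decay * np.sin(w)).sum()]) / decay.sum()

    def fraction_of_a(pixel: np.ndarray, pure_a: np.ndarray,
                      pure_b: np.ndarray) -> float:
        """Fraction of fluorophore A from projection onto the A-B segment."""
        ab = pure_b - pure_a
        frac_b = np.dot(pixel - pure_a, ab) / np.dot(ab, ab)
        return float(np.clip(1.0 - frac_b, 0.0, 1.0))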

The presentation will be given in English and the slides will be in English.

CeRVIM Seminar (hybrid): Dominic Baril, March 3, 2023, PLT-2501 and on Zoom

CeRVIM Seminar: Kilometer-scale autonomous navigation in subarctic forests: challenges and lessons learned

Dominic Baril
Norlab (Northern Robotics Laboratory)
Dép. d’informatique et de génie logiciel, U. Laval

Friday, March 3, 2023, 1:30 p.m., PLT-2501 and on Zoom

Abstract
With recent major advances in mobile robotics, it is now possible to deploy robots in a variety of scenarios to support many industries. However, few studies document the impact of winter conditions and the boreal forest on autonomous navigation systems. We therefore present a field report on the deployment of an autonomous navigation system in the Montmorency Forest in Quebec, Canada. As part of this work, we designed an autonomous navigation system based on the registration of point clouds measured by lidars to localize the vehicle and map the environment. We demonstrated the ability of the system to navigate 18.8 km autonomously on forest trails in harsh winter conditions. We present the impact of vegetation and snow accumulation on autonomous navigation algorithms. The presentation will conclude with a discussion of the challenges that remain to achieve robust autonomous navigation under the conditions encountered in this deployment.
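
For context, the sketch below shows the point-to-point ICP loop at the heart of registration-based localization and mapping. Production pipelines (for instance, libpointmatcher-style systems) add point filtering, outlier rejection and point-to-plane error metrics; this minimal numpy version is illustrative only.

    import numpy as np
    from scipy.spatial import cKDTree

    def icp(source: np.ndarray, target: np.ndarray, iters: int = 30):
        """Rigidly align source (N, 3) to target (M, 3); return (R, t)."""
        src = source.copy()
        R_total, t_total = np.eye(3), np.zeros(3)
        tree = cKDTree(target)
        for _ in range(iters):
            _, idx = tree.query(src)               # nearest-neighbour matches
            matched = target[idx]
            mu_s, mu_m = src.mean(0), matched.mean(0)
            H = (src - mu_s).T @ (matched - mu_m)  # cross-covariance
            U, _, Vt = np.linalg.svd(H)            # Kabsch: optimal rotation
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            R = Vt.T @ D @ U.T
            t = mu_m - R @ mu_s
            src = src @ R.T + t                    # apply this step
            R_total, t_total = R @ R_total, R @ t_total + t
        return R_total, t_total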

This presentation is based on an article (https://arxiv.org/abs/2111.13981) published in the journal Field Robotics.

Biography
Dominic Baril is a Ph.D. student in Computer Science under the supervision of Professors François Pomerleau and Philippe Giguère. As part of his research at Norlab, Université Laval’s mobile robotics laboratory, he seeks to increase the robustness of path-following algorithms for autonomous vehicles under variable traction conditions, most notably on snow-covered terrain. He has coordinated numerous field trials with a 500 kg robot in the Montmorency Forest, during which the impact of the environment on a state-of-the-art autonomous navigation system was quantified.

The presentation will be given in French and the slides will be in French.

To obtain the Zoom meeting web link, please contact:
Annette.Schwerdtfeger@gel.ulaval.ca

CeRVIM Seminar: Anne-Sophie Poulin-Girard, February 3, 2023

CeRVIM Seminar: 3D metrology and astronomical instrumentation: an exoplanet searcher in Chile

Anne-Sophie Poulin-Girard
Laboratoire de Recherche en Ingénierie Optique, LRIO
Dép. de physique, de génie physique et d’optique, U. Laval

Friday, February 3, 2023, 11:00 a.m., PLT-3370

Abstract
The use of metrology devices such as portable measuring arms, coordinate measuring machines (CMMs) and 3D scanners is increasingly widespread in the field of astronomical instrumentation. These tools are used to align the optical and mechanical parts of these instruments, making it possible to meet very strict positioning tolerances, often below 100 microns for a complex optomechanical assembly.

In this context, the presentation will focus on the exoplanet detection astronomical instrument NIRPS (Near Infra-Red Planet Searcher), more precisely on the assembly, integration, test and validation (AITV) phase of the spectrograph of the instrument. I will also discuss the AITV mission at the European Southern Observatory – La Silla in the Chilean desert in 2022. This presentation is a great opportunity for those who are interested in astronomy, who want to learn more about the field of astronomical instrumentation and potential future collaborations, or who simply wish to see the magnificent landscapes of Chile.

Biography
Anne-Sophie Poulin-Girard received her Ph.D. from Université Laval in 2016, under the supervision of Profs. Simon Thibault and Denis Laurendeau. Her thesis focused on the use of panoramic lenses in 3D reconstruction, at the boundary between optical engineering and computer vision. Following her Ph.D., she became the scientific coordinator of the Canada Excellence Research Chair in Neurophotonics. In 2017, she returned to Prof. Thibault’s team as a research associate. She is also the scientific and technical coordinator of the NSERC Industrial Chair in optical design. She currently participates in several research and development projects in collaboration with industry, and is involved in the assembly, integration and testing phase for the spectrograph of NIRPS, an instrument dedicated to the detection of exoplanets. Passionate about education, she chaired the SPIE Education Committee, hosted the international SPIE/OSA/IEEE/ICO Education and Training in Optics and Photonics conference in 2019, and has co-chaired the Optics Education and Outreach conference at SPIE O+P since 2020. Since 2021, she has also been a member of the Natural Sciences and Engineering Research Council of Canada (NSERC).

The presentation will be given in French and the slides will be in English.

CeRVIM Webinar: William Bonilla, December 2, 2022

CeRVIM Webinar: An introduction to semantic segmentation (using artificial intelligence) in MATLAB

William Bonilla
1. Laboratoire LVSN
Dép. de génie électrique et de génie informatique, U. Laval
2. Test Engineer Intern, Tesla

Friday, December 2, 2022, 11:00 a.m.

Abstract
Artificial intelligence is an increasingly accessible tool that is in high demand today. Researchers therefore have more options when the time comes to choose a platform on which to develop their artificial-intelligence algorithms. Recently, MATLAB has produced an easy-to-use solution for developing artificial-intelligence algorithms. The presentation will cover all the steps involved in building an artificial-intelligence algorithm capable of performing semantic segmentation in MATLAB.
The code will be shared after the presentation.

The presentation will be given in French and the slides will be in French.

To obtain the Zoom meeting web link, please contact:
Annette.Schwerdtfeger@gel.ulaval.ca

CeRVIM Seminar: Sy Nguyen, November 18, 2022

CeRVIM Seminar: A Hybrid Approach for the Motion Control of Kinematically Redundant Hybrid Parallel Robots

Sy Nguyen
Laboratoire de robotique
Dép. de génie mécanique, Université Laval

Friday, November 18, 2022, 12:00 p.m., PLT-3370

Abstract
Classical methods for the motion control of robots are based on the dynamic model of the robot. The dynamics and the errors can then be examined either in the joint coordinates or in the task coordinates. Each of these two approaches has advantages and drawbacks, and this results in two versions of several control techniques such as PD+gravity compensation or computed torque. This presentation introduces a hybrid method that combines both approaches to control kinematically redundant hybrid parallel robots. In short, the two approaches are applied to different parts of the robot, and the control signal is then determined from their combination. In addition to improving position-control performance, this method simplifies the modeling process: the robot is divided into two main components and each component is modeled separately. Furthermore, the position and orientation of the robot are considered in Cartesian space, which is more intuitive and easier to work with. Several demo videos demonstrate the performance of this control method. Extensions to force-related control are also discussed.
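
As background for the two control families the abstract contrasts, here is a minimal numpy sketch of the classical joint-space laws; the model terms M, C and g are assumed to come from the robot's dynamic model, and nothing here is the speaker's controller for kinematically redundant hybrid parallel robots.

    import numpy as np

    def pd_gravity(q, dq, q_des, Kp, Kd, g):
        """PD + gravity compensation: tau = Kp e - Kd dq + g(q)."""
        return Kp @ (q_des - q) - Kd @ dq + g(q)

    def computed_torque(q, dq, q_des, dq_des, ddq_des, Kp, Kd, M, C, g):
        """Computed torque: feedback linearization with the full model."""
        e, de = q_des - q, dq_des - dq
        v = ddq_des + Kd @ de + Kp @ e     # stabilized reference acceleration
        return M(q) @ v + C(q, dq) @ dq + g(q)

The hybrid scheme discussed in the talk applies such laws to different parts of the robot, one component in joint space and the other in task space, and combines the resulting control signals.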

The presentation will be given in English and the slides will be in English.