CeRVIM Webinar: Jean-Félix Tremblay-Bugeaud, October 15, 2020

CeRVIM Webinar: Design, Analysis and Preliminary Validation of a 3-DOF Rotational Inertia Generator

Jean-Félix Tremblay-Bugeaud
Laboratoire de robotique
Dép. de génie mécanique, Université Laval

October 15, 2020, 11:00 a.m.

Abstract
This presentation investigates the design of a three-degree-of-freedom rotational inertia generator that uses the gyroscopic effect to provide ungrounded torque feedback. A rotating mass is used to influence the torques required to move the device, creating a perceived inertia. The general working principle of the device is presented, along with a comparable concept that uses three flywheels instead of a gyroscope. Simulations are conducted to establish motor torque and velocity requirements, and the gyroscopic concept is found to have the less demanding requirements. Preliminary experimental validations confirm that the rendered inertia can be both reduced and increased.
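
For context, the reaction torque exploited by such a device follows the standard gyroscopic relation (a textbook formula, given here only for illustration; the abstract does not specify the device's actual model or control law):

    \boldsymbol{\tau} = \boldsymbol{\omega}_p \times \mathbf{L}, \qquad \mathbf{L} = I_s \, \boldsymbol{\omega}_s

where \omega_s is the flywheel spin velocity, I_s its spin-axis moment of inertia, and \omega_p the angular velocity at which the spin axis is reoriented by the user's hand motion. By actively steering the spin axis, this reaction torque can be added to or subtracted from the torque the user feels, rendering a larger or smaller apparent inertia.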

The presentation will be given in French and the slides will be in English.

To obtain the Zoom meeting web link, please contact:
Annette.Schwerdtfeger@gel.ulaval.ca

CeRVIM Webinar: Jérôme Isabelle, October 9, 2020

CeRVIM Webinar: A Mixed Reality Interface for Handheld 3D Scanners

Jérôme Isabelle
Laboratoire LVSN
Dép. de génie électrique et de génie informatique, Université Laval

October 9, 2020, 11:00 a.m.

Abstract
The user interface is an essential part of a handheld 3D scanner. During the scanning process, it provides feedback that helps the user operate the scanner efficiently. For instance, it generally displays the reconstructed 3D model in real time to let the user know which parts of the object have been captured and which have not. Traditionally, this type of information is displayed on a 2D screen via a graphical user interface. Instead, we propose to use a mixed reality headset. We claim that this technology is better suited to handheld 3D scanning because it allows the reconstructed 3D model to be blended into the user's perception of the real world. To validate this claim, we developed a prototype that uses the HTC Vive Pro headset as the interface for a handheld 3D scanner based on a PrimeSense Carmine RGB-D camera.

To obtain the Zoom meeting web link, please contact:
Annette.Schwerdtfeger@gel.ulaval.ca

CeRVIM Webinar: Kefei Wen, September 25, 2020

CeRVIM Webinar: Workspace enlargement and joint trajectory optimization of a (6+3)-dof 3-[R(RR-RRR)SR] kinematically redundant hybrid parallel robot

Kefei Wen
Laboratoire de robotique
Dép. de génie mécanique, Université Laval

September 25, 2020, 11:00 a.m.

Abstract
In this presentation, the workspace and the joint trajectory optimization of a (6+3)-dof 3-[R(RR-RRR)SR] kinematically redundant hybrid parallel robot are investigated. The inverse kinematics of the robot can be solved analytically and the singularities are easily avoidable. A workspace analysis is provided, showing that the orientational workspace is very large. Moreover, the redundant degrees of freedom are optimized in order to further expand the workspace. An approach is developed to determine the desired redundant joint coordinates so that a performance index is approximately minimized while the robot follows a prescribed Cartesian trajectory.
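
As a rough sketch of how such a redundancy resolution can be posed (a generic formulation assumed here for illustration; the talk's specific performance index and constraints are not given in the abstract), the redundant joint coordinates \rho(t) are chosen by solving

    \min_{\rho(t)} \int_0^T c\big(q(t)\big)\,dt \quad \text{subject to} \quad q(t) = \mathrm{IK}\big(x_d(t),\,\rho(t)\big)

where x_d(t) is the prescribed Cartesian trajectory, \mathrm{IK} denotes the robot's analytical inverse kinematics, and c(\cdot) is the performance index (for example, a distance to singularities or to joint limits) that is approximately minimized along the trajectory.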

Both the presentation and the slides will be in English.

To obtain the Zoom meeting web link, please contact:
Annette.Schwerdtfeger@gel.ulaval.ca

CeRVIM Webinar: Abdeslam Boularias, August 20, 2020

CeRVIM Webinar: Model Identification for Robotic Manipulation

Abdeslam Boularias
Robot Learning Lab
Dept. of Computer Science, Rutgers School of Arts and Sciences

August 20, 2020, 11:00 a.m.

Abstract
A popular approach in robot learning is model-free reinforcement learning (RL), where a control policy is learned directly from sensory inputs by trial and error, without explicitly modeling the effects of the robot’s actions on the controlled objects or system. While this approach has proved very effective for learning motor skills, it suffers from several drawbacks in the context of object manipulation, because the types of objects and their arrangements vary significantly across tasks. An alternative approach that may address these issues more efficiently is model-based RL. A model in RL generally refers to a transition function that maps a state and an action to a probability distribution over possible next states. In this talk, I will present my recent work on data-efficient, physics-driven techniques for identifying models of manipulated objects. To perform a task in a new environment with unknown objects, a robot first identifies from sequences of images the 3D mesh models of the objects, as well as their physical properties such as mass distributions, moments of inertia and friction coefficients. The robot then reconstructs the observed scene in a physics simulation and predicts the motions of the objects when manipulated. The predicted motions are used to select a sequence of actions to apply to the real objects. Simulated virtual worlds learned from data also offer safe environments for exploration and for learning model-free policies.
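
As a minimal illustration of what "a model" means in this setting, the hypothetical Python sketch below (toy dynamics, cost and planner invented for this example; it is not the speaker's code) shows a transition function that samples a next state given a state and an action, used to simulate rollouts and score candidate actions:

    import numpy as np

    def transition_model(state, action, rng):
        # Toy linear-Gaussian dynamics standing in for an identified physics model.
        A = np.array([[1.0, 0.1], [0.0, 1.0]])   # assumed state-transition matrix
        B = np.array([[0.0], [0.1]])             # assumed control matrix
        noise = rng.normal(scale=0.01, size=2)   # process noise
        return A @ state + (B @ action).ravel() + noise

    def plan(state, candidate_actions, horizon, goal, rng):
        # Pick the action whose simulated rollout ends closest to the goal.
        best_action, best_cost = None, np.inf
        for a in candidate_actions:
            s = state.copy()
            for _ in range(horizon):
                s = transition_model(s, a, rng)
            cost = np.linalg.norm(s - goal)
            if cost < best_cost:
                best_action, best_cost = a, cost
        return best_action

    rng = np.random.default_rng(0)
    actions = [np.array([u]) for u in (-1.0, 0.0, 1.0)]
    best = plan(np.zeros(2), actions, horizon=10, goal=np.array([0.5, 0.0]), rng=rng)
    print("selected action:", best)

In the work described above, the transition function would instead come from the identified mesh models and physical parameters fed to a physics simulator, but the planning loop has the same overall shape.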

Biography:
Abdeslam Boularias is an Assistant Professor of computer science at Rutgers, The State University of New Jersey, where he works on robot learning. Previously, he was a Project Scientist in the Robotics Institute of Carnegie Mellon University, and a Research Scientist at the Max Planck Institute for Intelligent Systems in Tübingen, where he worked with Jan Peters, in the Empirical Inference department, which was directed by Bernhard Schölkopf. From January 2006 to July 2010, he was a PhD student at Laval University under the supervision of Brahim Chaib-draa. His PhD thesis focused on reinforcement learning and planning in partially observable environments.

To obtain the Zoom meeting web link, please contact:
Annette.Schwerdtfeger@gel.ulaval.ca

CeRVIM Webinar: Yacine Yaddaden, June 12, 2020

CeRVIM Webinar: Automatic Facial Expression Recognition for Ambient Assistance

Yacine Yaddaden
Laboratoire LVSN
Dép. de génie électrique et de génie informatique, Université Laval

June 12, 2020, 11:00 a.m.

Abstract
In recent decades, the world has witnessed important advances in information and communication technologies, and we now benefit from them in our daily lives. One of their most important uses is ambient assistance. These technologies are generally deployed in smart environments or ambient assisted living settings, where they provide guidance to improve productivity and ensure safety and security. Such systems are particularly useful for ensuring the wellbeing of the elderly and of people with cognitive impairments.

In this presentation, I will talk about my Ph.D. thesis work. It is divided into two main parts: the first is dedicated to automatic facial expression recognition, while the second addresses ambient assistance. In both parts, I will introduce the fundamentals, related work and the proposed methods. Finally, I will describe how automatic facial expression recognition contributes to improving ambient assistance.

To obtain the Zoom meeting web link, please contact:
Annette.Schwerdtfeger@gel.ulaval.ca

CeRVIM Seminar: Christian Gagné, March 3, 2020

CeRVIM Seminar: Uniting research and innovation in artificial intelligence and data valorization
(A talk and discussion on the mandate, structure and actions of the Institute Intelligence and Data (IID) of Université Laval)

Prof. Christian Gagné
Director of the Institut intelligence et données (IID)
Dép. de génie électrique et de génie informatique, Université Laval

March 3, 2020, 9:30 a.m., Pouliot-3370

Abstract
The Institute Intelligence and Data (IID) of Université Laval brings together the driving forces in research and innovation in artificial intelligence and data science in the greater Quebec City area, in four interconnected fields:

  • Physical Environment
  • Health and Life Sciences
  • Methods of Artificial Intelligence and Data Processing
  • Ethics, Confidentiality and Social Acceptability

In a spirit of interdisciplinarity and collaboration, IID researchers, collaborators and associated members contribute to the development and enrichment of knowledge in a multitude of fields of application and support significant technological advances, with particular attention to confidentiality, ethics and social acceptability.

Given by the IID scientific director, Christian Gagné, together with other members of the IID team, the presentation will provide an opportunity to learn more about the mandate, structure and actions of the Institute.

Everyone is welcome!

The presentation will be given in French, and the slides will be in English.

CeRVIM Seminar: Flavie Lavoie-Cardinal, February 10, 2020

CeRVIM Seminar: Machine-learning-assisted microscopy: from smart scanning approaches to the generation of synthetic super-resolution images

Prof. Flavie Lavoie-Cardinal
Researcher, Centre de recherche CERVO
Associate Professor, Dép. de Physique, Génie Physique et Optique
Université Laval

February 10, 2020, 1:30 p.m., Pouliot-3370

Abstract
Super-resolution microscopy (or optical nanoscopy) techniques allow the characterization of molecular interactions inside living cells with unprecedented spatiotemporal resolution, but they come with several layers of complexity in their implementation. My research team focuses on transdisciplinary approaches at the interface of molecular neuroscience, multimodal optical nanoscopy, and machine learning to study the structure-function relationship of synapses in the brain. We develop machine learning and deep learning tools to increase the adaptability and accessibility of high-end imaging methods (e.g. optical nanoscopy) for complex experimental paradigms. Recently, we implemented a machine-learning-assisted optimization framework for optical nanoscopy that allows real-time optimization of multimodal live-cell imaging of synaptic activity and structural proteins. We have also implemented diverse deep learning approaches for high-throughput microscopy image analysis, allowing us to characterize activity-dependent remodelling of neuronal proteins. We develop weakly supervised deep learning strategies to reduce the burden of extensive labeling of complex images and evaluate how they can be applied to real-time microscopy image analysis. We aim to develop new AI-assisted microscopy techniques that adapt in real time to the sample, predict changes in the structures, and modify the experimental protocol depending on the measured response to a stimulus.

The presentation will be given in French and the slides will be in English.

REPARTI Workshop 2020

The REPARTI Workshop 2020 (May 26, 2020, at Université Laval) has been cancelled due to the current pandemic.

REPARTI Workshop 2019

The morning session included an invited talk by Prof. John McPhee of the University of Waterloo. The afternoon was devoted to a poster session featuring 35 posters and 2 demos presenting research results from each of the research themes of REPARTI.

REPARTI Workshop 2019 Program