Behaviors.AI
2017-2020
The LabCom Behaviors.ai, a joint laboratory between the SMA research group and the Hoomano company, focuses on the design and development of smart applications for social robots.
The aim of Behaviors.ai is to investigate new approaches to artificial intelligence, and more precisely developmental learning, to create new ways of interacting with social robots such as Pepper, Nao, Buddy and IJINI (photo). In this project we want to make them more empathetic and able to learn continuously as they interact in real-life environments. Enhanced with artificial and emotional intelligence, robots and the public can interact more naturally.
This LabCom is funded by the French National Research Agency (ANR) for three years.
Key-words: developmental learning in robotics, social robotics, human-robot interaction, artificial intelligence
Web site: http://behaviors.ai/
AMPLIFIER project
Active Multisensory Perception and LearnIng For InteractivE Robots - 2018-2022
Key-words: multimodal data fusion, active perception, decision making, developmental learning, psychophysical data analysis, computational neuroscience, social robotics
Partners: University Lyon 1 (LIRIS, CRNL), Univ. Grenoble Alpes (LJK, LPNC, Gipsa-lab)
In this project, we adopt a transdisciplinary approach combining, on the one hand, psychophysics and applied mathematics to improve our understanding of perception in humans and, on the other hand, computer science and robotics to transfer and adapt these paradigms to interactive robots. We position our study in the context of constructivist learning, which considers the problem of artificial intelligence from an integrative perspective. Perception is then considered as the result of experimenting with sensorimotor contingencies learned incrementally throughout life. We explore the following lines of research:
- Multi-sensory integration, which is essential for exploiting multiple sensors. A central question is when and how to merge the available information autonomously, relying on previously learned relationships between the different sensors. We will also study how this integration interacts with the progressive learning of representations.
- Active perception, which guides the collection of information to improve scene understanding (attentional mechanisms) and the learning of representations (active learning). This requires both having a predictive model of the world and deciding in which modality and in which area of space to look for the next most relevant piece of information.
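A classical baseline for the multi-sensory integration question above is precision-weighted fusion of independent Gaussian cues, a standard model in the psychophysics literature. The sketch below is illustrative only; the function name and the numbers are not from the project:

```python
def fuse_gaussian_estimates(mu_a, var_a, mu_b, var_b):
    """Fuse two independent Gaussian estimates of the same quantity
    (e.g. a visual and an auditory localization cue) by weighting
    each cue by its precision (inverse variance)."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    mu = (w_a * mu_a + w_b * mu_b) / (w_a + w_b)
    var = 1.0 / (w_a + w_b)          # fused estimate is more precise
    return mu, var

# Illustrative cues: visual at 10 deg (var 4), auditory at 14 deg (var 16).
mu, var = fuse_gaussian_estimates(10.0, 4.0, 14.0, 16.0)
```

The fused mean lands closer to the more reliable (lower-variance) cue, which is the behaviour reported for human observers in cue-combination experiments.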
Multi-robot cooperation: COMODYS project
COoperative Multi-robot Observation of DYnamic human poSes (2017-2018)
The "COMODYS" project is motivated by the joint observation of complex (dynamic) scenes by a fleet of mobile robots. In this project, the considered scenes are defined as a sequence of activities performed by a person in the same place. The mobile robots have to cooperate to find a spatial configuration around the scene that maximizes the joint observation of the human pose skeleton. It is assumed that the robots can communicate but have no map of the environment and no external localisation.
We proposed an original concentric navigation model combined with an incremental mapping of the environment. The exploration is guided by meta-heuristics in order to limit the size of the exploration state space. We developed a simulator that uses real human pose captures to simulate dynamic scenes and noise in sensor information (video).
We have also developed an experimental framework for the concentric navigation of several Turtlebot2 robots around a scene. In particular, since we assume that the robots have no map of the environment, we implemented cooperative multi-robot mapping based on the merging of occupancy grid maps.
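One common way to merge aligned occupancy grid maps from independent robots, which could serve as a minimal sketch of the idea (not the project's actual implementation), is to sum the cells' log-odds:

```python
import math

def merge_grids(grid_a, grid_b):
    """Merge two aligned occupancy grids (cells are occupancy
    probabilities in (0, 1)) by summing their log-odds, which is
    exact when the robots' observations are independent."""
    def logodds(p):
        return math.log(p / (1.0 - p))

    def prob(l):
        return 1.0 - 1.0 / (1.0 + math.exp(l))

    return [[prob(logodds(pa) + logodds(pb))
             for pa, pb in zip(row_a, row_b)]
            for row_a, row_b in zip(grid_a, grid_b)]

# Illustrative 2x2 grids: 0.5 = unknown, >0.5 = likely occupied.
a = [[0.5, 0.8], [0.2, 0.9]]
b = [[0.5, 0.8], [0.2, 0.5]]
merged = merge_grids(a, b)
```

Cells that both robots observe as occupied (or free) become more confident in the merged map, while a 0.5 "unknown" cell in one grid leaves the other robot's estimate unchanged.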
The project is funded by the Fédération Informatique de Lyon (FIL). It is a collaborative effort between our research group and the CHROMA team of CITI Lab.
Web site
SmartGOV project
2015-now: This project aims at facilitating the design and testing of urban policies.
It is based on two multi-agent simulators: one handles the representation of the environment and the movement of citizens on the infrastructure; the other, a smart policy manager, simulates the impact of the tested policies on that environment. A micro-macro dynamic loop between the two MAS lets them adapt to each other, and stakeholders may also propose specific adaptations based on the simulation results. Special attention is given to the interaction with decision makers, with the aim of progressive smart co-design. The system can propose smart adaptations of the policy, based on reinforcement learning by the policy agents.
In the project, the generic system has been applied to urban mobility.
So far, two different policies have been tested:
- A parking pricing policy for routine travel in the city center of L.A. (paper);
- The implementation of a LEZ (Low Emission Zone) in Lyon, considering its impact on freight transport behaviour and hence on air pollution (illustration here).
The project is funded by the French Region Auvergne-Rhône-Alpes (ADR ARC7 2015 grant). It is a collaborative effort between our research group and NAVER LABS Europe in Grenoble.
It is developed in Java using the REPAST SYMPHONY multi-agent simulation platform, with Plotly/Dash for visualization.
Key-words: Reinforcement Learning, Smart Governance, Urban Policy making, Smart city
Source code: github project
Web site: PhD Simon PAGEAUD, 2015-2019 (in French)
2017-2019
This project results from a collaboration with the GRC Contact company. Customer relationship management (CRM) software allows a company to manage its interactions with its customers. In a B2B context, the software should provide additional data to facilitate sales. Our collaboration with GRC Contact introduced indicators derived from data science methods. Our work also provided an innovative framework to capture the areas of expertise of experienced sales staff. Web sites (in French): https://www.grc-contact.fr/intelligence-artificielle-crm-sicaia https://www.sicaia.fr/
Smart Cooperative Transportation Systems
2012-2015
Cooperative transport systems allow vehicles to communicate and exchange information with each other and with the infrastructure. New types of intelligent vehicles will enable better dynamic management of traffic on urban roads. This PhD anticipated the future deployment of these technologies in complex systems with multiple levels of interaction, which require adapted control mechanisms. The objective of this work was to propose a multi-agent model of the coupling between the system's various dynamics (physical flow, information flow, control flow, etc.), providing decentralized control of cooperative transport systems based on the concepts of emergence and self-organization (PhD Maxime GUERIAU, defended in 2016). Demonstration presented at the AAAI-15 conference in Austin, Texas: video
Smart Environment: Ambient Intelligence with UBIANT
2012-now
Project performed with UBIANT company
co-financed by the European Union via the Auvergne-Rhone-Alpes Funds for Regional Development
This work contributed to the 2016 CES (Consumer Electronics Show) Award obtained by the company (link).
PhD Sébastien Mazac, 2015
PhD Victor Lequay, 2019