Our research is motivated by the vision of robotic systems that sense and interact with the physical world. Visual perception methods, developed mainly in the computational intelligence community, are often decoupled from the low-level control algorithms designed in the automatic control community. Our research stands at the confluence of Artificial Intelligence (AI), robot vision and control theory, aiming to build real-time vision-based control algorithms for robots operating in challenging, unstructured and changing environments.
We address this problem through our Vision Dynamics paradigm, which aims to bridge the gap between robot vision and low-level control. A key aspect of our approach is learning the dynamics of the working environment with AI and deep learning methods. Our core application is the vision-based control of autonomous vehicles.
Our research projects fall into two categories:
- developing novel vision-based learning and control algorithms for robotic systems operating in real-world scenarios, and
- applying the proposed methodologies to challenging real-world problems, such as autonomous driving.
Visit our research page to learn more about our projects.
Our focus is on real-time perception and localization based on artificial intelligence and dynamic models of vision, together with the design of control strategies for vision-actuated systems in the presence of uncertainties and nonlinear scene dynamics.
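As a purely illustrative sketch (not our actual method), the coupling of perception and control can be pictured as a closed loop in which a vision module estimates the vehicle's lateral offset from the lane center and a feedback controller steers the vehicle back. All names, gains, and the double-integrator lateral dynamics below are hypothetical placeholders standing in for learned perception and scene-dynamics models.

```python
# Hypothetical perception-to-control loop: a "visual" measurement of the
# lateral lane offset feeds a proportional-derivative (PD) steering law
# acting on a simple double-integrator vehicle model. All parameters
# (dt, gains, dynamics) are illustrative assumptions, not our algorithm.

def perceive(true_offset: float, bias: float = 0.0) -> float:
    """Stand-in for a vision module: returns the measured lateral offset.

    A real perception stack would produce this estimate from camera
    images, with noise and latency; here it is an idealized pass-through.
    """
    return true_offset + bias


def pd_control(error: float, prev_error: float, dt: float,
               kp: float = 2.0, kd: float = 2.0) -> float:
    """PD steering command computed from the perceived lateral error."""
    return -kp * error - kd * (error - prev_error) / dt


def simulate(initial_offset: float = 1.0, steps: int = 200,
             dt: float = 0.05) -> float:
    """Run the closed loop: perceive -> control -> integrate dynamics."""
    offset, velocity = initial_offset, 0.0
    prev_err = initial_offset
    for _ in range(steps):
        err = perceive(offset)
        u = pd_control(err, prev_err, dt)
        prev_err = err
        velocity += u * dt   # hypothetical lateral acceleration dynamics
        offset += velocity * dt
    return offset


final_offset = simulate()  # lateral offset after 10 s of simulated driving
```

With these (assumed) gains the loop is well damped, so the offset decays toward zero; in our actual research the idealized `perceive` stand-in is replaced by learned perception and scene-dynamics models, and the controller must cope with the resulting uncertainty.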