Welcome to RovisLab, the Robotics, Vision and Control Laboratory
Our vision: "Enabling robotic systems to sense and interact with the physical world"


We are developing Rovis.AI, a scalable Artificial Intelligence software stack for real-time robotics applications.



Rovis.AI is based on our in-house developed Artificial Intelligence technology:
- Rovis.Vision, optimized deep neural networks for computer vision and sensing;
- Rovis.Mechatronics, comprising our low-level control algorithms for autonomous mobility;
- Rovis.DataChannel, our real-time secured communication backbone for controlling distributed teams of robots.

Request a demo via Skynet.

Our research is motivated by the vision of robotic systems which sense and interact with the physical world. Visual perception methods, mainly designed in the computational intelligence community, are often decoupled from the low-level control algorithms developed in the automatic control community. Our research stands at the confluence of Artificial Intelligence (AI), robot vision and control theory, aiming to build real-time visual-based control algorithms for robots operating in challenging, unstructured and changing environments.
We address this problem based on our Vision Dynamics paradigm, aiming to bridge the gap between robot vision and low-level control. An important aspect of our approach is learning the dynamics of the working environment through AI and deep learning methods. Our core application is the visual-based control of autonomous vehicles.
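The core idea of learning the dynamics of the working environment can be sketched as fitting a one-step transition model to observed state-action data. The linear model form, the parameter names, and the training loop below are illustrative assumptions for a toy scalar system, not RovisLab's actual Vision Dynamics implementation:

```python
import random

def simulate_transition(x, u, a_true=0.9, b_true=0.2):
    """Ground-truth environment dynamics (unknown to the learner)."""
    return a_true * x + b_true * u

def learn_dynamics(n_steps=5000, lr=0.01, seed=0):
    """Fit x_next ~ a*x + b*u by stochastic gradient descent
    on the squared one-step prediction error."""
    rng = random.Random(seed)
    a, b = 0.0, 0.0  # model parameters to learn
    for _ in range(n_steps):
        x = rng.uniform(-1.0, 1.0)   # observed state
        u = rng.uniform(-1.0, 1.0)   # applied control input
        x_next = simulate_transition(x, u)
        err = (a * x + b * u) - x_next
        # gradient of 0.5*err^2 with respect to a and b
        a -= lr * err * x
        b -= lr * err * u
    return a, b

a, b = learn_dynamics()  # converges toward a_true=0.9, b_true=0.2
```

A model learned this way can then be used for prediction inside a planner or controller, which is the bridge between perception and low-level control that the paradigm targets.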

Our research projects are classified into:
- developing novel visual-based learning and control algorithms for robotic systems operating in real-world scenarios, and
- applying our proposed methodologies to challenging real-world problems, such as autonomous driving.

Visit our research page to learn more about our projects.

Research topics

Vision Dynamics

Real-time perception and localization based on artificial intelligence and dynamic models of vision. Design of control strategies for visually actuated systems in the presence of uncertainties and nonlinear scene dynamics.
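Control from visual feedback under uncertainty can be illustrated with the simplest possible case: a proportional controller regulating a scalar visual feature (e.g., the pixel offset of a tracked target) to zero despite noisy measurements. The plant model, gain, and noise level are toy assumptions, not one of the lab's controllers:

```python
import random

def run_visual_servo(x0=10.0, kp=0.5, noise=0.1, steps=50, seed=1):
    """Drive the true feature offset x toward zero using only
    noisy measurements, with a proportional control law."""
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        measurement = x + rng.gauss(0.0, noise)  # noisy perception
        u = -kp * measurement                    # proportional control law
        x = x + u                                # simple integrator plant
    return x

final_offset = run_visual_servo()  # settles near zero, within the noise floor
```

Measurement noise prevents exact convergence; the residual error scales with the noise level and the gain, which is why perception quality and control design have to be treated jointly.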

Autonomous Vehicles

Solving perception, planning and motion control for autonomous systems. The driving functions map sensory input to control output and are implemented either as modular perception-planning-action pipelines, as End2End systems, or as Deep Reinforcement Learning systems that directly map observations to driving commands.
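The two architectures can be contrasted in a few lines: a modular pipeline composes separate perception, planning and action stages, while an End2End system is a single learned mapping from raw observation to command. All functions below are toy placeholders (the "learned" End2End mapping is a hand-set linear stand-in), not the Rovis.AI driving stack:

```python
def perceive(observation):
    """Perception stage: estimate lateral lane offset from pixels (toy)."""
    return {"lane_offset": observation["pixel_offset"] * 0.01}

def plan(world_state):
    """Planning stage: steer back toward the lane center (toy)."""
    return {"target_steering": -world_state["lane_offset"]}

def act(plan_out):
    """Action stage: clip the command to actuator limits (toy)."""
    return max(-1.0, min(1.0, plan_out["target_steering"]))

def modular_pipeline(observation):
    """Perception-planning-action pipeline: three inspectable stages."""
    return act(plan(perceive(observation)))

def end2end(observation, weight=-0.01):
    """End2End: one mapping from raw input to command; in practice the
    weight would be a trained neural network, not a fixed scalar."""
    return max(-1.0, min(1.0, weight * observation["pixel_offset"]))

obs = {"pixel_offset": 25.0}
steer_modular = modular_pipeline(obs)  # -0.25
steer_e2e = end2end(obs)               # -0.25
```

The trade-off this sketch hints at: the modular pipeline exposes intermediate results for testing and safety arguments, while the End2End mapping avoids hand-designed interfaces but is harder to inspect.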

Rehabilitation Robotics

Helping people through robotics technology. The goal of therapy robots is to help patients recover from accidents and illness, while assistive robots mainly support persons with disabilities in Activities of Daily Living (ADL) and in professional life.