Safety and risk-awareness are essential properties for robotic systems, be it for protecting them from potentially dangerous internal states or for avoiding collisions with obstacles and environmental hazards in disaster scenarios. Ensuring safety may be the role of more than one algorithmic layer in a system, each layer with its own assumptions and guarantees. This thesis investigates how to provide safety and risk-awareness in a robotic system by leveraging temporal logics, motion planning algorithms, and control theory.
Traditional control-theoretic approaches interpret the collision avoidance task as a `stay-away' task: obstacles are abstracted as collections of geometric shapes, and controllers are designed to avoid each shape individually. We instead propose interpreting collision avoidance as a `stay-within' task, in which the obstacle-free space is abstracted into safe regions. We design control laws based on Control Barrier Functions that guarantee that the system remains within such safe regions throughout its mission. Our results demonstrate that the controller indirectly avoids obstacles while giving the system the freedom to move within the safe regions, without the need to plan and track a safe trajectory. Furthermore, by extending this idea with Metric Interval Temporal Logic, we can consider missions with explicit time bounds.
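To make the `stay-within' formulation concrete, the following is the standard Control Barrier Function condition (a general sketch; the specific formulation in the thesis may differ). For a control-affine system $\dot{x} = f(x) + g(x)u$ and a safe region encoded as the superlevel set $\mathcal{C} = \{x \mid h(x) \geq 0\}$, the function $h$ is a Control Barrier Function if
\[
\sup_{u \in U} \big[ L_f h(x) + L_g h(x)\,u \big] \geq -\alpha\big(h(x)\big)
\]
for an extended class-$\mathcal{K}$ function $\alpha$, where $L_f h$ and $L_g h$ denote Lie derivatives. Any Lipschitz controller satisfying this inequality pointwise renders $\mathcal{C}$ forward invariant, i.e., a system starting inside the safe region remains within it for all time.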
Temporal logics are often used to define hard constraints on motion plans for robotic systems. However, some missions may require the system to violate constraints in order to make progress. We therefore propose softening the hard constraints when necessary. Such soft constraints, which we coin spatial preferences, encode relations between the system and the environment, such as the distance to obstacles. The proposed minimally-violating motion planning algorithm seeks trajectories that satisfy the spatial preferences as much as possible, violating them only when needed. We demonstrate the use of spatial preferences in 3D exploration scenarios with Unmanned Aerial Vehicles, where our approach yields safer trajectories while improving exploration efficiency.
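One way to realize such soft constraints is to fold the accumulated preference violation into the planner's cost, so that satisfying trajectories are preferred but violating ones remain feasible. The sketch below illustrates this idea for a single clearance preference; the function names, weights, and the hinge-style penalty are illustrative assumptions, not the thesis' actual formulation.

\begin{verbatim}
import numpy as np

def preference_violation(path, clearance, d_pref=1.0):
    """Accumulated violation of a 'keep at least d_pref clearance'
    spatial preference along a path.

    path: (N, 3) array of waypoints.
    clearance: callable mapping a point to its obstacle clearance.
    """
    c = np.array([clearance(p) for p in path])
    # Hinge penalty: zero where the preference holds, positive otherwise.
    return np.sum(np.maximum(0.0, d_pref - c))

def minimally_violating_cost(path, clearance, weight=10.0):
    """Path length plus weighted preference violation.

    A large weight steers the planner towards satisfying trajectories,
    while still allowing violation when no satisfying path exists.
    """
    length = np.sum(np.linalg.norm(np.diff(path, axis=0), axis=1))
    return length + weight * preference_violation(path, clearance)
\end{verbatim}

A sampling-based planner can then rank candidate trajectories by this cost, so that violation is traded off against mission progress rather than treated as infeasibility.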
In the last part of the thesis, we address safety in scenarios where a precise model of the environment is not available. In such scenarios, the system is required to fulfil the mission while minimizing risk under the imprecise model. We leverage Gaussian Processes to build approximate models of the environment and use their posterior distributions in a risk metric. This risk metric allows us to account for less likely but possible events along the mission. Building on it, we propose an online risk-aware motion planning approach and validate it in disaster scenarios, where exposure to unmodeled hazards might damage the system. Moreover, we explore risk-awareness between the control and mapping layers by considering smooth approximations of Euclidean Distance Fields.
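A common risk metric over Gaussian posteriors is the Conditional Value-at-Risk (CVaR), which explicitly weights tail events; the sketch below uses it under that assumption, with an interface mirroring scikit-learn's GaussianProcessRegressor (also an assumption, not necessarily the metric or tooling used in the thesis).

\begin{verbatim}
import numpy as np
from scipy.stats import norm

def gaussian_cvar(mu, sigma, alpha=0.95):
    """CVaR of a N(mu, sigma^2) hazard belief at level alpha.

    For a Gaussian, CVaR has the closed form
    mu + sigma * pdf(ppf(alpha)) / (1 - alpha); a higher alpha
    places more weight on rare, high-hazard outcomes.
    """
    return mu + sigma * norm.pdf(norm.ppf(alpha)) / (1.0 - alpha)

def path_risk(gp, path, alpha=0.95):
    """Accumulated tail risk of a candidate path under a GP hazard model.

    gp: fitted regressor exposing predict(X, return_std=True), e.g.
        sklearn.gaussian_process.GaussianProcessRegressor.
    path: (N, d) array of waypoints.
    """
    mu, sigma = gp.predict(path, return_std=True)
    return float(np.sum(gaussian_cvar(mu, sigma, alpha)))
\end{verbatim}

Similarly, one standard smooth approximation of a Euclidean Distance Field is a log-sum-exp softmin over distances to obstacle points, $d_\rho(x) = -\frac{1}{\rho}\log \sum_i e^{-\rho \|x - p_i\|}$, which recovers the true minimum distance as $\rho \to \infty$ while remaining differentiable for use in the control layer; again, this is an illustrative choice rather than necessarily the thesis' exact approximation.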
Our results indicate that our algorithms provide robotic systems with i) provably safe controllers, ii) soft safety constraints, and iii) risk-awareness in unmodeled environments. Together, these three properties contribute to safer, risk-aware robotic systems in the real world.