Human multi-robot interaction: from workspace sharing to physical collaboration
Abstract
In decentralized systems, the control loop between robots, sensors, actuators and processing units does not close on a single physical device, but through a communication network over which information is exchanged explicitly or implicitly; as a result, each agent has access only to local information despite the global task to be accomplished. Furthermore, a human operator interacting with robotic systems represents a subsystem whose dynamics are unknown and non-deterministic.
The thesis work investigates different aspects and scenarios that arise from human multi-robot collaboration. Despite the potential gains in efficiency, robustness to failure and flexibility offered by multiple robots collaborating with humans (compared to the single-robot case), this scenario is still largely unexplored in the current state of the art. More in detail, such collaboration requires combining actions that ensure human safety, and possibly assist the operator, with actions that coordinate the multi-robot system while handling the constraints it poses. The latter include, for example, the closed kinematic chains that characterize transport applications and, more generally, cooperative object manipulation. Moreover, this coordination becomes even more critical when a distributed control strategy is adopted.
The first identified scenario is the pure coexistence of human operators and robots in the same working environment. In this context, a decentralized coordination architecture is designed that ensures human safety while achieving a cooperative robotic task. In particular, since in industrial applications it is typically preferable to preserve the path planned off-line, the proposed solution modulates the task velocity of the multi-robot system, without deviating from its nominal path, in order to ensure human safety. Only if this dynamic velocity modulation is not sufficient to guarantee safety is an evasive action triggered, which complies with the kinematic constraints arising from rigidly connected robots.
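As a rough illustration of this idea (not taken verbatim from the thesis), the sketch below scales the timing law of a pre-planned path according to the minimum human-robot distance; the function name, the distance thresholds and the use of a normalized path parameter are all illustrative assumptions.

```python
import numpy as np

def scale_path_velocity(s_dot_nominal, human_distance,
                        d_stop=0.5, d_full=1.5):
    """Scale the nominal path-parameter velocity according to the
    minimum human-robot distance (illustrative thresholds, in meters).

    The nominal geometric path is preserved: only the timing law is
    modulated, stopping the task when the human is closer than d_stop
    and restoring full speed beyond d_full.
    """
    alpha = np.clip((human_distance - d_stop) / (d_full - d_stop), 0.0, 1.0)
    return alpha * s_dot_nominal

# Example: the task slows down as the operator approaches
for d in (2.0, 1.0, 0.4):
    print(d, scale_path_velocity(s_dot_nominal=0.2, human_distance=d))
```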
Mere coexistence in the same work environment is, however, limiting in terms of possible collaborative tasks. Therefore, scenarios of multi-robot physical human interaction through co-manipulation of rigid objects are investigated. More in detail, a two-layer architecture is proposed, in which the top layer establishes the dynamics to impose on the object based on the human interaction, while the bottom layer imposes such dynamics in a distributed way and, at the same time, regulates the internal stresses arising from the manipulation of the rigid object. Regarding the desired object dynamics, two possible scenarios are considered: (i) assistance to the person, formulated as a force regulation problem whose goal is to minimize the operator's effort in transporting the load, and (ii) shared control of the object, formulated as an optimization problem whose goal is to combine autonomous robot tasks with the human intention.
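A minimal sketch of how such a two-layer scheme could look, assuming an admittance model at the top layer and a pseudoinverse-based wrench distribution at the bottom layer; the matrices, names and the zero-internal-stress choice are illustrative assumptions rather than the exact formulation used in the thesis.

```python
import numpy as np

def admittance_step(v_obj, wrench_human, M, D, dt):
    """Top layer (illustrative): update the desired object velocity from an
    admittance model M*dv/dt + D*v = f_h driven by the measured human wrench."""
    dv = np.linalg.solve(M, wrench_human - D @ v_obj)
    return v_obj + dv * dt

def distribute_wrench(w_obj_des, G):
    """Bottom layer (illustrative): split the desired object wrench among the
    robots through the grasp matrix G; the pseudoinverse gives the
    minimum-norm solution, i.e., no internal stresses on the rigid object."""
    return np.linalg.pinv(G) @ w_obj_des

# Simplistic planar example: two robots, each applying a full planar wrench
G = np.hstack([np.eye(3), np.eye(3)])
print(distribute_wrench(np.array([0.0, 9.81, 0.0]), G))
```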
However, the complexity of robotic tasks and of the environment, the uncertainty of human behavior and the typical lack of awareness of mutual human-robot actions cause not only intentional but also accidental contacts to arise in a physical collaboration scenario. For this reason, a data-driven approach is proposed that detects possible contact with the human on the basis of on-board torque sensors, distinguishes its nature, intentional or accidental, and reacts accordingly. Unlike solutions in the literature, the methodology achieves this even when the robot task itself requires interaction with the environment. For this purpose, recurrent neural networks are used for time-series classification and Gaussian mixture models to define the expected interaction wrenches.
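The two building blocks could be sketched as follows; the network size, the three-class labeling and the likelihood threshold are illustrative assumptions, and the data here are random placeholders rather than real torque measurements.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.mixture import GaussianMixture

class ContactClassifier(nn.Module):
    """Illustrative recurrent classifier: a window of estimated external
    joint torques is labeled as no contact / intentional / accidental."""
    def __init__(self, n_joints=7, hidden=64, n_classes=3):
        super().__init__()
        self.rnn = nn.LSTM(n_joints, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, time, n_joints)
        _, (h, _) = self.rnn(x)
        return self.head(h[-1])           # class logits

# Expected task-related interaction wrenches modeled with a Gaussian mixture:
# a low likelihood flags an unexpected contact even when the task itself
# requires contact with the environment.
task_wrenches = np.random.randn(500, 6)               # placeholder data
gmm = GaussianMixture(n_components=3).fit(task_wrenches)
log_lik = gmm.score_samples(np.zeros((1, 6)))
unexpected_contact = bool(log_lik[0] < -15.0)         # illustrative threshold
```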
In addition, achieving safe human multi-robot collaboration requires the multi-robot system itself to be reliable, i.e., faults need to be promptly detected and isolated so as not to compromise personnel safety. For this reason, distributed fault detection and isolation solutions are designed that, based on local state observers, define residual signals for monitoring the health state of each robot in the team.
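A minimal sketch of an observer-based residual generator for a single robot, assuming a linear(ized) discrete-time model with matrices A, B, C and an observer gain L; all names and the detection threshold are illustrative.

```python
import numpy as np

class ResidualGenerator:
    """Illustrative local observer for one robot in the team: the residual
    (measured output minus its estimate) stays small in nominal conditions
    and grows when a fault affects that robot."""
    def __init__(self, A, B, C, L, x0):
        self.A, self.B, self.C, self.L = A, B, C, L
        self.x_hat = x0

    def step(self, u, y):
        residual = y - self.C @ self.x_hat
        # Discrete-time Luenberger-style update of the state estimate
        self.x_hat = self.A @ self.x_hat + self.B @ u + self.L @ residual
        return residual

def fault_detected(residual, threshold=0.1):
    return np.linalg.norm(residual) > threshold   # illustrative threshold
```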
Finally, the role of the human operator can also be to provide examples to the robotic system rather than to perform tasks side by side with it. This functionality becomes particularly relevant when the state of the system cannot be easily determined analytically, for example when handling deformable objects, which makes planning to reach desired configurations extremely complex. Consequently, a solution is developed to enable visual and action planning based on examples in a high-dimensional space. In particular, a latent space of reduced dimensionality is defined from images representing consecutive states of the system, and a graph structure is built in it to capture the system dynamics and, finally, to perform task planning.
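One way such a latent-space graph could be used is sketched below, assuming that an encoder has already mapped the example images to latent codes and that the demonstrated transitions between consecutive states are available; the graph construction with networkx and the nearest-node connection of start and goal states are illustrative choices, not the thesis method itself.

```python
import numpy as np
import networkx as nx

def build_latent_graph(latent_states, transitions):
    """Illustrative graph: nodes are latent codes of observed system states,
    edges are transitions demonstrated in the example sequences."""
    G = nx.Graph()
    for i, z in enumerate(latent_states):
        G.add_node(i, z=z)
    for i, j in transitions:
        G.add_edge(i, j, weight=np.linalg.norm(latent_states[i] - latent_states[j]))
    return G

def plan(G, latent_states, z_start, z_goal):
    """Plan in latent space by connecting the encoded start/goal images to
    their closest graph nodes and running a shortest-path search."""
    start = int(np.argmin(np.linalg.norm(latent_states - z_start, axis=1)))
    goal = int(np.argmin(np.linalg.norm(latent_states - z_goal, axis=1)))
    return nx.shortest_path(G, start, goal, weight="weight")
```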
The entire research activity is supported by continuous prototyping and validation in both simulated and real environments.