The goal of formation control is to move a group of agents so that they achieve and maintain a set of desired relative positions. This problem has a long history, and recent trends emphasize vision-based solutions. In this setting, the measurement of the relative direction (i.e., bearing) between two agents can be quite accurate, while the measurement of their distance is typically less reliable.
We propose a general solution based on pure bearing measurements, optionally augmented with the corresponding distances. As opposed to the state of the art, our control law does not require auxiliary distance measurements or estimators, and it can be applied to leaderless or leader-based formations with arbitrary topologies. Our framework is based on distributed optimization and has global convergence guarantees.
We have experimentally validated our approach on a platform of three quadrotors.
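To make the bearing-only setting concrete, here is a minimal sketch of a classical bearing-only controller (not the control law proposed above): a follower steers along the projection of each desired bearing onto the orthogonal complement of the measured bearing, which rotates the measured bearing toward the desired one without using any distance information. The topology, gains, and positions below are assumptions chosen for the demo.

```python
import numpy as np

def bearing(p, q):
    """Unit vector (bearing) pointing from p toward q."""
    d = q - p
    return d / np.linalg.norm(d)

def proj(g):
    """Projector onto the orthogonal complement of unit vector g."""
    return np.eye(len(g)) - np.outer(g, g)

# Two fixed leaders and one follower in the plane (hypothetical setup)
leaders = [np.array([1.0, 0.3]), np.array([0.4, 1.0])]
desired = [np.array([1.0, 0.0]),   # follower wants leader 0 due east
           np.array([0.0, 1.0])]   # ... and leader 1 due north
p = np.array([0.0, 0.0])           # follower's starting position

dt = 0.05
for _ in range(2000):
    v = np.zeros(2)
    for q, g_star in zip(leaders, desired):
        g = bearing(p, q)          # measured bearing (no distance used)
        v -= proj(g) @ g_star      # push measured bearing toward desired
    p = p + dt * v

# Both bearing constraints are satisfied only at p ≈ (0.4, 0.3)
print(np.round(p, 3))
```

Note that the update uses only unit vectors: scaling all positions by a constant leaves the control input unchanged, which is exactly why distance information is the hard part in bearing-based formation control.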
The images of 3-D points in two views are related by the so-called _essential matrix_.
There have been attempts to characterize the space of valid essential matrices as a Riemannian manifold. These approaches either put an unnatural emphasis on one of the two cameras, or do not accurately take into account the geometric meaning of the representation.
We addressed these limitations[^1] by proposing a new parametrization which aligns the global reference frame with the baseline between the two cameras. The result is a symmetric, geometrically meaningful representation that arises naturally as a quotient manifold. This not only provides a principled way to define distances between essential matrices, but also sheds new light on older results (such as the well-known twisted pair ambiguity).
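For readers unfamiliar with the object itself: if camera 2 is related to camera 1 by a rotation R and translation t, the essential matrix is E = [t]× R, and corresponding normalized image points satisfy the epipolar constraint x₂ᵀ E x₁ = 0. The short sketch below verifies this on a synthetic point; the pose and point values are arbitrary choices for illustration.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix [v]x, so that skew(v) @ w == np.cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

# Relative pose of camera 2 w.r.t. camera 1 (arbitrary example values)
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([1.0, 0.2, 0.0])      # the baseline between the cameras

# Essential matrix E = [t]x R
E = skew(t) @ R

# A 3-D point seen by both cameras, in normalized image coordinates
X1 = np.array([0.5, -0.3, 4.0])    # point in camera-1 frame
x1 = X1 / X1[2]                    # projection in camera 1
X2 = R @ X1 + t                    # same point in camera-2 frame
x2 = X2 / X2[2]                    # projection in camera 2

# Epipolar constraint: x2^T E x1 vanishes up to numerical precision
print(abs(x2 @ E @ x1))
```

Note that E is unchanged if t is rescaled, so only the direction of the baseline is observable; this scale ambiguity (together with the twisted pair ambiguity mentioned above) is precisely what makes a quotient-manifold treatment natural.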
We provide an implementation of the basic functions for working with the essential manifold, integrated with the Matlab toolbox Manopt. Download link: Manopt 1.06b with essential manifold.
My initial research included comparing different algorithms for segmenting multiple moving objects in a monocular video. For this purpose,
I created the Hopkins 155 dataset, which, since its introduction, has been used in over 150 scholarly articles and is a de facto standard benchmark in this field.
The following is a frame from the dataset, together with the manually labelled feature tracks.
Please refer to the dataset page on the JHU Vision Lab for more detailed information and download instructions.