For range-measurement-based pose estimation of a multicopter we need at least three anchors (e.g. UWB modules) with known positions in a defined navigation frame, and one mobile tag on the multicopter that measures the distance to each anchor.
Geometrically, each anchor defines a sphere centered at the anchor whose radius is the measured distance to the mobile tag. The three spheres generally intersect in two points. Under weak assumptions (e.g. that the multicopter flies above the plane spanned by the anchors), the unique position of the mobile tag can be disambiguated.
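As a minimal sketch of this trilateration step, the tag position can be recovered from the three ranges by Gauss-Newton iteration; all anchor positions and the initial guess below are illustrative assumptions, not values from the project:

```python
import numpy as np

# Hypothetical setup: three coplanar anchors with known positions and
# noiseless range measurements to a tag (values are illustrative only).
anchors = np.array([[0.0, 0.0, 0.0],
                    [4.0, 0.0, 0.0],
                    [0.0, 4.0, 0.0]])
tag_true = np.array([1.0, 2.0, 1.5])
ranges = np.linalg.norm(anchors - tag_true, axis=1)

def trilaterate(anchors, ranges, x0, iters=20):
    """Gauss-Newton on the residuals r_i = ||x - a_i|| - d_i."""
    x = x0.astype(float)
    for _ in range(iters):
        diffs = x - anchors                    # (3, 3)
        dists = np.linalg.norm(diffs, axis=1)  # predicted ranges
        r = dists - ranges                     # residuals
        J = diffs / dists[:, None]             # Jacobian d||x - a_i||/dx
        x -= np.linalg.lstsq(J, r, rcond=None)[0]
    return x

# Starting above the anchor plane selects the upper of the two sphere
# intersection points -- the "weak assumption" used to disambiguate.
est = trilaterate(anchors, ranges, x0=np.array([1.0, 1.0, 1.0]))
```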
Once initialized in a common navigation frame, a tightly coupled extended Kalman filter (EKF) fuses these range measurements with the data from an on-board inertial measurement unit (IMU) in order to provide high-frequency, highly accurate pose estimates to the underlying controller. Tightly coupled here means that the position of the multicopter is inferred directly from the measured (1D) distances to each anchor, rather than first triangulating the 3D position of the mobile tag in a preprocessing step and feeding that position into the filter framework.
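One tightly coupled EKF update with a single 1D range measurement can be sketched as follows; the state layout (position and velocity only), the prior, and the noise values are illustrative assumptions, not the project's actual filter:

```python
import numpy as np

def range_update(x, P, anchor, z, sigma_r=0.1):
    """One EKF update with measurement model h(x) = ||p - a||."""
    p = x[:3]
    diff = p - anchor
    pred = np.linalg.norm(diff)        # predicted range
    H = np.zeros((1, x.size))
    H[0, :3] = diff / pred             # Jacobian of h w.r.t. position
    R = np.array([[sigma_r**2]])
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + (K @ np.array([z - pred])).ravel()
    P = (np.eye(x.size) - K @ H) @ P
    return x, P

# Illustrative prior: state = [position, velocity], one anchor, one range.
x = np.array([1.0, 2.0, 1.0, 0.0, 0.0, 0.0])
P = np.eye(6) * 0.5
anchor = np.array([0.0, 0.0, 0.0])
z = 2.6                                # measured range (larger than predicted)
x, P = range_update(x, P, anchor, z)
```

The update moves the position estimate along the anchor-to-tag direction toward the measured range, which is exactly the "directly inferred from 1D distances" property of the tightly coupled formulation.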
The positions of the anchors will be computed automatically in the initialization phase of the filter, so that this information is not needed before the multicopter starts. In order to acquire sufficient information for a first estimate of the anchor positions, the multicopter has to move to certain points in the environment. To reduce the number of necessary points, the points have to be well chosen (i.e. they have to be “informative”). We will develop a procedure to define such informative locations automatically, such that the multicopter is able to autonomously initialize its full estimator right after take-off.
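The anchor initialization step can be sketched as the mirror image of trilateration: ranges taken at several known multicopter positions constrain one unknown anchor. The waypoints below stand in for "informative" (well-spread, non-coplanar) locations; all values are illustrative assumptions:

```python
import numpy as np

# Illustrative waypoints the multicopter visits (known, e.g. from the
# fiducial-based estimator) and ranges to one anchor at unknown position.
waypoints = np.array([[0.0, 0.0, 1.0],
                      [3.0, 0.0, 1.0],
                      [0.0, 3.0, 2.0],
                      [3.0, 3.0, 0.5]])    # well spread and non-coplanar
anchor_true = np.array([1.0, 1.5, 0.0])
ranges = np.linalg.norm(waypoints - anchor_true, axis=1)

# Gauss-Newton for the anchor position from an initial guess.
a = np.array([0.5, 0.5, 0.5])
for _ in range(30):
    diffs = a - waypoints
    dists = np.linalg.norm(diffs, axis=1)
    J = diffs / dists[:, None]             # Jacobian d||a - x_j||/da
    a -= np.linalg.lstsq(J, dists - ranges, rcond=None)[0]
```

If the waypoints were coplanar, a mirrored anchor position would fit the ranges equally well; choosing non-coplanar, well-separated points is one concrete sense in which the visited locations must be informative.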
We use visual fiducials (e.g. AR markers) as references for automatic take-off and for the initial setup of the navigation frame. With this information, we aim at navigating the multicopter to the autonomously defined informative points, which allows the system to initialize the UWB anchors automatically. A seamless sensor switching mechanism will then allow the multicopter to incorporate the UWB measurements as soon as they are deemed sufficiently accurate. This multi-sensor fusion strategy will allow the multicopter to extend its action radius beyond the volume in which the visual fiducial is in the field of view.
The inverse compositional image alignment algorithm is an image registration algorithm that estimates the translation, rotation, and scaling between two similar images, for example two pictures taken consecutively within a short time frame by the same camera. It operates directly on the image intensities and their gradients (a dense approach). In contrast to feature-based (sparse) approaches, dense approaches deliver more accurate results but are computationally more expensive.
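A minimal sketch of the inverse compositional idea, restricted to pure 2D translation for brevity (the full algorithm also estimates rotation and scale): the template gradients and the Hessian are precomputed once outside the loop, which is the algorithm's key efficiency trick. The synthetic image and all parameters are illustrative assumptions:

```python
import numpy as np

def bilinear(img, ys, xs):
    """Sample img at fractional coordinates with bilinear interpolation."""
    y0 = np.floor(ys).astype(int); x0 = np.floor(xs).astype(int)
    wy = ys - y0; wx = xs - x0
    return ((1-wy)*(1-wx)*img[y0, x0]   + (1-wy)*wx*img[y0, x0+1]
            + wy*(1-wx)*img[y0+1, x0]   + wy*wx*img[y0+1, x0+1])

def align_translation(T, I, iters=50):
    """Estimate p such that I(x + p) ~= T(x), inverse compositional style."""
    gy, gx = np.gradient(T)                 # template gradients: computed ONCE
    m = 5                                   # margin keeps warped samples in-bounds
    ys, xs = np.mgrid[m:T.shape[0]-m, m:T.shape[1]-m].astype(float)
    Gy = gy[m:-m, m:-m]; Gx = gx[m:-m, m:-m]; Tc = T[m:-m, m:-m]
    H = np.array([[np.sum(Gx*Gx), np.sum(Gx*Gy)],
                  [np.sum(Gx*Gy), np.sum(Gy*Gy)]])   # constant Hessian
    p = np.zeros(2)                         # translation (dx, dy)
    for _ in range(iters):
        err = bilinear(I, ys + p[1], xs + p[0]) - Tc
        b = np.array([np.sum(Gx*err), np.sum(Gy*err)])
        p -= np.linalg.solve(H, b)          # compose with the inverted increment
    return p

# Synthetic smooth template and a translated copy of it.
yy, xx = np.mgrid[0:64, 0:64].astype(float)
T = np.sin(xx / 6.0) + np.cos(yy / 5.0)
t = np.array([1.3, -0.7])                   # true (dx, dy)
I = np.sin((xx - t[0]) / 6.0) + np.cos((yy - t[1]) / 5.0)
p = align_translation(T, I)
```

Because gradients and Hessian depend only on the template, each iteration costs just one warp and two sums, in contrast to the forward additive formulation, which recomputes them every iteration.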
The inverse compositional algorithm finds application in the pose estimation of unmanned aerial vehicles (UAVs), which shall be able to maneuver based solely on images from a camera mounted on the body of the UAV. By computing the change between consecutive images, it is possible to estimate the pose of the UAV. These kinds of algorithms are generally referred to as visual odometry.
The goals of this project are an implementation in Matlab and in C++, so that the algorithm can be used on board a UAV to estimate its pose. To reduce the computational effort, a method to maximize the information per used pixel shall be found.
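One plausible reading of "information per pixel" is to use only the pixels with the strongest template gradients, since these dominate the alignment Hessian; pixels in flat regions contribute almost nothing. The selection ratio and the synthetic template below are illustrative assumptions:

```python
import numpy as np

def select_informative_pixels(T, keep_ratio=0.1):
    """Keep only the fraction of pixels with the largest gradient magnitude."""
    gy, gx = np.gradient(T)
    mag = gx**2 + gy**2                    # squared gradient magnitude
    k = int(mag.size * keep_ratio)
    flat = np.argsort(mag.ravel())[-k:]    # indices of the strongest pixels
    return np.unravel_index(flat, T.shape)

# Illustrative synthetic template.
yy, xx = np.mgrid[0:64, 0:64].astype(float)
T = np.sin(xx / 6.0) + np.cos(yy / 5.0)
ys, xs = select_informative_pixels(T)      # row/column indices to keep
```

The alignment sums would then run over `(ys, xs)` only, cutting the per-iteration cost roughly by the keep ratio while retaining most of the gradient energy.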
This algorithm performs visual-inertial odometry (VIO), that is, it estimates motion using only visual sensors (cameras) and inertial sensors (accelerometers and gyroscopes, i.e. IMUs).
This filter-based approach will be implemented on UAVs to enable them to localize themselves and operate autonomously.
The MSCKF approach has shown accurate results and is a lightweight solution suitable for UAVs with limited computation power.
Features are highly descriptive points in images that can be tracked over multiple frames and are used to infer position information from the surroundings.
MSCKF can be classified as tightly-coupled and sparse, because it uses those features to localize.
Tightly coupled means that the computation of past poses on the trajectory is mathematically directly affected by the features.
It is sparse, because image points are used, and not whole image patches.
This algorithm is a hybrid error-state EKF (extended Kalman filter) approach. It is hybrid because the states are integrated during propagation, exploiting the continuous-time dynamics, while the measurement updates take place at discrete times. It is an error-state formulation because we filter the error states instead of the nominal states. The error states are small signals that linearize well, which can be advantageous in Kalman filters.
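The nominal/error-state split can be illustrated with a toy 1D example (not the MSCKF itself): the nominal state is obtained by integrating biased "IMU" accelerations, while the filter tracks only the small error state and injects the estimated error back into the nominal state after each position fix. All values are illustrative assumptions:

```python
import numpy as np

dt, steps = 0.01, 500
true_acc, bias = 0.5, 0.2              # constant accelerometer bias (assumed)
F = np.array([[1.0, dt], [0.0, 1.0]])  # error-state transition [pos, vel]
Q = np.diag([1e-6, 1e-4])              # process noise on the ERROR state
H = np.array([[1.0, 0.0]])             # a position fix observes the error
R = np.array([[0.05**2]])

x_nom = np.zeros(2)                    # nominal [position, velocity]
x_dr = np.zeros(2)                     # pure dead reckoning, for comparison
P = np.eye(2) * 0.01                   # covariance of the error state
for k in range(steps):
    a = true_acc + bias                # biased accelerometer sample
    for x in (x_nom, x_dr):            # integrate the nominal dynamics
        x[0] += x[1] * dt + 0.5 * a * dt**2
        x[1] += a * dt
    P = F @ P @ F.T + Q                # propagate the error covariance only
    z_pos = 0.5 * true_acc * ((k + 1) * dt)**2  # noiseless position fix
    innov = z_pos - x_nom[0]
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_nom += (K * innov).ravel()       # inject the estimated error, then
    P = (np.eye(2) - K @ H) @ P        # the error state resets to zero

true_pos = 0.5 * true_acc * (steps * dt)**2
```

The filtered nominal state stays close to the truth despite the bias, while pure dead reckoning drifts quadratically; only the small error ever passes through the linearized filter equations.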
The goal of this project is to implement a framework that allows quick deployment of a Micro Air Vehicle (MAV) using only information from inertial sensors (accelerometer, gyroscope) and a single camera. The quick deployment procedure focuses on user friendliness, which directly includes short deployment time and fast system readiness (i.e. fast state convergence). Thus, we aim at implementing a simple procedure to i) switch on and initialize the platform, and ii) deploy it so that autonomous missions can start. We assume an underlying MAV state estimator that provides metric pose and velocity to the on-board controller, and focus on the deployment state machine itself. As we focus on GPS-denied visual-inertial state estimation, the system requires (aggressive) motion for fast state convergence. Thus, we specifically target a deployment strategy in which the MAV is literally thrown into the air, aiming at “throw-and-go” MAVs for GPS-denied environments.
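A hypothetical sketch of the transition at the core of such a deployment state machine: during free fall the accelerometer measures close to 0 g, so a sustained low specific-force norm can trigger the switch from an idle to an in-air state. The thresholds, sample rate, and simulated data are illustrative assumptions:

```python
import numpy as np

def detect_throw(acc_norms, g=9.81, thresh=0.25, min_samples=20):
    """Return the index where free fall begins, or None if not detected."""
    count = 0
    for i, a in enumerate(acc_norms):
        count = count + 1 if a < thresh * g else 0
        if count >= min_samples:       # sustained near-zero specific force
            return i - min_samples + 1
    return None

# Simulated 200 Hz accelerometer norm: held in hand (~1 g), then thrown.
held = np.full(100, 9.81)
flight = np.full(60, 0.3)              # small residual from drag and rotation
idx = detect_throw(np.concatenate([held, flight]))
```

Requiring `min_samples` consecutive low readings rather than a single one makes the trigger robust against brief jolts while the MAV is still being handled.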