AutoLOG will focus on research towards automating challenging and DDD (dull, dangerous, dirty) tasks in the handling of raw material on production lines that are currently executed manually. We will focus on AI-inspired, vision-based approaches to categorize and segment the raw material and its geometry, and to subsequently define, again through AI, optimal handling/grasping poses for the automation machinery. We seek to automate existing infrastructure in a versatile and cost-effective way. Thus, we will investigate retrofittable sensors and robust control strategies for seamless and cost-efficient upgrading of existing infrastructure. As a specific application scenario and immediate benefit to Austria's industry, we will tackle the problem of autonomously grasping logs to move them from the truck to the processing machinery.
We will pursue a concentrated research effort to enable a research platform prototype consisting of one unmanned aerial vehicle (UAV) navigating autonomously through a managed mature forest, providing sufficiently dense visual data for accurate 3D reconstruction and subsequent autonomous extraction of ecological data from objects of interest. The ecological data includes the positions of the trees, the diameter at breast height, the stem shape, and the coverage of the herb layer. The project will yield innovative new algorithms for GPS-independent, vision-based autonomous UAV navigation, including self-healing state estimation, vision-based obstacle avoidance, and adaptive path planning. In addition, novel 3D reconstruction algorithms will enable on-site extraction of ecological forest parameters with unprecedented precision and efficiency in both time and cost.
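As a minimal illustration of one of the ecological parameters mentioned above, diameter at breast height (DBH) can be estimated from a reconstructed stem point cloud by fitting a circle to a horizontal slice of points around 1.3 m. The sketch below uses a simple algebraic (Kåsa) circle fit; the function name, slice parameters, and fitting method are illustrative assumptions, not the project's actual algorithm.

```python
import numpy as np

def estimate_dbh(points, slice_height=1.3, slice_thickness=0.1):
    """Estimate diameter at breast height from a stem point cloud.

    points: (N, 3) array of x, y, z coordinates, z measured from the
    stem base. Fits a circle (algebraic Kasa fit) to the horizontal
    slice of points around slice_height and returns its diameter.
    """
    z = points[:, 2]
    mask = np.abs(z - slice_height) < slice_thickness / 2
    xy = points[mask, :2]
    if len(xy) < 3:
        raise ValueError("too few points in breast-height slice")
    # Kasa fit: solve x^2 + y^2 = 2*a*x + 2*b*y + c in least squares,
    # where (a, b) is the circle center and c = r^2 - a^2 - b^2.
    A = np.column_stack([2 * xy[:, 0], 2 * xy[:, 1], np.ones(len(xy))])
    rhs = (xy ** 2).sum(axis=1)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    radius = np.sqrt(c + a ** 2 + b ** 2)
    return 2.0 * radius
```

A robust on-site pipeline would additionally reject outliers (e.g. branches and understorey points) before fitting, but the core geometric step is the one shown.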
MODULES tackles the challenges of high-precision state estimation, applied to the approach, landing, and subsequent take-off of a multi-copter on a landing pad for autonomous recharging. The goal is to develop a reliable state estimation algorithm using multiple sensor modalities in a consistent, real-time multi-sensor fusion framework that will serve as the backbone for these high-precision maneuvers. Beyond fusing the sensors of a given sensor suite, we will analyze how to increase the modularity of such a framework so that sensor signals and entire sensor modules can be added and removed adaptively in-flight without disabling state estimation for reliable UAV navigation. System self-calibration, fast state convergence, and consistency are key aspects to be considered.
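To make the idea of in-flight modularity concrete, the toy sketch below shows a filter whose sensor update modules can be registered and removed at run-time while estimation continues with the remaining sensors. This is a deliberately minimal, assumed structure (linear measurement models, a 2D state), not the MODULES framework itself; a real UAV estimator would track pose, velocity, IMU biases, and per-sensor calibration states.

```python
import numpy as np

class ModularEKF:
    """Toy modular filter: sensor modules (H, R) can be added or
    dropped at run-time without restarting state estimation."""

    def __init__(self, x0, P0):
        self.x = np.asarray(x0, float)   # state estimate
        self.P = np.asarray(P0, float)   # state covariance
        self.sensors = {}                # name -> (H, R)

    def add_sensor(self, name, H, R):
        # Registering a new module requires no filter restart.
        self.sensors[name] = (np.asarray(H, float), np.asarray(R, float))

    def remove_sensor(self, name):
        # Estimation simply continues with the remaining modules.
        self.sensors.pop(name, None)

    def update(self, name, z):
        H, R = self.sensors[name]
        y = np.asarray(z, float) - H @ self.x    # innovation
        S = H @ self.P @ H.T + R                 # innovation covariance
        K = self.P @ H.T @ np.linalg.inv(S)      # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(len(self.x)) - K @ H) @ self.P
```

The design point the sketch illustrates is that each sensor's measurement model lives in its own module, so attaching or detaching a module changes only which updates are applied, never the validity of the filter state.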
U.S. Army International Technology Center – Atlantic
Visual information fused with inertial cues has proven able to provide pose information to robots in a variety of scenarios. However, current real-time capable solutions still consume most of the resources on computationally constrained platforms, require well-textured and low-clutter areas, and do not exploit the dense information the camera image provides. VI-MuSe will investigate further reducing the computational complexity of visual-inertial state estimation while increasing the amount of information used from the camera image, to mitigate these limitations. In addition, the project will investigate the use of other sensors to mitigate failure modes in visually homogeneous areas. In contrast to state-of-the-art multi-sensor fusion algorithms, the goal is to develop a method to seamlessly add sensors at run-time without the need for pre-calibration.
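One ingredient of adding a sensor at run-time without pre-calibration is recovering its extrinsics online from data already available in flight. The hypothetical sketch below shows the simplest possible case: estimating a constant translation offset of a newly attached position sensor from simultaneous pairs of reference estimates and sensor readings (rotation assumed identity). The function name and setup are illustrative assumptions, not VI-MuSe's method, which would estimate full extrinsics inside the fusion framework itself.

```python
import numpy as np

def estimate_sensor_offset(ref_positions, sensor_positions):
    """Estimate the constant translation offset of a newly attached
    position sensor relative to the body frame, given simultaneous
    (reference estimate, sensor reading) pairs.

    With identity rotation, the least-squares solution for a constant
    offset is simply the mean of the per-sample residuals.
    """
    ref = np.asarray(ref_positions, float)
    meas = np.asarray(sensor_positions, float)
    return (meas - ref).mean(axis=0)
```

Once the offset has converged, the sensor's measurements can be expressed in the body frame and fused like any pre-calibrated module.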