It is fairly obvious that Microsoft has two devices that fit together very well - Kinect and HoloLens. After seeing demos of using a Kinect as a scanner to create objects that could be displayed using HoloLens, Joost van Schaik decided that it was time to implement and document the technique so that others could do the same.
This is a step-by-step procedure using existing hardware and software. You basically need a Kinect 2 working on a PC, a free 3D scan app, a copy of the open source CloudCompare, and a copy of Unity.
The link to CloudCompare given in the write-up isn't working. Instead you can use:
Kinect can be used to work out where people are. RFID can be used to identify people, but isn't so good at getting their position. Put the two together and you have a people tracker:
We introduce a novel, accurate and practical system for real-time people tracking and identification. We used a Kinect V2 sensor for tracking that generates a body skeleton for up to six people in the view.
We perform identification using both Kinect and passive RFID, by first measuring the velocity vector of a person's skeleton and of their RFID tag, using the position of the RFID reader antennas as reference points, and then finding the best match between skeletons and tags.
We introduce a method for synchronizing Kinect data, which is captured regularly, with irregular or missing RFID data readouts. Our experiments show centimeter-level people tracking resolution with 80% average identification accuracy for up to six people in indoor environments, which meets the needs of many applications. Our system can preserve user privacy and work with different lighting.
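The matching step described in the abstract - pairing each tracked skeleton with an RFID tag by comparing velocity vectors - can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the track format, the use of average velocity over a window, and the exhaustive search over assignments (practical for up to six people) are all assumptions.

```python
import itertools
import math

def velocity(track):
    """Average velocity vector (vx, vy) from a list of (t, x, y) samples."""
    (t0, x0, y0) = track[0]
    (t1, x1, y1) = track[-1]
    dt = t1 - t0
    return ((x1 - x0) / dt, (y1 - y0) / dt)

def match_skeletons_to_tags(skeleton_tracks, tag_tracks):
    """Assign each skeleton ID an RFID tag ID by trying every one-to-one
    assignment and keeping the one with the lowest total velocity mismatch."""
    skel_ids = list(skeleton_tracks)
    tag_ids = list(tag_tracks)
    skel_v = {s: velocity(skeleton_tracks[s]) for s in skel_ids}
    tag_v = {t: velocity(tag_tracks[t]) for t in tag_ids}
    best, best_err = None, math.inf
    for perm in itertools.permutations(tag_ids, len(skel_ids)):
        err = sum(math.dist(skel_v[s], tag_v[t])
                  for s, t in zip(skel_ids, perm))
        if err < best_err:
            best, best_err = dict(zip(skel_ids, perm)), err
    return best

# Two people: "A" walks along x, "B" along y; the noisier tag tracks
# still match correctly because only velocity direction and magnitude matter.
skeletons = {"A": [(0, 0, 0), (1, 1, 0)], "B": [(0, 0, 0), (1, 0, 1)]}
tags = {"T1": [(0, 0.1, 0.0), (1, 1.05, 0.0)],
        "T2": [(0, 0.0, 0.1), (1, 0.0, 1.10)]}
print(match_skeletons_to_tags(skeletons, tags))  # {'A': 'T1', 'B': 'T2'}
```

The exhaustive search is O(n!) but trivial for the paper's stated limit of six people; for larger n the same cost matrix could be fed to the Hungarian algorithm.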
Robust Intrinsic and Extrinsic Calibration of RGB-D Cameras
If you are using the Kinect for serious applications - and who is using it for games any more? - then calibration is vital. There is a new implementation of a novel calibration procedure, and the good news is that the code is open source and you can use it in your own projects:
Color-depth cameras (RGB-D cameras) have become the primary sensors in most robotics systems, from service robotics to industrial robotics applications. Typical consumer-grade RGB-D cameras are provided with a coarse intrinsic and extrinsic calibration that generally does not meet the accuracy requirements needed by many robotics applications (e.g., high accuracy 3D environment reconstruction and mapping, high precision object recognition and localization, ...).
In this paper, we propose a human-friendly, reliable and accurate calibration framework that enables easy estimation of both the intrinsic and extrinsic parameters of a general color-depth sensor pair. Our approach is based on a novel two-component measurement error model that unifies the error sources of different RGB-D pairs based on different technologies, such as structured-light 3D cameras and time-of-flight cameras.
The proposed correction model is implemented using two different parametric undistortion maps that provide the calibrated readings by means of linear combinations of control functions. Our method provides some important advantages compared to other state-of-the-art systems: it is general (i.e., well suited for different types of sensors), it is based on an easy and stable calibration protocol, it provides a greater calibration accuracy, and it has been implemented within the ROS robotics framework. We report detailed and comprehensive experimental validations and performance comparisons to support our statements.
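To make the "parametric undistortion map" idea concrete, here is a minimal sketch of one plausible form: a per-pixel depth correction d' = a(u,v)·d + b(u,v), where the coefficient fields a and b are interpolated from a coarse control grid. The linear form, the grid layout, and the bilinear interpolation are assumptions for illustration; the paper's actual control functions and error model are more elaborate.

```python
import numpy as np

def bilinear_upsample(grid, h, w):
    """Bilinearly interpolate a coarse (gh, gw) coefficient grid to (h, w)."""
    gh, gw = grid.shape
    yy = np.linspace(0, gh - 1, h)
    xx = np.linspace(0, gw - 1, w)
    y0 = np.floor(yy).astype(int)
    x0 = np.floor(xx).astype(int)
    y1 = np.minimum(y0 + 1, gh - 1)
    x1 = np.minimum(x0 + 1, gw - 1)
    fy = (yy - y0)[:, None]
    fx = xx - x0
    top = grid[np.ix_(y0, x0)] * (1 - fx) + grid[np.ix_(y0, x1)] * fx
    bot = grid[np.ix_(y1, x0)] * (1 - fx) + grid[np.ix_(y1, x1)] * fx
    return top * (1 - fy) + bot * fy

def undistort_depth(depth, ctrl_a, ctrl_b):
    """Correct a raw depth image with d' = a(u,v)*d + b(u,v), the per-pixel
    coefficients being a linear interpolation of calibrated control points."""
    h, w = depth.shape
    a = bilinear_upsample(ctrl_a, h, w)
    b = bilinear_upsample(ctrl_b, h, w)
    return a * depth + b

# A raw depth map that reads 2.0 m everywhere, with a calibrated scale
# factor of 1.1 and zero offset at every control point:
raw = np.full((4, 6), 2.0)
corrected = undistort_depth(raw, np.full((2, 3), 1.1), np.zeros((2, 3)))
print(corrected[0, 0])  # 2.2
```

Calibration then amounts to estimating the control-point coefficients (here the entries of `ctrl_a` and `ctrl_b`) from observations of a known target, such as the wall captures shown below.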
Results of the calibration applied to a set of point clouds of a wall at different distances.
picoCTF, the world's largest online hacking competition, is a computer security game for middle and high school students. Organised by CMU's CyLab, the third contest opens on March 31st and runs for t [ ... ]