Kernel Density Estimation Using Joint Spatial-Color-Depth Data for Background Modeling
The use of low-cost devices for depth estimation, such as the Microsoft Kinect, is becoming increasingly popular in computer vision research. In this paper, we propose a background modeling algorithm that exploits this kind of device to make the background and foreground models more robust to effects such as camouflage and illumination changes. After a preprocessing stage that aligns color and depth data and filters/fills noisy depth measurements, our algorithm explicitly models the scene's background and foreground with a Kernel Density Estimation approach in a quantized x-y-hue-saturation-depth space. Experiments in three indoor environments with different lighting conditions show that our approach achieves a foreground-segmentation accuracy above 90%, and that combining depth data with an illumination-independent color space makes the model very robust to noise and illumination changes.
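The KDE-based classification described above can be illustrated with a minimal sketch: each pixel keeps a history of past feature observations (here hue, saturation, and depth), a Gaussian-kernel density is evaluated for the current observation, and the pixel is labeled foreground when that density falls below a threshold. The function names, bandwidth, and threshold are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def kde_score(samples, x, bandwidth):
    """Gaussian-kernel density estimate of feature vector x.

    `samples` is an (N, D) array of past background observations for one
    pixel (e.g. hue, saturation, depth). Hypothetical helper; the paper
    works in a quantized x-y-hue-saturation-depth space.
    """
    diffs = (samples - x) / bandwidth                 # (N, D) scaled residuals
    k = np.exp(-0.5 * np.sum(diffs**2, axis=1))       # Gaussian kernel per sample
    norm = (np.sqrt(2 * np.pi) * bandwidth) ** samples.shape[1]
    return k.sum() / (len(samples) * norm)

def classify_pixel(history, x, bandwidth=2.0, threshold=1e-4):
    """Label a pixel foreground when its background density is too low.

    The bandwidth and threshold are illustrative, not the paper's values.
    """
    density = kde_score(np.asarray(history, dtype=float), x, bandwidth)
    return "foreground" if density < threshold else "background"

# Example: a large depth change yields a near-zero density under the
# background model, so the pixel is flagged as foreground.
history = [(10, 40, 120), (11, 41, 119), (10, 39, 121), (12, 40, 120)]
print(classify_pixel(history, np.array([10, 40, 60])))   # depth changed
print(classify_pixel(history, np.array([11, 40, 120])))  # matches background
```

Using depth alongside hue and saturation is what counters camouflage: a foreground object whose color matches the background still differs in depth, so its joint density under the background model stays low.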