
Project

DENSITY

RGB-D Image-based Reconstruction of Rigid and Non-Rigid Objects for End-User Applications

  • Duration:

Project overview

The goal of DENSITY is to develop a new methodology for 3D reconstruction suited to inexperienced end-users. The basic idea rests on the observation that the user's environment and camera settings are difficult to control, and that taking "appropriate" pictures requires a certain expertise. Instead of guiding the user towards a minimal number of pictures, which in practice leads to very unreliable results, we propose to make use of the depth images (RGB-D images) delivered by low-cost depth cameras (Kinect or Time-of-Flight sensors). These depth images are currently of low resolution and noisy, but they are relatively stable and cover the perceived area well. The resulting partial and coarse 3D views, however, still need to be registered and refined before they are useful for scanning. An easy-to-use, cost-effective scanning solution built on such a sensor could make 3D scanning technology more accessible to everyday users.
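To make the starting point concrete, the sketch below shows how a single depth frame can be back-projected into one of these coarse partial 3D views using a standard pinhole camera model. The intrinsic values and array names are illustrative assumptions, not calibration data from the project.

```python
import numpy as np

def depth_to_point_cloud(depth, fx=585.0, fy=585.0, cx=319.5, cy=239.5):
    """Back-project a depth image (meters, HxW) into an Nx3 point cloud.

    The intrinsics are typical Kinect-v1-style values used only for
    illustration; a real system would use calibrated parameters.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0                      # the sensor reports 0 for missing depth
    z = depth[valid]
    x = (u[valid] - cx) * z / fx           # pinhole model: X = (u - cx) * Z / fx
    y = (v[valid] - cy) * z / fy
    return np.stack([x, y, z], axis=1)

# Example: a synthetic 480x640 frame at a constant 2 m distance
points = depth_to_point_cloud(np.full((480, 640), 2.0))
```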

KinectAvatar: Fully Automatic Body Capture Using a Single Kinect

We present a novel scanning system for capturing a full 3D human body model using just a single depth camera and no auxiliary equipment. We claim that data captured from a single Kinect is sufficient to produce a good-quality full 3D human model. In this setting, the challenges are the sensor's low resolution and random noise, and the subject's non-rigid movement during capture. To overcome these challenges, we develop an improved super-resolution algorithm that takes color constraints into account. We then align the super-resolved scans using a combination of automatic rigid and non-rigid registration.

The scanning setup is easily built, as shown in Fig. 1a: the user stands about 2 meters in front of a Kinect, so that the full body falls within the Kinect's field of view, and then simply turns around 360 degrees for about 20 to 30 seconds while maintaining an approximate "T" pose. Since the system is low-cost and produces impressive results within minutes, full 3D human body scanning can now become accessible to everyday users at home.

Fig. 2 shows our scanning results on five users. The reconstructions reproduce the overall body structure well (especially the arms and legs) and recover detailed geometry such as the face and the wrinkles of the clothes. To evaluate the accuracy of the reconstruction, we compare biometric measurements of the reconstructed models with measurements of the actual people in Tab. 1; the values are the average absolute differences over eight people. Tab. 1 also reports average runtime statistics: the whole processing time per model is about 14 minutes on an Intel(R) Xeon 2.67 GHz CPU with 12 GB of memory, and about 90% of that time is spent computing closest points. Previous work on human body reconstruction can only capture nearly naked bodies and requires nearly one hour of computation, and prior work on articulated registration computes the registration frame by frame in K minimization steps, taking nearly two hours.
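The rigid part of this alignment can be illustrated with a plain point-to-point ICP loop. The sketch below is a minimal NumPy/SciPy version under simplifying assumptions (point-to-point correspondences, a fixed rejection threshold); it is not the project's actual pipeline, which additionally uses color-constrained super-resolution and non-rigid registration.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c

def icp(source, target, iterations=30, max_dist=0.05):
    """Point-to-point ICP: returns the source cloud aligned to the target."""
    tree = cKDTree(target)
    aligned = source.copy()
    for _ in range(iterations):
        dist, idx = tree.query(aligned)     # closest-point search (the dominant cost noted above)
        mask = dist < max_dist              # reject far-away correspondences
        R, t = best_rigid_transform(aligned[mask], target[idx[mask]])
        aligned = aligned @ R.T + t
    return aligned
```

The closest-point query inside the loop is exactly the step that dominates the runtime figures reported above.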

Body Capture Using Multiple Kinects

In this application, we present a scanning system with multiple Kinects. The Kinects are fixed on pillars so that they observe the subject from different views, as Fig. 3a shows. We remove the mutual interference between the Kinects by mechanically shaking them; a first scanning result is shown in Fig. 3b. Because only three views are available, which is not enough to cover the whole human shape, the final mesh contains holes. We fill these holes using a human template: the template shape is deformed and fitted to the scanned person, and the two meshes are then merged and smoothed to obtain the final result, as sketched below.
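A very simplified version of this template-based hole filling could look like the following: template vertices that lie close to the scan are snapped onto it, while vertices over holes keep a smoothed version of the template shape. The threshold, the one-ring neighbor structure, and the simple Laplacian smoothing are illustrative assumptions, not the project's actual non-rigid fitting method.

```python
import numpy as np
from scipy.spatial import cKDTree

def fill_holes_with_template(template_vertices, template_neighbors, scan_points,
                             snap_dist=0.03, smooth_iters=10, lam=0.5):
    """Pull template vertices onto nearby scan points; hole regions keep the template shape.

    template_neighbors: list of index lists (one-ring neighbors per vertex),
    assumed to be precomputed from the template mesh.
    """
    tree = cKDTree(scan_points)
    dist, idx = tree.query(template_vertices)
    fitted = template_vertices.copy()
    covered = dist < snap_dist                   # vertices backed by scan data
    fitted[covered] = scan_points[idx[covered]]  # snap to the scan where it exists

    # Light Laplacian smoothing on uncovered (hole) vertices blends template and scan
    for _ in range(smooth_iters):
        for v in np.where(~covered)[0]:
            nbrs = template_neighbors[v]
            fitted[v] = (1 - lam) * fitted[v] + lam * fitted[nbrs].mean(axis=0)
    return fitted
```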

Virtual Clothes Try-on

A full 3D body model enables many virtual reality applications. Online shopping has grown exponentially during the past decade, and more and more customers purchase clothing online. However, customers cannot try on a garment before purchasing it and do not know whether the size will fit. With our easy-to-use, low-cost Kinect scanning system, an untrained customer can obtain their own 3D model at home, virtually try on clothes with this model, and even interactively edit 2D pattern designs.

Funding agency

BMBF - Bundesministerium für Bildung und Forschung

Publications related to the project

Jingyi Zhang; Josef van Genabith

In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing. Conference on Empirical Methods in Natural Language Processing (EMNLP-2020), November 16-18, 2020.

To the publication

Vladislav Golyanik; Didier Stricker

In: Proceedings of the IEEE Winter Conference on Applications of Computer Vision. IEEE Winter Conference on Applications of Computer Vision (WACV-17), March 27-30, Santa Rosa, CA, USA, IEEE, 3/2017.

To the publication

Christiano Couto Gava; Didier Stricker

In: Computer Vision, Imaging and Computer Graphics Theory and Applications. 10th International Joint Conference on Computer Vision, Imaging and Computer Graphics (VISIGRAPP-2015), March 11-14, Berlin, Germany, pages 256-273, Springer International Publishing, 2/2016.

To the publication