3D Surrogates of Furniture and Interiors

During the course of the semester we made two trips to O’Keeffe’s house in Abiquiu.  The first trip focused on using photogrammetry on interior spaces; the second included the 3D documentation of furniture.  We selected two chairs from the home and tested two shooting methods on them:

METHOD 1: three pictures per position (taken at 90, 180, and 270 degrees)
METHOD 2: Lens calibration method using Agisoft’s Lens application: http://agisoft.ru/products/lens/

Both methods were successful.  However, the lens calibration method reduced the overall number of images (we did not need to take pictures at 90, 180, and 270 degrees, just one orientation), which shortened the processing time (quite long with large “chunks”).  The resulting meshes for single objects are impressive.  The lens calibration method created a distortion in only one case, probably because a reflector was used over the object and the white fabric laid underneath to protect it ended up partially covering the bottom of the object itself.
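For anyone who wants to script this workflow, below is a minimal sketch of the single-orientation pipeline in PhotoScan’s Python console. This is only a sketch: it assumes the PhotoScan Pro 1.x scripting API (method names and arguments have changed between versions), and all file names are hypothetical placeholders.

    # Minimal single-orientation pipeline, assuming the PhotoScan Pro 1.x
    # Python API; file names are hypothetical placeholders.
    import PhotoScan

    doc = PhotoScan.app.document
    chunk = doc.addChunk()
    chunk.addPhotos(["chair_01.jpg", "chair_02.jpg", "chair_03.jpg"])

    # Apply the fixed calibration exported from Agisoft Lens, so each
    # camera position needs only one shot instead of three rotations.
    calib = PhotoScan.Calibration()
    calib.load("lens_calibration.xml")
    for sensor in chunk.sensors:
        sensor.user_calib = calib
        sensor.fixed = True

    # Align photos, then build the dense cloud, mesh, and texture.
    chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy,
                      preselection=PhotoScan.GenericPreselection)
    chunk.alignCameras()
    chunk.buildDenseCloud(quality=PhotoScan.MediumQuality)
    chunk.buildModel(surface=PhotoScan.Arbitrary)
    chunk.buildTexture()
    doc.save("chair.psz")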

This is an example of a textured 3D mesh created using the second method:

Photogrammetry: 3D MESHES OF INTERIORS

Our biggest challenges came from using photogrammetry on the interiors of O’Keeffe’s house.  After several experiments, we believe the lackluster results are due to the following reasons:

1. The presence of objects and furniture that could not be moved.
2. Walls without great surface detail.
3. Computational problems: the processing speed of our computers.

We unpack each of the problems we encountered below:

PRESENCE OF OBJECTS/FURNITURE

Ideally, the room should be free of objects and contain limited furniture (in particular, no tables or chairs) for two main reasons. First, photogrammetry requires taking pictures at a constant distance, and furniture can restrict movement. Second, it proved problematic to shoot behind and underneath objects.
In particular, we had several issues with the Indian Room: a big table located almost in the middle of the room limited our movements, and one wall was occupied by a large shelf. The meshes presented many holes, especially under the shelf, and the area under and behind the table was completely distorted. In addition, every element in the scene must remain consistent from the beginning to the end of the shoot; no objects can be moved. For example, we inadvertently moved a big pillow while shooting the Indian Room, and the resulting meshes were distorted because the pillow appeared in different positions and did not have a defined shape.

One wall of the Indian Room presented many problems, as there was a huge shelf with several objects on it. The meshes display multiple holes, and some of the objects are incomplete:

LIGHT & LACK OF SURFACE DETAIL

The light must be as uniform as possible when doing photogrammetry, and it is important not to overexpose the pictures.  Reflective surfaces, such as the windows, are particularly problematic for the software when it tries to create 3D meshes, so we tried both covering the windows and masking them out in the software.  Another issue we dealt with was walls lacking surface detail.  We placed Dennison dots on the walls to create reference points, though most of the meshes still came out distorted or incomplete.
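The masking step can also be scripted. Below is a hedged sketch of importing per-photo window masks before matching; it assumes the PhotoScan Pro 1.x importMasks call (argument names differ across versions), and the mask folder is hypothetical.

    # Hedged sketch: load per-photo masks painted over the windows,
    # assuming PhotoScan Pro 1.x; the path template is hypothetical and
    # "{filename}" is expanded by PhotoScan to each photo's name.
    import PhotoScan

    chunk = PhotoScan.app.document.chunk
    chunk.importMasks(path="masks/{filename}_mask.png",
                      source=PhotoScan.MaskSourceFile)

    # Masked regions are then ignored during photo alignment.
    chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy, filter_mask=True)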

Here is an example of a mesh from an overexposed wall with little surface detail:

The meshes appear incomplete, and when we added the texture the result was unrecognizable. Also, on the first shoot we had problems creating clean meshes of the transition from the wall to the ceiling. It is important to keep the lens fixed and parallel to the floor, without turning or rotating the camera. To solve this issue, during the second shoot we photographed the walls and the ceiling independently, which was much more successful.

Here is an example from the first time we photographed O’Keeffe’s kitchen, when we used the incorrect method of rotating the camera to go from wall to ceiling:

COMPUTATIONAL PROBLEMS

Finally, it is important to note that creating 3D meshes requires powerful machines.  The meshes (for a possible virtual tour) require significant memory.  We have also noticed that PhotoScan does not work well with certain graphics cards.  During our tests we used different machines, and some repeatedly crashed.  Computers with incompatible graphics cards produced distorted meshes or, in some cases, would not show the texture of the mesh.

For example, here is one wall of the Indian Room, with and without texture, processed on a machine with a non-optimized graphics card:
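When a card misbehaves, one workaround is to turn GPU processing off from the Python console. This is a hedged sketch assuming the PhotoScan Pro 1.x API; the attribute and call shown here may be named differently in other versions.

    # Hedged sketch, assuming PhotoScan Pro 1.x: list the detected GPUs
    # and disable them so processing falls back to the CPU.
    import PhotoScan

    for device in PhotoScan.app.enumGPUDevices():
        print(device["name"])

    # gpu_mask is a bitmask of enabled devices: 1 enables the first
    # card, 0 disables all of them (a useful fallback when the card
    # produces distorted or untextured meshes).
    PhotoScan.app.gpu_mask = 0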

We are now trying to create 3D meshes of single walls, then clean and merge them using Blender or MeshLab. Even after filling the holes, the meshes still lack quality and detail and remain distorted in places, so we are working on smoothing surfaces, moving vertices, and simplifying the meshes.
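As an illustration of that cleanup pass, here is a rough Blender (bpy) sketch that fills small holes, smooths, and decimates a single wall mesh. It assumes an older Blender with the legacy OBJ import/export operators enabled, and the file names are hypothetical.

    # Rough cleanup sketch for one wall mesh in Blender (bpy); assumes
    # the legacy OBJ operators and hypothetical file names.
    import bpy
    import bmesh

    bpy.ops.import_scene.obj(filepath="indian_room_wall.obj")
    obj = bpy.context.selected_objects[0]

    # Fill the small holes left by occluded areas (shelf, table legs).
    bm = bmesh.new()
    bm.from_mesh(obj.data)
    bmesh.ops.holes_fill(bm, edges=bm.edges, sides=8)
    bm.to_mesh(obj.data)
    bm.free()

    # Smooth distorted regions, then decimate so the mesh stays light
    # enough for a possible virtual tour.
    smooth = obj.modifiers.new(name="Smooth", type='SMOOTH')
    smooth.factor = 0.5
    smooth.iterations = 10
    decimate = obj.modifiers.new(name="Decimate", type='DECIMATE')
    decimate.ratio = 0.5  # keep roughly half of the faces

    # OBJ export applies modifiers by default.
    bpy.ops.export_scene.obj(filepath="indian_room_wall_clean.obj")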


4 thoughts on “3D Surrogates of Furniture and Interiors”

  1. Hi folks,
    I am currently involved in a project with aims very similar to your imaging project, but mine is focused on some rock art sites in Spain. I have just discovered your blog and I am astonished by the similarity of our approaches to the preservation of cultural heritage sites.
    My main concern is monitoring changes over time in the rock paintings and the walls of these rock shelters, so accuracy of the rendition is a must for me. I have been exploring PhotoScan, PhotoModeler Scanner, VisualSFM, MeshLab, and Blender (that’s how I found you). I prefer this approach to laser scanning or structured-light scanning because it is generally cost-effective, but I am worried about time. I am getting good results with photogrammetry software, mainly PhotoScan Pro, but it is so time consuming… A whole shelter takes me about two or three full days to finish, before I even begin decimating, filling gaps, merging meshes, and so on. So my question is: what computers are you using? What are their technical specifications? I have watched one of your renderings of one of the rooms and it looks beautiful, so I would like to know how long it takes you.
    Thanks in advance

    • Hi Juan. I apologize for the delayed response. Photogrammetry is definitely more affordable than laser scanning but, like you said, far more time consuming. Two or three whole days sounds about right; processing the meshes can take up to 10 hours depending on the target quality you’re seeking. I do know that half of the processing in PhotoScan required 24 GB of RAM and the other half used the video card. Meshes are heavy, so a high-end video card is necessary, but it definitely takes time to put everything together in MeshLab or Blender. You must be speaking of the “roofless room,” as it was the best surrogate we got last summer. The target quality of that mesh was set to low, and we processed the room as one piece without merging meshes. I’ll ask Dale to post the specs of the laptop we used. Thanks for the question, and good luck with your project!


  2. Hello Juan. I’m a Spanish resident. I’m working with 123D Catch from Autodesk and MeshLab, using an ordinary laptop with about 10 GB of RAM, and I can have results in about 5 minutes.

    My field is architectural reconstruction and game design; I work with Revit and Max, and I make real-time 3D presentations in Lumion and CryEngine. If you think we can collaborate, send me an email at dragos_coste@yahoo.com. I’m also unemployed 🙂

    I have some of my portfolio here : http://archinect.com/people/project/54164830/various-stills-and-renderings/54165440

  3. Hey guys,

    This is a great project, and your real-world experiences are very useful. I have been going through several tools for 3D reconstruction, but mostly focused on smaller-scale items (like people). After several attempts with PhotoScan, 123D Catch, etc., I’ve started using Kinect Fusion.

    The results are great, but the original work has two limitations:
    1) Limited reconstruction volume
    2) Doesn’t work outdoors or in strong sunlight (infrared light washes out the sensor data)

    Since you are doing indoor scans right now, (2) shouldn’t be that big of a deal. (1) is, but there is an open-source variant from the PCL team called Large Scale Kinfu. You’ll need someone able to build/compile it to make it work, but the videos are impressive: http://pointclouds.org/documentation/tutorials/using_kinfu_large_scale.php

    Good luck!
