3D Surrogates of Furniture and Interiors

During the course of the semester we made two trips to O’Keeffe’s house in Abiquiu. The first trip focused on using photogrammetry on interior spaces; the second included the 3D documentation of furniture. We selected two chairs from the home, for which we tested two shooting methods:

METHOD 1: three pictures per position (taken at 90, 180, and 270 degrees)
METHOD 2: Lens calibration method using Agisoft’s Lens application: http://agisoft.ru/products/lens/

The results with both methods were successful. However, the lens calibration method reduced the overall number of images (we did not need to take pictures at 90, 180, and 270 degrees, just one orientation), thus shortening the processing time, which had been quite long with large “chunks”. The resulting meshes for single objects are quite impressive. The lens calibration created a distortion in just one case, probably because we used a reflector over the object, and the white fabric laid under the object to protect it ended up partially covering the bottom of the object itself.

This is an example of 3D meshes with texture using the second method:

Photogrammetry: 3D MESHES OF INTERIORS

We met the biggest challenges with the use of photogrammetry in the interiors of O’Keeffe’s house.  After several experiments we feel the lackluster results are due to the following reasons:

1. The presence of objects and furniture that could not be moved.
2. Walls without great surface detail.
3. Computational problems-the processing speed of our computers.

We will further unpack each problem we encountered below:


The room should be free of objects and have limited furniture – in particular no tables and chairs – for two main reasons. First, photogrammetry requires taking pictures at a constant distance, and furniture can limit movement. Second, it proved problematic to shoot behind and underneath objects.
In particular, we had several issues with the Indian Room: a big table sat almost in the middle of the room, limiting our movements, and one wall was occupied by a big shelf. The meshes presented a lot of holes, especially under the shelf, and the area under and behind the table was completely distorted. In addition, every element must remain consistent from the beginning to the end of the shoot; we cannot move any objects. For example, we unobtrusively moved a big pillow while shooting the Indian Room, and the resulting meshes were distorted because the pillow appeared in different positions and did not have a defined shape.

One wall of the Indian Room presented many problems, as it held a huge shelf with several objects on it. The meshes display multiple holes, and some of the objects are incomplete:


The light must be as uniform as possible when doing photogrammetry, and it is important not to overexpose the pictures. Reflective surfaces, such as the windows, are particularly problematic for the software when it tries to create 3D meshes, so we tried both covering the windows and masking them out in the software to avoid this problem. Another issue we dealt with was walls lacking surface detail. We placed Dennison dots on the walls to create reference points, though most of the meshes still came out distorted or incomplete.
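As a rough illustration of the exposure problem, a pre-flight check like the following can flag frames that are likely blown out before they are fed to the software. The thresholds here are our own illustrative values, not anything PhotoScan prescribes:

```python
# Hypothetical pre-flight check: flag frames likely to be overexposed.
# clip_value and max_clipped_fraction are illustrative, not measured values.

def is_overexposed(pixels, clip_value=250, max_clipped_fraction=0.05):
    """Return True if too many 8-bit luminance values sit near pure white.

    pixels: flat iterable of 0-255 luminance values for one frame.
    """
    pixels = list(pixels)
    if not pixels:
        return False
    clipped = sum(1 for p in pixels if p >= clip_value)
    return clipped / len(pixels) > max_clipped_fraction

# A frame where 10% of the pixels are blown out gets rejected.
frame = [120] * 90 + [255] * 10
print(is_overexposed(frame))  # True
```

A check like this would have caught the overexposed wall shots before a full processing run, rather than after hours of mesh generation.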

Here is an example of a mesh from a wall with little surface detail that was overexposed:

The meshes appear incomplete, and when we added the texture the result was unrecognizable. Also, in the first shoot we had problems creating clear meshes of the transition from the wall to the ceiling. In this case, it is important to keep the lens fixed and parallel to the floor, without turning or rotating the camera. To solve this issue, during the second shoot we photographed the walls and the ceiling independently. This was much more successful.

Here is an example from the first time we photographed O’Keeffe’s kitchen, where we used the incorrect method, rotating the camera to move from wall to ceiling.


Finally, it is important to note that the process of creating 3D meshes requires powerful machines. The meshes (for a possible virtual tour) require significant memory. We have also noticed that PhotoScan does not work well with certain graphics cards. During our tests we used different machines, and some repeatedly crashed. The computers with incompatible or non-dedicated graphics cards produced distorted meshes or, in some cases, would not display the texture of the mesh.

For example, here is one wall of the Indian Room with and without texture, using a non-optimized graphics card:

We are now trying to create 3D meshes of single walls, then clean and merge them using Blender or MeshLab. Even after filling the holes, the meshes still lack quality and detail and remain distorted in places, so we are working on smoothing surfaces, moving vertices, and simplifying the meshes.
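As a simplified sketch of what the smoothing step does (not Blender’s or MeshLab’s actual implementation), Laplacian smoothing pulls each vertex toward the average of its neighbors, flattening noise and spikes over a few iterations:

```python
# Illustrative sketch of Laplacian smoothing, the family of operation that
# Blender's Smooth modifier and MeshLab's smoothing filters perform.
# The mesh representation is deliberately minimal: a vertex list plus an
# adjacency list (neighbor indices for each vertex).

def laplacian_smooth(vertices, neighbors, factor=0.5, iterations=10):
    """vertices: list of (x, y, z); neighbors: list of neighbor-index lists.

    Each iteration moves every vertex a fraction `factor` of the way
    toward the centroid of its neighbors.
    """
    verts = [list(v) for v in vertices]
    for _ in range(iterations):
        new = []
        for i, v in enumerate(verts):
            if not neighbors[i]:
                new.append(v[:])  # isolated vertex: leave it alone
                continue
            # Centroid of the neighboring vertices.
            avg = [sum(verts[j][k] for j in neighbors[i]) / len(neighbors[i])
                   for k in range(3)]
            # Blend this vertex toward that centroid.
            new.append([v[k] + factor * (avg[k] - v[k]) for k in range(3)])
        verts = new
    return verts

# A noisy spike (z = 5) between two flat neighbors gets pulled back down.
verts = [(0, 0, 0), (1, 0, 5), (2, 0, 0)]
nbrs = [[1], [0, 2], [1]]
smoothed = laplacian_smooth(verts, nbrs, factor=0.5, iterations=20)
print(round(smoothed[1][2], 2))  # spike height drops well below 5
```

The trade-off, which we see in practice, is that aggressive smoothing also erodes legitimate surface detail, which is why we combine it with manual vertex moves rather than relying on it alone.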

Agisoft Lens – Lens Calibration Without Three Orientations

The class shot photos of the Kitchen, Indian Room, and Pantry during our first visit to Georgia O’Keeffe’s house, using the previously established method in which we shot one photo in horizontal orientation and two photos in vertical orientation, with 60 to 80 percent overlap. Additionally, all corner surfaces of the rooms were fanned using transitions at 15-degree increments, with the same three-shot technique. Upon returning and processing the meshes, we encountered the following issues:

1. The number of photos required to create a point cloud proved prohibitive, due to the size of the rooms.
2. Available processing power, even on the most robust computer workstation the Media Arts program possesses, proved slow and yielded inconsistent results.

Attempting to find a resolution to the issue, Mariano researched methods outlined in the PhotoScan user’s guide. This research produced the following results.

The number of photos required for the three-photos-per-station method was necessary to compensate for inherent lens distortion. The vertical shots allowed PhotoScan to generate the information necessary to compute accurate camera positions in the point cloud.

PhotoScan allows the user to customize settings based on lens calibration. The process involves photographing a high-contrast display at 60 to 80 percent overlap in a horizontal orientation. The resulting data can be saved and loaded into the program for the specific lens used to shoot the object or location.
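What the calibration estimates is a set of distortion coefficients for the lens. As an illustration of what those coefficients describe (this is the standard two-coefficient Brown-Conrady radial model, not PhotoScan’s internal code, and the k1/k2 values are made up), a point’s distorted position depends on its distance from the image center:

```python
# Sketch of the Brown-Conrady radial distortion model that lens calibration
# estimates coefficients for. k1 and k2 below are invented example values.

def apply_radial_distortion(x, y, k1, k2):
    """Map an ideal normalized image point (x, y) to its distorted position
    under a two-coefficient radial model."""
    r2 = x * x + y * y                      # squared distance from center
    scale = 1 + k1 * r2 + k2 * r2 * r2      # radial scaling factor
    return x * scale, y * scale

# A point near the image edge shifts noticeably (barrel distortion pulls
# it inward when k1 is negative); a point near the center barely moves.
print(apply_radial_distortion(0.8, 0.6, k1=-0.2, k2=0.05))
print(apply_radial_distortion(0.01, 0.01, k1=-0.2, k2=0.05))
```

Once these coefficients are known for a lens, the software can undo the distortion mathematically, which is what makes the vertical compensation shots unnecessary.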

The calibration allows the photographer to shoot two-thirds fewer photos, eliminating the vertical shots of the original method. Instead, the software compensates for lens distortion based on values gathered during calibration.

The class used the lens calibration method during the second visit, re-photographing the rooms from the initial visit and adding O’Keeffe’s studio. Additionally, we shot various objects using the same method. The shoot proved more efficient in both time and resources. The following results were observed.

Objects photographed using the lens calibration method produced consistent point clouds and accurate representations of the objects photographed.

Rooms photographed using the same method produced less consistent results: point clouds were less accurate, with holes and inconsistent representations of surfaces.

Subsequent research and searches for interior spaces successfully rendered with photogrammetry proved futile. The disparity between successfully created exteriors and objects on the one hand and interiors on the other led us to reason that 3D scanning and virtual panoramas might prove more successful than photogrammetry. To this end, we constructed a Pan-O-Head to explore the results this method would produce. Results to come.