3D Surrogates of Furniture and Interiors

During the course of the semester we made two trips to O’Keeffe’s house in Abiquiu.  The first trip focused on using photogrammetry on interior spaces.  The second trip included the 3D documentation of furniture. We documented two chairs from the home, for which we tested two shooting methods:

METHOD 1: three pictures per position (taken at 90, 180, 270 degrees)
METHOD 2: Lens calibration method using Agisoft’s Lens application: http://agisoft.ru/products/lens/

The results with both methods were successful.  However, the lens calibration method reduced the overall number of images (we did not need to take pictures at 90, 180, and 270 degrees, just one orientation), thus shortening the processing time (which had been quite long with large “chunks”).  The resulting meshes for single objects are pretty amazing. The lens calibration created a distortion in just one case, probably because the reflector placed over the object and the white fabric laid under the object to protect it ended up partially covering the bottom of the object itself.

This is an example of 3D meshes with texture using the second method:

Photogrammetry: 3D Meshes of Interiors

We met the biggest challenges when using photogrammetry in the interiors of O’Keeffe’s house.  After several experiments, we feel the lackluster results are due to the following reasons:

1. The presence of objects and furniture that could not be moved.
2. Walls without great surface detail.
3. Computational problems: the processing speed of our computers.

We will further unpack each of the problems we encountered below:

PRESENCE OF OBJECTS/FURNITURE

The room should be free of objects and have limited furniture – in particular no tables and chairs – for two main reasons. First, photogrammetry requires taking pictures at a constant distance from the subject, and furniture can limit movement. Second, it proved problematic to shoot behind and underneath objects.
In particular, we had several issues with the Indian Room: a big table located almost in the middle of the room limited our movement, and one wall was occupied by a big shelf. The meshes presented a lot of holes, especially under the shelf, and the area under and behind the table was completely distorted. In addition, everything in the scene must stay consistent from the beginning to the end of the shoot; we cannot move any objects. For example, we unobtrusively moved a big pillow while shooting the Indian Room, and the resulting meshes were distorted because the pillow appeared in different positions and did not have a defined shape.

One wall of the Indian Room presented many problems, as it held a huge shelf with several objects on it. The meshes display multiple holes and some of the objects are incomplete:

LIGHT & LACK OF SURFACE DETAIL

The light must be as uniform as possible when doing photogrammetry, and it is important not to overexpose the pictures.  Reflective surfaces, such as the windows, are particularly problematic for the software when trying to create 3D meshes, so we tried both covering the windows and masking them out in the software to avoid this problem.  Another issue we dealt with was walls lacking surface detail.  We placed Dennison dots on the walls to attempt to create reference points, though most of the meshes still resulted in distorted or incomplete information.

Here is an example of a mesh from a wall with little surface detail that was overexposed:

The meshes appear incomplete, and when we added the texture the result was unrecognizable. Also, in the first shoot we had problems creating clear meshes of the transition from the wall to the ceiling. In this case it is important to keep the lens fixed and parallel to the floor, without turning or rotating the camera. To solve this issue, during the second shoot we photographed the walls and then the ceiling independently. This was much more successful.

Here is an example from the first time we photographed O’Keeffe’s kitchen, when we used the incorrect method, rotating the camera to go from wall to ceiling.

COMPUTATIONAL PROBLEMS

Finally, it is important to note that the process of creating 3D meshes requires powerful computers.  The meshes (for a possible virtual tour) require significant memory.  We have also noticed that PhotoScan does not work well with certain graphics/video cards. During our tests we used different machines, and some repeatedly crashed.  The computers with non-compatible or non-dedicated graphics cards produced distorted meshes or, in some cases, would not show the texture of the mesh.

For example, here is one wall of the Indian Room, with and without texture, processed using a non-optimized graphics card:

We are now trying to create 3D meshes of single walls, then clean and merge them using Blender or MeshLab. Even after filling the holes, the meshes still lack quality and detail and remain distorted in some places, so we are working on smoothing surfaces, moving vertices, and simplifying the meshes.
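As a rough illustration of that cleanup pass, here is how the hole filling, smoothing, and simplification could be scripted in Blender’s Python console. This is only a minimal sketch: the file name is hypothetical, and the operator and property names are from the Blender 2.6x/2.7x API and may differ in other versions.

    # Minimal sketch of the cleanup pass on one exported wall mesh (file name is hypothetical).
    import bpy

    bpy.ops.import_mesh.ply(filepath="indian_room_wall_01.ply")
    obj = bpy.context.selected_objects[0]
    bpy.context.scene.objects.active = obj

    # Fill small holes left by occluded areas (for example, under the shelf).
    bpy.ops.object.mode_set(mode='EDIT')
    bpy.ops.mesh.select_all(action='SELECT')
    bpy.ops.mesh.fill_holes(sides=8)      # only close holes bounded by 8 edges or fewer
    bpy.ops.object.mode_set(mode='OBJECT')

    # Smooth out noisy vertices, then simplify the mesh before merging walls.
    smooth = obj.modifiers.new(name="Smooth", type='SMOOTH')
    smooth.iterations = 10
    decimate = obj.modifiers.new(name="Decimate", type='DECIMATE')
    decimate.ratio = 0.25                 # keep roughly a quarter of the faces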

Agisoft Lens – Lens Calibration Without Three Orientations

The class shot photos of the Kitchen, Indian Room, and Pantry during our first visit to Georgia O’Keeffe’s house, using the previously established method in which we shot one photo in horizontal orientation and two photos in vertical orientation with 60 to 80 percent overlap. Additionally, all corner surfaces of the rooms were fanned in 15-degree increments, using the same three-shot technique. Upon returning and processing the meshes, we encountered the following issues.

The number of photos required to create a point cloud proved prohibitive, due to the size of the rooms.

Available processing power, even in the most robust workstation the Media Arts program possesses, proved insufficient: processing was slow and yielded inconsistent results.

Attempting to find a resolution to the issue, Mariano researched methods outlined in the PhotoScan user’s guide. This research produced the following results.

The number of photos required by the three-photo-per-station method was necessary to compensate for inherent lens distortion. The vertical shots allowed PhotoScan to generate the information necessary to create accurate camera positions in the point cloud.

PhotoScan allows the user to customize settings based on a lens calibration. The process involves photographing a high-contrast display at 60 to 80 percent overlap in horizontal orientation. The resulting data can be saved and input into the program for the specific lens used to shoot the object or location.

The calibration allows the photographer to shoot two-thirds fewer photos, eliminating the vertical shots of the original method. Instead, the software compensates for lens distortion based on values gathered during calibration.
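For reference, this is roughly how a saved calibration might be applied to a chunk from PhotoScan’s built-in Python console. It is only a sketch under assumptions: the file name is hypothetical, and the exact class and attribute names depend on the PhotoScan version.

    # Minimal sketch: apply a calibration file exported from Agisoft Lens to the active chunk.
    import PhotoScan

    chunk = PhotoScan.app.document.chunk
    sensor = chunk.sensors[0]            # the sensor/lens shared by the photos in this chunk

    calib = PhotoScan.Calibration()
    calib.load("lens_calibration.xml")   # hypothetical file saved from Agisoft Lens

    sensor.user_calib = calib            # use the pre-computed calibration values
    sensor.fixed = True                  # keep them fixed instead of re-estimating distortion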

The class utilized the lens calibration method during the second visit, re-photographing the rooms from the initial visit and adding O’Keeffe’s studio. Additionally, we shot various objects using the same method. The shoot proved more efficient in both time and resources. The following results were observed.

Objects photographed using the lens calibration method produced consistent point clouds and accurate representations of the objects photographed.

Rooms photographed using this same method produced less consistent results: point clouds were less accurate, with holes and inconsistent representations of surfaces.

Subsequent research and searches for interior spaces successfully rendered with photogrammetry proved futile. The disparity between successfully created exteriors and objects and successfully created interiors was notable, leading us to reason that 3D scanning and virtual panoramas, rather than photogrammetry, might prove more successful. To this end, we constructed a Pan-O-Head to explore the results this method would produce. Results to come.

Capture Process

When capturing images at close range with a wide-angle lens, depth of field may be limited and focal distance may be critical for recording condition details. Holding a relatively constant distance from the subject plays an important role in meaningful data capture.
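As a back-of-the-envelope way to sanity-check the F-stop chosen in step 2 below, the sketch here estimates the in-focus range from assumed shooting values using the standard thin-lens depth-of-field formulas; none of these numbers come from the actual shoots.

    # Rough depth-of-field check; every value here is an illustrative assumption.
    f_mm = 24.0      # focal length of the wide-angle lens
    N    = 11.0      # candidate f-stop
    u_mm = 1500.0    # camera-to-subject distance (1.5 m)
    c_mm = 0.019     # circle of confusion for an APS-C size sensor

    # Standard thin-lens depth-of-field formulas.
    H_mm = f_mm ** 2 / (N * c_mm) + f_mm                       # hyperfocal distance
    near_mm = u_mm * (H_mm - f_mm) / (H_mm + u_mm - 2 * f_mm)  # near limit of sharp focus
    far_mm  = u_mm * (H_mm - f_mm) / (H_mm - u_mm)             # far limit (valid while u_mm < H_mm)
    print("in focus from roughly %.0f mm to %.0f mm" % (near_mm, far_mm))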

1) Camera distance from subject, choice of lens, and Dennison Dots should be set up as per the previous step.

2) Establish an F-stop that provides full depth of field focus.

3) Set the focus and the depth of field. Once the focus has been established, tape the lens so that the focus stays set and does not accidentally change.

4) Frame your shot starting with your camera horizontal.  Establish a point on the subject located at the center of the frame.  Centering on this point for each shot, take three consecutive pictures, changing the orientation of your camera for each picture.  One orientation is horizontal (180º), one is a vertical orientation with the camera rotated to 90º, and the final orientation is the opposite vertical position with the camera rotated to 270º.  This changing of orientation allows the software to correct for lens distortion.

5) Using either your center point of focus or the Dennison Dots as a reference, move along your subject by 30% of the frame width.  If done correctly, about 70% of your previous frame will be included in your new frame and only 30% of new subject will be introduced into the next series of pictures (a short worked example follows step 6).

6) Repeat steps 4-5 until the entire subject has been photographed.
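To make the spacing in steps 4 and 5 concrete, here is the worked example mentioned above; the frame width and subject length are assumptions, not measurements taken at the house.

    # Worked example of steps 4-5; frame width and subject length are assumed values.
    frame_width_m    = 1.2   # horizontal coverage of one frame at the chosen distance
    subject_length_m = 6.0   # total horizontal length of the wall or object

    step_m = 0.30 * frame_width_m            # advance 30% of the frame each move (~70% overlap)
    stations = int((subject_length_m - frame_width_m) / step_m) + 1
    photos = stations * 3                    # three orientations (180º, 90º, 270º) per station

    print("camera positions:", stations)     # 14 positions for these assumed dimensions
    print("total photos:", photos)           # 42 photos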

Capture Setup

1) The first step is to consider your subject.  What is it you are trying to study?  Whether you are trying to capture cracks, dents, minute changes in topology, or macroscopic changes in topology, the absolute smallest detail you need to capture can be no smaller than 25 pixels.  Taking your camera’s resolution into consideration, calculate whether the details you are trying to study span at least 25 pixels.  This can be done by taking test shots, then bringing those test shots into a photo editor such as Photoshop, or even a free paint program such as GIMP or Microsoft’s Paint.  All of these programs have the ability to measure distance in pixels.

If there is not sufficient detail, either move closer to the subject, change lenses, or do both so that optimal resolution is achieved (a short worked example follows step 3 below).

One row of Dennison dots.

2) Once your distance from the subject and camera lens are chosen, a horizontal line of known length needs to be placed in the frame with your subject.  This allows for accurate distance measuring and scaling in the 3D mesh.  A great way to do this is to place Dennison Dots equidistant and level all the way across the subject.  Not only do Dennison Dots placed onto the subject help in scaling and measuring along the mesh, but they also help in the capture process by giving visual cues for how far to move along the subject.  For this to be effective, the distance between dots should be roughly one third of the total horizontal distance of your photo.

Joey and Greg capture the outside of the kitchen.

3) The distance from the subject to the camera and the distance in between your Dennison Dots should now be established.  Pick a line that will run the horizontal length of your subject and place the Dennison Dots along this line.  Make sure that these dots are not only equidistant, but are also perfectly level.
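Pulling steps 1 and 2 together, this is the worked example mentioned in step 1: it checks whether a detail of a given size will span at least 25 pixels and derives a dot spacing of about one third of the frame. Every number here is an assumption used only for illustration.

    # Illustrative check for steps 1 and 2; all dimensions are assumed, not measured.
    image_width_px = 5184      # horizontal resolution of the camera (e.g. an 18 MP DSLR)
    frame_width_mm = 900.0     # real-world width covered by one frame at the test distance
    detail_size_mm = 5.0       # smallest crack or dent that needs to be studied

    mm_per_px = frame_width_mm / image_width_px
    detail_px = detail_size_mm / mm_per_px
    print("detail spans %.1f px (needs at least 25)" % detail_px)

    # Step 2: space the Dennison Dots roughly one third of the frame apart.
    dot_spacing_mm = frame_width_mm / 3.0
    print("place Dennison Dots about %.0f mm apart" % dot_spacing_mm)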