3D Surrogates of Furniture and Interiors

During the course of the semester we made two trips to O’Keeffe’s house in Abiquiu.  The first trip focused on using photogrammetry on interior spaces.  The second trip included the 3D documentation of furniture. We tested two shooting methods on two chairs from the home:

METHOD 1: three pictures per position (taken at 90, 180, 270 degrees)
METHOD 2: Lens calibration method using Agisoft’s Lens application: http://agisoft.ru/products/lens/

The results with both methods were successful.  However, the lens calibration method limited the overall number of images (we did not need to take pictures at 90, 180, and 270 degrees, just one orientation), thus lessening the processing time (which has been quite lengthy with large “chunks”).  The resulting meshes for single objects are pretty amazing. The lens calibration method created a distortion in just one case, probably because the reflector placed over the object, and the white fabric laid under the object to protect it, ended up partially covering the bottom of the object itself.

This is an example of 3D meshes with texture using the second method:

Photogrammetry: 3D MESHES OF INTERIORS

We met the biggest challenges with the use of photogrammetry in the interiors of O’Keeffe’s house.  After several experiments we feel the lackluster results are due to the following reasons:

1. The presence of objects and furniture that could not be moved.
2. Walls without great surface detail.
3. Computational problems: the processing speed of our computers.

We will further unpack each problem we encountered below:

PRESENCE OF OBJECTS/FURNITURE

The room should be without objects and with limited furniture – in particular without tables and chairs – for two main reasons. First, photogrammetry requires taking pictures at a constant distance, and furniture can limit movement. Second, it proved problematic to shoot behind and underneath objects.
In particular, we had several issues with the Indian Room: a big table located almost in the middle of the room limited our movements, and one wall was occupied by a big shelf. The meshes presented a lot of holes, especially under the shelf, and the area under and behind the table was completely distorted. Everything in the scene must also remain consistent from the beginning to the end of the shoot; we cannot move any objects. For example, we inadvertently moved a big pillow while shooting the Indian Room, and the resulting meshes were distorted because the pillow appeared in different positions and did not have a defined shape.

One wall of the Indian Room presented many problems, as there was a huge shelf with several objects on it. The meshes display multiple holes and some of the objects are incomplete:

LIGHT & LACK OF SURFACE DETAIL

The light must be as uniform as possible when doing photogrammetry.  It is important not to overexpose the pictures.  Reflective surfaces, such as the windows, are particularly problematic for the software when trying to create 3D meshes, so we tried both covering the windows and masking them out in the software to avoid this problem.  Another issue we dealt with was walls lacking surface detail.  We placed Dennison dots on the walls to create reference points, though most of the meshes still came out distorted or incomplete.

Here is an example of a mesh from a wall with little surface detail that was overexposed:

The meshes were incomplete, and when we added the texture the result was unrecognizable. Also, in the first shoot we had problems creating clear meshes of the transition from the wall to the ceiling. In this case, it is important to keep the lens fixed and parallel to the floor, without turning or rotating the camera. To solve this issue, during the second shoot we photographed the walls and then the ceiling independently. This was much more successful.

Here is an example from the first time we photographed O’Keeffe’s kitchen, where we used the incorrect method, rotating the camera to go from wall to ceiling.

COMPUTATIONAL PROBLEMS

Finally, it is important to note that the process of creating 3D meshes requires powerful machines.  The meshes (for a possible virtual tour) require significant memory.  Also, we’ve noticed that PhotoScan does not work well with certain graphics cards. During our tests we used different machines: some repeatedly crashed.  The computers with incompatible graphics cards produced distorted meshes or, in some cases, would not show the texture of the mesh.

For example, one wall of the Indian Room with and without texture using a non-optimized graphics card:

We are now trying to create 3D meshes of single walls, then clean and merge them using Blender or MeshLab. Even after filling the holes, the meshes still lack quality and detail, and remain distorted in places, so we are working on smoothing surfaces, moving vertices, and simplifying the meshes.

Agisoft Lens – Lens Calibration Without Three Orientations

The class shot photos of the Kitchen, Indian Room, and Pantry during our first visit to Georgia O’Keeffe’s house, using the previously established method in which we shot one photo in horizontal orientation and two photos in vertical orientation, with 60 to 80 percent overlap. Additionally, all corner surfaces of the rooms were fanned using transitions at 15-degree increments, with the same three-shot technique. Upon returning and processing the meshes, we encountered the following issues.

The number of photos required to create a point cloud proved prohibitive, due to the size of the rooms.

Available processing power, even on the most robust workstation the Media Arts program possesses, proved slow and yielded inconsistent results.

Attempting to find a resolution to the issue, Mariano researched methods, as outlined by the PhotoScan user’s guide. This research produced the following results.

The number of photos required by the three-photos-per-station method was necessary to compensate for inherent lens distortion. The vertical shots allowed PhotoScan to generate the information necessary to compute accurate camera positions in the point cloud.

PhotoScan allows the user to customize settings based on a lens calibration. The process involves photographing a high-contrast display at 60 to 80 percent overlap in a horizontal orientation. The resulting data can be saved and loaded into the program for the specific lens used to shoot the object or location.

The calibration allows the photographer to shoot 2/3 fewer photos, eliminating the vertical shots of the original method. Instead, the software compensates for lens distortion based on values gathered during calibration.
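To make the savings concrete, here is a quick back-of-the-envelope sketch; the 40-station figure is a made-up example, not a count from one of our shoots:

```python
# Rough photo-count comparison between the two methods.
# The number of camera stations (40) is a hypothetical example.
stations = 40

three_shot = stations * 3   # horizontal + two verticals per station
calibrated = stations * 1   # a single orientation per station

print(f"Three-shot method: {three_shot} photos")    # 120
print(f"Calibrated method: {calibrated} photos")    # 40
print(f"Saved: {1 - calibrated / three_shot:.0%}")  # 67%
```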

The class utilized the lens calibration method during the second visit, re-photographing the rooms from the initial visit and adding O’Keeffe’s studio. Additionally, we shot various objects using the same method. The shoot proved more efficient in both time and resources. The following results were observed.

Objects photographed using the lens calibration method produced consistent point clouds and accurate representations of the objects photographed.

Rooms photographed using this same method produced less consistent results. Point clouds were less accurate, with holes and inconsistent representations of surfaces.

Subsequent research and searches for interior spaces successfully rendered with photogrammetry proved futile. The disparity between the many successfully created exteriors and objects and the scarcity of successfully created interiors led us to reason that 3D scanning and virtual panoramas, rather than photogrammetry, might prove more successful. To this end, we constructed a Pan-O-Head to explore the results this method would produce. Results to come.

The Evolution of Our Photogrammetry Workflow

Since we began working with photogrammetry we have developed a workflow that we follow every time we’re on site. That workflow has changed and developed considerably over the past several weeks. The process has become much smoother, and what used to take 3-4 hours to shoot now takes only 1.5-2 hours. Here are some of the changes we have made.

Topics covered within this post:

1. Tripod – Do you need it?
2. Creating Accurate Measurements
3. Image Composition
4. 60% Image Overlap
5. Calibration Sequence
6. Photogrammetry Around Corners or Objects

1. Tripod – Do you need it?

From what we have discovered, not really. Photogrammetry captures require a FIXED FOCUS and a FIXED APERTURE for all three capture orientations (90°, 180° and 270°). This means that your depth of field will be non-negotiable after you begin shooting; the only thing you can choose is your exposure time. You want an F-stop of 11 or below to minimize distortion, but high enough that your depth of field keeps everything in the Z dimension (forward and back) in focus.  If you have enough ambient or artificial light to keep your shutter speed above 1/40th of a second, a tripod will not be necessary, and hand-holding the camera, even with the inevitable shifts in camera orientation and slight distance irregularities, can work very well.

But remember: moving closer to or further from the subject than the limits of your depth of field allow, or falling ambient light levels, can result in unfocused regions. Blurry pixels contribute nothing in 3D or in 2D.  Marking the optimal distance from the subject with a string or chalk line, taping your lens focus ring to ensure it does not move, checking your F-stop to be certain you have not moved it, and checking your focus at EVERY capture will save you a lot of reshooting, whether you use a tripod or not.

That said, moving the tripod laterally along a chalk line took up much of our time, especially since we needed to overlap our images by at least 60%. Not only were we making sure the camera was almost always the same distance from the subject, but we also made sure it was level.

Dale adjusting the camera head
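For anyone who wants to sanity-check their settings before a shoot, here is a minimal depth-of-field sketch using the standard hyperfocal-distance approximation. The focal length, f-stop, subject distance, and circle-of-confusion values below are illustrative assumptions, not measurements from our setup:

```python
def depth_of_field(focal_mm, f_stop, subject_mm, coc_mm=0.03):
    """Near/far limits of acceptable focus (thin-lens approximation).

    coc_mm is the circle of confusion; 0.03 mm is a common
    assumption for a full-frame sensor.
    """
    hyperfocal = focal_mm ** 2 / (f_stop * coc_mm) + focal_mm
    near = subject_mm * (hyperfocal - focal_mm) / (hyperfocal + subject_mm - 2 * focal_mm)
    if subject_mm >= hyperfocal:
        far = float("inf")  # the in-focus zone extends to infinity
    else:
        far = subject_mm * (hyperfocal - focal_mm) / (hyperfocal - subject_mm)
    return near, far

# Example: a 24 mm lens at f/11, subject 2 m away (illustrative values).
near, far = depth_of_field(focal_mm=24, f_stop=11, subject_mm=2000)
print(f"In focus from {near / 1000:.2f} m to {far / 1000:.2f} m")
```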

After many working days in the field and discovering just how powerful Agisoft PhotoScan is, we have decided the tripod is unnecessary except in low-ambient-lighting conditions. As long as you walk along a measured line, keeping the distance between the subject and camera consistent, PhotoScan will have no problem creating the 3D mesh. We started handholding the camera a few weeks ago and the results have still been superb. PhotoScan is able to detect where each camera was positioned in relation to the subject AND at what angle. If the camera is angled slightly upward or downward, this doesn’t affect the processing at all; PhotoScan makes up for it and still creates an accurate digital surrogate. Depending on your subject matter, it may still be necessary to use a tripod or monopod, but we haven’t experienced any true need for one yet in photogrammetry.

Processed mesh of the roofless room. Images were captured handheld.

2. Creating Accurate Measurements

If one needs accurate measurements within the mesh of a certain subject, Dennison® dots have proved extremely valuable in this regard. Initially we placed two parallel rows of Dennison dots along our subjects (mainly walls and rooms), with each dot 4 feet away from the next both vertically and horizontally. Seen within the mesh, the dots reveal the exact scale of the subject. Based on the distance between the dots, one is able to measure everything else within the mesh and know its exact dimensions.  If the distance between the dots is exactly 4 feet, or 122 cm, then any distance in the space is some fraction of 122 cm: 20% of the distance is 9.6 inches or 24.4 cm, and 1% is 0.48 inches or 1.22 cm.
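As a sketch of that scaling step, assuming you can measure the dot-to-dot distance in the mesh’s arbitrary units (for example in MeshLab); the 0.85-unit measurement is a hypothetical value:

```python
def cm_per_mesh_unit(dot_spacing_cm, dot_spacing_mesh):
    """Real-world centimetres per mesh unit, from the known dot spacing."""
    return dot_spacing_cm / dot_spacing_mesh

# The dots are 4 ft (122 cm) apart; suppose they sit 0.85 units apart
# in the mesh (a hypothetical measurement).
scale = cm_per_mesh_unit(122.0, 0.85)
span_units = 0.17  # some other distance measured in the mesh
print(f"{span_units} mesh units = {span_units * scale:.1f} cm")  # 24.4 cm
```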

The question becomes: what spatial resolution do you want to be able to resolve? If you need to resolve characteristics as small as 5 mm, then you need to clearly resolve 5 mm across several pixels in your initial RAW capture, and your scale dots should be less than 2 feet apart.  For our first set of 3D condition captures at O’Keeffe’s historic home in Abiquiu, we went from using two rows, two feet apart, to using one row. The only difference it made was making the process quicker.
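A quick way to estimate whether a feature will be resolvable is to compute the real-world size one pixel covers at the subject. The pixel pitch, lens, and distance below are assumed values (roughly a full-frame 21 MP body), not measurements from our shoots:

```python
def ground_sample_distance_mm(pixel_pitch_mm, distance_mm, focal_mm):
    """Real-world size of one pixel at the subject (pinhole model)."""
    return pixel_pitch_mm * distance_mm / focal_mm

# Assumed values: ~6.4 micron pixel pitch, 24 mm lens, subject 2 m away.
gsd = ground_sample_distance_mm(0.0064, 2000, 24)
feature_mm = 5.0
print(f"One pixel covers {gsd:.2f} mm at the subject")               # ~0.53 mm
print(f"A {feature_mm} mm feature spans {feature_mm / gsd:.0f} px")  # ~9 px
```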

3. Image Composition

When capturing our images we try to keep the whole subject tight within the frame. This speeds up the masking step in processing (see tutorial) by removing unnecessary objects and background from the photo.

The subject tight within the frame.

However, this isn’t always practical when working in tighter spaces. We had this experience last week when large cacti blocked our path, and we had to do our best at the start of the shoot to plan ways of moving around them.  We could have moved the whole capture distance way back and then shot at 15-degree increments to cover the areas behind the cacti, but this would have included tons of sky and foreground.  Getting closer to the subject, capturing a smaller area in each shot, and taking more photographs to cover the desired area was the best answer.   A 24mm lens helps when forced into tight places because it is a wide-angle lens: wide-angle lenses capture a broader area of the subject at a much closer distance than a 50mm lens. Sometimes you’ll need to take several photos vertically to capture the whole subject, and the distortion of the subject is much greater around the outer edges of the lens and capture area. But thanks to the calibration sequence – taking the same area at 90°, 180° and 270° – the software will still process the images without a problem as long as there’s 60% overlap in the sequence of each shooting orientation. Like we said before, when PhotoScan detects these points, it also detects the angle of the camera and makes up for it.
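To see why the wide-angle lens helps in tight spaces, compare the width of wall a single frame covers at the same distance. This is a pinhole-model sketch with an assumed full-frame 36 mm sensor width and an example working distance:

```python
def frame_width_at_subject_mm(sensor_mm, distance_mm, focal_mm):
    """Width of the area one frame covers at the subject plane (pinhole model)."""
    return sensor_mm * distance_mm / focal_mm

distance = 2000  # 2 m from the wall, an example value
for focal in (24, 50):
    width = frame_width_at_subject_mm(36.0, distance, focal)
    print(f"{focal} mm lens: {width / 1000:.2f} m of wall per frame")
# 24 mm lens: 3.00 m of wall; 50 mm lens: 1.44 m
```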

4. 60% Image Overlap

When you’re making sure every sequential image overlaps by at least 60%, there are a couple of ways to do it. Look through the viewfinder and snap your shot, then locate a point on the subject that sits at 1/3 of the frame and move to where that same point is at 2/3 of the frame. Dennison dots help in this regard if there isn’t anything distinct on the wall. We use the focusing squares within the viewfinder of our Canon Mark II to assist in overlapping ~60%.  They divide the view into thirds in both the horizontal and vertical orientations.  We just find a feature at the center of our view, take the shot, and then make the same feature sit a third of the way from center, whichever way we are moving. Locate, shoot, move 33% and repeat!

Image depicting the 60% overlap between two sequential images.
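In distance terms, the same pinhole arithmetic from the previous section gives the largest lateral step between camera positions for a target overlap. The lens, sensor, and distance values are again assumptions, not our measured setup:

```python
def max_lateral_step_mm(sensor_mm, distance_mm, focal_mm, overlap=0.6):
    """Largest sideways move that keeps `overlap` between consecutive frames."""
    frame_width = sensor_mm * distance_mm / focal_mm
    return frame_width * (1 - overlap)

# 24 mm lens on a full-frame body, 2 m from the wall, 60% overlap
# (all example values).
step = max_lateral_step_mm(36.0, 2000, 24, overlap=0.6)
print(f"Move at most {step / 1000:.2f} m between shots")  # 1.20 m
```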

5. Calibration Sequence

To calibrate the captured images and allow the software to correct for any lens distortion, we have to shoot photos at three different camera orientations per camera position before we move laterally. Our method of capturing the images at horizontal (the axis of the lens to the bottom of the camera at 180°) and two verticals (the axis of the lens to the bottom of the camera at 90° and 270°) changed over several sessions.  First we would shoot our horizontal images moving laterally parallel to the wall, say from the left edge of the wall to the right edge.  Then we would come back and shoot the other two angles, each left to right again, for a total of three passes along the subject. When moving shadows rapidly changed the light on the subject area, we decided to begin shooting all three angles from the same spot, overlapping 60% on the verticals.

Since we were overlapping our images based on the narrower, vertical view, we knew the horizontal images would overlap by more than 60% due to the greater width of their capture area. However, this also meant taking more images than were absolutely necessary.

So whether you shoot all angles from the same spot, or each angle one sequence at a time, is up to your circumstances – and to whether you mind having extra horizontal images to process in your chunk.

6. Photogrammetry Around Corners or Objects

When moving around objects, or in our case, around an arced corner of a wall, the images have to be shot every 15 degrees. When doing photogrammetry on a smaller object, such as a sculpture or statue, the images must be captured every 15 degrees until a full 360-degree rotation is complete.

Photogrammetry around an object

In our case, going around a curved wall, we moved 15 degrees while maintaining our distance until we were once again parallel to the wall.
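For a full orbit around an object, the 15-degree rule works out to 24 camera stations, whose positions are easy to lay out in advance. The 2 m working distance below is an example value, not our measured setup:

```python
import math

def orbit_stations(radius_m, step_deg=15):
    """(x, y) camera positions on a circle around the subject."""
    count = 360 // step_deg  # 24 stations at 15-degree increments
    return [(radius_m * math.cos(math.radians(i * step_deg)),
             radius_m * math.sin(math.radians(i * step_deg)))
            for i in range(count)]

stations = orbit_stations(radius_m=2.0)
print(f"{len(stations)} camera stations per full orbit")  # 24
```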

Hope you found this post useful!

Week 5 Overview (August 10th, 2012)

We’ve had tons of photogrammetry progress this week. We completed all exteriors of the O’Keeffe house and studio in Abiquiu. We now have around 1600 images that need to be processed in PhotoScan. We also started our first indoor capture with the roofless room at Abiquiu.  There’s a room in O’Keeffe’s house that was built without a roof; instead, logs lie across the top, currently covered with plastic on the topside. We captured the whole interior of the roofless room and generated a mesh at the lowest quality setting. We didn’t capture the totality of the ground or the logs, so there are holes and distortions in the mesh.

Wireframe 3D geometry of the roofless room.

We attempted to create a mesh of the roofless room with 149 photos on high quality. We let it process for almost 72 hours and it ended up freezing: the elapsed-time timer continued to tick, but the percentage completed remained the same. If we want to use high quality, we are going to have to process no more than 50 photos at a time and put the results together in Blender.
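A sketch of how the photo list might be split (the file names are made-up placeholders):

```python
def chunk(items, size=50):
    """Split a list into consecutive chunks of at most `size` items."""
    return [items[i:i + size] for i in range(0, len(items), size)]

# The 149 roofless-room photos, split into chunks of at most 50.
photos = [f"roofless_{n:03d}.jpg" for n in range(1, 150)]
for i, c in enumerate(chunk(photos), start=1):
    print(f"Chunk {i}: {len(c)} photos")
# Chunk 1: 50 photos / Chunk 2: 50 photos / Chunk 3: 49 photos
```

In practice, neighbouring chunks would need to share some photos so the resulting meshes can be aligned when merged.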

Agisoft has a great chart on memory requirements for processing (taken from Agisoft’s website), and our attempt was a good real-world test of whether or not a job would work.

Agisoft’s memory requirements chart on processing in PhotoScan.

In PhotoScan you are able to create several chunks to process different images, though only one at a time. Since the roofless room consisted of 149 images, we had to divide the processing into several chunks. After the meshes are created, we export each model and load it into Blender (see Blender review). From there we can load all the individual meshes and merge them to act as one unit, ending up with a high-quality mesh to work with. Currently we are researching and testing the ability to create a blueprint with accurate measurements from our model. There is a Photoshop CS6 tutorial on the subject.
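As a sketch of the merge step, assuming the chunk meshes are exported from PhotoScan as OBJ files; the script uses the Blender 2.8+ Python API (run from Blender’s scripting tab), and the file paths are hypothetical:

```python
import bpy  # Blender's built-in Python API

# Per-chunk meshes exported from PhotoScan (hypothetical file paths).
paths = ["/tmp/chunk1.obj", "/tmp/chunk2.obj", "/tmp/chunk3.obj"]

for path in paths:
    # Legacy OBJ importer operator (available through Blender 3.x).
    bpy.ops.import_scene.obj(filepath=path)

# Select every mesh object and join them into a single object.
bpy.ops.object.select_all(action='DESELECT')
meshes = [o for o in bpy.context.scene.objects if o.type == 'MESH']
for o in meshes:
    o.select_set(True)
bpy.context.view_layer.objects.active = meshes[0]
bpy.ops.object.join()
```

Note that joining only merges the objects into one mesh; aligning the chunks to one another still has to be done first, by hand or via shared reference points.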

We had several questions regarding PhotoScan this week. We are having some mesh display problems when we apply texture. Over on the Agisoft forums there is a section for bug reports, and after posting a few questions, I received PROMPT answers – sometimes mere minutes after posting we would receive a response from a technical support agent. Even though we were having some display issues, exporting the mesh and importing it into Blender worked perfectly. The resolution of the mesh was actually better than what was shown in PhotoScan.

While we still have to process high-quality meshes of the roofless room, here is a screen capture of the lowest quality displayed in MeshLab.

Thanks for reading!