Getting Started With Photogrammetry


3D Surrogates of Furniture and Interiors

During the course of the semester we made two trips to O’Keeffe’s house in Abiquiu. The first trip focused on using photogrammetry on interior spaces; the second included the 3D documentation of furniture. We tested two chairs from the home, using two shooting methods:

METHOD 1: three pictures per position (taken at 90, 180, 270 degrees)
METHOD 2: Lens calibration method using Agisoft’s Lens application: http://agisoft.ru/products/lens/

The results with both methods were successful. However, the lens calibration method limited the overall number of images (we did not need to take pictures at 90, 180, and 270 degrees, just one orientation), thus lessening the processing time, which had been quite lengthy with large “chunks”. The resulting meshes for single objects are pretty amazing. The lens calibration created a distortion in just one case, probably because a reflector was used over the object and the white fabric laid under it for protection ended up partially covering the bottom of the object.

This is an example of a textured 3D mesh created using the second method:

Photogrammetry: 3D MESHES OF INTERIORS

We met the biggest challenges using photogrammetry in the interiors of O’Keeffe’s house. After several experiments, we feel the lackluster results are due to the following reasons:

1. The presence of objects and furniture that could not be moved.
2. Walls without great surface detail.
3. Computational problems: the processing speed of our computers.

We will further unpack each problem we encountered below:

PRESENCE OF OBJECTS/FURNITURE

The room should be free of objects and have limited furniture – in particular, no tables and chairs – for two main reasons. First, photogrammetry requires taking pictures at a constant distance, and furniture restricts movement. Second, it proved problematic to shoot behind and underneath objects.
We had several issues with the Indian Room in particular: a big table sat almost in the middle of the room, limiting our movements, and one wall was occupied by a big shelf. The meshes presented a lot of holes, especially under the shelf, and the area under and behind the table was completely distorted. In addition, every element must remain consistent from the beginning to the end of the shoot; nothing can be moved. For example, we unobtrusively moved a big pillow while shooting the Indian Room, and the resulting meshes were distorted because the pillow appeared in different positions and did not have a defined shape.

One wall of the Indian Room presented many problems, as it held a huge shelf with several objects on it. The meshes display multiple holes, and some of the objects are incomplete:

LIGHT & LACK OF SURFACE DETAIL

The light must be as uniform as possible when doing photogrammetry, and it is important not to overexpose the pictures. Reflective surfaces, such as windows, are particularly problematic for the software when it tries to create 3D meshes, so we tried both covering the windows and masking them out in the software to avoid this problem. Another issue we dealt with was walls lacking surface detail. We used Dennison dots on the walls to attempt to create reference points, though most of the meshes still resulted in distorted or incomplete information.
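For anyone repeating the masking step, it can be partly automated. Below is a rough sketch using the open-source OpenCV library (our own illustration, not a PhotoScan feature) that marks near-saturated pixels, such as bright windows, and writes a mask image. The file names are hypothetical, and the mask convention assumed here (white = keep) should be checked against the PhotoScan manual.

```python
# Sketch: auto-generate a mask that blacks out overexposed regions
# (e.g., bright windows) so the software can ignore them.
# Assumes OpenCV and NumPy; file names are hypothetical.
import cv2
import numpy as np

img = cv2.imread("IMG_0001.JPG")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Pixels near saturation (>= 250 of 255) are likely window glare.
_, overexposed = cv2.threshold(gray, 250, 255, cv2.THRESH_BINARY)

# Dilate so the mask also covers the halo around blown-out areas.
kernel = np.ones((15, 15), np.uint8)
overexposed = cv2.dilate(overexposed, kernel)

# Assumed convention: white = keep, black = ignore.
mask = cv2.bitwise_not(overexposed)
cv2.imwrite("IMG_0001_mask.png", mask)
```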

Here is an example of a mesh from an overexposed wall with little surface detail:

The meshes appear incomplete, and when we added the texture the result was unrecognizable. In the first shoot we also had problems creating clean meshes of the transition from wall to ceiling. Here it is important to keep the lens fixed and parallel to the floor, without turning or rotating the camera. To solve this issue, during the second shoot we photographed the walls and then the ceiling independently. This was much more successful.

Here is an example from the first time we photographed O’Keeffe’s kitchen, where we used the incorrect method, rotating the camera to go from wall to ceiling.

COMPUTATIONAL PROBLEMS

Finally, it is important to note that the process of creating 3D meshes requires powerful computers. The meshes (for a possible virtual tour) require significant memory. We have also noticed that PhotoScan does not work well with certain graphics cards. During our tests we used different machines, and some repeatedly crashed. The computers with incompatible or unsupported graphics cards produced distorted meshes or, in some cases, would not display the texture of the mesh.

For example, here is one wall of the Indian Room with and without texture, processed on a machine with a non-optimized graphics card:

We are now trying to create 3D meshes of single walls, then clean and merge them using Blender or MeshLab. Even after filling the holes, the meshes still lack quality and detail and remain distorted in places, so we are working on smoothing surfaces, moving vertices, and simplifying the meshes.
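As an illustration of those cleanup steps, here is a minimal script using the open-source trimesh library. This is a sketch of the same hole-filling and smoothing workflow, not the exact Blender/MeshLab procedure we used, and the file name is hypothetical.

```python
# Sketch: clean a single-wall mesh exported from PhotoScan.
# Assumes the trimesh library; "wall_section.obj" is hypothetical.
import trimesh

mesh = trimesh.load("wall_section.obj", force="mesh")

# Drop vertices that no face references (common after manual deletions).
mesh.remove_unreferenced_vertices()

# fill_holes only closes small, simple holes; large gaps (e.g., under
# the shelf) still need manual patching in Blender or MeshLab.
trimesh.repair.fill_holes(mesh)

# Light Laplacian smoothing, in place, to soften distorted patches.
trimesh.smoothing.filter_laplacian(mesh, lamb=0.5, iterations=5)

mesh.export("wall_section_clean.obj")
```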

Agisoft Lens – Lens Calibration Without Three Orientations

The class shot photos of the Kitchen, Indian Room, and Pantry during our first visit to Georgia O’Keeffe’s house, using the previously established method in which we shot one photo in horizontal orientation and two photos in vertical orientation, with 60 to 80 percent overlap. Additionally, all corner surfaces of the rooms were fanned in 15-degree increments, using the same three-shot technique. Upon returning and processing the meshes, we encountered the following issues.

The number of photos required to create a point cloud proved prohibitive, due to the size of the rooms.

Processing, even on the most robust computer workstation the Media Arts program possesses, proved slow and yielded inconsistent results.
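A back-of-envelope calculation shows why the photo counts balloon. The protocol figures below (three shots per station, 60 to 80 percent overlap, 15-degree corner fans) come from the method above; the room size and per-frame coverage are hypothetical stand-ins, not measurements from the house.

```python
# Rough estimate of photos needed for one room under the
# three-shots-per-station method. All dimensions are hypothetical.
import math

perimeter_m = 4 * 5.0        # assume a 5 m x 5 m room
frame_width_m = 2.0          # wall width covered by one frame
overlap = 0.7                # midpoint of the 60-80% overlap rule
shots_per_station = 3        # one horizontal + two vertical frames

step_m = frame_width_m * (1 - overlap)      # advance between stations
stations = math.ceil(perimeter_m / step_m)  # stations along the walls
corner_fan = 4 * (90 // 15)                 # 15-degree fans at 4 corners

total = (stations + corner_fan) * shots_per_station
print(total)  # -> 174 frames, before the ceiling or any furniture
```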

Attempting to find a resolution to the issue, Mariano researched methods outlined in the PhotoScan user’s guide. This research produced the following results.

The number of photos required by the three-photos-per-station method was necessary to compensate for inherent lens distortion. The vertical shots gave PhotoScan the information necessary to compute accurate camera positions for the point cloud.

PhotoScan allows the user to customize settings based on a lens calibration. The process involves photographing a high-contrast display at 60 to 80 percent overlap, in horizontal orientation. The resulting data can be saved and entered into the program for the specific lens used to shoot the object or location.

The calibration allows the photographer to shoot two-thirds fewer photos, eliminating the vertical shots of the original method. Instead, the software compensates for lens distortion based on values gathered during calibration.
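For readers without Agisoft Lens, a comparable calibration can be sketched with OpenCV’s chessboard routine, which fits the same general radial and tangential distortion model. This is an analogous open-source illustration, not Agisoft’s actual procedure; the board size and file paths are hypothetical.

```python
# Sketch: estimate lens distortion from photos of a high-contrast
# pattern, then undistort single-orientation capture photos.
# Assumes OpenCV/NumPy; paths and board size are hypothetical.
import glob
import cv2
import numpy as np

pattern = (9, 6)  # inner corners of the displayed chessboard
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calibration/*.jpg"):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Camera matrix plus distortion coefficients (k1, k2, p1, p2, k3).
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)

# Illustration only: remove the fitted distortion from a capture photo.
img = cv2.imread("IMG_0042.JPG")
cv2.imwrite("IMG_0042_undistorted.JPG", cv2.undistort(img, mtx, dist))
```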

The class utilized the lens calibration method during the second visit, re-photographing the rooms from the initial visit and adding O’Keeffe’s studio. Additionally, we shot various objects using the same method. The shoot proved more efficient in both time and resources. The following results were observed.

Objects photographed utilizing the lens calibration method produced consistent point clouds and accurate representations of the objects photographed.

Rooms photographed utilizing this same method produced less consistent results: point clouds were less accurate, with holes and inconsistent representations of surfaces.

Subsequent research and searches for interior spaces successfully rendered with photogrammetry proved futile. The disparity between successfully created exteriors and objects and successfully created interiors led us to reason that 3D scanning and virtual panoramas, rather than photogrammetry, might prove more successful for interiors. To this end, we constructed a Pan-O-Head to explore the results this method would produce. Results to come.

Executive Summary – 3D Photo Imaging for Condition Detection, Monitoring and Documentation

This project studied whether existing three-dimensional (3D) digital photo imaging technologies can be easily adopted by historic sites and collections for documenting and monitoring the condition of buildings and objects.  It also aimed to determine whether workflows could be established that would result in consistent and scientifically reliable 3D condition data capture.

The Problem: For over a century, conservators have relied on two-dimensional (2D) photography to document and monitor the condition of heritage sites and objects.  When a sudden and shocking visual indicator appeared (e.g., a sudden crack in a surface), conservators took notice of what had actually been thousands of slow, tiny, incremental condition changes (e.g., tiny fissures and failures in slowly aging materials).  Conservators then photographed and monitored the area to try to determine how rapidly the deterioration was advancing.  Two factors limited our ability to detect and monitor deterioration: our ability to detect slow and small “micro” changes, and our ability to compare what humans see using 3D stereo vision with what we recorded in 2D photographs.

Digital 3D images or surrogates allow us to see and measure 3D volumes and contours, and they allow us to use a computer to detect and highlight very small and slowly occurring changes. Detailed 3D models can be made using laser scanning and 2D digital photography (photogrammetry, Reflectance Transformation Imaging (RTI), and structured light). Each technology has condition-imaging strengths and weaknesses. In each, the finer the resolution – that is, the smaller the area captured by either a laser point reflection or a digital camera sensor pixel – the larger the data set for an entire object will be. In practice, the larger the object you want to image – an entire historic house or landscape versus a single door or wall, for example – the lower your resolution or detail will need to be, or the more expensive and customized your assembly computer will have to be. Laser scanning equipment uses proprietary laser projectors, collection cameras, software, and digital file formats. The files are huge – tens to hundreds of gigabytes each – and require very specialized computing hardware and software packages.
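To make the resolution-versus-size trade-off concrete, here is a small worked example; every figure in it is an illustrative assumption, not a measurement from this study.

```python
# Worked example: how target resolution drives photo count and data
# size for one wall. All numbers are hypothetical illustrations.
wall_w_m, wall_h_m = 5.0, 3.0   # one wall of a room
pixels_per_mm = 4               # target: 0.25 mm surface resolution

px_w = wall_w_m * 1000 * pixels_per_mm   # 20,000 px across
px_h = wall_h_m * 1000 * pixels_per_mm   # 12,000 px high
total_px = px_w * px_h                   # 240 megapixels for one wall

raw_bytes = total_px * 3                 # ~720 MB of raw RGB color data

camera_px = 21e6                         # the 21-megapixel camera below
overlap_factor = 3                       # each point seen in ~3 photos
photos = total_px * overlap_factor / camera_px
print(round(photos))  # -> ~34 photos for a single wall
```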

The Technological Solution: This study found that two 3D digital photographic technologies – Highlight RTI and Photogrammetry – promise to change the condition-detecting and monitoring paradigm in important and beneficial ways:

  • 3D photo images allow us to record and study surfaces and volumes in ways that mimic the stereo view of a real-time human examiner.
  • Because specular, color, and texture data is digital, a viewer can selectively highlight and study details by removing the color data, changing the virtual light source and reflectivity, or colorizing contour levels.
  • Photography can be done by anyone with a basic knowledge of digital camera operation and a laptop computer. Laser scanning engineers are not required.
  • Photography uses an ordinary, consumer-professional grade, digital camera. Expensive and complex laser scanning equipment is not required.
  • Capture photographs are the same, open-format 2D photos you have always taken: RAW, TIFF, and JPEG. The capture data is not a proprietary laser-scan file format that requires special, brand-name software to use and view.
  • The user owns their own images and controls their naming, archiving, metadata, and digital management protocols.
  • Assembled 3D computational meshes, solids, and RTI images, and their viewers, are open-source software – there are no hidden steps, and every transformation of the data is documented and visible.
  • Because the capture photos are regular, open-format digital images, 3D images can be assembled with whatever future enhanced versions of 3D assembly software happen to come along. You are not dependent upon the life of one assembly software or manufacturer.
  • Capture photography can use the best digital camera the heritage site can afford. While 21-megapixel, full-frame CMOS sensors with prime lenses and RAW capture capability give the highest resolution, less expensive APS-C sensor cameras that have manual capture modes (allowing the user to fix the lens aperture and focus) give reliable 3D condition documentation.

We created an interactive, web-platform blog site that documented our evolving working methodologies, equipment, and resulting images. We specifically targeted the historic preservation field and publicized the project blog, videos, and workflows through historic preservation groups on LinkedIn, Facebook, and Twitter. While we anticipated somewhere near 3,000 views of the website content during the course of the project, we were astonished by how rapidly an international heritage-preservation audience for the project developed:

9,000 views from the USA and the Caribbean
4,000 views from Europe
700 views from Central and South America
500 views from Asia (China does not report, by Chinese government regulation)
400 views from Australia and New Zealand
325 views from Canada
200 views from the Middle East
70 views from India
35 views from Africa

15,230 views total

Does RTI give repeatable and reliable normals of objects taken at different times and positions to facilitate detection of changes?

On the LinkedIn discussion group Cultural Heritage Conservation Science. Research and Practice, in the discussion on 3D digital imaging and photogrammetry for scientific documentation of heritage sites and collections (http://linkd.in/RZMpFj), Greg Bearman posed the following question:

“Does RTI give a repeatable and quantitative set of normals good enough for looking for change? If I take an RTI set, rotate the object, let it warp a bit (flexible substrate), what do I get the second time? How do I align the datasets for comparison?

What is the system uncertainty? I.e., if I just take repeated images of the same object without moving anything, how well does the RTI data line up? Second, suppose I take something with some topography but that is totally inflexible and cannot distort (make up a test object here!), and I do repeated RTI on it in different orientations? Can I make the data all the same? If you are going to use an imaging method to determine changes in an object, the first thing to do is understand the inherent noise and uncertainty in the measuring system. It could be some combination of software, camera, or inherent issues with the method itself.”

I wrote back: “Hey Greg – I tried sending a response earlier last week but I do not see it!? Sorry. I’m on vacation until the 22nd – trying to recover and recharge. It is going well, but I wanted to jot down my initial thoughts. One of my interns – Greg Williamson – is working on aberration-recognition software that can recognize and highlight changes in condition captured by different H-RTI computational image assemblies – obviously taken at different times, but also with different equipment and with randomly different highlight flash positions. It seems, initially, that normal reflection is normal reflection, regardless of object or flash position, and that the software correctly interpolates 3D positions of surface characteristics regardless of the precise position of the flash, because it is accustomed to calculating the highlights both at the capture points and everywhere in between! Likewise, we have had promising results with photogrammetry when the resolution of the images used to create the mesh and solids is similar. What may turn out to be key is a calibration set that will allow correction of the various lens distortions that would naturally come from different lenses. I know Mark Mudge at Cultural Heritage Imaging has suggested that we begin taking a calibration set before RTI capture, as we have done before photogrammetry capture. He may be working on incorporating a calibration correction into the highlight RTI Builder that CHI has made available. I’m sending this discussion along to the CHI forum at http://forums.cultur…ageimaging.org/ to see what others might have to add. When I return to work, I’ll ask Greg to give this some additional thought.”

Whadaya think, Greg?
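As a footnote to this exchange, the core comparison under discussion can be sketched in a few lines: given two aligned per-pixel normal maps from different RTI captures, flag the pixels whose angular difference exceeds the measured system noise floor. This is our illustration of the idea, not Greg Williamson’s actual software; the file names and threshold are hypothetical, and the noise floor would come from exactly the repeatability test Greg Bearman proposes.

```python
# Sketch: flag candidate surface change as the per-pixel angle between
# normals from two aligned RTI captures. Illustration only; file names
# and the noise threshold are hypothetical.
import numpy as np
from PIL import Image

def load_normal_map(path):
    # Assume an RGB normal map with XYZ encoded in [0, 255].
    rgb = np.asarray(Image.open(path), dtype=np.float64)[..., :3]
    n = rgb / 127.5 - 1.0  # map channels to [-1, 1]
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

before = load_normal_map("capture_2012_normals.png")
after = load_normal_map("capture_2013_normals.png")

# Per-pixel angle between the two normal fields, in degrees.
dot = np.clip(np.sum(before * after, axis=-1), -1.0, 1.0)
angle = np.degrees(np.arccos(dot))

# Anything above the measured system noise floor is a candidate change.
noise_floor_deg = 2.0
changed = angle > noise_floor_deg
print(f"{changed.mean():.1%} of pixels exceed the noise floor")
```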