Camera Setup

10) Determine correct lens size for framing and focus.

11) Align black spheres accordingly.
11a) Align spheres in such a way that will allow you to easily crop them out of the photo.
11b) Check that the spheres are the correct size; their diameter should span at least 250 pixels of the picture. If unsure, take a picture and export the photo into photo-editing software to measure the sphere’s diameter.
11c) The center of the sphere should be level with the surface of the subject being captured.
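As a quick sanity check before measuring in an editor, the expected sphere diameter in pixels can be estimated from the sensor resolution and the width of the scene in the frame. This is a rough sketch; the image width (5616 px, as on a Canon 5D Mark II), frame coverage and sphere size below are hypothetical illustrative values, not measurements from this setup.

```python
def sphere_pixel_diameter(image_width_px, frame_width_mm, sphere_diameter_mm):
    """Estimate how many pixels the sphere's diameter will span.

    Assumes frame_width_mm is the real-world width covered by the
    full image at the sphere's distance from the camera.
    """
    mm_per_pixel = frame_width_mm / image_width_px
    return sphere_diameter_mm / mm_per_pixel

# Hypothetical example: 5616 px wide image, frame covering 600 mm,
# 30 mm black sphere -> 280.8 px, comfortably above the 250 px minimum.
print(round(sphere_pixel_diameter(5616, 600, 30), 1))
```

If the estimate comes out below 250 px, either a larger sphere or a tighter framing is needed before the capture sequence starts.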

12) Attach the camera to the computer using a USB cable. Since touching the camera is not allowed, all shots will be taken remotely.
12a) Start camera operating software and open a live view window.
12b) Adjust the ambient lighting in the room to the minimum level that still allows clear vision of the subject and any obstructions in the room. Subject and user safety is the top priority.

13) Focus the shot and take a picture, then open photo in editing software.
13a) Check the histogram to make sure there is an even balance of color and light.
13b) Continue taking test shots, adjusting F-stop, shutter speed, flash intensity, and ambient lighting until the test photo is well lit with even coloring.

Equipment Setup

1) Check batteries in camera and flash. Once the RTI capture sequence starts, touching the camera or moving it in any way is prohibited.

2) Assemble tripods and light stands.
2a) Be sure to secure or weigh down tripods with sandbags.

3) Assemble flash, battery pack, and receiver to mono-pod.

4) Test flash to make sure wireless signal is working.

5) Secure camera to tripod making sure it is level.

6) Setup subject to be photographed.

7) Measure art using a measuring tape.

8) Measure and cut a length of string at least twice the length of the widest part of the subject being photographed.

9) Attach this string to the end of the flash.
9a) This will not only ensure the correct distance of the flash from the subject, but will also help in aiming the flash in the correct direction.

Digital Photographic 3D Imaging for Preservation: What’s the Buzz?

Why 3D imaging for conservation and preservation documentation?

It would hardly be worth the effort of learning and building skills in 3D digital-camera image capture and processing if 3D images didn’t offer conservators and preservation professionals better and more actionable information than 2D film, digital and IR/UV imaging. Most people know 3D imaging either from the gaming and motion-picture special-effects industries or from splashy, well-funded 3D laser scanning projects of high-profile art objects or heritage sites. But using 3D digital photographic images to document and monitor conditions? This is a slightly more obscure use of imaging technology and one that, it turns out, is far more practical and adoptable.

What do you have in digital, 3D photographic documentation that you do not have with 2D documentation?

  • In addition to all the rich color data of a 2D digital photograph, 3D point-in-time surrogates contain quantifiable contours, volumes, textures, forms and transitions from one plane or material to another that can be continuously and seamlessly viewed and measured from any vantage point around the object.
  • These digital 3D surrogates can be saved, recalled and enhanced by future software improvements.
  • The ways 3D digital surrogates can locate, detail and highlight deterioration, damage and condition changes are legion. The obvious major advantage is that you can capture images using any resolution and focal distance that can be used for 2D digital photographs, but view the damaged or undamaged surfaces and volumes in ways that truly replicate the stereo view of a real-time human examiner.
  • You can also view the damaged or undamaged surfaces and volumes in ways that ENHANCE the stereo views of a real-time human examiner.  Unlike looking at the actual surface or object, a viewer can selectively view the details of the object by removing the color data, changing the virtual light source, changing the virtual reflectivity of the surface features or colorizing volumetric levels.
  • Subtle changes that human conservators are trained to look for and detect can be observed, captured, quantified and compared in ways that are far more revealing than 2D digital images allow.
  • 3D digital photographic surrogates, taken over time, reveal greater detail about the rate and extent of change to a feature – tiny soap micro-protrusions in an oil paint brushstroke, for example, or the slow, volumetric sag of a 20 meter earthen adobe wall.

Conservators always need to answer and document several key questions about anything they are trying to preserve:

  • Where is the damage or deterioration located?
  • What is the nature, size, extent and apparent character of the damage compared to the surrounding, undamaged areas?
  • What properties of the undamaged materials or structures have been lost or diminished and what degree of recovery is required to arrest deterioration and impart stability and functionality?
  • Are the conditions actively changing or deteriorating and at what rate?
  • What are the causes or precipitating events that result in, or accelerate deterioration and damage?
  • Do treatment strategies arrest, slow or accelerate the rate of deterioration?

3D, digital photographic surrogates greatly enhance our ability to identify, document and monitor the answers to these questions in ways that 2D photographic images cannot.

In this 8-week project we wanted to determine:

  • If off-the-shelf, high-end, consumer-grade digital cameras and open-format digital photographs, combined with a consumer-grade laptop computer, could be used to capture and assemble detailed, data-rich 3D images.
  • If three 3D imaging capture and processing techniques – highlight reflectance transformation imaging (RTI), photogrammetry and structured light imaging – were mature enough to be used, right now, in the summer of 2012, to capture accurate, detailed and digitally-rich condition information for works of art, historic objects and heritage architectural sites and features.
  • If two graduate students and two collections technicians with no prior experience in 3D imaging of any kind could become fully conversant and self-reliant in capturing and assembling 3D images in only 8 weeks, under the guidance of a conservator and 3D imaging engineers.
  • If the capture and processing metadata – the digital capture conditions and digital pathways and transformations leading to the assembly of condition-detail-rich 3D images – could be completely open-source and open-format, with no proprietary file formats or data pathways. In this way, scientifically valid digital lab notebooks can be kept and evaluated for their validity, replicability and value. Further, with no proprietary files or computational pathways, all steps and all images belong to and reside with the public trust agency of the resource, rather than a private or commercial entity with no legal, public trust fiduciary requirements and restrictions.
  • If the digital camera images could be captured and formatted archivally, using ISO digital standards (Digital Negative or DNG) so that the images could always be used to assemble 3D digital surrogates far into the future, regardless of future improvements or changes in digital cameras, 3D assembly and editing software or computer operating systems and file formats.
  • If the digital photographic 3D files could be computationally compared, so that small, slow, incremental changes in condition, often missed by museum and heritage site professionals, could be recognized and highlighted by software, letting conservators make better assessments about the active and unstable nature of damage and deterioration.

The Evolution of Our Photogrammetry Workflow

Since we began working with photogrammetry we have developed a workflow that we follow every time we’re on site. That workflow has definitely changed and developed over the past several weeks. The process has become much smoother: what used to take 3-4 hours to shoot now takes only 1.5-2 hours. Here are some of the changes we have made.

Topics covered within this post:

1. Tripod – Do you need it?
2. Creating Accurate Measurements
3. Image Composition
4. 60% Image Overlap
5. Calibration Sequence
6. Photogrammetry Around Corners or Objects

1. Tripod – Do you need it?

From what we have discovered, not really. Photogrammetry captures require a FIXED FOCUS and a FIXED APERTURE for all three capture orientations (90°, 180° and 270°). This means your depth of field is non-negotiable after you begin shooting; the only thing you can still choose is your exposure time. You want an F-stop of 11 or below to minimize distortion, but high enough that your depth of field keeps everything in the Z dimension (forward and back) in focus.

If you have enough ambient or artificial light to keep your shutter speed above 1/40th of a second, a tripod is not necessary, and hand-holding the camera, even with the inevitable shifts in camera orientation and slight distance irregularities, can work very well. But remember: moving closer to or further from the subject than the limits of the depth of field, or falling ambient light levels, can result in unfocused regions. Blurry pixels contribute nothing in 3D or in 2D.

Marking the optimal distance from the subject with a string or chalk line, taping your lens focus ring so it does not move, checking your F-stop to be certain you have not bumped it, and checking your focus at EVERY capture will save you a lot of reshooting, whether you use a tripod or not. That said, moving the tripod laterally along a chalk line took up much of our time, especially since we needed to overlap our images by at least 60%. Not only were we making sure the camera was almost always the same distance from the subject, but we were also making sure it was level.
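To see why a fixed focus at a moderate aperture can keep the whole Z range sharp, the near and far limits of acceptable focus can be estimated with the standard hyperfocal-distance approximation. The numbers below (24mm lens at f/11, a 0.03mm circle of confusion for a full-frame sensor, subject at 2 m) are illustrative assumptions, not values from our shoots.

```python
import math

def depth_of_field(focal_mm, f_number, subject_mm, coc_mm=0.03):
    """Near/far limits of acceptable sharpness (thin-lens approximation).

    coc_mm is the circle of confusion; 0.03 mm is a common value
    assumed for full-frame sensors.
    """
    hyperfocal = focal_mm**2 / (f_number * coc_mm) + focal_mm
    near = subject_mm * (hyperfocal - focal_mm) / (hyperfocal + subject_mm - 2 * focal_mm)
    if subject_mm >= hyperfocal:
        far = math.inf  # everything beyond the near limit stays in focus
    else:
        far = subject_mm * (hyperfocal - focal_mm) / (hyperfocal - subject_mm)
    return near, far

# 24mm lens at f/11 focused at 2 m: sharp from roughly 0.94 m to infinity.
near, far = depth_of_field(24, 11, 2000)
print(round(near), far)
```

With that much depth of field, small handheld distance irregularities stay well inside the sharp zone, which is consistent with our experience that a tripod is often unnecessary.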

Dale adjusting the camera head

After many working days in the field and discovering just how powerful Agisoft PhotoScan is, we have decided the tripod is unnecessary except in low-ambient-light conditions. As long as you walk along a measured line, keeping the distance between the subject and camera consistent, PhotoScan will have no problem creating the 3D mesh. We started hand-holding the camera a few weeks ago and the results have still been superb. PhotoScan is able to detect where each camera was positioned in relation to the subject AND at what angle. If the camera is angled slightly upward or downward, this doesn’t affect the processing at all; PhotoScan compensates for it and still creates an accurate digital surrogate. Depending on your subject matter, it may still be necessary to use a tripod or monopod, but we haven’t experienced any true need for one yet in photogrammetry.

Processed mesh of the roofless room. Images were captured handheld.

2. Creating Accurate Measurements

If one needs accurate measurements within the mesh of a certain subject, Dennison® dots have proved extremely valuable. Initially we placed two parallel rows of Dennison dots along our subjects (mainly walls and rooms), each dot 4 feet from its neighbors both vertically and horizontally. Seen within the mesh, the dots establish the exact scale of the object: based on the distance between the dots, one can measure everything else within the mesh and know its exact dimensions. If the distance between the dots is exactly 4 feet, or 122 cm, then any distance in the space is some fraction of 122 cm. 20% of that distance is 9.6 inches or 24.4 cm; 1% is 0.48 inches or 1.22 cm.

The question becomes: what spatial resolution do you want to be able to resolve? If you need to resolve characteristics as small as 5mm, then you need to clearly resolve 5mm across several pixels in your initial RAW capture, and your scale dots should be less than 2 feet apart. For our first set of 3D condition captures at O’Keeffe’s historic home in Abiquiu, we went from using two rows, two feet apart, to using one row. The only difference it made was making the process quicker.
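The proportional reasoning above is easy to script: the known dot spacing gives a scale factor that converts any raw distance measured in the mesh’s arbitrary units into real-world units. The mesh distances below are made-up illustrative numbers, not measurements from our models.

```python
def mesh_scale(known_spacing_cm, measured_dot_distance_units):
    """Real-world centimeters per mesh unit, from a known dot spacing."""
    return known_spacing_cm / measured_dot_distance_units

def to_cm(mesh_distance_units, scale):
    """Convert a distance measured in mesh units to centimeters."""
    return mesh_distance_units * scale

# Dots placed exactly 4 ft (122 cm) apart appear 2.5 mesh units apart,
# so one mesh unit is 48.8 cm. A crack measured at 0.37 mesh units:
scale = mesh_scale(122.0, 2.5)
print(round(to_cm(0.37, scale), 1))
```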

3. Image Composition

When capturing our images we try to keep the whole subject tight within the frame. This speeds up the masking step in processing (see tutorial) by removing unnecessary objects and background from the photo.

The subject tight within the frame.

However, this isn’t always practical when working in tighter spaces. We had this experience last week when large cacti blocked our path. We just had to do our best at the start of the shoot to plan ways of moving around them. We could have moved the whole capture line way back and then shot at 15-degree angles to cover the areas behind the cacti, but this would have included tons of sky and foreground. Getting closer to the subject, capturing a smaller area in each shot and taking more photographs to cover the desired area was the best answer. A 24mm lens helps when forced into tight places because it is a wide-angle lens: wide-angle lenses capture a broader area of the subject at a much closer distance than a 50mm lens. Sometimes you’ll need to take several photos vertically to capture the whole subject, and the distortion of the subject is much greater around the outer edges of the lens and capture area. But thanks to the calibration sequence – shooting the same area at 90°, 180° and 270° – the software will still process the images without a problem as long as there is still 60% overlap in the sequence of each shooting orientation. Like we said before, when PhotoScan detects these points, it also detects the angle of the camera and compensates for it.
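The wide-angle advantage can be quantified with the horizontal angle of view of a rectilinear lens on a full-frame (36mm-wide) sensor. A rough sketch, assuming the standard thin-lens geometry:

```python
import math

def horizontal_fov_deg(focal_mm, sensor_width_mm=36.0):
    """Horizontal angle of view for a rectilinear lens, in degrees."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

# A 24mm lens sees roughly 74 degrees horizontally; a 50mm sees about 40,
# which is why the 24mm covers the same wall area from much closer in.
print(round(horizontal_fov_deg(24), 1), round(horizontal_fov_deg(50), 1))
```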

4. 60% Image Overlap

When you’re making sure every sequential image overlaps the previous one by at least 60%, there are a couple of ways to do it. Looking through the viewfinder, snap your shot, locate a point on the subject that sits at 1/3 of the frame, and move until that same point sits at 2/3 of the frame. Having Dennison dots helps in this regard if there isn’t anything distinct on the wall. We use the focusing squares within the viewfinder of our Canon Mark II to assist in overlapping ~60%. They divide the view into thirds in both the horizontal and vertical orientations. We just find a feature at the center of our view, take the shot and then make the same feature sit a third of the way from center, whichever way we are moving. Locate, shoot, move 33% and repeat!
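The "move a third of the frame" rule can also be framed as arithmetic: with at least 60% overlap, each lateral step covers at most 40% of the frame width, which fixes how many camera positions a wall requires. A sketch with hypothetical wall and frame dimensions:

```python
import math

def shots_needed(wall_length_m, frame_width_m, overlap=0.60):
    """Number of camera positions to cover a wall with the given overlap."""
    step = frame_width_m * (1 - overlap)  # lateral move per shot
    if wall_length_m <= frame_width_m:
        return 1
    return math.ceil((wall_length_m - frame_width_m) / step) + 1

# A 10 m wall with each frame covering 1.2 m at 60% overlap -> 20 positions.
print(shots_needed(10.0, 1.2))
```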

Image depicting the 60% overlap between two sequential images.

5. Calibration Sequence

To calibrate the captured images and allow the software to correct for any lens distortion, we have to shoot photos at three different camera orientations per camera position before we move laterally. Our method of capturing the images at horizontal (the axis from the lens to the bottom of the camera at 180°) and two verticals (that axis at 90° and 270°) changed over a period of sessions. First we would shoot our horizontal images moving laterally parallel to the wall, say from the left edge of the wall to the right edge. Then we would come back and shoot the other two angles, each left to right again, for a total of 3 passes along the subject. When rapidly changing light and moving shadows kept altering the light on the subject area, we decided to begin shooting all three angles from the same spot, overlapping 60% on the verticals.

Since we were overlapping our images based on the narrower, vertical view, we knew the horizontal images would overlap by more than 60% due to the greater width of their capture area. However, this also meant taking more images than were absolutely necessary.

So whether you shoot all angles from the same spot, or each angle one sequence at a time, is up to your circumstances – and to whether you mind having extra horizontal images to process in your chunk.
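The extra horizontal overlap can be computed directly. If the lateral step is sized for 60% overlap on the narrower portrait frame, and the landscape frame is 1.5x wider (a 3:2 sensor rotated), the landscape shots end up with roughly 73% overlap. An illustrative sketch:

```python
def overlap_for_step(step, frame_width):
    """Fraction of the frame shared between consecutive shots."""
    return 1 - step / frame_width

portrait_width = 1.0                    # arbitrary units
landscape_width = 1.5 * portrait_width  # 3:2 sensor turned horizontal
step = 0.40 * portrait_width            # sized for 60% portrait overlap

print(round(overlap_for_step(step, portrait_width), 2),
      round(overlap_for_step(step, landscape_width), 2))
```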

6. Photogrammetry Around Corners or Objects

When moving around objects, or in our case around an arced corner of a wall, the images have to be shot every 15 degrees. When doing photogrammetry on a smaller object, such as a sculpture or statue, the images must be captured every 15 degrees until a full 360-degree rotation is complete.

Photogrammetry around an object

In our case, going around a curved wall, we moved 15 degrees while maintaining our distance until we were once again parallel to the wall.
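Stepping every 15 degrees while keeping a constant distance traces out camera stations on a circle around the subject. A quick sketch of where those stations fall for a full orbit (the 3 m radius is a hypothetical working distance):

```python
import math

def camera_stations(radius_m, step_deg=15):
    """(x, y) positions for a circular orbit around an object at the origin."""
    return [(radius_m * math.cos(math.radians(a)),
             radius_m * math.sin(math.radians(a)))
            for a in range(0, 360, step_deg)]

stations = camera_stations(3.0)
print(len(stations))  # 24 positions for a full rotation at 15-degree steps
```

For a partial arc like our curved wall, you would simply stop once the camera is parallel to the wall again rather than completing all 24 stations.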

Hope you found this post useful!

Week 5 Overview (August 10th, 2012)

We’ve had tons of photogrammetry progress this week. We completed all exteriors of the O’Keeffe house and studio in Abiquiu and now have around 1600 images to process in PhotoScan. We also started our first indoor capture with the roofless room at Abiquiu. There’s a room in O’Keeffe’s house that was built without a roof; instead, logs lie across the top, currently covered with plastic on the topside. We captured the whole interior of the roofless room and generated a mesh at the lowest quality. We didn’t capture the totality of the ground or the logs, so there are holes and distortions in the mesh.

Wireframe 3D geometry of the roofless room.

We attempted to create a mesh with 149 photos of the roofless room at high quality. We let it process for almost 72 hours and it ended up freezing: the elapsed-time timer continued to tick, but the percentage completed remained the same. If we want to use high quality, we are going to have to process no more than 50 photos at a time and put the pieces together in Blender.

Agisoft has a great chart on memory requirements for processing (taken from Agisoft’s website), but this run was a good real-world test of whether or not processing would work.

Agisoft’s memory requirements chart on processing in PhotoScan.

In PhotoScan you are able to create several chunks to process different sets of images, though only one at a time. Since the roofless room consisted of 149 images, we have to divide the processing into several chunks. After the meshes are created, we export each model and load it into Blender (see our Blender review). There we can load all the individual meshes and merge them to act as one unit, ending up with a high quality mesh to work with. Currently we are researching and testing the ability to create a blueprint with accurate measurements from our model. There is a Photoshop CS6 tutorial on the subject.
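Splitting a large capture into chunks of at most 50 photos is simple to script. Sharing a few photos between consecutive chunks is an assumption on our part – the idea being that overlapping coverage should help the separate meshes line up when merged in Blender – and the filenames are hypothetical:

```python
def make_chunks(photos, max_size=50, overlap=5):
    """Split a photo list into chunks of at most max_size photos,
    sharing `overlap` photos between consecutive chunks."""
    chunks, start = [], 0
    while start < len(photos):
        chunks.append(photos[start:start + max_size])
        if start + max_size >= len(photos):
            break
        start += max_size - overlap
    return chunks

# Hypothetical 149-image capture, as in the roofless room.
photos = [f"IMG_{i:04d}.dng" for i in range(149)]
chunks = make_chunks(photos)
print([len(c) for c in chunks])
```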

We had several questions regarding PhotoScan this week. We are having some mesh display problems when we apply texture. Over on the Agisoft forums there is a section for bug reports, and after posting a few questions I received prompt answers – sometimes merely minutes after posting we would receive a response from a technical support agent. Even though we were having some display issues, exporting the mesh and importing it into Blender worked perfectly. The resolution of the mesh was actually noticeably better than what was shown in PhotoScan.

While we still have to process high quality meshes of the roofless room, here is a screen capture of the lowest quality displayed in Meshlab.

Thanks for reading!

Blender vs. Meshlab – 3D Mesh Editor Review

Blender and Meshlab logos

Researching workflows for photogrammetry and reflectance transformation imaging is just part of the work we are doing on this project. The other part of our work is to test different equipment and software, and one of the big decisions we have to make is our choice of 3D mesh editor. Once the photos have been captured for photogrammetry and processed in Agisoft’s PhotoScan, you should have a 3D mesh. This 3D mesh is PhotoScan’s best guess at what the actual object looks like. It is far from perfect and requires editing: holes in the mesh might need to be filled or superfluous surfaces deleted. A building as large as Abiquiu has to be photographed over the course of several days, and each day is processed in PhotoScan as a separate mesh. This creates many individual 3D meshes that need to be cleaned up and joined together into a single composite mesh. Initially we chose Meshlab as our mesh editor. It is free and open source, which is very important for two reasons:

1) Full disclosure of all data transformations. Any changes to the original pictures, across all of the pixels in each picture, are visible to us. The process is open and free from secrets created by proprietary software, which makes the data transparent and ideal for scientific recording.

2) With open source we have the ability to apply our own scripts and transform the data in ways that are specific to our needs. This frees us from the constraints placed on us by proprietary software.

We discovered, however, that Meshlab lacked features, which made editing and joining difficult and time consuming.

3D mesh viewed in Meshlab

Just as in life, going back to change past mistakes is impossible in Meshlab.

1) There is no undo button. Professional 3D mesh editors all offer a non-destructive editing environment, but any change made in Meshlab is final. If a mistake is made, it is necessary to reload from your last save point.

2) It is not well documented. There is an instruction manual, but there is not a very big community of users or a large number of tutorials available, which makes learning the software difficult.

After researching various alternatives we decided that Blender was our best bet for mesh editing and joining. Blender is free, open source, cross-platform compatible, extremely robust, contains a wealth of documentation, and boasts a very helpful learning community. Since Blender contains so many features, there is a steep learning curve in becoming proficient at mesh manipulation, but the time spent learning the software is well worth it. Thanks to Blender, we will soon be putting up some very high quality 3D meshes of the Abiquiu house on the blog!


3D mesh with texture from Abiquiu main entrance.

So, to be clear for those of you who are Meshlab enthusiasts and are screaming at your computer, “These guys are just noobs! Meshlab is awesome if you know how to use it!” – yes, you are right, but that is exactly our point. We ARE noobs, as are all of the people who will be using this software in their work. Blender also has a steep learning curve, but there is plenty of help: we have yet to have a question about the software that wasn’t answered in print as well as with accompanying video tutorials.

So while we will keep a link to Meshlab on our equipment and software page, Blender is our recommendation for anyone editing meshes exported from Photoscan.

Week 4 Overview (August 3rd, 2012)

This week consisted solely of photogrammetry. Georgia O’Keeffe’s home is quite large and we still have a lot of images to capture of the interior; however, on August 1st we finished capturing the entire exterior of the house, not including the courtyard and studio. Every time we are on site, the process goes more smoothly and quickly. For the entire perimeter of the house we have taken over 1200 photographs that need to be processed.

The exterior of the front side of Georgia O’Keeffe’s home.

Joey and Greg capture the outside of the kitchen.

Before this week, we would place two rows of Dennison dots, all 4 feet apart. We figured this would allow for accurate scale within the mesh and also help the software find overlapping points on the highly textured wall. This week we reduced the number of rows to one, all dots the same distance from the ground and 4 feet apart. This proved sufficient for making accurate measurements within the mesh, and it saved a lot of time too!

Two rows of dennison dots.

One row of dennison dots.

We began processing our images and got the highest quality mesh to date! We did several tests on what image quality is necessary for our own documentation. Using Photoshop CS6 we exported all the JPEGs at quality 10; each image was around 10-12MB. We attempted to export at quality 8, but those images were only 3-5MB and the resulting point cloud was too sparse. We processed the quality-10 images, and in PhotoScan we built the geometry at medium quality. Here are the comparisons:

JPEG: Photoshop exported 10 quality
PhotoScan Geometry: Medium quality
Number of Photos: 38 pictures
Time to Process: 1 Hour

JPEG: Photoshop exported 10 quality
PhotoScan Geometry: High quality
Number of Photos: 38 pictures
Time to Process: 7 Hours

There is a significant processing difference when adjusting the geometry quality: at medium it took an hour and at high it took seven hours. We also compiled a mesh of 84 images at high quality and left it overnight; it was finished by morning, but we don’t know exactly how long that batch took. Comparing the textured meshes in PhotoScan showed some difference – the medium mesh was more pixelated than the high as you zoomed in. Ultra quality would take several days to process, so we haven’t tested it yet.

Mesh with a medium quality geometry

Mesh with a high quality geometry.

It would be of great help if we could create a queue to process in PhotoScan. We could then leave several “chunks” to process over a weekend and clean the meshes up during the week. We will have to request this feature from Agisoft!

Here are some of the 3D renderings we got of the front of the house!

The wireframe PhotoScan compiled.