Capture Process

When capturing images at close range with a wide angle lens, depth of field may be limited and focal distance may be critical for recording condition details. Holding a relatively constant distance from the subject plays an important role in meaningful data capture.

1) Camera distance from subject, choice of lens, and Dennison dots should be set up as per the previous section (Capture Setup).

2) Establish an f-stop that provides enough depth of field to keep the whole subject in focus.

3) Set the focus and the depth of field. Once the focus has been established, tape the lens's focus ring so that the focus stays set and does not accidentally change.

4) Frame your shot starting with your camera horizontal. Establish a point on the subject located at the center of the frame. Centering on this point for each shot, take three consecutive pictures, changing the orientation of your camera for each one: one horizontal (180°), one vertical with the camera rotated to 90°, and one in the opposite vertical position at 270°. This changing of orientation allows the software to correct for lens distortion.

5) Using either your center point of focus or the Dennison dots as a reference, move along your subject by roughly one third of the frame width. If done correctly, about two thirds of your previous frame will be included in the new frame, comfortably above the 60% overlap the software needs, and only about a third of new subject will be introduced into the next series of pictures.

6) Repeat steps 4-5 until the entire subject has been photographed.
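The move-and-overlap arithmetic behind steps 4-6 can be sketched in a few lines of Python. This is an illustrative helper of our own (the function name and numbers are not part of the original workflow), assuming a lateral step of about one third of the frame width and three orientations per camera position:

```python
import math

def plan_capture(subject_width_m, frame_width_m, step_fraction=1/3):
    """Estimate camera positions and shots for one photogrammetry pass.

    Moving ~1/3 of the frame width between positions leaves roughly
    2/3 overlap between consecutive frames, comfortably above 60%.
    Each position is photographed at three orientations (180/90/270).
    """
    step_m = frame_width_m * step_fraction
    # Positions needed so the frames tile the subject's full width.
    positions = max(math.ceil((subject_width_m - frame_width_m) / step_m) + 1, 1)
    return {
        "step_m": step_m,                 # lateral move per position
        "positions": positions,
        "total_shots": positions * 3,     # three orientations each
        "overlap": 1 - step_fraction,     # fraction of frame reused
    }
```

For example, a 6.2 m wall framed 1.5 m at a time plans out to 11 positions at roughly half-metre steps, 33 shots in all.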

Capture Setup

1) The first step is to consider your subject. What is it you are trying to study? Whether you are trying to capture cracks, dents, minute changes in topology, or macroscopic changes in topology, the absolute smallest detail you need to capture can be no smaller than 25 pixels. Taking your camera's resolution into consideration, calculate whether the details you are trying to study span at least 25 pixels. This can be done by taking test shots, then bringing those test shots into a photo editor such as Photoshop, or even a free program such as GIMP or Microsoft Paint. All of these programs can measure distance in pixels.

If there is not sufficient detail, either move closer to the subject, change lenses, or do both so that optimal resolution is achieved.
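Instead of eyeballing test shots, the 25-pixel rule can also be checked arithmetically from the image width in pixels and the real-world width of the framed area. A minimal sketch, with hypothetical function names; the 5616-pixel width in the example matches the Canon 5D Mark II used elsewhere in this project, but substitute your own camera's resolution:

```python
def detail_pixels(image_width_px, frame_width_mm, detail_mm):
    """Pixels spanned by a real-world detail of detail_mm millimetres."""
    px_per_mm = image_width_px / frame_width_mm
    return detail_mm * px_per_mm

def detail_is_resolvable(image_width_px, frame_width_mm, detail_mm, min_px=25):
    """True when the detail meets the 25-pixel minimum described above."""
    return detail_pixels(image_width_px, frame_width_mm, detail_mm) >= min_px
```

A 5 mm crack framed across a 1 m wide area on a 5616-pixel-wide sensor spans about 28 pixels, so it just clears the threshold; a 2 mm crack at the same distance does not, so you would move closer or change lenses.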

One row of Dennison dots.

2) Once your distance from the subject and camera lens are chosen, a horizontal line of known length needs to be placed in the frame with your subject. This allows for accurate distance measuring and scaling in the 3D mesh. A great way to do this is to place Dennison dots equidistant and level all the way across the subject. Not only do Dennison dots placed onto the subject help in scaling and measuring along the mesh, they also help in the capture process by giving visual cues for how far to move along the subject. For this to be effective, the distance between dots should be roughly one third of the total horizontal distance of your photo.

Joey and Greg capture the outside of the kitchen.

3) The distance from the subject to the camera and the distance in between your Dennison Dots should now be established.  Pick a line that will run the horizontal length of your subject and place the Dennison Dots along this line.  Make sure that these dots are not only equidistant, but are also perfectly level.

Camera Setup

10) Determine correct lens size for framing and focus.

11) Align black spheres accordingly.
11a) Align spheres in such a way that will allow you to easily crop them out of the photo.
11b) Check that spheres are the correct size. They should occupy at least 250 pixels of the picture. If unsure, take a picture and open it in photo editing software to measure the sphere's diameter.
11c) The center of the sphere should be level with the surface of the subject being captured.

12) Attach the camera to the computer using a USB cable. Since touching the camera is not allowed, all shots will be taken remotely.
12a) Start camera operating software and open a live view window.
12b) Adjust ambient lighting in the room to the minimum amount that still allows clear vision of the subject and possible obstructions in the room. Subject and user safety is the top priority.

13) Focus the shot and take a picture, then open photo in editing software.
13a) Check the histogram to make sure there is an even balance of color and light.
13b) Continue taking test shots, adjusting f-stop, shutter speed, flash intensity, and ambient lighting, until well-lit, even coloring is achieved in the test photo.
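The histogram check in 13a can also be approximated numerically once you have the test shot's luminance values (extracted with any image library). The sketch below is our own illustration, not the project's software; it operates on a plain list of 8-bit values rather than an image file, and the clip thresholds are assumptions:

```python
def exposure_report(pixels, clip_lo=5, clip_hi=250):
    """Summarize 8-bit luminance values (0-255) from a test shot.

    A well-lit, evenly colored frame should have a mean near mid-gray
    and only tiny fractions of clipped shadows or highlights.
    """
    n = len(pixels)
    mean = sum(pixels) / n
    shadows = sum(1 for p in pixels if p <= clip_lo) / n     # crushed blacks
    highlights = sum(1 for p in pixels if p >= clip_hi) / n  # blown whites
    return {"mean": mean,
            "clipped_shadows": shadows,
            "clipped_highlights": highlights}
```

If the report shows a mean far from mid-gray or more than a few percent clipping at either end, adjust f-stop, shutter speed, or flash intensity and reshoot.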

Equipment Setup

1) Check batteries in camera and flash. Once the RTI capture sequence starts, touching the camera or moving it in any way is prohibited.

2) Assemble tripods and light stands.
2a) Be sure to secure or weigh down tripods with sandbags.

3) Attach the flash, battery pack, and receiver to the monopod.

4) Test flash to make sure wireless signal is working.

5) Secure camera to tripod making sure it is level.

6) Set up the subject to be photographed.

7) Measure art using a measuring tape.

8) Measure and cut a length of string at least twice the length of the widest part of the subject being photographed.

9) Attach this string to the end of the flash.
9a) This will not only ensure correct distance of the flash from the subject, but will also help in aiming the flash in the correct direction.

Digital Photographic 3D Imaging for Preservation: What’s the Buzz?

Why 3D imaging for conservation and preservation documentation?

It would hardly be worth the effort of learning and building skills in 3D digital-camera image capture and processing if 3D images didn't offer conservators and preservation professionals better and more actionable information than 2D film, digital and IR/UV imaging. Most people know 3D imaging either from the gaming and motion picture special effects industries or from splashy, well-funded 3D laser scanning projects of high profile art objects or heritage sites. But using 3D, digital photographic images to document and monitor conditions? This is a slightly more obscure use of imaging technology and one that, it turns out, is far more practical and adoptable.

What do you have in digital, 3D photographic documentation that you do not have with 2D documentation?

  • In addition to all the rich color data of a 2D digital photographic image, 3D, point-in-time surrogates contain quantifiable contours, volumes, textures, forms and transitions from one plane or material to another that can be continuously and seamlessly viewed and measured from any vantage point around the object.
  • These digital 3D surrogates can be saved, recalled and enhanced by future software improvements.
  • The advantages of 3D digital surrogates in locating, detailing and highlighting deterioration, damage and condition changes are legion. The obvious major advantage is that you can capture images using any resolution and focal distance that can be used for 2D digital photographs, but you can view the damaged or undamaged surfaces and volumes in ways that truly replicate the stereo view of a real-time human examiner.
  • You can also view the damaged or undamaged surfaces and volumes in ways that ENHANCE the stereo views of a real-time human examiner.  Unlike looking at the actual surface or object, a viewer can selectively view the details of the object by removing the color data, changing the virtual light source, changing the virtual reflectivity of the surface features or colorizing volumetric levels.
  • Subtle changes that human conservators are trained to look for and detect can be observed, captured, quantified and compared in ways that are far more revealing than 2D digital images.
  • 3D digital photographic surrogates, taken over time, reveal greater detail about the rate and extent of change to a feature – tiny soap micro-protrusions in an oil paint brushstroke, for example, or the slow, volumetric sag of a 20 meter earthen adobe wall.

Conservators always need to answer and document several key questions about anything they are trying to preserve:

  • Where is the damage or deterioration located?
  • What is the nature, size, extent and apparent character of the damage compared to the surrounding, undamaged areas?
  • What properties of the undamaged materials or structures have been lost or diminished and what degree of recovery is required to arrest deterioration and impart stability and functionality?
  • Are the conditions actively changing or deteriorating and at what rate?
  • What are the causes or precipitating events that result in, or accelerate deterioration and damage?
  • Do treatment strategies arrest, slow or accelerate the rate of deterioration?

3D, digital photographic surrogates greatly enhance our ability to identify, document and monitor the answers to these questions in ways that 2D photographic images cannot.

In this 8-week project we wanted to determine:

  • If off-the-shelf, high-end, consumer-grade digital cameras and open-format digital photographs, combined with a consumer-grade laptop computer, could be used to capture and assemble detailed, data-rich 3D images.
  • If three 3D imaging capture and processing techniques – highlight reflectance transformation imaging (RTI), photogrammetry and structured light imaging – were mature enough to be used, right now, in the summer of 2012, to capture accurate, detailed and digitally-rich condition information for works of art, historic objects and heritage architectural sites and features.
  • If two graduate students and two collections technicians with no prior experience in 3D imaging of any kind could become fully conversant and self reliant in capturing and assembling 3D images in only 8 weeks, under the guidance of a conservator and 3D imaging engineers.
  • If the capture and processing metadata – the digital capture conditions and digital pathways and transformations leading to the assembly of condition-detail-rich 3D images – could be completely open-source and open-format, with no proprietary file formats or data pathways. In this way, scientifically valid digital lab notebooks can be kept and evaluated for their validity, replicability and value. Further, with no proprietary files or computational pathways, all steps and all images belong to and reside with the public trust agency of the resource, rather than a private or commercial entity with no legal, public trust fiduciary requirements and restrictions.
  • If the digital camera images could be captured and formatted archivally, using ISO digital standards (Digital Negative or DNG) so that the images could always be used to assemble 3D digital surrogates far into the future, regardless of future improvements or changes in digital cameras, 3D assembly and editing software or computer operating systems and file formats.
  • If the digital photographic, 3D files could be computationally compared by computer software so that small, slow, incremental changes in condition, often missed by museum and heritage site professionals, could be recognized by computer software and highlighted so that conservators could make better assessments about the active and unstable nature of damage and deterioration.

The Evolution of Our Photogrammetry Workflow

Since we began working with photogrammetry, we have developed a workflow that we follow every time we're on site. That workflow has definitely changed and developed over the past several weeks. The process has become much smoother, and what used to take 3-4 hours to shoot now takes only 1.5-2 hours. Here are some of the changes we have made.

Topics covered within this post:

1. Tripod – Do you need it?
2. Creating Accurate Measurements
3. Image Composition
4. 60% Image Overlap
5. Calibration Sequence
6. Photogrammetry Around Corners or Objects

1. Tripod – Do you need it?

From what we have discovered, not really. Photogrammetry captures require a FIXED FOCUS and a FIXED APERTURE for all three capture orientations (90°, 180° and 270°). This means that your depth of field will be non-negotiable after you begin shooting. The only thing you can choose is your exposure time. You want an f-stop of f/11 or below to minimize diffraction softening, but high enough that your depth of field keeps everything in the Z dimension (forward and back) in focus. If you have enough ambient or artificial light to keep your shutter speed above 1/40th of a second, a tripod will not be necessary, and hand-holding the camera, even with the inevitable shifts in camera orientation and slight distance irregularities, can work very well. But remember: moving closer to or further from the subject than the limits of the depth of field, or falling ambient light levels, can result in unfocused regions. Blurry pixels contribute nothing in 3D or in 2D. Marking the optimal distance from the subject with a string or chalk line, taping your lens focus ring to ensure it does not move, checking your f-stop to be certain you have not moved it, and checking your focus at EVERY capture will save you a lot of reshooting, whether you use a tripod or not. That said, moving the tripod laterally along a chalk line took up much of our time, especially since we needed to overlap our images by at least 60%. Not only were we making sure the camera was almost always the same distance from the subject, but we always made sure it was level.
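The hand-holding rule of thumb above (f/11 or wider, shutter at 1/40 s or faster) reduces to a one-line check. This is a sketch with hypothetical names of our own, taking the exposure time in seconds:

```python
def handheld_ok(shutter_s, f_stop):
    """True when the rule of thumb says a tripod can be skipped:
    aperture no narrower than f/11 and exposure 1/40 s or shorter."""
    return f_stop <= 11 and shutter_s <= 1 / 40
```

So 1/60 s at f/8 passes, while 1/25 s at f/8 fails: add light or mount the tripod.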

Dale adjusting the camera head

After many working days in the field and discovering just how powerful Agisoft PhotoScan is, we have decided the tripod is unnecessary except in low-ambient-lighting conditions. As long as you walk along a measured line, keeping the distance between the subject and camera consistent, PhotoScan will have no problem creating the 3D mesh. We started handholding the camera a few weeks ago and the results have still been superb. PhotoScan is able to detect where each camera was positioned in relation to the subject AND at what angle. If the camera is angled slightly upward or downward, this doesn't affect the processing at all. In fact, PhotoScan makes up for it and still creates an accurate digital surrogate. Depending on your subject matter, it still may be necessary to use a tripod or monopod, but we haven't experienced any true need for it yet in photogrammetry.

Processed mesh of the roofless room. Images were captured handheld.

2. Creating Accurate Measurements

If one needs accurate measurements within the mesh of a certain subject, Dennison dots have proved to be extremely valuable. Initially we were placing two parallel rows of Dennison dots along our subjects (mainly walls and rooms). Each dot was 4 feet away from every other dot both vertically and horizontally. When seen within the mesh, the dots reveal the exact scale of the object. Based on the distance between the dots, one is able to measure everything else within the mesh and know its exact dimensions. If the distance between the dots is exactly 4 feet, or 122 cm, then any distance in the space is some fraction of 122 cm. 20% of the distance is 9.6 inches or 24.4 cm; 1% is 0.48 inches or 1.22 cm.
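The scaling step amounts to one division: the known 122 cm between dots fixes the centimetres-per-mesh-unit factor, and every other measurement follows from it. A sketch, with names and sample values of our own:

```python
def mesh_scale(dot_distance_mesh_units, dot_distance_cm=122.0):
    """Centimetres per mesh unit, from the known Dennison-dot spacing."""
    return dot_distance_cm / dot_distance_mesh_units

def measure_cm(distance_mesh_units, scale_cm_per_unit):
    """Convert any distance measured in the mesh to centimetres."""
    return distance_mesh_units * scale_cm_per_unit
```

If the dots sit 2.5 mesh units apart, the scale is 48.8 cm per unit, and a crack measuring 0.05 units in the mesh is about 2.4 cm long.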

The question becomes: what spatial resolution do you want to be able to resolve? If you need to resolve characteristics as small as 5 mm, then you need to be able to clearly resolve 5 mm across several pixels in your initial RAW capture, and your scale dots should be less than 2 feet apart. For our first set of 3D condition captures at O'Keeffe's historic home in Abiquiu, we went from using two rows, two feet apart, to using one row. The only difference it made was making the process quicker.

3. Image Composition

When capturing our images we try to keep the whole subject tight within the frame. This speeds up the masking step in processing (see tutorial) by removing unnecessary objects and background from the photo.

The subject tight within the frame.

However, this isn’t always practical when working in tighter spaces. We had this experience last week when large cacti blocked our path.  We just had to do our best at the start of the shoot to plan ways of moving around them.  We could move the whole capture distance way-back and then shoot at 15 degrees to overlap areas behind the cacti, but this would include tons of sky and foreground.  Getting closer to the subject, capturing a smaller area in each shot and taking more photographs to cover the desired area was the best answer.   A 24mm lens helps when forced into tight places because it is a wide-angle lens. Wide angle lenses capture a broader area of the subject at a much closer distance than a 50mm lens. Sometimes you’ll need to take several photos vertically to capture the whole subject. And the distortion of the subject is much greater around the outer edges of the lens and capture area. But, thanks to the calibration sequence – taking the same area at 90°, 180° and 270°, the software will still process them without a problem as long as there’s still 60% overlap in the sequence of each shooting orientation. Like we said before, when PhotoScan detects these points, it also detects the angle of the camera and makes up for it.

4. 60% Image Overlap

When you're making sure every sequential image overlaps at least 60%, there are a couple of ways to do it. Look through the viewfinder and snap your shot, then locate a point on the subject at 1/3 of the frame and move until that same point is at 2/3 of the frame. Having Dennison dots helps in this regard if there isn't anything distinct on the wall. We use the focusing squares within the viewfinder of our Canon Mark II to assist in overlapping ~60%. They divide the view into thirds in both the horizontal and vertical orientations. We just find a feature at the center of our view, take the shot, and then make the same feature sit a third of the way from center, whichever way we are moving. Locate, shoot, move 33% and repeat!

Image depicting the 60% overlap between two sequential images.

5. Calibration Sequence

To calibrate the captured images and allow the software to correct for any lens distortion, we have to shoot photos at three different camera orientations per camera position before we move laterally. Our method of capturing the images at horizontal (the axis of the lens to the bottom of the camera at 180°) and two verticals (axis of the lens to the bottom of the camera at 90° and 270°) changed over a period of sessions. First we would shoot our horizontal images moving laterally parallel to the wall, say from the left edge of the wall to the right edge. Then we would come back and shoot the other two angles, each left to right again, for a total of 3 passes along the subject. When moving light and shadows rapidly changed the illumination of the subject area, we decided to begin shooting all three angles from the same spot, overlapping 60% based on the verticals.

Since we were overlapping our images based on the narrower, vertical view, we knew the horizontal images would overlap more than 60% due to the increased width of the capture area. However, this also meant taking more images than were absolutely necessary.

So whether you shoot all angles from the same spot, or each angle one sequence at a time, is up to your circumstances, and to whether you mind having extra horizontal images to process in your chunk.

6. Photogrammetry Around Corners or Objects

When moving around objects, or in our case, around an arced corner of a wall, the images have to be shot every 15 degrees. When doing photogrammetry on a smaller object, such as a sculpture or statue, the images must be captured every 15 degrees until a full 360 degree rotation is complete.

Photogrammetry around an object

In our case, going around a curved wall, we moved 15 degrees while maintaining our distance until we were once again parallel to the wall.
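The 15-degree rule gives a quick count of camera positions before you start walking the arc. A sketch (our own hypothetical helper, counting three orientations per position as in the calibration sequence):

```python
import math

def arc_positions(arc_degrees, step_degrees=15):
    """Camera positions needed to cover an arc at fixed angular steps."""
    if arc_degrees > 0 and arc_degrees % 360 == 0:
        return int(arc_degrees // step_degrees)  # a full loop closes on itself
    return math.ceil(arc_degrees / step_degrees) + 1

def arc_shots(arc_degrees, step_degrees=15):
    """Total photos, at three orientations per position."""
    return arc_positions(arc_degrees, step_degrees) * 3
```

A full 360° rotation around a sculpture works out to 24 positions (72 photos); a 90° curved corner like ours needs 7 positions.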

Hope you found this post useful!