Photoscan Processing


Capture Process

When capturing images at close range with a wide angle lens, depth of field may be limited and focal distance may be critical for recording condition details. Maintaining a relatively constant distance from the subject plays an important role in meaningful data capture.

1) Camera distance from subject, choice of lens, and Dennison Dots should be set up as established during the Capture Setup steps.

2) Establish an F-stop that provides enough depth of field to keep the entire subject in focus.

3) Set the focus and the depth of field. Once the focus has been established, tape the lens's focus ring so that the focus stays set and does not accidentally change.

4) Frame your shot starting with your camera horizontal.  Establish a point on the subject located at the center of the frame.  Centering on this point for each shot, take three consecutive pictures, changing the orientation of your camera for every picture: one orientation is horizontal (180º), one is a vertical orientation with the camera rotated to 90º, and the final is the opposite vertical position with the camera rotated to 270º.  This changing of orientation allows the software to correct for lens distortion.

5) Using either your center point of focus or the Dennison Dots as a reference, move along your subject by 30% of the frame width.  If done correctly, 70% of your previous frame will be included in your new frame and only 30% of new subject will be introduced into the next series of pictures.

6) Repeat steps 4-5 until the entire subject has been photographed.
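The stepping pattern in steps 5-6 can be planned ahead of time with simple arithmetic. The sketch below is illustrative only, using assumed example numbers (a 300 cm subject, a 60 cm-wide frame); the function name and values are hypothetical, not part of the original workflow.

```python
# Illustrative sketch: plan camera-station centers along a subject using the
# "move 30% of the frame per shot" rule from step 5. Example numbers only.
def capture_stations(subject_width_cm, frame_width_cm, step_fraction=0.3):
    """Return frame-center positions (cm) along the subject for each station."""
    step = frame_width_cm * step_fraction
    positions = []
    x = frame_width_cm / 2  # first frame starts flush with the subject's edge
    # continue until a frame reaches past the far edge of the subject
    while x < subject_width_cm - frame_width_cm / 2 + step:
        positions.append(x)
        x += step
    return positions

# Example: a 300 cm subject with a 60 cm-wide frame -> a station every 18 cm
print(capture_stations(300, 60))
```

Each successive frame then shares most of its coverage with the previous one, giving the photogrammetry software ample common features to match between shots.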

Capture Setup

1) The first step is to consider your subject.  What is it you are trying to study?  Whether you are trying to capture cracks, dents, minute changes in topology, or macroscopic changes in topology, the absolute smallest detail you need to capture can be no smaller than 25 pixels.  Taking your camera’s resolution into consideration, calculate whether the details you are trying to study span at least 25 pixels.  This can be done by taking test shots, then bringing those test shots into a photo editor such as Photoshop, or even a free paint program such as GIMP or Microsoft Paint.  All of these programs have the ability to measure distance in pixels.

If there is not sufficient detail, either move closer to the subject, change lenses, or do both so that optimal resolution is achieved.
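The 25-pixel check can also be estimated with arithmetic before (or alongside) test shots. A minimal sketch, assuming hypothetical example numbers (a 0.5 mm crack, a 300 mm-wide framed area, and a 5616-pixel-wide image):

```python
# Sketch of the 25-pixel rule: how many pixels does the smallest detail span?
def detail_pixels(detail_size_mm, frame_width_mm, image_width_px):
    """Pixels covered by a detail, given the real-world width the frame captures."""
    return image_width_px * detail_size_mm / frame_width_mm

# Example: a 0.5 mm crack in a 300 mm-wide frame on a 5616-pixel-wide image
px = detail_pixels(0.5, 300, 5616)
print(round(px, 1), px >= 25)  # prints: 9.4 False -> move closer or change lenses
```

In this assumed example the crack spans well under 25 pixels, so the camera must move closer or a longer lens must be used, exactly as described above.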

One row of Dennison Dots placed in a line.

2) Once your distance from the subject and camera lens are chosen, a horizontal line of known length needs to be placed in the frame with your subject.  This allows for accurate distance measuring and scaling in the 3D mesh.  A great way to do this is to place Dennison Dots equidistant and level all the way across the subject.  Not only do Dennison Dots placed onto the subject help in scaling and measuring along the mesh, but they also help in the capture process by giving visual cues for the amount of distance to move along the subject.  For this to be effective, the distance between dots should be roughly one third of the total horizontal distance of your photo.

Joey and Greg capture the outside of the kitchen.

3) The distance from the subject to the camera and the distance in between your Dennison Dots should now be established.  Pick a line that will run the horizontal length of your subject and place the Dennison Dots along this line.  Make sure that these dots are not only equidistant, but are also perfectly level.
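The one-third spacing rule from step 2 can likewise be worked out before any dots go on the subject. A sketch with assumed example dimensions (a 300 cm subject framed 60 cm at a time); the helper name is hypothetical:

```python
# Sketch: dot spacing = one third of the horizontal frame coverage (step 2),
# plus the number of dots needed to span the whole subject.
def dot_layout(subject_width_cm, frame_width_cm):
    """Return (spacing_cm, dot_count) for a level line of Dennison Dots."""
    spacing = frame_width_cm / 3
    count = int(subject_width_cm // spacing) + 1  # dots at both ends of the line
    return spacing, count

spacing, count = dot_layout(300, 60)
print(spacing, count)  # prints: 20.0 16
```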

Camera Setup

10) Determine the correct lens focal length for framing and focus.

11) Align the black reference spheres accordingly.
11a) Align the spheres in a way that will allow you to easily crop them out of the photo.
11b) Check that the spheres are the correct size. They should occupy at least 250 pixels of the picture. If unsure, take a picture and export the photo into photo editing software to measure the sphere’s diameter.
11c) The center of the sphere should be level with the surface of the subject being captured.
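The 250-pixel check in step 11b is the same arithmetic as the resolution check in Capture Setup. A sketch with assumed example numbers (a 25 mm sphere, a 400 mm-wide framed area, a 5616-pixel-wide image); none of these values come from the original setup:

```python
# Sketch of step 11b: estimate the sphere's on-image diameter in pixels.
def sphere_pixels(sphere_diameter_mm, frame_width_mm, image_width_px):
    """Approximate pixel diameter of a reference sphere in the frame."""
    return image_width_px * sphere_diameter_mm / frame_width_mm

px = sphere_pixels(25, 400, 5616)
print(round(px), px >= 250)  # prints: 351 True
```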

12) Attach camera to computer using USB cable. Since touching the camera is not allowed, all shots will be taken remotely.
12a) Start camera operating software and open a live view window.
12b) Adjust ambient lighting in the room to the minimum amount that will still allow clear vision of subject and possible obstructions in the room. Subject and user safety is the top priority.

13) Focus the shot and take a picture, then open photo in editing software.
13a) Check the histogram to make sure there is an even balance of color and light.
13b) Continue taking test shots, adjusting F-stop, shutter speed, flash intensity, and ambient lighting until well-lit, even coloring is achieved in the test photo.
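The histogram check in step 13a can be made semi-quantitative. Below is a rough sketch of one assumed heuristic (not the project's actual procedure): flag a channel as clipped if more than about 1% of its pixels pile up in the darkest or brightest bin of a 256-bin histogram, of the kind most photo editors can export.

```python
# Sketch of a clipping heuristic for step 13a, using a synthetic histogram.
def exposure_ok(hist, clip_fraction=0.01):
    """hist: 256 per-bin pixel counts for one channel.
    True if fewer than clip_fraction of pixels sit in bin 0 or bin 255."""
    total = sum(hist)
    return hist[0] / total < clip_fraction and hist[255] / total < clip_fraction

# Example: a well-exposed synthetic channel vs. one with blown highlights
good = [0] * 256
for i in range(60, 200):
    good[i] = 100
blown = good[:]
blown[255] = 3000  # a large spike of pure-white pixels
print(exposure_ok(good), exposure_ok(blown))  # prints: True False
```

A test shot that fails a check like this would call for another round of F-stop, shutter speed, or lighting adjustments as described in 13b.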

Equipment Setup

1) Check batteries in camera and flash. Once the RTI capture sequence starts, touching the camera or moving it in any way is prohibited.

2) Assemble tripods and light stands.
2a) Be sure to secure or weigh down tripods with sandbags.

3) Attach the flash, battery pack, and receiver to the monopod.

4) Test flash to make sure wireless signal is working.

5) Secure camera to tripod making sure it is level.

6) Set up the subject to be photographed.

7) Measure the artwork using a measuring tape.

8) Measure and cut a length of string at least twice the length of the widest part of the subject being photographed.

9) Attach this string to the end of the flash.
9a) This will not only ensure correct distance of the flash from the subject, but will also help in aiming the flash in the correct direction.

Digital Photographic 3D Imaging for Preservation: What’s the Buzz?

Why 3D imaging for conservation and preservation documentation?

It would hardly be worth the effort of learning and building skills in 3D digital image capture and processing if 3D images didn’t offer conservators and preservation professionals better and more actionable information than 2D film, digital and IR/UV imaging.  Most people know 3D imaging either from the gaming and motion picture special effects industries or from splashy, well-funded 3D laser scanning projects of high-profile art objects or heritage sites. But using 3D digital photographic images to document and monitor conditions? This is a slightly more obscure use of imaging technology and one that, it turns out, is far more practical and adoptable.

What do you have in digital, 3D photographic documentation that you do not have with 2D documentation?

  • In addition to all the rich color data of a 2D digital photographic image, 3D, point-in-time surrogates contain quantifiable contours, volumes, textures, forms and transitions from one plane or material to another that can be continuously and seamlessly viewed and measured from any vantage point around the object.
  • These digital 3D surrogates can be saved, recalled and enhanced by future software improvements.
  • The abilities of 3D digital surrogates to locate, detail and highlight deterioration, damage and condition changes are legion. The obvious major advantage is that you can capture images using any resolution and focal distance that can be used for 2D digital photographs, but you can view the damaged or undamaged surfaces and volumes in ways that truly replicate the stereo view of a real-time human examiner.
  • You can also view the damaged or undamaged surfaces and volumes in ways that ENHANCE the stereo views of a real-time human examiner.  Unlike looking at the actual surface or object, a viewer can selectively view the details of the object by removing the color data, changing the virtual light source, changing the virtual reflectivity of the surface features or colorizing volumetric levels.
  • Subtle changes that human conservators are trained to look for and detect can be observed, captured, quantified and compared in ways that are far more revealing than 2D digital images.
  • 3D digital photographic surrogates, taken over time, reveal greater detail about the rate and extent of change to a feature – tiny soap micro-protrusions in an oil paint brushstroke, for example, or the slow, volumetric sag of a 20 meter earthen adobe wall.

Conservators always need to answer and document several key questions about anything they are trying to preserve:

  • Where is the damage or deterioration located?
  • What is the nature, size, extent and apparent character of the damage compared to the surrounding, undamaged areas?
  • What properties of the undamaged materials or structures have been lost or diminished and what degree of recovery is required to arrest deterioration and impart stability and functionality?
  • Are the conditions actively changing or deteriorating and at what rate?
  • What are the causes or precipitating events that result in, or accelerate deterioration and damage?
  • Do treatment strategies arrest, slow or accelerate the rate of deterioration?

3D, digital photographic surrogates greatly enhance our ability to identify, document and monitor the answers to these questions in ways that 2D photographic images cannot.

In this 8-week project we wanted to determine:

  • If off-the-shelf, high-end, consumer-grade digital cameras, open-format digital photographs, combined with a consumer-grade lap-top computer could be used to capture and assemble detailed, data-rich 3D images.
  • If three 3D imaging capture and processing techniques – highlight reflectance transformation imaging (RTI), photogrammetry and structured light imaging – were mature enough to be used, right now, in the summer of 2012, to capture accurate, detailed and digitally-rich condition information for works of art, historic objects and heritage architectural sites and features.
  • If two graduate students and two collections technicians with no prior experience in 3D imaging of any kind could become fully conversant and self reliant in capturing and assembling 3D images in only 8 weeks, under the guidance of a conservator and 3D imaging engineers.
  • If the capture and processing metadata – the digital capture conditions and digital pathways and transformations leading to the assembly of condition-detail-rich 3D images – could be completely open-source and open-format, with no proprietary file formats or data pathways. In this way, scientifically valid digital lab notebooks can be kept and evaluated for their validity, replicability and value.  Further, with no proprietary files or computational pathways, all steps and all images belong to and reside with the public trust agency of the resource, rather than a private or commercial entity with no legal, public trust fiduciary requirements and restrictions.
  • If the digital camera images could be captured and formatted archivally, using ISO digital standards (Digital Negative or DNG) so that the images could always be used to assemble 3D digital surrogates far into the future, regardless of future improvements or changes in digital cameras, 3D assembly and editing software or computer operating systems and file formats.
  • If the digital photographic, 3D files could be computationally compared by computer software so that small, slow, incremental changes in condition, often missed by museum and heritage site professionals, could be recognized by computer software and highlighted so that conservators could make better assessments about the active and unstable nature of damage and deterioration.