When capturing images at close range with a wide angle lens, depth of field may be limited and focal distance may be critical for recording condition details. Holding a relatively constant distance from the subject plays an important role in meaningful data capture.
1) Camera distance from subject, choice of lens, and Dennison Dots should be set up as described in the previous step.
2) Establish an F-stop that provides full depth of field focus.
3) Set the focus and the depth of field. Once the focus has been established, tape the lens so that the focus stays set and does not accidentally change.
4) Frame your shot starting with your camera horizontal. Establish a point on the subject at the center of the frame. Centering on this point for each shot, take three consecutive pictures, changing the orientation of your camera for each one: one horizontal at 180º, one vertical with the camera rotated to 90º, and one in the opposite vertical position with the camera rotated to 270º. Changing orientation this way allows the software to correct for lens distortion.
5) Using either your center point of focus or the Dennison Dots as a reference, move along your subject by 30% of the frame width. If done correctly, 70% of your previous frame will be included in your new frame and only 30% of new subject will be introduced into the next series of pictures.
6) Repeat steps 4-5 until the entire subject has been photographed.
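The 30% stepping rule in the steps above amounts to simple arithmetic, and can be sketched in a few lines of Python. The numbers in the example (a 100 cm subject and a 30 cm frame width) are hypothetical, not project measurements.

```python
def shot_positions(subject_width_cm, frame_width_cm, step_fraction=0.30):
    """Return the left edge of each frame, stepping by a fraction of the
    frame width so that consecutive frames overlap by roughly 70%."""
    step = frame_width_cm * step_fraction
    positions = [0.0]
    # Keep stepping until the frame's right edge passes the end of the subject.
    while positions[-1] + frame_width_cm < subject_width_cm:
        positions.append(round(positions[-1] + step, 2))
    return positions

# Hypothetical example: a 100 cm subject photographed with a 30 cm wide frame.
# Each position still gets the three orientations described in step 4.
positions = shot_positions(100, 30)
print(positions)
```

Each returned position is a camera station; multiply the count by three for the total number of pictures, since every station is shot in all three orientations.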
1) The first step is to consider your subject. What is it you are trying to study? Whether you are trying to capture cracks, dents, minute changes in topology, or macroscopic changes in topology, the absolute smallest detail you need to capture can be no smaller than 25 pixels. Taking your camera’s resolution into consideration, calculate whether the details you are trying to study span at least 25 pixels. This can be done by taking test shots, then bringing those test shots into a photo editor such as Photoshop, or even a free paint program such as GIMP or Microsoft Paint. All of these programs can measure distance in pixels.
If there is not sufficient detail, either move closer to the subject, change lenses, or do both so that optimal resolution is achieved.
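The 25-pixel rule above can also be estimated before taking test shots. This is a minimal sketch assuming the sensor's horizontal resolution is spread evenly across the framed width; the camera and subject numbers are hypothetical examples, not project equipment.

```python
def detail_pixels(detail_mm, frame_width_mm, sensor_width_px):
    """Pixels spanned by a detail, assuming the sensor's horizontal
    resolution is spread evenly across the framed width."""
    return detail_mm * sensor_width_px / frame_width_mm

# Hypothetical example: a 2 mm crack, a 6000 px wide sensor,
# and 500 mm of subject across the frame.
px = detail_pixels(2, 500, 6000)
print(px, px >= 25)
```

In this example the crack spans only 24 pixels, so you would need to move closer or change lenses, exactly as described above.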
2) Once your distance from the subject and camera lens are chosen, a horizontal line of known length needs to be placed in the frame with your subject. This allows for accurate distance measuring and scaling in the 3D mesh. A great way to do this is to place Dennison Dots equidistant and level all the way across the subject. Not only do Dennison Dots placed onto the subject help in scaling and measuring along the mesh, they also help in the capture process by giving visual cues for how far to move along the subject. For this to be effective, the distance between dots should be roughly one third of the total horizontal distance of your photo.
3) The distance from the subject to the camera and the distance between your Dennison Dots should now be established. Pick a line that will run the horizontal length of your subject and place the Dennison Dots along it. Make sure that these dots are not only equidistant, but also perfectly level.
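The one-third spacing rule for the dots can be sketched the same way. The 100 cm subject and 30 cm frame width below are hypothetical examples.

```python
def dot_spacing(frame_width_cm):
    """Dennison Dot spacing: roughly one third of the frame width."""
    return frame_width_cm / 3

def dot_positions(subject_width_cm, frame_width_cm):
    """Equidistant dot positions along the horizontal line on the subject."""
    spacing = dot_spacing(frame_width_cm)
    count = int(subject_width_cm // spacing) + 1
    return [round(i * spacing, 2) for i in range(count)]

# Hypothetical example: dots across a 100 cm subject, 30 cm frame width.
spots = dot_positions(100, 30)
print(spots)
```

With a 30 cm frame, each dot lands 10 cm apart, so each photo always contains about three dots as visual cues.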
10) Determine correct lens size for framing and focus.
11) Align black spheres accordingly.
11a) Align spheres in such a way that will allow you to easily crop them out of the photo.
11b) Check that the spheres are the correct size. They should occupy at least 250 pixels of the picture. If unsure, take a picture and export the photo into photo-editing software to measure the sphere’s diameter.
11c) The center of the sphere should be level with the surface of the subject being captured.
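The 250-pixel sphere check in step 11b is the same resolution arithmetic in reverse: given the framed width and sensor resolution, it gives the smallest physical sphere that will span 250 pixels. The numbers below are hypothetical, not project equipment.

```python
def min_sphere_diameter_mm(frame_width_mm, sensor_width_px, min_px=250):
    """Smallest physical sphere diameter that will span min_px pixels,
    assuming resolution is spread evenly across the framed width."""
    return min_px * frame_width_mm / sensor_width_px

# Hypothetical example: 500 mm of subject across a 6000 px wide sensor.
diameter = min_sphere_diameter_mm(500, 6000)
print(diameter)
```

Here any sphere smaller than about 21 mm would fail the 250-pixel check, so measuring in editing software as described in step 11b remains the definitive test.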
12) Attach camera to computer using USB cable. Since touching the camera is not allowed, all shots will be taken remotely.
12a) Start camera operating software and open a live view window.
12b) Adjust ambient lighting in the room to the minimum amount that will still allow clear vision of subject and possible obstructions in the room. Subject and user safety is the top priority.
13) Focus the shot and take a picture, then open the photo in editing software.
13a) Check the histogram to make sure there is an even balance of color and light.
13b) Continue taking test shots, adjusting F-stop, shutter speed, flash intensity, and ambient lighting until well-lit, even coloring is achieved in the test photo.
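The histogram check in step 13a can be sketched without any imaging library, using a plain list of 0-255 brightness values. The clipping thresholds here are arbitrary assumptions for illustration, not RTI requirements.

```python
def histogram_report(pixels, clip_threshold=0.01):
    """Summarize exposure: flag shadow or highlight clipping if more than
    clip_threshold of the pixels sit at the extremes of the 0-255 range."""
    n = len(pixels)
    shadows = sum(1 for p in pixels if p <= 2) / n
    highlights = sum(1 for p in pixels if p >= 253) / n
    return {
        "mean": sum(pixels) / n,
        "shadow_clipped": shadows > clip_threshold,
        "highlight_clipped": highlights > clip_threshold,
    }

# Hypothetical example: a midtone-heavy, well-exposed test frame.
report = histogram_report([60, 100, 128, 128, 150, 190, 210])
print(report)
```

In a real workflow the pixel list would come from the test shot itself; the point is only that "even balance" means a midtone-centered mean with neither end of the histogram clipped.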
1) Check batteries in camera and flash. Once the RTI capture sequence starts, touching the camera or moving it in any way is prohibited.
2) Assemble tripods and light stands.
2a) Be sure to secure or weigh down tripods with sandbags.
3) Attach the flash, battery pack, and receiver to the monopod.
4) Test flash to make sure wireless signal is working.
5) Secure camera to tripod making sure it is level.
6) Set up the subject to be photographed.
7) Measure art using a measuring tape.
8) Measure and cut a length of string at least twice the length of the widest part of the subject being photographed.
9) Attach this string to the end of the flash.
9a) This will not only ensure correct distance of the flash from the subject, but will also help in aiming the flash in the correct direction.
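The string rule in step 8 is a one-line calculation. The "at least twice the widest part" factor comes straight from the step above, while the 80 cm subject in the example is hypothetical.

```python
def flash_string_length_cm(widest_part_cm, factor=2.0):
    """Minimum string length per step 8: at least twice the widest
    part of the subject being photographed."""
    return widest_part_cm * factor

length = flash_string_length_cm(80)  # hypothetical 80 cm wide subject
print(length)
```

Pulling the string taut from the flash head then keeps the flash at a constant distance from the subject for every light position.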
Researching workflows for photogrammetry and reflectance transformation imaging is just part of the work we are doing on this project. The other part of our work is to test different equipment and software. One of the big decisions we have to make is in our 3D mesh editor. Once the photos for photogrammetry have been captured and processed in Agisoft’s Photoscan, you should have a 3D mesh. This 3D mesh is Photoscan’s best guess at what the actual object looks like. It is far from perfect and requires editing. Holes in the mesh might need to be filled or superfluous surfaces deleted. A building as large as Abiquiu has to be photographed over the course of several days. Each day’s photos are processed in Photoscan as a separate mesh. This creates many individual 3D meshes that need to be cleaned up and joined together into a single composite mesh. Initially we chose Meshlab as our mesh editor. It is free and open source, which is very important for two reasons:
1) Full disclosure of all data transformations. Any changes to the original pictures, across all of the pixels in each picture, are visible to us. The process is open and free from secrets created by proprietary software. This makes the data transparent and ideal for scientific recording.
2) With open source we have the ability to apply our own scripts and transform the data in ways that are specific to our needs. This frees us from the constraints placed on us by proprietary software.
We discovered, however, that Meshlab lacked features, which made editing and joining difficult and time consuming.
1) There is no undo button. Professional 3D mesh editors all share a non-destructive editing environment. Any changes made in Meshlab are final. If a mistake is made, it is necessary to reload from your last save point.
2) It is not well documented. There is an instruction manual, but there is not a very big community of users or a large number of tutorials available. This makes learning the software difficult.
After researching various alternatives we decided that Blender was our best bet for mesh editing and joining. Blender is free, open source, cross-platform, extremely robust, contains a wealth of documentation, and boasts a very helpful learning community. Since Blender contains so many features, there is a steep learning curve to becoming proficient at mesh manipulation, but the time spent learning the software is well worth it. Thanks to Blender, we will soon be putting up some very high-quality 3D meshes of the Abiquiu House on the blog!
So, to be clear, for those of you who are Meshlab enthusiasts screaming at your computers, “These guys are just noobs! Meshlab is awesome if you know how to use it!” Yes, you are right, but that is exactly our point. We ARE noobs, as are all of the people who will be using this software in their work. Blender does have a steep learning curve, but there is plenty of help. We have yet to ask a question about the software that wasn’t answered in print as well as with accompanying video tutorials.
So while we will keep a link to Meshlab on our equipment and software page, Blender is our recommendation for anyone editing meshes exported from Photoscan.