Does RTI give repeatable and reliable normals of objects taken at different times and positions to facilitate detection of changes?

In the LinkedIn discussion group Cultural Heritage Conservation Science: Research and Practice, in a discussion on 3-D digital imaging and photogrammetry for scientific documentation of heritage sites and collections (http://linkd.in/RZMpFj), Greg Bearman posed the following question:

“Does RTI give repeatable and quantitative set of normals good enough for looking for change? If I take an RTI set, rotate the object, let it warp a bit (flexible substrate), what do I get the second time? How do I align the datasets for comparison?

“What is the system uncertainty? I.e., if I just take repeated images of the same object without moving anything, how well does the RTI data line up? Second, suppose I take something that has some topography but is totally inflexible and cannot distort (make up a test object here!) and I do repeated RTI on it in different orientations? Can I make the data all the same? If you are going to use an imaging method to determine changes in an object, the first thing to do is understand the inherent noise and uncertainty in the measuring system. It could be some combination of software, camera or inherent issues with the method itself.”

I wrote back: “Hey Greg – I tried sending a response earlier last week but I do not see it!? Sorry. I’m on vacation until the 22nd – trying to recover and recharge. It is going well but I wanted to jot down my initial thoughts. One of my interns – Greg Williamson – is working on aberration-recognition software that can recognize and highlight changes in condition captured by different H-RTI computational image assemblies – obviously taken at different times, but also with different equipment and with randomly different highlight flash positions. It seems, initially, that normal reflection is normal reflection, regardless of object or flash position, and that the software correctly interpolates the 3D positions of surface characteristics regardless of the precise position of the flash, because it is accustomed to calculating the highlights both at the capture points and everywhere in between! Likewise, we have had promising results with photogrammetry when the resolution of the images used to create the mesh and solids is similar. What may turn out to be key is a calibration set that will allow correction of the various lens distortions that would naturally come from different lenses. I know Mark Mudge at Cultural Heritage Imaging has suggested that we begin taking a calibration set before RTI capture, as we do before photogrammetry. He may be working on incorporating a calibration correction into the highlight RTI Builder that CHI has made available. I’m sending this discussion along to the CHI forum at http://forums.cultur…ageimaging.org/ to see what others might have to add. When I return to work, I’ll ask Greg to give this some additional thought.”

Whadaya think, Greg?
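To make Greg's question concrete, here is a minimal sketch (not something we have deployed) of how one might quantify the difference between two RTI normal maps, assuming the two captures have already been registered to the same pixel grid – which is exactly the alignment problem Greg raises and is not solved here. The function name and arrays are illustrative only.

```python
import numpy as np

def angular_difference_map(normals_a, normals_b):
    """Per-pixel angular difference (in degrees) between two normal maps.

    Both inputs are H x W x 3 arrays of surface normals from RTI captures
    that are assumed to be registered to the same pixel grid already.
    """
    # Normalize defensively in case the fitted normals are not unit length.
    a = normals_a / np.linalg.norm(normals_a, axis=2, keepdims=True)
    b = normals_b / np.linalg.norm(normals_b, axis=2, keepdims=True)
    cos_theta = np.clip(np.sum(a * b, axis=2), -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))

# Repeating a capture of an unmoved object and looking at the mean and
# 95th-percentile angular difference is one way to estimate the inherent
# "system noise" Greg asks about:
# diff = angular_difference_map(capture1_normals, capture2_normals)
# print(diff.mean(), np.percentile(diff, 95))
```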


The Evolution of Our Photogrammetry Workflow

Since we began working with photogrammetry we have developed a workflow that we follow every time we’re on site. That workflow has changed considerably over the past several weeks. The process has become much smoother: what used to take 3-4 hours to shoot now takes only 1.5-2 hours. Here are some of the changes we have made.

Topics covered within this post:

1. Tripod – Do you need it?
2. Creating Accurate Measurements
3. Image Composition
4. 60% Image Overlap
5. Calibration Sequence
6. Photogrammetry Around Corners or Objects

1. Tripod – Do you need it?

From what we have discovered, not really. Photogrammetry captures require a FIXED FOCUS and a FIXED APERTURE for all three capture orientations (90°, 180° and 270°). This means that your depth of field is non-negotiable after you begin shooting; the only thing you can change is your exposure time. You want an f-stop of 11 or below to minimize diffraction softening, but high enough that your depth of field keeps everything in the Z dimension (forward and back) in focus. If you have enough ambient or artificial light to keep your shutter speed above 1/40th of a second, a tripod will not be necessary, and hand-holding the camera, even with the inevitable shifts in camera orientation and slight distance irregularities, can work very well. But remember: moving closer to or farther from the subject than the limits of the depth of field, or falling ambient light levels, can result in unfocused regions, and blurry pixels contribute nothing in 3D or in 2D. Marking the optimal distance from the subject with a chalk line, taping your lens focus ring so it cannot move, checking your f-stop to be certain you have not bumped it, and checking your focus at EVERY capture will save you a lot of reshooting, whether you use a tripod or not. That said, moving the tripod laterally along a chalk line took up much of our time, especially since we needed to overlap our images by at least 60%. Not only were we making sure the camera was almost always the same distance from the subject, we were also making sure it was level at every position.
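Because focus and aperture stay locked for the whole capture, it helps to check in advance how much depth of field you actually have at your working distance. Below is a minimal sketch using the standard thin-lens depth-of-field approximation; the 0.03 mm circle of confusion is a common full-frame assumption, and the lens, aperture and distance are example values, not our measured settings.

```python
def depth_of_field(focal_mm, f_number, distance_mm, coc_mm=0.03):
    """Near/far limits of acceptable focus (standard DoF approximation)."""
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = distance_mm * (hyperfocal - focal_mm) / (hyperfocal + distance_mm - 2 * focal_mm)
    if distance_mm >= hyperfocal:
        far = float("inf")  # everything beyond the near limit stays in focus
    else:
        far = distance_mm * (hyperfocal - focal_mm) / (hyperfocal - distance_mm)
    return near, far

# Example: a 50 mm lens at f/11, focused 4 m from the wall.
near, far = depth_of_field(focal_mm=50, f_number=11, distance_mm=4000)
print(f"In focus from {near / 1000:.2f} m to {far / 1000:.2f} m")
```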

Dale adjusting the camera head

After many working days in the field and discovering just how powerful Agisoft PhotoScan is, we have decided the tripod is unnecessary except in low-ambient-light conditions. As long as you walk along a measured line, keeping the distance between the subject and camera consistent, PhotoScan will have no problem creating the 3D mesh. We started handholding the camera a few weeks ago and the results have still been superb. PhotoScan is able to detect where each camera was positioned in relation to the subject AND at what angle. If the camera is angled slightly upward or downward, this doesn’t affect the processing at all; PhotoScan compensates for it and still creates an accurate digital surrogate. Depending on your subject matter, it may still be necessary to use a tripod or monopod, but we haven’t experienced any true need for one yet in photogrammetry.

Processed mesh of the roofless room. Images were captured handheld.

2. Creating Accurate Measurements

If one needs accurate measurements within the mesh of a certain subject, Dennison® dots have proved extremely valuable. Initially we placed two parallel rows of Dennison dots along our subjects (mainly walls and rooms), with each dot 4 feet from its neighbors both vertically and horizontally. Seen within the mesh, the dots establish the exact scale of the object: based on the known distance between the dots, one is able to measure everything else within the mesh and know its exact dimensions. If the distance between the dots is exactly 4 feet, or about 122 cm, then any distance in the space is some fraction of 122 cm. 20% of the dot spacing is 9.6 inches or 24.4 cm; 1% is 0.48 inches or 1.22 cm.
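To illustrate the arithmetic: once any pair of dots with a known spacing can be picked out in the mesh, every other measurement follows from a single scale factor. The mesh-unit numbers below are made up for the example.

```python
KNOWN_DOT_SPACING_CM = 121.92  # 4 feet between adjacent Dennison dots

def mesh_scale(mesh_distance_between_dots):
    """Real-world centimetres per mesh unit, from a known dot spacing."""
    return KNOWN_DOT_SPACING_CM / mesh_distance_between_dots

# Example: if two dots measure 0.85 mesh units apart, then a feature measuring
# 0.17 mesh units (20% of the dot spacing) is about 24.4 cm long.
scale = mesh_scale(0.85)
print(f"{0.17 * scale:.1f} cm")
```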

The question becomes: what spatial resolution do you need to resolve? If you need to resolve characteristics as small as 5 mm, then you need to clearly resolve 5 mm across several pixels in your initial RAW capture, and your scale dots should be less than 2 feet apart. For our first set of 3D condition captures at O’Keeffe’s historic home in Abiquiu, we went from using two rows, two feet apart, to using one row. The only difference it made was making the process quicker.
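One way to sanity-check the "several pixels" requirement before shooting is to estimate how many sensor pixels land on the smallest feature you care about, given how much wall a single frame covers. The image width and frame coverage below are example values, not our camera settings.

```python
def pixels_per_feature(image_width_px, frame_width_mm, feature_mm):
    """Approximate number of sensor pixels spanning a feature of a given size."""
    return image_width_px * feature_mm / frame_width_mm

# Example: a 5616-pixel-wide image covering a 2 m swath of wall puts roughly
# 14 pixels on a 5 mm feature, comfortably "several pixels".
print(pixels_per_feature(image_width_px=5616, frame_width_mm=2000, feature_mm=5))
```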

3. Image Composition

When capturing our images we try to keep the whole subject tight within the frame. This speeds up the masking step in processing (see tutorial) by removing unnecessary objects and background from the photo.

The subject tight within the frame.

However, this isn’t always practical when working in tighter spaces. We had this experience last week when large cacti blocked our path, and we had to do our best at the start of the shoot to plan ways of moving around them. We could have moved the whole capture distance back and shot at 15 degrees to overlap the areas behind the cacti, but that would have included a lot of sky and foreground. Getting closer to the subject, capturing a smaller area in each shot and taking more photographs to cover the desired area was the better answer. A 24mm wide-angle lens helps when forced into tight places: it captures a broader area of the subject at a much closer distance than a 50mm lens, though sometimes you’ll need to take several photos vertically to capture the whole subject, and distortion is much greater toward the outer edges of the lens and capture area. But thanks to the calibration sequence – capturing the same area at 90°, 180° and 270° – the software will still process the images without a problem, as long as there is 60% overlap within the sequence of each shooting orientation. As we said before, when PhotoScan detects matching points it also detects the angle of the camera and compensates for it.
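To see why the 24mm lens helps in tight spaces, here is a rough comparison of the horizontal coverage of a 24mm and a 50mm lens on a full-frame (36 mm wide) sensor, using a simple pinhole-camera approximation; the working distance is just an example.

```python
import math

def horizontal_coverage_m(focal_mm, distance_m, sensor_width_mm=36.0):
    """Width of wall covered by one frame at a given distance (pinhole model)."""
    half_angle = math.atan((sensor_width_mm / 2) / focal_mm)
    return 2 * distance_m * math.tan(half_angle)

# Example: at 2 m from the wall, a 24mm lens covers about 3.0 m of wall,
# while a 50mm lens covers only about 1.4 m.
for focal in (24, 50):
    print(focal, round(horizontal_coverage_m(focal, distance_m=2.0), 2))
```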

4. 60% Image Overlap

There are a couple of ways to make sure every sequential image overlaps by at least 60%. Looking through the viewfinder as you snap your shot, locate a point on the subject that sits at 1/3 of the frame, then move so that the same point sits at 2/3 of the frame. Dennison dots help here if there isn’t anything distinct on the wall. We use the focusing squares within the viewfinder of our Canon Mark II to assist in overlapping by roughly 60%: they divide the view into thirds both horizontally and vertically. We find a feature at the center of our view, take the shot, and then move until that same feature is a third of the way over from center, whichever way we are moving. Locate, shoot, move 33% and repeat!
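The arithmetic behind "locate, shoot, move 33%" also tells you roughly how many shots a wall will need: with at least 60% overlap, each new frame adds at most 40% of the frame width of new coverage. A small sketch, with example numbers:

```python
import math

def shots_needed(wall_width_m, frame_width_m, overlap=0.60):
    """Minimum number of shots to cover a wall with the given fractional overlap."""
    step = frame_width_m * (1 - overlap)  # new wall width covered by each shot
    return 1 + math.ceil((wall_width_m - frame_width_m) / step)

# Example: a 12 m wall, with each frame covering 3 m of wall and 60% overlap,
# needs 1 + ceil(9 / 1.2) = 9 shots per orientation.
print(shots_needed(wall_width_m=12, frame_width_m=3, overlap=0.60))
```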

Image depicting the 60% overlap between two sequential images.

5. Calibration Sequence

To calibrate the captured images and allow the software to correct for lens distortion, we shoot photos at three different camera orientations per camera position before we move laterally. Our method of capturing the images at horizontal (the axis of the lens to the bottom of the camera at 180°) and two verticals (the axis of the lens to the bottom of the camera at 90° and 270°) changed over several sessions. At first we would shoot our horizontal images moving laterally parallel to the wall, say from the left edge of the wall to the right edge, then come back and shoot the other two angles, each left to right again, for a total of three passes along the subject. When rapidly moving shadows kept changing the light on the subject area, we decided to begin shooting all three angles from the same spot, overlapping 60% on the verticals.

Since we were overlapping our images based on the narrower, vertical view, we knew the horizontal images would overlap by more than 60% because of their wider capture area. However, this also meant taking more images than were strictly necessary.

So whether you shoot all angles from the same spot, or each angle one pass at a time, is up to your circumstances, and to whether you mind having extra horizontal images to process in your chunk.
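Either way, the capture plan boils down to a list of (lateral position, camera rotation) pairs. Here is a minimal sketch of the "all three angles from the same spot" strategy described above; the step between positions is an example value.

```python
def shot_plan(num_positions, step_m=1.2, orientations=(180, 90, 270)):
    """Shot list: at each lateral position, capture the horizontal (180 degree)
    and the two vertical (90 and 270 degree) rotations before moving on."""
    plan = []
    for i in range(num_positions):
        for rotation in orientations:
            plan.append({"position_m": round(i * step_m, 2), "rotation_deg": rotation})
    return plan

# Example: 9 positions along a wall gives 27 exposures in a single pass.
print(len(shot_plan(9)))
```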

6. Photogrammetry Around Corners or Objects

When moving around objects, or in our case, around an arced corner of a wall, the images have to be shot every 15 degrees. When doing photogrammetry on a smaller object, such as a sculpture or statue, the images must be captured every 15 degrees until a full 360 degree rotation is complete.

Photogrammetry around an object

In our case, going around a curved wall, we moved 15 degrees while maintaining our distance until we were once again parallel to the wall.
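For a curved wall or a free-standing object, the 15-degree rule amounts to placing camera stations on an arc at a constant radius from the subject. A minimal sketch, with the radius purely illustrative:

```python
import math

def camera_stations(radius_m, start_deg=0, end_deg=360, step_deg=15):
    """(x, y) camera positions on an arc around the subject at a fixed radius."""
    stations = []
    angle = start_deg
    while angle < end_deg:
        rad = math.radians(angle)
        stations.append((radius_m * math.cos(rad), radius_m * math.sin(rad)))
        angle += step_deg
    return stations

# Example: a full 360-degree orbit at 15-degree steps gives 24 camera stations.
print(len(camera_stations(radius_m=3.0)))
```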

Hope you found this post useful!

Week 5 Overview (August 10th, 2012)

We’ve made a lot of photogrammetry progress this week. We completed all exteriors of the O’Keeffe house and studio in Abiquiu, and now have around 1600 images that need to be processed in PhotoScan. We also started our first indoor capture with the roofless room at Abiquiu. There is a room in O’Keeffe’s house that was built without a roof; instead, logs lie across the top, currently covered with plastic sheeting on the upper side. We captured the whole interior of the roofless room and generated a mesh at the lowest quality setting. We didn’t capture the entirety of the ground or the logs, so there are holes and distortions in the mesh.

Wireframe 3D geometry of the roofless room.

We attempted to create a mesh from 149 photos of the roofless room at high quality. We let it process for almost 72 hours and it ended up freezing: the elapsed-time counter continued to tick, but the percentage completed never changed. If we want to use high quality, we are going to have to process no more than 50 photos at a time and join the results in Blender.

Agisoft publishes a helpful chart of memory requirements for processing (below, taken from Agisoft’s website), and this run was a good real-world test of those limits on our hardware.

Agisoft’s memory requirements chart on processing in PhotoScan.

In PhotoScan you are able to create several chunks to process different sets of images, though only one chunk processes at a time. Since the roofless room consists of 149 images, we have to divide the processing into several chunks. Once the meshes are created, we export the models and load them into Blender (see Blender review), where we can merge the individual meshes into a single unit. At that point we will have a high-quality mesh to work with. Currently we are researching and testing the ability to create a blueprint with accurate measurements from our model; there is a Photoshop CS6 tutorial on the subject.
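Splitting the photo set into chunks is simple enough to script. Below is a trivial helper that breaks an image list into batches of at most 50; the folder name and batch size are just examples, not a PhotoScan requirement.

```python
import glob

def batches(items, max_size=50):
    """Split a list of image paths into batches of at most max_size."""
    return [items[i:i + max_size] for i in range(0, len(items), max_size)]

# Example: 149 JPEGs become batches of 50, 50 and 49, each loaded into its own chunk.
images = sorted(glob.glob("roofless_room/*.jpg"))
for number, batch in enumerate(batches(images), start=1):
    print(f"chunk {number}: {len(batch)} images")
```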

We had several questions regarding PhotoScan this week. We are having some mesh display problems when we apply texture. Over on the Agisoft forums there is a section for bug reports, and after posting a few questions I received prompt answers – sometimes mere minutes after posting we would receive a response from a technical support agent. Even though we were having some display issues, exporting the mesh and importing it into Blender worked perfectly. The resolution of the mesh was actually noticeably better than what was shown in PhotoScan.

While we still have to process high-quality meshes of the roofless room, here is a screen capture of the lowest-quality mesh displayed in Meshlab.

Thanks for reading!

Blender vs. Meshlab – 3D Mesh Editor Review


Researching workflows for photogrammetry and reflectance transformation imaging is just part of the work we are doing on this project. The other part is testing different equipment and software, and one of the big decisions we have to make is our choice of 3D mesh editor. Once the photos for photogrammetry have been captured and processed in Agisoft’s PhotoScan, you have a 3D mesh. This mesh is PhotoScan’s best guess at what the actual object looks like; it is far from perfect and requires editing. Holes in the mesh might need to be filled or superfluous surfaces deleted. A building as large as Abiquiu has to be photographed over the course of several days, and each day is processed in PhotoScan as a separate mesh. This creates many individual 3D meshes that need to be cleaned up and joined together into a single composite mesh. Initially we chose Meshlab as our mesh editor. It is free and open source, which is very important for two reasons:

1) Full disclosure of all data transformations. Any changes to the original pictures, across all of the pixels in each picture, are visible to us, free from secrets created by proprietary software. This makes the data transparent and ideal for scientific recording.

2) With open source we have the ability to apply our own scripts and transform the data in ways that are specific to our needs. This frees us from the constraints placed on us by proprietary software.

We discovered, however, that Meshlab lacked features we needed, which made editing and joining meshes difficult and time consuming.

3D mesh viewed in Meshlab

Just as in life, going back to change past mistakes is impossible in Meshlab.

1) There is no undo button. Professional 3D mesh editors generally provide a non-destructive editing environment; in Meshlab, any change is final. If a mistake is made, it is necessary to reload from your last save point.

2) It is not well documented. There is an instruction manual, but there is not a very big community of users or a large number of tutorials available, which makes learning the software difficult.

After researching various alternatives we decided that Blender was our best bet for mesh editing and joining. Blender is free, open source, cross-platform, extremely robust, has a wealth of documentation, and boasts a very helpful learning community. Since Blender contains so many features, there is a steep learning curve to becoming proficient at mesh manipulation, but the time spent learning the software is well worth it. Thanks to Blender, we will soon be putting up some very high-quality 3D meshes of the Abiquiu House on the blog!


3D mesh with texture from Abiquiu main entrance.

So, to be clear for those of you who are Meshlab enthusiasts and are screaming at your computer, “These guys are just noobs! Meshlab is awesome if you know how to use it!” Yes, you are right, but that is exactly our point. We ARE noobs, as are all of the people who will be using this software in their work. Blender also has a steep learning curve, but there is plenty of help: we have yet to have a question about the software that wasn’t answered in print as well as with accompanying video tutorials.

So while we will keep a link to Meshlab on our equipment and software page, Blender is our recommendation for anyone editing meshes exported from Photoscan.
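For readers who want to script the join itself, here is a rough sketch of importing several PhotoScan OBJ exports and merging them into one object from Blender's Python console. It assumes the Blender 2.6-era `bpy` operators we were using and an otherwise empty scene, and the file paths are illustrative; treat it as a starting point rather than a tested recipe.

```python
import glob
import bpy  # only available inside Blender's bundled Python

# Import every mesh chunk exported from PhotoScan (paths are illustrative).
for path in sorted(glob.glob("/data/abiquiu/chunks/*.obj")):
    bpy.ops.import_scene.obj(filepath=path)

# Select only the imported meshes, make one of them active, and join them.
bpy.ops.object.select_all(action='DESELECT')
meshes = [obj for obj in bpy.context.scene.objects if obj.type == 'MESH']
for obj in meshes:
    obj.select = True  # Blender 2.6/2.7 API; 2.8+ uses obj.select_set(True)
bpy.context.scene.objects.active = meshes[0]
bpy.ops.object.join()
```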

Week 4 Overview (August 3rd, 2012)

This week consisted solely of photogrammetry. Georgia O’Keeffe’s home is quite large and we still have a lot of images to capture of the interior; however, on August 1st we finished capturing the entire exterior of the house, not including the courtyard and studio. Every time we are on site, the process goes more smoothly and quickly. For the entire perimeter of the house we have taken over 1200 photographs that need to be processed.

The exterior of the front side of Georgia O’Keeffe’s home.

Joey and Greg capture the outside of the kitchen.

Before this week, we would place two rows of Dennison dots, all 4 feet apart. We figured this would allow for accurate scale within the mesh and would also help the software find overlapping points on the highly textured wall. This week we reduced the number of rows to one, with the dots all the same distance from the ground and 4 feet apart. This proved sufficient for taking accurate measurements from the mesh, and it saved a lot of time too!

Two rows of dennison dots.

One row of dennison dots.

We began processing our images and produced the highest-quality mesh to date! We did several tests to determine what image quality is necessary for our own documentation. Using Photoshop CS6 we exported all the JPEGs at quality 10; each image was around 10-12 MB. We attempted to export at quality 8, but those images were only 3-5 MB and the resulting point cloud was too sparse. We processed the quality-10 images and built the geometry in PhotoScan at medium quality. Here are the comparisons:

JPEG: Photoshop exported 10 quality
PhotoScan Geometry: Medium quality
Number of Photos: 38 pictures
Time to Process: 1 Hour

JPEG: Photoshop exported 10 quality
PhotoScan Geometry: High quality
Number of Photos: 38 pictures
Time to Process: 7 Hours

There is a significant processing-time difference when adjusting the geometry quality: at medium it took an hour and at high it took seven hours. We also compiled a mesh of 84 images at high quality and left it overnight; it was finished by morning, but we don’t know exactly how long that batch took. Comparing the textured meshes in PhotoScan showed some difference: the medium mesh looked more pixelated than the high mesh as you zoomed in. Ultra quality would take several days to process, so we haven’t tested it yet.

Mesh with a medium quality geometry

Mesh with a high quality geometry.

It would be of great help if we could create a queue to process in PhotoScan. We could then leave several “chunks” to process over a weekend and clean the meshes up during the week. We will have to request this feature from Agisoft!

Here are some of the 3D renderings we got of the front of the house!

The wireframe PhotoScan compiled.


Week 2 Overview (July 20th, 2012):

Our second week of the project ends today. On Monday the project team traveled back to O’Keeffe’s home in Abiquiu for photogrammetry capture. Dale wanted to capture a section of the exterior adobe wall. It was mid-day, and the wall he originally wanted to capture faced a garden with large bushes and a tree; the tree was casting too many shadows on the wall and there was a slight breeze. The team did not know how this would affect image processing within PhotoScan, so we moved to an alternative wall without any direct shadows falling on it (besides some plants and a ladder).

There was also a large, recently watered garden between the camera and the wall, so a 100mm lens was needed. The team positioned the camera 42 ft away from the wall so that it would be fully within the frame. Ideally the camera would have been closer to the wall, but obstacles such as the moist soil forced us further back than we would have liked.

To keep the front two legs of the tripod at a constant 42 ft from the wall, a chalk line was used as a guide. This enabled the team to maneuver the camera horizontally while keeping an accurate distance from the wall.

Along our horizontal line, a large bush sat within our line of sight. To fix this problem, we shot at a 15 degree angle outside of the bush to capture the wall the bush was blocking; PhotoScan had no problem putting the 3D image together.

Maneuvering the shot around the bush.

After all the images were captured at the horizontal, 90-degree and 270-degree orientations, they were ready to be processed in PhotoScan.

Much of the detail was lost, probably because we were too far from the wall, but the mesh was mostly clear. Some of the areas where plants stood against the wall are distorted.

Point Cloud View:

Results in the point cloud view mode.

Solid View:

Results in the solid view mode.

Throughout the week we completed additional RTIs of three paintings: “Easter Sunrise,” “Pedernal,” and “Mesa and Road East.” The continuous practice of RTI makes the process faster and smoother.


Week 1 Overview (June 29th, 2012)

Dale Kronkright scheduled Mark Mudge and Carla Schroer from Cultural Heritage Imaging to spend a week training staff on Computational Imaging. Cultural Heritage Imaging’s ambition is to make digital imaging methods more widely used for preserving and monitoring cultural artifacts.

Mark and Carla trained us in two methods of Computational Imaging: the first was Reflectance Transformation Imaging (RTI) and the second stereoscopic photogrammetry. The first day was primarily an explanation of the theory and of how the software processes the images.

Several days were spent training in RTI and learning how to shoot ideal images. Our speed and our placement of the flash improved with each attempt. First we shot O’Keeffe’s paintings “Untitled (Cottonwood Tree)” and “Horn and Feathers”; “Horn and Feathers” was also shot in infrared.

After processing the images in RTIBuilder, detailed texture the human eye cannot see was made clearly visible. “Soap bubbles,” caused by humidity changes, were also made clear. In the infrared RTI image, Dale was able to point out the underdrawing of O’Keeffe’s painting.

Toward the end of the training we travelled to O’Keeffe’s home in Abiquiu where we did more Photogrammetry and RTI imaging.

Overall, week 1 turned out to be very successful. CHI prepared us to continue capturing images throughout these 8 weeks and we will continue to post updates on our progress.

Week 1 (June 29, 2012):

Joey

We completed our first week of intensive training on June 29th. The first day we learned the theory and technique behind two methods of Computational Imaging: Reflectance Transformation Imaging (RTI) and Photogrammetry. Being unfamiliar with the two methods, we found the concepts easy to grasp but the processing harder to understand. The first few days we worked primarily with RTI and with how to process the images in the software. The equipment for these methods is extensive, so it will take several attempts to become familiar with it and with how to set it up. When the equipment was being introduced to us I was slightly overwhelmed because I was unfamiliar with most of it; it became clearer after using it once or twice. In this case, learning is by doing.

We ran into some initial issues when beginning RTI. The new Canon 5D camera had trouble communicating with the computer, and after several attempts we had to switch to Mark and Carla’s camera. It worked the following day without explanation, so it was a minor hiccup. Our first RTI came out successfully. Since we are mimicking a dome, we spent some time trying to get the flash into the correct positions. As we continued practicing, visualizing each angle became easier and much quicker.

Since there is so much equipment to keep track of for RTI, minor hiccups can stall a shoot from a few minutes to several hours. Batteries not being fully charged or camera malfunctions will take up time. The process will take more time than originally anticipated, so some flexibility is necessary. If software or equipment is not working properly, begin by making sure all connections are stable and properly seated; after that, one can troubleshoot each individual piece of equipment from the top down. A very important reminder is to be aware of your surroundings at all times. One bump on the tripod will result in a re-shoot. Luckily we haven’t run into that issue yet.

Another time while doing RTI, after setting the focus, we forgot to switch the lens to manual focus. The camera ended up changing focus right before the shoot without our knowledge, and after several shots we realized the change and had to start over. Check the focus again immediately before shooting to avoid a re-shoot.

Photogrammetry uses separate software from RTI. On our photogrammetry test we were being extremely precise about camera placement, using measurements. A chalk line or measuring tape makes the process quicker because it lets one visualize the distance to the subject. Our attempts at being extremely precise were taking up a lot of time, but in reality the software still manages to recognize matching points and string them together. We had to be consistent in our distance from the subject, but not down to the centimeter. One can successfully do photogrammetry without a tripod, moving laterally along a chalk line; it took several long attempts to come to that conclusion.

Greg

As Mark and Carla warned us at the beginning of training, it’s a fire hose of information. There is a lot of terminology and a lot of equipment to interface with. We were able to get some great equipment troubleshooting tips from Carla and Mark during this first week, because there were plenty of equipment problems; there is nothing better for familiarizing oneself with a piece of equipment than attempting to fix it when it is malfunctioning.
As for the training, it is hard to visualize the angles the flash needs to take in the RTI capture process. We are told to picture an umbrella surrounding the artwork that the camera flash has to follow. This visualization requires practice, and it would be of tremendous help if there were some type of visual guide. It is a very steep learning curve, with software, equipment, techniques, do’s and don’ts, and art safety. The huge payoff was the creation of our first successful RTI image of an O’Keeffe painting and an image of the front of the research center here in Santa Fe captured using photogrammetry.

What I learned from our field work is this…mosquitos stink! Bring sunscreen and bug repellent when doing field work. Setting up is 90% of the time taken in RTI and photogrammetry. Organization is essential: keep equipment in specific, labeled bags. It makes packing up and setting up easier and increases efficiency by a huge factor.