First tests of the light box.
When taking pictures for photogrammetry, shadowless lighting is helpful. We are using a Raspberry Pi and camera here, which will be exchanged later for a Pi Zero with WiFi to get rid of the wires. We are testing two different techniques: a stationary camera with the object to be photographed on a rotating turntable, and, in the next picture, a camera on a moving gantry which travels around the stationary object.
Both techniques, moving the object or moving the camera(s), have pros and cons, and we will find out which one best suits our needs. The plan is to have three Pi Zeros at different angles on the gantry.
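Either way, the camera and object need to step through a full circle of viewpoints. A minimal sketch of the arithmetic (the photo count here is an assumption for illustration, not our actual rig settings):

```python
def turntable_step(photos_per_revolution):
    """Angle in degrees to rotate the turntable (or move the gantry)
    between shots for one full revolution."""
    return 360.0 / photos_per_revolution

def capture_angles(photos_per_revolution):
    """All turntable positions, in degrees, for one camera height."""
    step = turntable_step(photos_per_revolution)
    return [round(i * step, 1) for i in range(photos_per_revolution)]

# e.g. 36 photos -> one shot every 10 degrees: 0, 10, 20, ... 350
```

With three cameras at different heights on the gantry, the same set of angles would be captured three times over, one ring per camera.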
Having captured the pictures, we need some software to process them into a 3D model and provide a file in the required format for printing on our 3D printer. We are still testing different software options based on suggestions on this site: https://pfalkingham.wordpress.com/2016/09/14/trying-all-the-free-photogrammetry/
Some of our initial software tests are here: https://blackpoolmakerspace.wordpress.com/2017/10/19/photogrammetry-desktop-software/ and more can be found using the photogrammetry tag or using the search.
The Laser cutter
Cutting a scale ruler to test and demonstrate the precision. The blue box is the extraction duct.
Cutting the ruler. The phone on top of the case is being used as a torch to illuminate the inside of the case.
Laser control panel.
Laser control software running on Linux Mint.
Laser cut and etched logo.
Testing the cutting and etching ability on different materials.
Today we opened 10.00 until 17.00 to prepare for our open day next weekend (27th and 28th).
We are running a Make:Shift:Do open day event in association with the Craft Council on Friday 27th October from 17.00 until 21.00 and Saturday 28th October 2017, 10.00 until 17.00.
The laser cutter was put through its paces. After a few adjustments and a modification to the extraction system, the laser cutter was pronounced fit for use next weekend. A set of guidelines for use is here:- https://github.com/lesp/BlackpoolMakerspaceLaserCutter
Tony2 took a series of pictures with a Raspberry Pi camera for testing with photogrammetry software, then went home to set up a better picture taking environment with lightbox and turntable to try again.
Tony1 donated an i7 tower unit as the basis for a workstation to run the photogrammetry software. The Radeon graphics card will need replacing with a CUDA-compatible Nvidia card with at least 1 GB of on-board memory, and the system memory will also need upgrading.
The COLMAP photogrammetry software was tested on Mike2's laptop, which has a CUDA-compatible graphics card. COLMAP was found to be very fast, but there is a bit of a learning curve involved in using it effectively.
The 3D printer was found to have a drive belt with insufficient tension, causing it to slip and lose position while printing. There is no obvious way to adjust the tension, so the belt has either stretched or it is the wrong belt. The manufacturers will be contacted for advice on the problem.
The proposed photogrammetry demonstration for Make:Shift:Do is ready in theory. The complete demo would involve taking pictures of an object, feeding the pictures into photogrammetry software to produce a 3D representation, converting that representation into a file suitable for use in the 3D printer, then printing out a 3D replica of the object we photographed. Unfortunately, with the 3D printer out of action, we will be unable to do the final part of the demo, which is to print the replica.
This is a list of links compiled as I searched for advice on how to take pictures to use for photogrammetry. Each link is followed by an excerpt from the post.
By far, the biggest impact on the final output file is what happens in the shooting phase. In fact, it is usually easier to re-shoot a new series of source photos of your subject than to try to save a computed capture that isn't working. I recommend loading up the images and seeing how well they align as soon as possible. If certain images are off or are confusing the software, re-shoot them while you still have access to your subject. It may be necessary to re-shoot multiple times for one model.
Technique: there are two basic approaches. The first is to put the camera on a tripod and rotate your subject using a turntable or office chair. The second is to put the subject in the center and move around it with the camera, taking pictures. Both techniques have their pros and cons, and both are appropriate for different scenarios. Always shoot extra photos to make sure you have enough angles. Too many images may overwhelm the software (especially if you don't have enough system RAM).
The turntable method will be easier to set up if you are using artificial lighting. It will also be faster to turn the object than to move the camera. It also makes it easier to use a green screen, since everything stays in one place.
The Walk-around Method requires less setup if you are shooting outdoors or otherwise don’t need to set up lights. If you are scanning a person, they will have an easier time keeping their eyes fixed if they are not spinning around on a chair.
What you'd better keep in mind when picking your gear:
- camera performance is not determined by megapixel resolution;
- high-end cameras do not always lead to better results;
- a good camera must allow full manual control of exposure values; that's why DSLR (reflex) cameras are usually recommended, although both mirrorless and bridge cameras can fit the need;
- pixel size should be larger than 2 μm, so it is strongly recommended to use a camera sensor bigger than 1/2.3″, though smaller sensors may be usable depending on the accuracy you want to reach; pay attention to pixel size rather than pixel count;
- full-frame cameras: use lenses with a focal length between 25 and 50 mm; APS-C or smaller sensors: use lenses with a focal length between 18 and 35 mm, allowing for a crop factor between 1.5 and 1.6.
You can use your phone or a cheap (or expensive) point&shoot camera. But if you want really good models, there is no way around a really good DSLR camera. A method for getting equal and good lighting onto the object is a ring flash. Not too expensive, and very useful for smaller objects, especially on a turntable.
Here’s a not-so-short guide on how to ensure that your photos will get you a good model. Often, much less effort is required, so please experiment on your own. This part 1 will give you a list of tools you need.
So, once you have the light you need, are you all set to shoot? Nope! You need to add something to the scene, or your model will be of limited usability. That something is a scale object: something that will come out in the 3D model and has a feature of exactly known length, so that you can scale your model correctly. You can place a caliper on the turntable, a scale bar, a business card, whatever! Ideally, you use something big, because there will be a measuring error: the larger the distance, the smaller the proportional influence of the error.
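The "use something big" advice comes straight from the arithmetic: the scaling error is the measurement error divided by the reference length. A minimal sketch (the example values are illustrative, not from our tests):

```python
def relative_scale_error(measurement_error_mm, reference_length_mm):
    """Proportional error introduced when scaling the whole model
    from a reference feature of known length."""
    return measurement_error_mm / reference_length_mm

# The same 0.5 mm measuring error matters far less on a long scale bar:
# 0.5 mm on a 10 mm feature is a 5% scale error,
# 0.5 mm on a 200 mm bar is only 0.25%.
```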
So, now finally ready to rock&roll? Theoretically, yes! But there is an issue looming over your work: how are you going to combine the models you get from your several sets of photos into one 3D file? How can you register scans or photo sets?
There are basically three ways of registering scans:
- based on points found on the object itself,
- on points that you mark on the object, and
- on points on the background.
If the photos aren’t good, then it’s going to put a ceiling on the quality of your 3D model, no matter how good the software is. That’s why photogrammetry is really about taking good photos.
No Information is Better than Bad Information
Give the software only high-confidence information. If you don’t need the background, mask it out. If you can’t track a subject’s hair, cover it up. If one image isn’t aligning correctly, get rid of it. You’re smarter than the software at filtering this out before it gets to work, and you want to make its job as smooth as possible.
It’s All in The Picture
If it's not in the picture, then it's not in your mesh. Get underneath your subject to take photos, and get above it as well. For heads, take a few extra pictures behind the ears. Make sure you have the coverage you need to capture all the details you want, because it's difficult to go back and reshoot in exactly the same conditions.
This forced me to learn how to make good models using the software and maximizing the quality of how I took the imagery. Most of the problems in SfM come from bad imagery, not from having a cheap camera.
Step 8: What Is the Best Camera to Use?
When you are getting started the best camera to use is the best camera you have.
The DSLR is the gold standard. If you want to buy a camera for serious photo scanning, or serious photography in general, this is what you want.
But remember, what it comes down to is the quality of your photos, not the quality of your camera.
With the right skills and the right conditions you can take good photos with a bad camera. But if you don’t know what you are doing, it is easy to take bad photos with a good camera. If you want to invest in something, invest in your skills as a photographer. The camera is only as good as the photographer behind it.
Make sure you have good lighting. If you can swing it, working outdoors on an overcast day is perfect: you get lots of nice even diffuse light. If you need to shoot indoors, set up as much light as you can and make it as diffuse as possible. Point your lights at a white painted ceiling or bounce cards, or those groovy silver umbrellas. The idea is to get as much light as possible with as few shadows as possible. On-camera flash is not generally useful here; it tends to cast shadows which appear in different places in each photo. Remote strobes are fine as long as they provide a very diffuse, even light.

It is possible to shoot using a tripod, but it is so time consuming that it should be avoided if at all possible. The best plan is to get enough light going that you can shoot handheld. Aperture priority is the best mode to shoot with: you choose an aperture and the camera makes all the other adjustments for you.
Shutter speed plays a huge role in your quest for sharp pictures. If the exposure is longer than the reciprocal of the lens’ focal length you can’t hold the camera steady enough to get a sharp picture. In other words if you are shooting a 50mm lens you need to keep the shutter speed faster than 1/50th sec. Usually the only way to do this is by adding more light. As a last resort you can use a monopod or tripod to allow slower shutter speeds. Try to make the subject fill as much of the frame as possible. Background objects in the shot won’t hurt and they can help the software locate the camera positions if there aren’t enough features on the subject.
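The reciprocal rule above is simple enough to express as a quick check. A minimal sketch (the function names are our own, for illustration):

```python
def max_handheld_exposure(focal_length_mm):
    """Slowest shutter speed (in seconds) you can handhold sharply,
    using the reciprocal-of-focal-length rule."""
    return 1.0 / focal_length_mm

def is_sharp_enough(shutter_seconds, focal_length_mm):
    """True if the exposure is short enough to handhold at this
    focal length without camera shake blurring the shot."""
    return shutter_seconds <= max_handheld_exposure(focal_length_mm)

# A 50 mm lens needs 1/50 s (0.02 s) or faster:
# 1/100 s is safe, 1/30 s risks a blurred picture.
```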
The quality of your scan depends entirely on the quality of your photos. If you fill the frame with all the details of the subject, you will capture those details in your scan. The idea is to move around the subject taking photos from many different perspectives; standing in one place and shooting a bunch of photos does nothing to capture the 3d shape. Don't expect perfect scans without a lot of practice and a lot of patience. I have been working on this for more than 2 years and my scans don't always come out, but at least I've learned 800 ways not to scan something.
The mesh captures the physical form of the object. This is all we care about if we will be printing the object on a single color 3d printer. If this is what you have in mind then turn off those fancy colors and take a long hard critical look at your mesh. An incomplete mesh can be repaired, but if the mesh looks like a marshmallow now, it pretty much always will. The colored skin is variously known as a color map, a diffuse map or sometimes, nonsensically, a texture. It is a regular 2d color image which is wrapped around your model. This layer is important if you want to 3d print your object in color.
When I started looking at photogrammetry as a project for the Makerspace, my first impressions were of how expensive it was going to be. Expensive commercial software, expensive high powered computer to run the software on, expensive SLR camera to take the pictures, tripod, lights, filters, turntable…… the list went on and on.
I wondered if I could get usable results with what I already had, and the answer was yes. And there are three ways to go about it.
The pictures can be taken with a mobile phone, or digital camera, if you have one. Some sites suggest that the most important part of photogrammetry is the skill and technique of the photographer not how fancy the camera is.
The type of photogrammetry I am talking about involves walking around an object and taking multiple pictures from every angle. If the object is outside on a dull overcast day, no additional lighting will be required.
(1) It is possible to manage without a computer: just use a mobile phone to take the pictures, then upload them to the cloud-based Autodesk Recap for processing. You need to set up an Autodesk account using an email address. There is currently no cost for this service, but Autodesk intend to start charging in December 2017.
Here are the results of my first test using a basic camera and Autodesk Recap: https://blackpoolmakerspace.wordpress.com/2017/10/16/cloud-based-photogrammetry-tests/
(2) Some phones are capable of processing the pictures locally and do not even need the cloud. This Android app still supports the Samsung Galaxy S3. If you already have a supported phone, this is "one stop shop" photogrammetry:-
https://3dscanexpert.com/3-free-3d-scanning-apps/ "This Android-only app developed by SmartMobileVision doesn't feature any social features yet, or even an account registration system. But after logging in with a temporary username, it does have features that make it different than the apps above. The most important one is that it doesn't do cloud computing but instead does all calculations locally on your phone. While the speed of this greatly depends on your phone's processor—and naturally drains the battery". Tested device list:-
• Nexus 5
• Nexus 9
• Nexus 10
• HTC Desire X
• HTC One
• Samsung Galaxy S5
• Samsung Galaxy S III
• Sony Xperia L
• Sony Xperia SP
Results from a Samsung Galaxy S3.
I rushed these pictures in an effort to see the results quickly. I took 40 pictures of the same object in the same lighting conditions as the Autodesk Recap test. Unfortunately, storm 'Brian' blew the ribbon all over the place, which messed up the front of the object (no movement is acceptable). But look at the back: it is spot on. The best part? Processing on the phone took 40 minutes; it took 2 hours on my computer.
(3) The third and last method is to process the pictures with some free software called MVE, which will run on most recent computers. My computer is about 4 years old: an i5 with 8 GB RAM and on-board Intel graphics. It took 2 hours to process 50 pictures, nowhere near as fast as the computers normally suggested for the task, but my computer did the job and produced usable results. I have not tried to find out the lowest specification MVE will run on.
For our Craft Council Make:Shift:Do project at the end of October 2017, we want to be able to take pictures of an object and turn the pictures into a file our 3D printer can print the object from.
The following site has been testing all the free options: https://pfalkingham.wordpress.com/2016/09/14/trying-all-the-free-photogrammetry/
Summing up, the best free/open-source workflow from the pfalkingham tests appears to be COLMAP with openMVS, if a powerful enough computer with CUDA support is available. If not, MVE can do a reasonable job on a less powerful computer without CUDA support.
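That decision boils down to one question: is there a CUDA-capable card in the machine? A minimal sketch of the choice (the helper names are our own; the `nvidia-smi` probe is only a rough hint, since a driver on PATH does not guarantee a usable CUDA card):

```python
import shutil

def choose_workflow(cuda_available):
    """Pick a photogrammetry pipeline per the pfalkingham findings:
    COLMAP + openMVS with CUDA, MVE without."""
    return "COLMAP + openMVS" if cuda_available else "MVE"

def cuda_probably_available():
    """Rough check: an Nvidia driver tool on PATH hints at CUDA support."""
    return shutil.which("nvidia-smi") is not None
```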
“turns out using COLMAP for sparse reconstruction let openMVS create an awesome mesh.” https://pfalkingham.wordpress.com/2017/05/26/photogrammetry-testing-11-visualsfm-openmvs/
COLMAP needs a graphics card with CUDA support. The computer used for testing at the pfalkingham site has the following specification.
- Windows 10 64-bit
- 16 GB RAM
- 128 GB SSD for the OS, 1 TB HDD for main storage (programs and data were run from the HDD)
- Nvidia GTX 970 GPU (with 4 GB RAM)
- Intel Core i7-4790K CPU (4 cores/8 threads, up to 4.4 GHz)
“It’s worth noting that on this machine, I ran out of memory while refining the mesh for the sparse cloud so needed to add the “resolution-level 2″ to reduce memory usage.”
Tony is loaning us a machine to do our own tests on the COLMAP workflow.
At home, I only have an i5 with 12 GB RAM and no CUDA card. I installed COLMAP and tried it in 'No GPU' mode; it took the CPU up to 97% then crashed.
I have also tested MVE on my i5; MVE will run without CUDA support. I ran the various stages from the command line and the whole process took about 2 hours. Below are the results displayed by Meshlab. They are not as good as those obtained using the online Autodesk Recap service with the same 50 pictures as input: there are more holes in the model, so more manual editing will be required before this can be turned into an STL file for the 3D printer. The Autodesk Recap results for comparison are here: https://blackpoolmakerspace.wordpress.com/2017/10/16/cloud-based-photogrammetry-tests/
It is worth noting that the quality of the input pictures apparently has a big effect on the output: not just the quality of the camera itself, but also the technique used with regard to lighting and background. The picture set used here was taken very quickly by moving around the object with a hand-held camera.
Meshlab is at the end of some of the suggested workflows, and I could not face trying to compile it for Linux. Meshlab also provides a 'universal' snap package, which would not run on Ubuntu 17.10. Ironic, really, because Ubuntu are really pushing snaps now, and all the promoted software at the top of the Ubuntu software store are snaps.
The Meshlab install on Windows 10 is straightforward.
The following 3 are from the .ply file produced with MVE.
The following 2 are from the .obj file produced by Autodesk recap.
There are a multitude of choices to be made in order to achieve our objective. The only web-based service I managed to get working was Autodesk Recap, which is probably ruled out because it is soon to become a subscription-only service. The best desktop software options all require a powerful computer, which the Makerspace does not have. On the photography side, there are just as many choices regarding lighting, background, type of camera, best technique, and so on.
To be continued…..
OpenMVG with openMVS
openMVS download for Windows: https://github.com/cdcseacave/openMVS_sample/releases/tag/v0.7a
Using openMVS: https://github.com/cdcseacave/openMVS/wiki/Usage
Building openMVS: https://github.com/cdcseacave/openMVS/wiki/Building
https://pfalkingham.wordpress.com/2017/05/26/photogrammetry-testing-11-visualsfm-openmvs/ “So I ran COLMAP and exported the cameras + sparse cloud as a *.nvm file, then did exactly the same with openMVS as described above. ”
I used the comparison article on Wikipedia to choose three free-to-use cloud-based photogrammetry services.
Note: arc3d requires a desktop application to pre-process and upload the pictures. The application is available for XP, Vista, Windows 7, Mac OS 10.7, and as Linux source code. I could not get the source code to compile, and the XP and Vista applications did not work, but the Windows 7 one did. I did not try on a Mac. The software has obviously not been updated for a while.
I uploaded the same set of 50 pictures to each service. The pictures were taken outdoors with a hand-held Panasonic camera set manually to ISO 100 and 10-megapixel resolution.
arc3d.be responded after 2 hours with an error message suggesting I contact "the developers"; there was no response from phov.eu after 36 hours; Autodesk Recap processed the 50 pictures in less than 2 hours and supplied the results as downloadable OBJ and RCM files. RCM is the Recap mesh format and OBJ is a universal 3D model format. This shows that it is possible to process the 50 pictures using a cloud-based service. Unfortunately, Autodesk intend to start charging for the service by subscription in December.
I also tried this one, with no luck (not listed on the wiki): http://app.selva3d.com/transform
Here are screen captures from the recap web interface:
Textured and rotated
Another way forward, once Recap is no longer free.