Craft Council Make:Shift:Do – 28/10/2017

First tests of the light box.

When taking pictures for photogrammetry use, shadowless lighting is helpful. We are using a Raspberry Pi and camera here, which will be exchanged for a Pi Zero with WiFi later to get rid of the wires. We are testing two different techniques: a stationary camera with the object to be photographed on a rotating turntable, and, in the next picture, a camera on a moving gantry which travels around the stationary object.

2017-10-28 15.38.33

Both techniques, moving the object or moving the camera(s), have pros and cons, and we will find out which one best suits our needs. The plan is to have 3 Pi Zeros at different angles on the gantry.
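As a planning aid, here is a minimal Python sketch (our own, not part of the rig's software) for working out the turntable step angle and total image count for a given number of shots per revolution and number of gantry cameras:

```python
# Hypothetical capture planner for the light box. The function and its
# defaults are our own assumptions, not the rig's actual control code.

def capture_plan(shots_per_rev, n_cameras=1):
    """Return (step angle in degrees, shot angles, total image count)."""
    step = 360.0 / shots_per_rev
    angles = [i * step for i in range(shots_per_rev)]
    return step, angles, shots_per_rev * n_cameras

step, angles, total = capture_plan(40, n_cameras=3)
print(step)   # 9.0 degrees between shots
print(total)  # 120 images per object with 3 cameras
```

With three Pi Zeros on the gantry, one pass of 40 turntable positions would give 120 images per object.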

2017-10-28 14.13.43

Having captured the pictures, we need some software to process them into a 3D model and provide a file in the required format for printing on our 3D printer. We are still testing different software options based on suggestions on this site:

Some of our initial software tests are here: and more can be found using the photogrammetry tag or using the search.

The Laser cutter

Cutting a scale ruler to test and demonstrate the precision. The blue box is the extraction duct.

2017-10-28 14.07.12

Cutting the ruler. The phone on top of the case is being used as a torch to illuminate the inside of the case.

2017-10-28 14.06.31

Laser control panel.

2017-10-28 14.01.39

Laser control software running on Linux Mint.

2017-10-28 13.56.56

Laser cut and etched logo.

2017-10-28 13.55.52

Testing the cutting and etching ability on different materials.





Photogrammetry – Taking the pictures

This is a list of links compiled as I searched for advice on how to take pictures to use for photogrammetry. Each link is followed by an excerpt from the post.

By far, the biggest impact on the final output file is what happens in the shooting phase. In fact, it is usually easier to re-shoot a new series of source photos of your subject than try to save a computed capture that’s not working right away. I recommend loading up the images and see how well the images align as soon as possible. If certain images are off or are confusing the software, re-shoot them while you still have access to your subject. It may be necessary to re-shoot multiple times for one model.

Technique: the first is to put the camera on a tripod and rotate your subject using a turntable or office chair. The second is to put the subject in the center, and then move around the outside with the camera, taking pictures. Both techniques have their pros and cons, and both are appropriate for different scenarios. Always shoot extra photos to make sure you have enough angles, but note that too many images may overwhelm the software (especially if you don’t have enough system RAM).

The Turntable method will be easier to set up if you are using artificial lighting. It will also be faster to turn the object than to move the camera. It also makes it easier to use a green screen, since it stays in one place.

The Walk-around Method requires less setup if you are shooting outdoors or otherwise don’t need to set up lights. If you are scanning a person, they will have an easier time keeping their eyes fixed if they are not spinning around on a chair.

Camera sensors. What you’d better keep in mind when picking your gear:

• camera performance is not determined by its megapixel resolution;
• high-end cameras do not always lead to better results;
• a good camera must allow full manual control of the light in order to set each exposure value. That’s why DSLR (reflex) cameras are usually recommended, even if both mirrorless and bridge cameras can fit that need.

Pixel size should be larger than 2 μm, so it is strongly recommended to use a camera sensor bigger than 1/2.3″, even if smaller sensors may be used depending on the accuracy you want to reach. Pay attention to pixel size rather than the pixel count of each picture. Full-frame cameras: use lenses with a focal length between 25 and 50 mm. APS-C or smaller sensors: use lenses with a focal length between 18 and 35 mm, allowing for a crop factor between 1.5 and 1.6.

You can use your phone or a cheap (or expensive) point&shoot camera. But if you want really good models, there is no way around a really good DSLR camera. A method for getting equal and good lighting onto the object is a ring flash. Not too expensive, and very useful for smaller objects, especially on a turntable.

Here’s a not-so-short guide on how to ensure that your photos will get you a good model. Often, much less effort is required, so please experiment on your own. This part 1 will give you a list of tools you need.

So, once you have the light you need, are you all set to shoot? Nope! You need to add something to the scene, or your model will be of limited usability. That something is a scale object: something that will come out in the 3D model and has a feature of exactly known length, so that you can scale your model correctly. You can place a caliper on the turntable, or a scale bar, or a business card, whatever! Ideally, you use something big, because there will be a measuring error. The larger the distance, the less the proportional influence of the error.
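The scaling step the excerpt describes is just a ratio. A small Python sketch (function names and numbers are ours, purely illustrative) of applying a known reference length to a raw model:

```python
# Hedged sketch: rescale a raw photogrammetry model using a reference
# object of known length, e.g. a 150 mm scale bar on the turntable.
# Names and numbers are illustrative, not from any particular package.

def scale_factor(known_length_mm, measured_model_units):
    """Factor that converts raw model units into millimetres."""
    return known_length_mm / measured_model_units

def rescale(vertices, factor):
    """Scale every (x, y, z) vertex by the same factor."""
    return [(x * factor, y * factor, z * factor) for x, y, z in vertices]

f = scale_factor(150.0, 0.5)          # the bar spans 0.5 units in the raw model
print(f)                              # 300.0
print(rescale([(0.5, 1.0, 0.0)], f))  # [(150.0, 300.0, 0.0)]
```

The excerpt's advice to use a big reference follows directly: any measuring error in the model-space length is divided by a larger number, so its proportional effect on the factor shrinks.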

So, now finally ready to rock & roll? Theoretically, yes! But there is an issue looming over your work: how are you going to combine the models you get from your several sets of photos into one 3D file? How can you register scans or photo sets?

There are basically three ways of registering scans:

  1. based on points found on the object itself,
  2. on points that you mark on the object, and
  3. on points on the background.

If the photos aren’t good, then it’s going to put a ceiling on the quality of your 3D model, no matter how good the software is. That’s why photogrammetry is really about taking good photos.

No Information is Better than Bad Information

Give the software only high-confidence information. If you don’t need the background, mask it out. If you can’t track a subject’s hair, cover it up. If one image isn’t aligning correctly, get rid of it. You’re smarter than the software at filtering this out before it gets to work, and you want to make its job as smooth as possible.

It’s All in The Picture

If it’s not in the picture, then it’s not in your mesh. Get underneath your subject to take photos, and get above it as well. For heads, take a few extra pictures behind the ears. Make sure you have the coverage you need to get all the details you want, because it’s difficult to go back and reshoot in the exact same conditions.

This forced me to learn how to make good models using the software and maximizing the quality of how I took the imagery. Most of the problems in SfM come from bad imagery, not from having a cheap camera.

Step 8: What Is the Best Camera to Use?

When you are getting started the best camera to use is the best camera you have.

The DSLR is the gold standard. If you want to buy a camera for serious photo scanning, or serious photography in general, this is what you want.

But remember, what it comes down to is the quality of your photos, not the quality of your camera.

With the right skills and the right conditions you can take good photos with a bad camera. But if you don’t know what you are doing, it is easy to take bad photos with a good camera. If you want to invest in something, invest in your skills as a photographer. The camera is only as good as the photographer behind it.

Make sure you have good lighting. If you can swing it, working outdoors on an overcast day is perfect: you get lots of nice even diffuse light. If you need to shoot indoors, set up as much light as you can and make it as diffuse as possible. Point your lights at a white painted ceiling, bounce cards, or those groovy silver umbrellas. The idea is to get as much light as possible with as few shadows as possible.

On-camera flash is not generally useful here; it tends to cast shadows which appear in different places in each photo. Remote strobes are fine as long as they provide a very diffuse, even light.

It is possible to shoot using a tripod, but it is so time consuming that it should be avoided if at all possible. The best plan is to get enough light going that you can shoot handheld. Aperture priority is the best mode to shoot with: you choose an aperture and the camera makes all the other adjustments for you.

Shutter speed plays a huge role in your quest for sharp pictures. If the exposure is longer than the reciprocal of the lens’ focal length you can’t hold the camera steady enough to get a sharp picture. In other words if you are shooting a 50mm lens you need to keep the shutter speed faster than 1/50th sec. Usually the only way to do this is by adding more light. As a last resort you can use a monopod or tripod to allow slower shutter speeds. Try to make the subject fill as much of the frame as possible. Background objects in the shot won’t hurt and they can help the software locate the camera positions if there aren’t enough features on the subject.
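The reciprocal rule above is easy to encode. A quick Python sketch (our own helper; treating the crop-factor adjustment for the 35 mm-equivalent focal length as an assumption):

```python
# Hypothetical helper for the reciprocal rule quoted above: handheld
# shutter speed should be faster than 1 / (focal length), using the
# 35mm-equivalent focal length on crop-sensor cameras.

def max_handheld_exposure(focal_length_mm, crop_factor=1.0):
    """Slowest exposure (in seconds) you can expect to handhold sharply."""
    return 1.0 / (focal_length_mm * crop_factor)

print(max_handheld_exposure(50))       # 0.02 -> keep it faster than 1/50 s
print(max_handheld_exposure(35, 1.5))  # about 1/52 s on an APS-C body
```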

The quality of your scan depends entirely on the quality of your photos. If you fill the frame with all the details of the subject, you will capture those details in your scan. The idea is to move around the subject taking photos from many different perspectives. Standing in one place and shooting a bunch of photos does nothing to capture the 3D shape. Don’t expect perfect scans without a lot of practice and a lot of patience. I have been working on this for more than 2 years and my scans don’t always come out, but at least I’ve learned 800 ways not to scan something.

The mesh captures the physical form of the object. This is all we care about if we will be printing the object on a single color 3d printer. If this is what you have in mind then turn off those fancy colors and take a long hard critical look at your mesh. An incomplete mesh can be repaired, but if the mesh looks like a marshmallow now, it pretty much always will. The colored skin is variously known as a color map, a diffuse map or sometimes, nonsensically, a texture. It is a regular 2d color image which is wrapped around your model. This layer is important if you want to 3d print your object in color.




Photogrammetry on the cheap

When I started looking at photogrammetry as a project for the Makerspace, my first impressions were of how expensive it was going to be. Expensive commercial software, expensive high powered computer to run the software on, expensive SLR camera to take the pictures, tripod, lights, filters, turntable…… the list went on and on.

I wondered if I could get usable results with what I already had, and the answer was yes. And there are three ways to go about it.

The pictures can be taken with a mobile phone, or digital camera, if you have one. Some sites suggest that the most important part of photogrammetry is the skill and technique of the photographer not how fancy the camera is.

The type of photogrammetry I am talking about involves walking around an object and taking multiple pictures from every angle. If the object is outside on a dull overcast day, no additional lighting will be required.

(1) It is possible to manage without a computer. Just use a mobile phone to take the pictures, then upload them to the cloud-based Autodesk ReCap for processing. You need to set up an Autodesk account using an email address. There is no cost for this service at present, but Autodesk intend to start charging in December 2017.

Here are the results of my first test using a basic camera and Autodesk ReCap:

(2) Some phones are capable of processing the pictures locally on the phone and do not even need the cloud. This Android app still supports the Samsung Galaxy S3. If you already have a supported phone, this is “one stop shop” photogrammetry: “This Android-only app developed by SmartMobileVision doesn’t feature any social features yet, or even an account registration system. But after logging in with a temporary username, it does have features that make it different than the apps above. The most important one is that it doesn’t do cloud computing but instead does all calculations locally on your phone. While the speed of this greatly depends on your phone’s processor—and naturally drains the battery”. Tested device list:

• Nexus 4
• Nexus 5
• Nexus 9
• Nexus 10
• HTC Desire X
• HTC One
• Samsung Galaxy S5
• Samsung Galaxy S III
• Sony Xperia L
• Sony Xperia SP

Results from a Samsung Galaxy S3.

I rushed these pictures in an effort to see the results quickly. I took 40 pictures of the same object in the same light conditions as the Autodesk ReCap test. Unfortunately, storm ‘Brian’ blew the ribbon all over the place, which messed up the front of the object (no movement is acceptable). But look at the back: it is spot on. The best part? Processing on the phone took 40 minutes; it took 2 hours on my computer.




(3) The third and last method is to process the pictures with some free software called MVE:

which will run on most recent computers. My computer is about 4 years old: an i5 with 8GB RAM and on-board Intel graphics. It took 2 hours to process 50 pictures. Nowhere near as fast as the computers normally suggested for the task, but my computer did the job and produced usable results. I have not tried to find out the lowest specification MVE will run on.

Photogrammetry desktop software

For our Craft Council Make:Shift:Do project at the end of October 2017, we want to be able to take pictures of an object and turn the pictures into a file our 3D printer can print the object from.

The following site has been testing all the free options:

Summing up, the best free/open-source workflow from the pfalkingham tests appears to be COLMAP with openMVS, if a powerful enough computer with CUDA support is available. If not, MVE can do a reasonable job on a less powerful computer without CUDA support.
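For reference, here is a sketch of what that COLMAP-plus-openMVS workflow looks like on the command line. This is our own outline, assuming both tools are installed and on the PATH; the file and directory names are placeholders, and exact options vary between versions:

```shell
# Hypothetical COLMAP -> openMVS run. Paths and names are our own.

# 1. Sparse reconstruction (camera poses + sparse cloud) in COLMAP
colmap feature_extractor --database_path db.db --image_path images/
colmap exhaustive_matcher --database_path db.db
mkdir -p sparse
colmap mapper --database_path db.db --image_path images/ --output_path sparse/

# 2. Export the cameras and sparse cloud as an NVM file for openMVS
colmap model_converter --input_path sparse/0 --output_path model.nvm \
    --output_type NVM

# 3. Dense reconstruction, meshing and texturing in openMVS
InterfaceVisualSFM model.nvm
DensifyPointCloud model.mvs
ReconstructMesh model_dense.mvs
RefineMesh model_dense_mesh.mvs --resolution-level 2   # lower memory use
TextureMesh model_dense_mesh_refine.mvs
```

The `--resolution-level 2` option is the memory-saving flag the pfalkingham tests had to add when mesh refinement ran out of RAM.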

“turns out using COLMAP for sparse reconstruction let openMVS create an awesome mesh.”

COLMAP needs a graphics card with CUDA support. The computer used for testing at the pfalkingham site has the following specification:

  • Windows 10, 64-bit
  • 16GB RAM
  • 128GB SSD for the OS, 1TB HDD for everything else (programs and data were run from the HDD)
  • Nvidia GTX 970 GPU (with 4GB RAM)
  • Intel Core i7-4790K CPU (4 cores/8 threads, up to 4.4GHz)

“It’s worth noting that on this machine, I ran out of memory while refining the mesh for the sparse cloud so needed to add the “resolution-level 2″ to reduce memory usage.”

Tony is loaning a machine for us to do our own tests on the COLMAP workflow.

At home, I only have an i5 with 12GB RAM and no CUDA card. I installed COLMAP and tried it in ‘No GPU’ mode; it took the CPU up to 97% and then crashed.

I have also tested MVE on my i5; MVE will run without CUDA support. I ran the various stages from the command line and the whole process took about 2 hours. Below are the results displayed by MeshLab. The results are not as good as those obtained using the online Autodesk ReCap service with the same 50 pictures as input. There are more holes in the model, so more manual editing will be required before this can be turned into an STL file for the 3D printer. The Autodesk ReCap results for comparison are here:
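For the record, the stages of an MVE run look roughly like this. The tool names are MVE's own; the directories, scale settings and file names here are illustrative rather than the exact commands used:

```shell
# Hypothetical MVE pipeline; each stage is a separate command-line app.

makescene -i images/ scene/          # import the photos into an MVE scene
sfmrecon scene/                      # structure-from-motion: camera poses
dmrecon -s2 scene/                   # per-view depth maps (downscaled to save memory)
scene2pset -F2 scene/ pset.ply       # fuse the depth maps into a dense point set
fssrecon pset.ply surface.ply        # floating-scale surface reconstruction
meshclean -t10 surface.ply mesh.ply  # strip low-confidence geometry
```

The final PLY file is what gets opened in MeshLab.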

It is worth noting that the quality of the picture input apparently has a big effect on the output. Not just the quality of the camera itself, but also the technique used with regard to lighting and background. The picture set in use here were taken very quickly by moving around the object with a hand held camera.

MeshLab is at the end of some of the suggested workflows, and I could not face trying to compile it for Linux. MeshLab also provide a ‘universal’ snap package, which would not run on Ubuntu 17.10. Ironic really, because Ubuntu are really pushing snaps now, and all the promoted software at the top of the Ubuntu software store are snaps.

The MeshLab install on Windows 10 is straightforward.

The following 3 are from the .ply file produced with MVE.




The following 2 are from the .obj file produced by Autodesk ReCap.






There are a multitude of choices to make in order to achieve our objective. The only web-based service I managed to get to work was Autodesk ReCap, which is probably ruled out because it is soon to become a subscription-only service. The best desktop software options all require a powerful computer, which the Makerspace does not have. On the photography side, there are just as many choices regarding lighting, background, type of camera, best technique and so on.

To be continued…..


OpenMVG with openMVS

openMVS download for Windows:
Using openMVS:
Building openMVS:

Colmap: “So I ran COLMAP and exported the cameras + sparse cloud as a *.nvm file, then did exactly the same with openMVS as described above. ”


Cloud based Photogrammetry tests

I used the comparison article on Wikipedia to choose 3 free-to-use cloud-based photogrammetry services.

I selected:,,

Note: ARC3D requires a desktop application to pre-process and upload the pictures. The application is available for XP, Vista, Windows 7 and Mac OS 10.7, plus Linux source code. I could not get the source code to compile, and the XP and Vista applications did not work, but the Windows 7 one did. I did not try on a Mac. The software has obviously not been updated for a while.

I uploaded the same set of 50 pictures to each service. The pictures were taken outdoors with a hand-held Panasonic camera set on manual to 100 ISO and 10 megapixel resolution. One service responded after 2 hours with an error message suggesting I contact “the developers”, and another had given no response after 36 hours. Autodesk ReCap processed the 50 pictures in less than 2 hours and supplied the results as a downloadable OBJ file and RCM file. RCM is the ReCap mesh format and OBJ is a universal 3D model format. This shows that it was possible to process the 50 pictures using a cloud-based service. Unfortunately, Autodesk intend to start charging for the service by subscription in December.

I also tried this one, with no luck (it is not listed on the wiki).

Here are screen captures from the ReCap web interface:



no shading


Textured and rotated



Screenshot from 2017-10-16 09-03-33


Screenshot from 2017-10-16 09-03-46

Another way, once ReCap is no longer free.

Make:Shift:Do project – Photogrammetry

For the Craft Council Make:Shift:Do project at the end of October 2017, we want to build a system which can scan a component or item and produce a 3D file which can be fed into the 3D printer to manufacture an identical component or item.

Here is an example of a scanner:

Here is a Wikipedia article with lots of photogrammetry software, 24 of which are free. Of the 24, 3 are web based.
Web-based systems do not require us to have software or a high-powered computer. We just supply the photographs.
The trade-off is the time and bandwidth required to upload the required number of high-resolution pictures, and the complete lack of control over the workflow.

Here is one example: a free web-powered converter.

Here is 3DF Zephyr, a standalone, non-web-based system.
The generally suggested hardware specification to run such software is:

an i7 CPU, 8GB memory minimum (16GB preferred), and a CUDA-capable graphics card. Depending on the software, CUDA is not always required, but processing is slower without it.

I am sure several of us have capable hardware to test the various software, and Tony has kindly offered to loan a suitable machine to the club.
I would be interested in knowing how long such a machine would take to do the job.
Quote from the ARC3D cloud-based system mentioned above:

“Depending on the size, number and quality of the images that have been uploaded, a typical job may take from 15 minutes to 2 or 3 hours.” On the picture-taking side, a lot of write-ups are using the Pi camera on its own or in multiples, but equally, some are suggesting better results are obtained with higher-quality pictures.

I have seen everything from a smartphone to a high end SLR suggested and it seems like one camera is good enough to get the job done. Multiple cameras do not seem to be required. Several members probably have high quality cameras.

In summary,  we have everything we need to start testing the various options.

Some articles describe pipeline style workflows, where defects/holes in the mesh can be identified, and extra pictures of the area can be taken and added to correct the problem.

But maybe we will be looking for “pictures in, 3D printer file out” with no intervention from the user.

Here is a good article describing the different workflows.

Written by Dr. P.L. Falkingham, who wrote this white paper in 2012:

Acquisition of high resolution three-dimensional models using free, open-source, photogrammetric software. Falkingham says this about Agisoft PhotoScan, one of the two suggestions Olly picked out in his first email:
“This program has become something of a standard among colleagues who use photogrammetry, and for good reason.  At $59 for the educational standard version, it’s a bargain, and it’s easy to use interface means anyone can use it. “

Next is an open-source Pi laser-scanning kit, similar in concept to what we want to attempt.
FreeLSS is a free (as in open source), open-hardware, open-electronics, 3D-printable turntable laser-scanning platform based on the Raspberry Pi.

Available in kit form here:
It is written in C++ and licensed under the GPL.
The scanning software runs self-contained on the Raspberry Pi, without the need for a computer connected via USB.
The user interface is completely web based and is exposed via libmicrohttpd on the Pi. Laser sensing is performed via the official 5 MP Raspberry Pi camera.
The camera can be operated in either video or still mode.
Video mode camera access is provided by the Raspicam library.
Reference designs for the electronics to control the lasers and turn table are available as Fritzing files.
Access to the GPIO pins is provided by wiringPi.
Features:
• Fully 3D printable
• Point cloud export
• Triangle mesh export
• Assisted calibration
• Support for dual laser lines (right and left)
• Up to 6400 samples per table revolution (with reference electronics)
• 5 megapixel camera sensor
• Support for camera still mode and video mode
• Configurable image processing settings
• Ability to generate images at different stages of the image processing pipeline for debugging
• Persistent storage of previous scans
• Manual control of lasers and turntable
• Flexible architecture

FreeLSS can generate results in the following formats.

PLY – Colored Point Cloud
XYZ – Comma Delimited 3D Point Cloud
STL – 3D Triangle Mesh
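The 6400-samples figure in the feature list translates to a very fine angular resolution. A one-line check (our arithmetic, using only the number quoted above):

```python
# Angular resolution of the FreeLSS turntable at the quoted maximum of
# 6400 samples per revolution (the figure comes from the feature list).

SAMPLES_PER_REV = 6400
deg_per_sample = 360 / SAMPLES_PER_REV
print(deg_per_sample)  # 0.05625 degrees of rotation per sample
```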



3D Scanner Buying Guide 2016

Price: Kinect (varies) ReconstructME (Free)
Technology: RGB camera, depth sensor

This is about as DIY as it gets when it comes to building a low-cost 3D scanner. Thankfully Microsoft has released a peripheral that is really an extremely high-powered depth sensor and RGB camera, and left it open enough to be used for other applications. In this case, pairing an Xbox Kinect (You can easily find them on eBay) with free software like ReconstructMe is all you’ll need to 3D scan people or objects.

Resolution: Varies
Pros: Inexpensive, versatile, free software
Cons: Windows only, limited resolution, uneven quality



4. BQ Ciclop 3D scanner kit – $199 USD 

This open source hardware project has been released under an open source license, so all information on the mechanical design, electronics and software is available to the community to allow for continued development. The full package is roughly $199 USD. You can even download the design and 3D print it for yourself!