Photogrammetry desktop software

For our Craft Council Make:Shift:Do project at the end of October 2017, we want to be able to take pictures of an object and turn them into a file our 3D printer can reproduce the object from.

The following site has been testing all the free options: https://pfalkingham.wordpress.com/2016/09/14/trying-all-the-free-photogrammetry/

Summing up, the best free/open-source workflow from the pfalkingham tests appears to be COLMAP with openMVS, if a powerful enough computer with CUDA support is available. If not, MVE can do a reasonable job on a less powerful computer without CUDA support.

“turns out using COLMAP for sparse reconstruction let openMVS create an awesome mesh.”   https://pfalkingham.wordpress.com/2017/05/26/photogrammetry-testing-11-visualsfm-openmvs/

COLMAP needs a graphics card with CUDA support.  The computer used for testing at the pfalkingham site has the following specification.

  • Windows 10 64-bit
  • 16 GB RAM
  • 128 GB SSD for the OS, 1 TB HDD for main storage (programs and data were run from the HDD).
  • Nvidia GTX 970 GPU (with 4 GB RAM).
  • Intel Core i7-4790K CPU (4 cores/8 threads, up to 4.4 GHz).

“It’s worth noting that on this machine, I ran out of memory while refining the mesh for the sparse cloud so needed to add the “resolution-level 2” to reduce memory usage.”
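
For reference, the openMVS half of that workflow is a chain of command-line tools, and the “resolution-level 2” option mentioned above belongs on the mesh refinement step. Here is a rough sketch driven from Python, assuming a scene.mvs file has already been produced from the COLMAP sparse reconstruction; the tool names are openMVS’s own, but the intermediate file names are a guess at the defaults, so check what your run actually writes out.

    # Sketch only: drive the openMVS tools from Python once a scene.mvs exists.
    # The "_dense", "_mesh" and "_refine" file names are assumed defaults.
    import subprocess

    def run(*args):
        print(" ".join(args))
        subprocess.run(args, check=True)

    run("DensifyPointCloud", "scene.mvs")              # dense point cloud from the sparse scene
    run("ReconstructMesh", "scene_dense.mvs")          # build a triangle mesh from the dense cloud
    run("RefineMesh", "scene_dense_mesh.mvs",
        "--resolution-level", "2")                     # the memory-saving option from the quote above
    run("TextureMesh", "scene_dense_mesh_refine.mvs")  # project the photos back onto the mesh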

Tony is loaning a machine for us to do our own tests on the COLMAP workflow.

At home I only have an i5 with 12 GB RAM and no CUDA card. I installed COLMAP and tried it in ‘No GPU’ mode; it took the CPU up to 97% and then crashed.

I have also tested MVE on my i5; MVE will run without CUDA support. I ran the various stages from the command line and the whole process took about 2 hours. Below are the results displayed in MeshLab. They are not as good as those obtained using the online Autodesk ReCap service with the same 50 pictures as input: there are more holes in the model, so more manual editing will be required before it can be turned into an STL file for the 3D printer. The Autodesk ReCap results for comparison are here: https://blackpoolmakerspace.wordpress.com/2017/10/16/cloud-based-photogrammetry-tests/
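
For anyone repeating this, the MVE stages can be scripted along these lines. This is a minimal sketch: the tool names come from the MVE distribution, the photos folder name is made up, and the -s/-F/-t values are illustrative rather than a tuned recipe.

    # Minimal sketch of the MVE command-line pipeline, driven from Python.
    # Assumes the MVE apps are on the PATH and the input pictures are in "photos/".
    import subprocess

    def run(*args):
        print(" ".join(args))
        subprocess.run(args, check=True)

    run("makescene", "-i", "photos", "scene")          # import the images into an MVE scene folder
    run("sfmrecon", "scene")                           # structure from motion: camera poses + sparse cloud
    run("dmrecon", "-s2", "scene")                     # depth maps at a reduced scale (less memory)
    run("scene2pset", "-F2", "scene", "pset-L2.ply")   # fuse the depth maps into a dense point set
    run("fssrecon", "pset-L2.ply", "surface-L2.ply")   # floating scale surface reconstruction
    run("meshclean", "-t10", "surface-L2.ply", "mve-mesh.ply")  # drop low-confidence geometry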

It is worth noting that the quality of the picture input apparently has a big effect on the output: not just the quality of the camera itself, but also the technique used with regard to lighting and background. The picture set used here was taken very quickly by moving around the object with a hand-held camera.

MeshLab is at the end of some of the suggested workflows, and I could not face trying to compile it for Linux. MeshLab also provides a ‘universal’ snap package, which would not run on Ubuntu 17.10. Ironic really, because Ubuntu is pushing snaps hard now, and all the promoted software at the top of the Ubuntu software store is packaged as snaps.

The MeshLab install on Windows 10 is straightforward.

The following screenshots are from the .ply file produced with MVE.

[Screenshots: the MVE mesh viewed in MeshLab]

The following screenshots are from the .obj file produced by Autodesk ReCap.

[Screenshots: the Autodesk ReCap mesh viewed in MeshLab]

Conclusion:

There are a multitude of choices to make in order to achieve our objective. The only web-based service I managed to get to work was Autodesk ReCap, which is probably ruled out because it is soon to become subscription-only. The best desktop software options all require a powerful computer, which the Makerspace does not have. On the photography side there are just as many choices: lighting, background, type of camera, technique and so on.

To be continued…..

 

OpenMVG with openMVS

openMVS download for Windows: https://github.com/cdcseacave/openMVS_sample/releases/tag/v0.7a

Using openMVS: https://github.com/cdcseacave/openMVS/wiki/Usage

Building openMVS: https://github.com/cdcseacave/openMVS/wiki/Building

COLMAP:

https://pfalkingham.wordpress.com/2017/05/26/photogrammetry-testing-11-visualsfm-openmvs/ “So I ran COLMAP and exported the cameras + sparse cloud as a *.nvm file, then did exactly the same with openMVS as described above. ”
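
A sketch of that COLMAP side, again driven from Python. The sub-commands are COLMAP’s own, but the folder and file names here are assumptions and the flag spellings are worth confirming with colmap help; the exported .nvm then goes through openMVS’s InterfaceVisualSFM and on to the dense/mesh steps sketched earlier.

    # Sketch: COLMAP sparse reconstruction, exported as an NVM file for openMVS.
    import os
    import subprocess

    def run(*args):
        print(" ".join(args))
        subprocess.run(args, check=True)

    os.makedirs("work/sparse", exist_ok=True)
    run("colmap", "feature_extractor", "--database_path", "work/colmap.db",
        "--image_path", "photos")                       # detect features in every picture
    run("colmap", "exhaustive_matcher", "--database_path", "work/colmap.db")
    run("colmap", "mapper", "--database_path", "work/colmap.db",
        "--image_path", "photos", "--output_path", "work/sparse")
    run("colmap", "model_converter", "--input_path", "work/sparse/0",
        "--output_path", "work/model.nvm", "--output_type", "NVM")
    run("InterfaceVisualSFM", "work/model.nvm")         # hand the cameras + sparse cloud to openMVS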

 

Posted in photogrammetry

Cloud based Photogrammetry tests

https://en.wikipedia.org/wiki/Comparison_of_photogrammetry_software

I used the comparison article on Wikipedia to choose three free-to-use, cloud-based photogrammetry services.

I selected:  http://www.arc3d.be/, http://www.phov.eu/use-phov/, https://recap.autodesk.com.

Note: ARC 3D requires a desktop application to pre-process and upload the pictures. The application is available for XP, Vista and Windows 7, with a Mac OS X 10.7 version and Linux source code. I could not get the source code to compile, and the XP and Vista applications did not work, but the Windows 7 one did. I did not try it on a Mac. The software has obviously not been updated for a while.

I uploaded the same set of 50 pictures to each service. The pictures were taken outdoors with a hand-held Panasonic camera set manually to ISO 100 and 10-megapixel resolution.

arc3d.be responded after 2 hours with an error message suggesting I contact “the developers”; there was no response from phov.eu after 36 hours; Autodesk ReCap processed the 50 pictures in less than 2 hours and supplied the results as downloadable OBJ and RCM files. RCM is the ReCap mesh format and OBJ is a universal 3D model format. This shows that it is possible to process the 50 pictures using a cloud-based service. Unfortunately, Autodesk intends to start charging for the service by subscription in December.
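
To put a rough number on how detailed each result is, a few lines of Python will count the vertices and faces in a downloaded OBJ (the file name below is made up; use whatever the service gives you):

    # Count vertices and faces in an OBJ file -- a crude way to compare the
    # meshes coming back from different services. Pure Python, no libraries.
    def obj_stats(path):
        verts = faces = 0
        with open(path) as f:
            for line in f:
                if line.startswith("v "):
                    verts += 1
                elif line.startswith("f "):
                    faces += 1
        return verts, faces

    v, f = obj_stats("recap_result.obj")   # made-up file name
    print(v, "vertices,", f, "faces")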

I also tried this one, with no luck (it is not listed on the Wikipedia page): http://app.selva3d.com/transform

Here are screen captures from the ReCap web interface:

[Screenshot: shaded view]

[Screenshot: no shading]

[Screenshot: textured and rotated view]

[Screenshots: the ReCap toolbars]

Another way, once ReCap is no longer free:

http://wedidstuff.heavyimage.com/index.php/2013/07/12/open-source-photogrammetry-workflow/

Posted in photogrammetry

Make:Shift:Do project – Photogrammetry

For the Craft Council Make:Shift:Do project at the end of October 2017, we want to build a system which can scan a component or item and produce a 3D file which can be fed into the 3D printer to manufacture an identical copy.

Here is an example of a scanner: http://freelss.org/

Here is a Wikipedia article listing lots of photogrammetry software, 24 of which are free:
https://en.wikipedia.org/wiki/Comparison_of_photogrammetry_software. Of the 24, 3 are web based.
Web-based systems do not require us to have software or a high-powered computer; we just supply the photographs.
The trade-off is the time and bandwidth needed to upload the required number of high-resolution pictures, and the complete lack of control over the workflow.
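
To get a rough feel for that upload cost, a back-of-envelope calculation (all the numbers here are assumptions, not measurements):

    # Back-of-envelope upload time for a batch of photos.
    photos = 50                   # assumed batch size
    mb_per_photo = 4.5            # assumed average size of a 10-megapixel JPEG
    upload_mbit_per_s = 1.0       # assumed ADSL upload speed
    minutes = photos * mb_per_photo * 8 / upload_mbit_per_s / 60
    print(round(minutes), "minutes to upload")   # roughly half an hour with these numbers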

Here is one example, a free web-powered converter:
http://www.arc3d.be/

Here is 3DF Zephyr, a standalone, non-web-based system:
https://www.3dflow.net/technology/documents/3df-zephyr-tutorials/convert-photos-3d-models-3df-zephyr/
The generally suggested hardware specification to run such software is:

i7 CPU, 8 GB memory minimum (16 GB preferred) and a CUDA-capable graphics card. Depending on the software, CUDA is not always required, but processing is slower without it.
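
A quick way to check whether a particular machine has a usable NVIDIA card is to ask nvidia-smi, which ships with the NVIDIA driver (only a rough test, sketched here in Python):

    # Rough check for a CUDA-capable NVIDIA GPU by calling nvidia-smi.
    import subprocess

    try:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
            capture_output=True, text=True, check=True)
        print("NVIDIA GPU found:", out.stdout.strip())
    except (FileNotFoundError, subprocess.CalledProcessError):
        print("No NVIDIA driver detected; plan on the non-CUDA tools.")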

I am sure several of us have capable hardware to test the various software, and Tony has kindly offered to loan a suitable machine to the club.
I would be interested in knowing how long such a machine would take to do the job.
Quote from the ARC 3D cloud-based system mentioned above:

“Depending on the size, number and quality of the images that have been uploaded, a typical job may take from 15 minutes to 2 or 3 hours.”

On the picture-taking side, a lot of write-ups use the Pi camera on its own or in multiples, but equally, some suggest that better results are obtained with higher-quality pictures.

I have seen everything from a smartphone to a high-end SLR suggested, and it seems one camera is enough to get the job done; multiple cameras do not appear to be required. Several members probably have high-quality cameras.

In summary,  we have everything we need to start testing the various options.

*******
Software.
Some articles describe pipeline-style workflows, where defects and holes in the mesh can be identified, and extra pictures of the affected area can be taken and added to correct the problem.

But maybe we will be looking for “pictures in, 3D printer file out” with no intervention from the user.
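
As a sketch of the ‘find the holes’ step mentioned above: the trimesh Python library (my choice for illustration, not something the articles prescribe) can report whether a scan is watertight and patch small gaps before slicing; bigger holes still mean retaking photos or hand-editing in MeshLab.

    # Sketch: check a reconstructed mesh for holes and patch the small ones.
    # Uses the third-party trimesh library (pip install trimesh); the file
    # names are just examples.
    import trimesh

    mesh = trimesh.load("scan.ply")
    print("watertight before repair:", mesh.is_watertight)
    trimesh.repair.fill_holes(mesh)              # only fills small gaps
    print("watertight after repair:", mesh.is_watertight)
    if mesh.is_watertight:
        mesh.export("scan.stl")                  # ready for the slicer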

Here is a good article describing the different workflows.
https://pfalkingham.wordpress.com/2016/09/14/trying-all-the-free-photogrammetry/

Written by Dr. P.L. Falkingham, who also wrote this paper in 2012:

Acquisition of high resolution three-dimensional models using free, open-source, photogrammetric software:
http://palaeo-electronica.org/content/issue1-2012technical-articles/92-3d-photogrammetry

Dr. Falkingham says this about Agisoft PhotoScan, one of the two suggestions Olly picked out in his first email:
“This program has become something of a standard among colleagues who use photogrammetry, and for good reason.  At $59 for the educational standard version, it’s a bargain, and it’s easy to use interface means anyone can use it.”

Next is an open-source Pi laser-scanning kit, similar in concept to what we want to attempt:
http://freelss.org/
FreeLSS is a free (as in open source), open hardware and open electronic design, 3D-printable turntable laser scanning platform based on the Raspberry Pi.

Available in kit form here: http://store.murobo.com/atlas-3d-kit/
It is written in C++ and licensed under the GPL.
The scanning software runs self-contained on the Raspberry Pi without the need for a connected computer via USB.
The user interface is completely web based and is exposed via libmicrohttpd on the Pi. Laser sensing is performed via the official 5 MP Raspberry Pi camera.
The camera can be operated in either video or still mode.
Video mode camera access is provided by the Raspicam library.
Reference designs for the electronics to control the lasers and turn table are available as Fritzing files.
Access to the GPIO pins is provided by wiringPi.
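
FreeLSS itself is the C++ code described above; just to illustrate the scan loop it automates, here is a rough Python equivalent of one turntable revolution. The pin number, step counts and resolution are made up for illustration.

    # Illustration only: step the turntable a little, grab a still, repeat.
    # Assumes RPi.GPIO and picamera are installed on the Pi.
    import time
    import RPi.GPIO as GPIO
    from picamera import PiCamera

    STEP_PIN = 18                  # assumed GPIO pin driving the stepper controller
    STEPS_PER_REV = 3200           # depends on the motor/driver microstepping
    STOPS = 64                     # photos per revolution

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(STEP_PIN, GPIO.OUT)
    camera = PiCamera(resolution=(2592, 1944))   # full 5 MP still

    for i in range(STOPS):
        for _ in range(STEPS_PER_REV // STOPS):  # advance the turntable one stop
            GPIO.output(STEP_PIN, GPIO.HIGH)
            time.sleep(0.001)
            GPIO.output(STEP_PIN, GPIO.LOW)
            time.sleep(0.001)
        time.sleep(0.5)                          # let the object settle
        camera.capture("scan_%03d.jpg" % i)

    GPIO.cleanup()
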
Features

Fully 3D Printable
Point cloud export
Triangle mesh export
Assisted calibration
Support for dual laser lines (right and left)
Up to 6400 samples per table revolution (with reference electronics)
5 megapixel camera sensor
Support for camera Still mode and Video mode
Configurable Image Processing Settings
Ability to generate images at different stages of the image processing pipeline for debugging
Persistent storage of previous scans
Manual control of lasers and turn table
Flexible architecture

Formats
FreeLSS can generate results in the following formats.

PLY – Colored Point Cloud
XYZ – Comma Delimited 3D Point Cloud
STL – 3D Triangle Mesh
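
Since different tools prefer different formats, here is a tiny converter from the comma-delimited XYZ export to an ASCII PLY point cloud. It is a sketch that assumes plain x,y,z lines with no colour data.

    # Convert a comma-delimited XYZ point cloud to an ASCII PLY file.
    def xyz_to_ply(xyz_path, ply_path):
        with open(xyz_path) as f:
            points = [line.strip().split(",")[:3] for line in f if line.strip()]
        with open(ply_path, "w") as out:
            out.write("ply\nformat ascii 1.0\n")
            out.write("element vertex %d\n" % len(points))
            out.write("property float x\nproperty float y\nproperty float z\n")
            out.write("end_header\n")
            for x, y, z in points:
                out.write("%s %s %s\n" % (x, y, z))

    xyz_to_ply("scan.xyz", "scan.ply")   # example file names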

 

***

https://3dprint.com/138629/2016-3d-scanner-buying-guide/

Price: Kinect (varies), ReconstructMe (free)
Technology: RGB camera, depth sensor

This is about as DIY as it gets when it comes to building a low-cost 3D scanner. Thankfully Microsoft has released a peripheral that is really an extremely high-powered depth sensor and RGB camera, and left it open enough to be used for other applications. In this case, pairing an Xbox Kinect (you can easily find them on eBay) with free software like ReconstructMe is all you’ll need to 3D scan people or objects.

Resolution: Varies
Pros: Inexpensive, versatile, free software
Cons: Windows only, limited resolution, uneven quality
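
ReconstructMe itself is closed source and Windows-only, but the open libfreenect project has a Python wrapper, so just grabbing raw Kinect frames to experiment with looks roughly like this (a sketch, assuming the freenect module is installed and a Kinect is plugged in):

    # Sketch: pull one depth frame and one RGB frame from a Kinect via libfreenect.
    import freenect

    depth, _ = freenect.sync_get_depth()   # 640x480 array of raw depth readings
    rgb, _ = freenect.sync_get_video()     # matching 640x480 RGB frame
    print("depth:", depth.shape, depth.dtype, "video:", rgb.shape)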

 

*****

https://pinshape.com/blog/the-11-best-3d-scanners-on-the-market/

4. BQ Ciclop 3D scanner kit – $199 USD 

This open source hardware project has been released under an open source license, so all information on the mechanical design, electronics and software is available to the community to allow for continued development. The full package is roughly $199 USD. You can even download the design and 3D print it for yourself!

 

 

Posted in Makerspace, photogrammetry

Craft Council – Make:Shift:Do

A nationwide programme of craft innovation workshops for children and young people, 27-28 October 2017.

Makerspaces, fablabs, museums, galleries, and libraries nationwide are throwing open their doors and offering a tinker-tastic range of making workshops and activities to open your eyes to the potential of new craft.

http://www.craftscouncil.org.uk/what-we-do/makeshiftdo

 

Blackpool Makerspace will be participating in the Craft Council Make:Shift:Do event again this year on Friday evening 27th October 2017 and all day Saturday 28th October 2017. Times to be confirmed.

Links to pictures from the 2015 and 2016 Make:Shift:Do events at Blackpool Makerspace.

https://blackpoolmakerspace.wordpress.com/2015/10/25/make-shift-do-day-2-24102015/

https://blackpoolmakerspace.wordpress.com/2016/10/29/make-shift-do-2016/

 

 

Posted in Makerspace

Do It Yourself Open Source Hardware and Software Hacker friendly Modular Laptop

Update for 2018:

https://olimex.wordpress.com/2018/01/19/teres-i-diy-open-source-hardware-modular-hackers-laptop-update/

[Photos: TERES-A64-BLACK Open Source Hardware Board]

Do It Yourself Open Source Hardware and Software Hacker friendly Modular Laptop

Price: 225.00 EUR

https://www.olimex.com/Products/DIY-Laptop/KITS/TERES-A64-BLACK/open-source-hardware

Build instructions.

https://www.olimex.com/Products/DIY-Laptop/resources/TERES-I.pdf

Posted in Hacks, Hardware

Last meeting of 2016

The CNC framework is built and the drive motors are installed.

 

[Photo: 2016-12-17-10-30-00]

Posted in Makerspace

Meeting 10 December 2016

The CNC router starts to take shape. Drive motors will be fitted next.

[Photo: 2016-12-10-11-56-02]

 

Getting the Christmas tree ready for the last meeting of 2016. Next Saturday, the 17th December, 10am start.

 

[Photo: 2016-12-10-11-57-48]

Posted in Makerspace