Photogrammetry Workstation/Server

Hi everyone,

Is anyone doing multi-processor processing of aerial imagery in Pix4D or other software? I would like to increase our processing capacity by acquiring low-cost (sub-$6000) hardware that can run multiple instances of Pix4D.

Unless there have been recent updates, not all parts of Pix4D's processing parallelize on a single processor, so scaling up to multiple simultaneous projects requires multiple processors.

I have been looking at SuperMicro dual- and quad-processor solutions (workstations and servers). If anyone has any experience and can advise me, I would greatly appreciate it.

Replies

  • Greetings,

    Agisoft appears to support distributed processing. There are configuration options in the preferences for additional workers. I've not used it yet so I do not know how well this works.

    Pix4D's pricing is really high. AgiSoft seems to work really well and I am interested to see what ESRI and others will do in this space.

    -David

    • David,

      We used Agisoft for quite some time, but we found that Pix4D works better for us, despite some of its downsides. The way they stitch together the mosaic is crucial for some of our customers.

      • Please tell me how Pix4D stitches the mosaic better. Do you have any comparison? (I have not seen any serious difference, but had fewer bizarre anomalies near bad photos or water.)

        • As David K. said, Pix4D stitches images into an orthomosaic in editable tiles, whereas Agisoft actually uses the individual pixels. If you can live with some artifacts in your orthomosaics, then either is fine.

          Agisoft has problems recreating an orthomosaic accurately for complex or sharp geometries, whereas Pix4D actually uses the original images, tied together with points, rather than rebuilding from the point cloud. It's hard to describe.

          • I'd like to learn more about this. Where's a good place to start? Can you use thermal imagery for these types of surveys?

        • Greetings,

          I flew some corn fields at 300 feet and 70% (I think) overlap. In the areas where there was a lot of uniformity, AgiSoft failed to tie the images in. Pix4D successfully did so.

          On a damage-assessment flight, Pix4D handled a corner accurately while AgiSoft failed to capture several images in that area.

          -David

      • Greetings,

        I've noticed a few cases where Pix4D definitely handles stitching better, which is kinda crucial. That price tag reallllly hurts though.

        -David

  • $6000 is already quite a high budget for a reasonable station. I probably spent $2000 on my setup and it's doing very well; I built it myself from components bought online from cheaper vendors.

    Pix4D has a page about their recommendations:

    https://support.pix4d.com/hc/en-us/articles/203405619-Use-of-the-GP...

    Here's a breakdown of which components affect which processing steps; this is what you want in order to get clear on what's most important to you:

    https://support.pix4d.com/hc/en-us/articles/202559519

    Regarding memory, not all memory modules are equal. You can shave off extra seconds if you pay attention to CAS latency, where lower is better. Obviously, having enough memory is more important than the small impact CAS makes, but if you don't need all that much capacity, this is where you can optimize. For 1 km^2 areas, you probably need 16 GB minimum. You can also reorganize your pipeline to make use of chunking: processing data that's spatially close together reduces how much data you need to load into memory and into the CPU (see the sketch after the link below).

    https://en.wikipedia.org/wiki/CAS_latency
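
    To illustrate the chunking idea, here's a minimal sketch, assuming each image already has an (x, y) ground coordinate (e.g. pulled from EXIF GPS beforehand); the cell size and the image list are made-up examples:

    ```python
    from collections import defaultdict

    def chunk_images(images, cell_size_m=250.0):
        """Group images into square ground-grid cells so each chunk's
        working set (images that overlap each other) fits in RAM."""
        chunks = defaultdict(list)
        for name, x, y in images:
            cell = (int(x // cell_size_m), int(y // cell_size_m))
            chunks[cell].append(name)
        return chunks

    # Example: the two pairs are ~1 km apart, so they land in separate chunks.
    imgs = [("a.jpg", 10, 20), ("b.jpg", 40, 60),
            ("c.jpg", 1200, 900), ("d.jpg", 1180, 950)]
    for cell, names in sorted(chunk_images(imgs).items()):
        print(cell, names)
    ```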

    The GPU should be considered an extra processor that's available. As they indicate, one GPU is enough and SLI makes practically no difference, so the objective should be to buy a couple of low-end boxes, each with its own GPU, rather than putting more GPUs into the same server. You'll typically hit the bandwidth limits of the motherboard, HD and CPU before hitting the GPU's processing limits, so there's no point plugging in four (and the app would have to support and optimize for that anyway).

    • A single Intel Xeon will not give you much added performance (sometimes worse), but will increase the price significantly.
    • Dual Xeon processors may give you more performance for step 1 (up to 30-50% faster per project), but will likely be in the range of approx. $10,000, and depending on the Xeon model the speed may even be lower, so it is not recommended.
    • A Xeon processor may be useful for processing very large datasets (more than 50,000 MB of images), depending on the Xeon processor model.

    So buying a big-ass CPU is not efficient. What I'd do is go for one of the i7 types and then select one based on its speed, L3 cache size and price. What you eventually get depends on the price differences between the features on offer. If there's a $500 difference, sometimes it's more economical to simply buy two boxes instead of one really tough box (see the back-of-the-envelope comparison below). Getting more components also means better failover and more easily expandable HD space.
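
    All numbers below are illustrative assumptions, not benchmarks; the point is just the arithmetic of throughput per dollar:

    ```python
    # One high-end box vs. two mid-range boxes at the same total price.
    # Projects/day figures are made up purely to show the comparison.
    configs = {
        "1x high-end box":    {"cost": 4000, "projects_per_day": 5.0},
        "2x mid-range boxes": {"cost": 4000, "projects_per_day": 7.0},
    }
    for name, c in configs.items():
        per_1000 = c["projects_per_day"] / c["cost"] * 1000
        print(f"{name}: {c['projects_per_day']:.1f} projects/day, "
              f"{per_1000:.2f} projects/day per $1000")
    ```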

    The L3 cache size is the most important one to look at. L1 is 64 KB and L2 is 256 KB on most of these CPUs already, but L3 comes in lots of sizes.

    What's left is matching the clock rates between memory, CPU and motherboard and making sure it's all compatible :).

    If you go for pre-built "consumer" servers and workstations, you may end up paying lots of $$$ for parts of the configuration you don't need (big screens, etc.), so I think it's worth looking into having these built custom for you from components instead.

    For SSDs, you should use at most about 75% of capacity, or write performance drops dramatically.
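
    A quick way to keep an eye on that threshold (the drive letter is just an example):

    ```python
    import shutil

    # Warn when a drive passes ~75% full, per the SSD advice above.
    usage = shutil.disk_usage("C:/")
    pct_used = usage.used / usage.total * 100
    if pct_used > 75:
        print(f"C: is {pct_used:.0f}% full -- expect degraded SSD performance")
    ```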

    Multiple HDs can give you some performance increase if you arrange for files that are written and files that are read to sit on separate disks, because each disk then does less seeking. So, for example, you could keep the source data on one drive and send the output to another.
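
    In its simplest form, that's just a matter of where you point the project paths (the drive letters here are hypothetical):

    ```python
    from pathlib import Path

    # Source images on one physical disk, generated outputs on another,
    # so reads and writes don't compete for the same drive heads.
    SOURCE_DIR = Path("D:/projects/field42/images")   # read-heavy
    OUTPUT_DIR = Path("E:/projects/field42/results")  # write-heavy
    OUTPUT_DIR.mkdir(parents=True, exist_ok=True)
    ```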

    Another recommendation when you buy multiples is to slightly change the config between groups, so you can see which performs better: one set with 64 GB of memory, another with very fast HDs, and another with better GPUs. This also lets you process different kinds of datasets. Eventually, after a year, you move away from the set that's not as good or not suited to the data you're processing and buy new ones of the better kind (and you don't have to renew the entire fleet at once).
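
    To make those comparisons honest, log the wall-clock time per processing step per machine on the same datasets; here's a minimal sketch (the step name and log path are made up):

    ```python
    import csv
    import socket
    import time
    from contextlib import contextmanager

    @contextmanager
    def timed_step(step_name, log_path="bench_log.csv"):
        """Append (machine, step, seconds) to a shared CSV so
        different hardware configs can be compared directly."""
        t0 = time.perf_counter()
        try:
            yield
        finally:
            elapsed = time.perf_counter() - t0
            with open(log_path, "a", newline="") as f:
                csv.writer(f).writerow(
                    [socket.gethostname(), step_name, f"{elapsed:.1f}"])

    # Usage: wrap each stage of whatever you run on each box.
    with timed_step("initial_processing"):
        time.sleep(0.1)  # stand-in for the real work
    ```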

    • Hey Gerard,

      I think you misunderstood. We already have a workstation for processing, but my understanding is that Pix4D can only run one instance per processor for some steps. We are currently using an i7-4790K with 64 GB of RAM, and I'm looking to double or quadruple our processing capability, not by running one instance more quickly, but by simultaneously processing multiple projects (one per processor).

      What are your thoughts on using multi-processor (dual- or quad-socket) systems for multiple simultaneous Pix4D projects?

      • David,

        So did you try to allocate resources as indicated here?

        https://support.pix4d.com/hc/en-us/articles/202560199

        The support guy doesn't seem to understand the problem outline very well, so the answer is not very precise, but in answer 1 he says it's possible to allocate the number of cores and the amount of RAM for a processing stage.

        I think it helps in certain stages, but when you get to GPU processing I think this strategy fails, because there's no explicit ability to divide GPU resources. Multiple processes will probably suffer a lot of contention, since each tries to grab the GPU as a whole. So wherever the GPU is needed for processing, the core/RAM settings will probably fail.

        I think it was more intended to make Pix4D play nicely with other apps that may be running, not to divide work this way. Perhaps if you set GPU to medium, you'd be able to run two instances at the same time, assuming that means "1/2" or something.

        I'm not sure what "all available CPU resources are used" means and whether the setting has any effect there. I'd simply allocate half the cores and run a single instance to try it. If CPU usage sits around 50% across the entire processing stage, you know it's only using the allocated resources, not everything in the machine.
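
        If the built-in setting doesn't hold, you can also pin instances to cores from outside. Here's a minimal sketch using psutil; the executable name and --project flag are hypothetical stand-ins, since I don't know Pix4D's actual command line:

        ```python
        import subprocess

        import psutil

        def launch_pinned(cmd, cpus):
            """Start a process restricted to the given logical CPUs,
            so two instances don't fight over the same cores."""
            proc = subprocess.Popen(cmd)
            psutil.Process(proc.pid).cpu_affinity(cpus)
            return proc

        n = psutil.cpu_count(logical=True)
        first_half = list(range(n // 2))
        second_half = list(range(n // 2, n))

        # Hypothetical commands -- substitute however you actually start a job.
        p1 = launch_pinned(["pix4dmapper.exe", "--project", "D:/jobs/a.p4d"], first_half)
        p2 = launch_pinned(["pix4dmapper.exe", "--project", "E:/jobs/b.p4d"], second_half)
        p1.wait()
        p2.wait()
        ```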

        I didn't understand your earlier comment, because you stated that you wanted to acquire more hardware. In the end you may be better off with 3x $2000 machines vs. 1x $6000: you'd have more favourable bandwidth limits on HD and memory access, although total electricity use and installation complexity go up.

