Use GPU instead of CPU during training and classification

For any question related to the 3DMASC plugin
SjorsJessen
Posts: 1
Joined: Wed Nov 22, 2023 9:59 am

Use GPU instead of CPU during training and classification

Post by SjorsJessen »

Hi there,

Firstly, 3DMASC is a great plugin which has delivered us accurate results so far!

The one issue we've been running into is that during training or classification the main load seems to go to the CPU, resulting in very long waiting times during which CloudCompare often freezes, and in some cases even our whole machine.

So, I was wondering if somehow it's possible to put the load on the GPU instead of the CPU? Or if there are other configuration options which would speed up the process?

Thanks!

Kind regards,
Sjors Jessen
daniel
Site Admin
Posts: 7382
Joined: Wed Oct 13, 2010 7:34 am
Location: Grenoble, France
Contact:

Re: Use GPU instead of CPU during training and classification

Post by daniel »

Nope, sadly this algorithm cannot be transferred to the GPU...

And as for the speed-up, I guess a processor with more cores might help?
Daniel, CloudCompare admin
PablerasBCN
Posts: 284
Joined: Sat Jan 20, 2018 1:57 pm

Re: Use GPU instead of CPU during training and classification

Post by PablerasBCN »

SjorsJessen wrote: Wed Nov 22, 2023 12:43 pm (original post quoted above)
Could you please share a screenshot of the type of data you're processing?

The times I tried, I failed: I was only able to classify the tile I had used for training. I guess I didn't use the proper parameters and scales. If you could please share the type of data, and maybe the geometric features you used. Thanks
mletard
Posts: 3
Joined: Tue May 21, 2019 11:52 am

Re: Use GPU instead of CPU during training and classification

Post by mletard »

SjorsJessen wrote: Wed Nov 22, 2023 12:43 pm (original post quoted above)
Hi,
Another option to speed up the process without freezing your computer is to use 3DMASC in Python for the training and application parts, and the command line for the feature computation.
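
For the command-line part, here is a rough sketch of what that workflow could look like when driven from Python. The -3DMASC_CLASSIFY / ONLY_FEATURES arguments and all file names are assumptions based on the plugin documentation, not verified syntax, so check them against your CloudCompare version; -SILENT, -O, -C_EXPORT_FMT and -SAVE_CLOUDS are standard CloudCompare command-line options.

Code: Select all

import subprocess

# Hedged sketch: run CloudCompare headlessly so the GUI cannot freeze.
# The 3DMASC arguments below are assumptions based on the plugin docs;
# verify them against your CloudCompare version before use.
subprocess.run(
    [
        "CloudCompare", "-SILENT",
        "-O", "my_cloud.laz",              # hypothetical input cloud
        "-3DMASC_CLASSIFY", "ONLY_FEATURES",
        "my_parameters.txt",               # hypothetical 3DMASC parameter file
        "-C_EXPORT_FMT", "ASC",            # save the computed features as ASCII
        "-SAVE_CLOUDS",
    ],
    check=True,
)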
Dimitri
Posts: 156
Joined: Mon Oct 18, 2010 9:01 am
Location: Rennes (France)
Contact:

Re: Use GPU instead of CPU during training and classification

Post by Dimitri »

Hi,

The bottleneck at training time is that the random forest algorithm is not parallelized in the OpenCV library that we use. It's a pity, because this algorithm is well suited to parallelization, but there doesn't seem to be any ongoing project to improve this part of the OpenCV library.

Now, so far we have found that we don't need a lot of samples (around 1,000 per class) to get very good results, nor too many scales. But as Mathilde suggests, if you're proficient in Python, just call 3DMASC from the command line for the feature computation, and train with scikit-learn, where the random forest implementation is super fast.
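
For readers who want to try this, a minimal sketch of the scikit-learn side. It assumes the 3DMASC features were exported from CloudCompare as a CSV with one column per feature scalar field and a "class" column holding the training labels; the file and column names are hypothetical.

Code: Select all

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical export: one row per point, one column per 3DMASC feature,
# plus a "class" column with the manually assigned labels.
data = pd.read_csv("training_cloud_with_features.csv")
X = data.drop(columns=["class"])
y = data["class"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# n_jobs=-1 trains the trees on all CPU cores: this is exactly the
# parallelism that the OpenCV implementation lacks.
clf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))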

Cheers