mlr and GPU computing? #2293

Closed
andreashandel opened this issue Jun 13, 2018 · 4 comments

@andreashandel

Is there a (not too complex) way of using mlr (maybe with the help of parallelMap or some other R packages) to run machine learning algorithms on a (NVIDIA) GPU? If anyone knows if/how that could be done, I'd love to get some pointers. Thanks!

@Prometheus77

AFAIK there's nothing natively set up to do this, nor is there a way to run (for example) a randomForest or similar algorithm on the GPU if it wasn't built for CUDA. One could certainly create a custom learner that implements a GPU-capable training algorithm, for example mxnet. There's actually a branch that has an mxnet learner implemented: https://github.com/mlr-org/mlr/tree/mxnet

You'd simply have to modify your learner with something like `learner <- setHyperPars(learner, device = mx.gpu())` to get it to work. (Of course it won't be that simple; there will be bugs to work through, but you get the idea.)
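
Below is a minimal sketch of what that could look like end to end, assuming mlr is installed from the mxnet branch linked above and that mxnet itself was built with GPU support; the learner id `"classif.mxff"` is a guess at what the branch registers, not a confirmed name.

```r
# Sketch only: assumes the mxnet branch of mlr and a GPU-enabled mxnet build.
library(mlr)
library(mxnet)

lrn <- makeLearner("classif.mxff")            # hypothetical learner id from the branch
lrn <- setHyperPars(lrn, device = mx.gpu())   # ask mxnet to train on the GPU

task <- makeClassifTask(data = iris, target = "Species")
mod  <- train(lrn, task)                      # training would then run on the GPU
```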

@ja-thomas
Contributor

Afaik, the only three currently implemented learners that support GPU are:

  • xgboost
  • lightgbm
  • mxnet

(See here: https://github.com/mlr-org/mlr-extralearner. A quick way to check which parameters a given learner exposes is sketched below.)
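
As a sanity check (plain mlr API, nothing branch-specific), you can list the hyperparameters a learner actually exposes and see whether the GPU-related ones are among them:

```r
library(mlr)

lrn <- makeLearner("classif.xgboost")
getParamSet(lrn)   # prints the exposed hyperparameters (nrounds, eta, max_depth, ...)
# If a GPU-related parameter is not listed here, mlr will by default refuse it
# when it is passed via par.vals.
```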

@berndbischl
Member

Yes, what was said is correct. mlr itself currently has no "infrastructure" or support for this,
but mlr could (in the future) potentially make it easier to support this "native" parallelization in the learners.

@dkyleward

@berndbischl

Just trying to make sure I understand. The xgboost package has a parameter `tree_method` which can take several values. One of those values is `gpu_hist`, which appears to toggle GPU training. For most of the parameters listed at https://xgboost.readthedocs.io/en/latest/parameter.html, I can include them with `makeLearner()`'s `par.vals` argument. If I try to use `tree_method`, I get an error. Is this because mlr doesn't currently have the ability to pass that parameter through?
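
For reference, this is roughly the call the question describes (a reconstruction, not a tested fix), together with one possible way to loosen mlr's parameter check; whether xgboost then actually trains on the GPU would still need to be verified:

```r
library(mlr)

# Fails by default if tree_method is not part of the learner's ParamSet:
lrn <- makeLearner("classif.xgboost",
                   par.vals = list(tree_method = "gpu_hist"))

# mlr can be told to pass unknown parameters through with a warning
# instead of stopping:
configureMlr(on.par.without.desc = "warn")
lrn <- makeLearner("classif.xgboost",
                   par.vals = list(tree_method = "gpu_hist"))
```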
