mlr and GPU computing? #2293
Is there a (not too complex) way of using mlr (maybe with the help of parallelMap or some other R package) to run machine learning algorithms on an (NVIDIA) GPU? If anyone knows if/how that could be done, I'd love to get some pointers. Thanks!

Comments
AFAIK there's nothing natively set up to do this, nor is there a way to run an algorithm such as randomForest, which wasn't built for CUDA, on the GPU. One could certainly create a custom learner that wraps a GPU-capable training algorithm, for example mxnet. There's actually a branch that has an mxnet learner implemented: https://github.com/mlr-org/mlr/tree/mxnet. You'd simply have to modify your learner with something like `learner <- setHyperPars(learner, device = mx.gpu())` to get it to work. (Of course it won't be quite that simple; there will be bugs to work through, but you get the idea.)
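For concreteness, here is a minimal sketch of what that could look like. It assumes the mxnet branch of mlr is installed, that it registers an mxnet-based learner (the id `classif.mxff` used below is an assumption; the actual name depends on the branch), and that the mxnet R package was compiled with CUDA support:

```r
# Minimal sketch, not a tested recipe. Assumes:
#   - the mxnet branch of mlr is installed,
#   - it registers an mxnet learner (the id "classif.mxff" is assumed),
#   - the mxnet R package was compiled with CUDA support.
library(mlr)
library(mxnet)

lrn <- makeLearner("classif.mxff")           # assumed learner id
lrn <- setHyperPars(lrn, device = mx.gpu())  # ask mxnet to train on the GPU

task  <- makeClassifTask(data = iris, target = "Species")
model <- train(lrn, task)
```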
Afaik, only three of the currently implemented learners support GPU training (see here: https://github.com/mlr-org/mlr-extralearner).
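One widely used example of a learner with GPU support is xgboost; whether it is among the three referenced above isn't stated here, but it illustrates the pattern. A sketch, assuming an xgboost build compiled with CUDA (its `tree_method = "gpu_hist"` runs tree construction on the GPU):

```r
# Sketch under assumptions: xgboost must be compiled with CUDA support,
# and "tree_method" may not appear in mlr's parameter description for
# classif.xgboost in every version, so tell mlr to warn rather than stop
# on parameters it does not know about.
library(mlr)
configureMlr(on.par.without.desc = "warn")

lrn <- makeLearner("classif.xgboost", nrounds = 100)
lrn <- setHyperPars(lrn, tree_method = "gpu_hist")  # GPU histogram algorithm

task  <- makeClassifTask(data = iris, target = "Species")
model <- train(lrn, task)
```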
Yes, what was said above is correct: mlr itself currently has no "infrastructure" or support for this.
Just trying to make sure I understand. The