
[FEA] Enable nvdashboard to show GPU memory utilization for multiple nodes #47

Open
randerzander opened this issue Feb 28, 2020 · 3 comments

Comments

@randerzander

With a Dask LocalCUDACluster it's easy to tell how well utilized my whole "cluster" of GPUs is.

For multi-node dask clusters where each node has GPUs, it would be great if nvdashboard could work with something like Dask's client.run API to gather the same utilization metrics from all nodes.
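Something along these lines is what I have in mind — a rough sketch, not nvdashboard code. It assumes `pynvml` is installed on every worker, and the function and variable names (`gpu_memory_info`, `summarize`, the scheduler address) are illustrative:

```python
def gpu_memory_info():
    """Runs on each worker via client.run: (used, total) bytes per local GPU."""
    import pynvml  # imported on the worker, where the GPUs live

    pynvml.nvmlInit()
    try:
        gpus = []
        for i in range(pynvml.nvmlDeviceGetCount()):
            handle = pynvml.nvmlDeviceGetHandleByIndex(i)
            mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
            gpus.append((mem.used, mem.total))
        return gpus
    finally:
        pynvml.nvmlShutdown()


def summarize(per_worker):
    """Aggregate {worker_address: [(used, total), ...]} into cluster totals."""
    used = sum(u for gpus in per_worker.values() for u, _ in gpus)
    total = sum(t for gpus in per_worker.values() for _, t in gpus)
    return used, total, (used / total if total else 0.0)


if __name__ == "__main__":
    # Against a real multi-node cluster this would be:
    #   from dask.distributed import Client
    #   client = Client("tcp://scheduler:8786")  # hypothetical address
    #   print(summarize(client.run(gpu_memory_info)))
    # Here, demonstrate the aggregation on made-up per-node results.
    fake = {
        "tcp://node1:1234": [(2 << 30, 16 << 30), (4 << 30, 16 << 30)],
        "tcp://node2:1234": [(1 << 30, 16 << 30)],
    }
    used, total, frac = summarize(fake)
    print(used, total, frac)
```

`client.run` executes the function on every worker and returns a dict keyed by worker address, so a dashboard could poll this periodically and render either per-node bars or the cluster-wide total.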

@jakirkham
Member

Related to issues #34 and #45.

@jakirkham
Member

I think this may already be supported by dask-labextension. Not sure if you have tried that already, Randy. Please let us know if you run into issues here and we can take a look 🙂

@jacobtomlinson
Member

Did you make any progress with the Dask lab extension @randerzander?

NVDashboard is single-node only. Dask can show GPU metrics too, which should solve your issue. One thing to note: at the moment the lab extension only lists the GPU dashboards under certain conditions, which is frustrating.
