GPUs
About GPUs at T2B
We do not have any GPUs at T2B, but that does not mean you cannot access any: depending on your university, you have access to the GPU nodes of the VSC or CECI clusters described below.
If you belong to a Flemish university (VUB, UAntwerpen, UGent)
You have access to all VSC clusters, which offer quite a choice of GPUs.
Have a look at the available VSC clusters and their hardware on this page: https://docs.vscentrum.be/en/latest/hardware.html
-> For instance, the VUB Hydra cluster has 4 nodes with 2 x Nvidia Tesla P100 cards and 8 nodes with 2 x Nvidia A100 cards.
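These clusters (like the CECI clusters described below) use the Slurm batch scheduler. Below is a minimal sketch of a GPU job script; the resource values are placeholders, and the exact GPU request syntax and any required GPU partition differ per cluster, so check the documentation of the cluster you use:

 #!/bin/bash
 #SBATCH --job-name=gpu-test
 #SBATCH --ntasks=1
 #SBATCH --cpus-per-task=4
 #SBATCH --mem=8G
 #SBATCH --time=01:00:00
 #SBATCH --gpus-per-node=1    # some clusters use --gres=gpu:1 instead

 # Show which GPU(s) were allocated, to verify the request worked
 nvidia-smi

 # ... run your GPU application here ...

Submit it with sbatch gpu-test.sh and monitor it with squeue.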
Getting an account
You can easily get an account valid on all VSC clusters; just follow their documentation here: https://docs.vscentrum.be/en/latest/index.html
Access GRID resources
We have checked that, at least on VUB Hydra, you have access to /cvmfs; /pnfs can only be used via grid commands (so you cannot do a plain ls /pnfs). To get an environment similar to the one on the T2B cluster, just source the following:
 source /cvmfs/grid.cern.ch/centos7-umd4-ui-211021/etc/profile.d/setup-c7-ui-python3-example.sh
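After sourcing that script, grid commands such as the gfal2 tools become available. A short sketch of how /pnfs could then be reached (the storage endpoint and paths below are illustrative examples, not guaranteed values; adapt them to your VO and storage area):

 # A valid grid proxy is required first (replace cms by your VO)
 voms-proxy-init --voms cms
 # List your directory on the T2B storage element
 gfal-ls srm://maite.iihe.ac.be:8443/pnfs/iihe/cms/store/user/$USER
 # Copy a file from /pnfs to the local cluster
 gfal-copy srm://maite.iihe.ac.be:8443/pnfs/iihe/cms/store/user/$USER/file.root file://$PWD/file.root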
Support and Feedback
As we do not manage any of these clusters, please contact their support directly, adding us in CC if you want.
If your cluster does not have what you need (like /cvmfs), feel free to inform us; we will try to discuss with the other admins how to make this possible. Please note that VSC has a strict process for adding new software, so it might take some time.
Also, as this mixed usage of our resources and GPUs from other clusters is rather new, we would appreciate any feedback you might have!
If you belong to a Walloon university (ULB, UCL)
You have access to all CECI clusters, which offer quite a choice of GPUs.
Have a look at the available CECI clusters and their hardware on this page: https://www.ceci-hpc.be/clusters.html
-> For instance, the UMons Dragon2 cluster has 2 nodes with 2 x Nvidia Tesla V100 cards.
Getting an account
You can easily get an account valid on all CECI clusters; just follow their documentation here: https://login.ceci-hpc.be/init/
Access GRID resources
At this time, there is no /cvmfs access on the CECI clusters, and therefore no access to /pnfs resources.
You would have to use rsync to transfer files to/from T2B if needed.
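For example (a sketch only: the hostname m0.iihe.ac.be and the paths are placeholders, use your own T2B user interface machine and directories):

 # Push a results directory from the CECI cluster to your T2B home directory
 rsync -avz results/ yourlogin@m0.iihe.ac.be:/user/yourlogin/results/
 # Pull data from T2B to the CECI cluster
 rsync -avz yourlogin@m0.iihe.ac.be:/user/yourlogin/data/ .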
Support and Feedback
As we do not manage any of these clusters, please contact their support directly, adding us in CC if you want.
If your cluster does not have what you need (like /cvmfs), feel free to inform us; we will try to discuss with the other admins how to make this possible.
Also, as this mixed usage of our resources and GPUs from other clusters is rather new, we would appreciate any feedback you might have!