GPUs

About GPUs at T2B

We do not have any GPUs at T2B. However, all members of a Belgian university have access to GPUs through their university cluster, as discussed below.


If you belong to a Flemish university (VUB, UAntwerpen, UGent)



You have access to all VSC clusters, which offer quite a choice of GPUs. Have a look at the available VSC clusters and their hardware on this page.
-> For instance, the VUB Hydra cluster has 4 nodes with 2 x Nvidia Tesla P100 cards and 8 nodes with 2 x Nvidia A100 cards.
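
The VSC clusters use Slurm for job submission. As a minimal sketch (the partition name, time and memory values below are placeholders, check your cluster's documentation for the actual GPU partition), a batch script requesting a single GPU could look like this:

#!/bin/bash
#SBATCH --job-name=gpu-test
#SBATCH --partition=gpu          # placeholder: use the GPU partition of your cluster
#SBATCH --gres=gpu:1             # request one GPU card
#SBATCH --time=01:00:00
#SBATCH --mem=8G

# show which GPU was allocated to the job
nvidia-smi

Submit it with sbatch gpu-test.sh and follow its status with squeue -u $USER.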


Getting an account

You can easily get an account valid across all VSC clusters; just follow their documentation here.


Access GRID resources

We have checked that, at least on VUB Hydra, you have access to /cvmfs, and that /pnfs can only be used via grid commands (so you cannot do a plain ls /pnfs). To get an environment similar to the one on the T2B cluster, just source the following:

source /cvmfs/grid.cern.ch/centos7-umd4-ui-211021/etc/profile.d/setup-c7-ui-python3-example.sh
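
Once this environment is sourced, files under /pnfs have to be reached with grid tools rather than plain POSIX commands. As a minimal sketch, assuming the gfal2 utilities are available after the setup above (the storage host and path below are placeholders, replace them with the actual T2B storage endpoint and your own directory):

# list a directory on the T2B storage element (placeholder endpoint and path)
gfal-ls srm://<t2b-storage-host>/pnfs/<path-to-your-files>/

# copy a single file from the storage element to the local working directory
gfal-copy srm://<t2b-storage-host>/pnfs/<path-to-your-files>/file.root file://$PWD/file.root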

Support and Feedback

As we do not manage any of these clusters, please contact their support directly, adding us in CC if you want. If your cluster does not provide what you need (/cvmfs, etc.), feel free to inform us and we will discuss with the other admins how to make it possible. Please note that VSC has a strict process for adding new software, so it might take some time.
Also, as this mixed usage of our resources and GPUs from other clusters is rather new, we would appreciate any feedback you might have!



If you belong to a Walloon university (ULB, UCL)



You have access to all CECI clusters, which offer quite a choice of GPUs. Have a look at the available CECI clusters and their hardware on this page.
-> For instance, the UMons Dragon2 cluster has 2 nodes with 2 x Nvidia Tesla V100 cards.


Getting an account

You can easily get an account valid across all CECI clusters; just follow their documentation here.


Access GRID resources

At this time, there is no /cvmfs access, and therefore no access to /pnfs resources either.
If needed, you would have to use rsync to transfer files to/from T2B, as sketched below.
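
A minimal sketch of such a transfer (the login node, user name and paths below are placeholders, use your actual T2B user interface machine and directories):

# copy a results directory from the CECI cluster to your T2B storage (placeholder host and paths)
rsync -avz --progress results/ <your_user>@<t2b-login-node>:/user/<your_user>/results/

# pull input files from T2B back to the CECI cluster
rsync -avz <your_user>@<t2b-login-node>:/user/<your_user>/inputs/ inputs/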

Support and Feedback

As we do not manage any of these clusters, please contact their support directly, adding us in CC if you want. If your cluster does not provide what you need (/cvmfs, etc.), feel free to inform us and we will discuss with the other admins how to make it possible.
Also, as this mixed usage of our resources and GPUs from other clusters is rather new, we would appreciate any feedback you might have!