GPUs
This section is experimental; feel free to improve it and send us comments or questions!
Description of the hardware
- One node with 2 NVIDIA Tesla M2050 (2.6 GB) GPU cards, 8 cores, 12 GB of RAM and 160 GB of /scratch
- Two nodes, each with 6 NVIDIA Tesla M2075 (5.3 GB) GPU cards, 24 cores, 64 GB of RAM and 820 GB of /scratch
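To see which cards are actually visible on the node your job landed on, the standard NVIDIA tool nvidia-smi can be used (assuming the driver utilities are installed on the GPU nodes, which should be the case):
nvidia-smi -L
This prints one line per GPU with its model name, which is a quick way to confirm you are on the node type you expected.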
Queues
We have 3 queues:
- gpu2: the node with the 2 GPUs
- gpu6: the 2 nodes with 6 GPUs each, so 12 GPUs in total
- gpu: all 3 nodes
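Assuming the cluster uses the standard Torque/PBS tools that accompany qsub, you can get an overview of these queues and their current load with:
qstat -q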
Running Jobs using the GPUs
As we are not experts and do not use the GPUs ourselves, this is very much a work in progress. Please share your experience with us and update this page if you have more in-depth information!
Inside your jobs, you need to export the list of GPUs available to your job (the CUDA_VISIBLE_DEVICES variable) into your environment.
A small script has been written to do this, so just source it in your qsub script:
. /swmgrs/icecubes/set_gpus.sh
This should give you something like:
env | grep CUDA
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5
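Putting this together, a minimal job script could look like the sketch below. The application name my_cuda_app, its path and its arguments are only placeholders; adapt them to your own software:
#!/bin/bash
# Make the GPUs allocated to this job visible via CUDA_VISIBLE_DEVICES
. /swmgrs/icecubes/set_gpus.sh
# Optional sanity check: print the devices the job was given
echo "Using GPUs: $CUDA_VISIBLE_DEVICES"
# Run the actual GPU application (placeholder name and path)
/path/to/my_cuda_app --input mydata.dat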
Then submit your job as usual, using one of the GPU queues:
qsub -q gpu myscript.sh
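To target a specific set of nodes, use gpu2 or gpu6 instead of the general gpu queue, for example:
qsub -q gpu6 myscript.sh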