From T2B Wiki

This section is experimental; feel free to improve it and to send us comments or questions!

Description of the hardware

  • One node with 2 NVIDIA Tesla M2050 (2.6 GB) GPU cards, 8 cores, 12 GB of RAM and 160 GB of /scratch
  • Two nodes, each with 6 NVIDIA Tesla M2075 (5.3 GB) GPU cards, 24 cores, 64 GB of RAM and 820 GB of /scratch


We have 3 queues:

  • gpu2: the one node with 2 GPUs
  • gpu6: the two nodes with 6 GPUs each, so 12 GPUs in total
  • gpu: all 3 nodes
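A quick sanity check of the GPU counts implied by the hardware list above:

```shell
# GPU totals per queue, from the hardware description above
GPU2=$((1 * 2))    # gpu2 queue: 1 node x 2 GPUs
GPU6=$((2 * 6))    # gpu6 queue: 2 nodes x 6 GPUs each
echo "gpu2=$GPU2 gpu6=$GPU6 gpu=$((GPU2 + GPU6))"
```

If the cluster runs Torque/PBS (as the qsub usage below suggests), `qstat -q` should list these three queues and their limits.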

Running Jobs using the GPUs


As we are not experts and do not use the GPUs ourselves, this page is very much a work in progress. Please share your experience with us and update this page if you have more in-depth information!

Inside your jobs, you need to export the number of available GPUs to your environment. A small script has been written to do this; just source it in your qsub script:

. /swmgrs/icecubes/
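The path above is truncated on this page, so we cannot show the actual script. As a hypothetical sketch (variable names and the GPU count are assumptions, not the script's real contents), such a setup script typically exposes the job's GPUs through `CUDA_VISIBLE_DEVICES`:

```shell
# Hypothetical sketch of what the setup script might do:
# build a comma-separated list of GPU indices and export it.
NGPUS=2                                                  # e.g. on the gpu2 node
export CUDA_VISIBLE_DEVICES=$(seq -s, 0 $((NGPUS - 1)))  # "0,1" for 2 GPUs
echo "$CUDA_VISIBLE_DEVICES"
```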

You can check that the CUDA variables have been set with:

env | grep CUDA

Then submit your job as usual, using one of the GPU queues:

qsub -q gpu
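Putting the pieces together, a GPU job script might look like the rough template below. The `#PBS` resource lines and the application name are assumptions; adapt them to your site and job:

```shell
#!/bin/bash
# Rough GPU job template (the #PBS lines are assumptions, not site policy)
#PBS -q gpu                  # or gpu2 / gpu6, as described above
#PBS -l walltime=01:00:00    # hypothetical walltime request; adjust as needed

# Source the GPU setup script from this page here, then run your program:
# ./my_gpu_program           # hypothetical application binary
MSG="GPU job template"
echo "$MSG"
```

Submit it with `qsub myscript.sh`; a `-q` option on the command line, as in the example above, overrides the queue named in the script.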