Faq t2b

=== List of the UIs / mX machines: ===
- m2 , m3 => 20 minutes of CPU time per process
- m6 , m7 => 1 hour of CPU time per process
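These limits apply per process on the UI itself. If they are enforced through the standard shell limits (an assumption, not something stated on this page), you can check the current value with:
 ulimit -t   # prints the per-process CPU-time limit in seconds, or "unlimited"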




=== Keep ssh connection to UI open: ===
Add option ' '''-o ServerAliveInterval=100''' ' to your ssh command
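For example, to open a session on one of the UIs with the keep-alive option (the hostname is only an illustration, assuming the UIs are reachable as mX.iihe.ac.be; use the hostname you normally connect to):
 ssh -o ServerAliveInterval=100 MYUSERNAME@m2.iihe.ac.be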


=== Access internet '''faster''' from the UIs ===
Since our bandwidth to the internet is limited and extremely expensive, you need to use a proxy:
* For http/https traffic (this uses the university connection)
 export http_proxy=http://qproxy.wn.iihe.ac.be:3128
 export https_proxy=http://qproxy.wn.iihe.ac.be:3128
* For ssh traffic, tunnel through a server you have access to (example using CERN)
:: edit your '''.ssh/config''' file as follows:
<pre>
host github.com
    ProxyCommand ssh MYUSERNAME@lxplus.cern.ch nc github.com 22
    User MYUSERNAME
</pre>
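A quick way to check that the http/https proxy is picked up (curl reads the *_proxy variables automatically; the URL is only an example):
<pre>
export http_proxy=http://qproxy.wn.iihe.ac.be:3128
export https_proxy=http://qproxy.wn.iihe.ac.be:3128
# print only the status line of the response
curl -sI https://www.cern.ch | head -1
</pre>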

=== MadGraph taking all the cores of a workernode ===

The default setting of MadGraph is to take all the available cores. This kills the site. If the number of cores used by MadGraph is higher than 1, you need to request them from the job scheduler with the following directive added to qsub:

 -lnodes=1:ppn=2 

Where ppn is the number of cores you request.
To tell MadGraph the number of cores it can take per job, use the following recipe:

<pre>
./bin/mg5_aMC
# at the MG5_aMC prompt:
set nb_core 1  # or 2 or whatever you want
save options
</pre>

Note that 'nb_core' and 'ppn' must always be set to the same value!
Note also that if you ask for more than one core, your time in the queue will probably be longer, as the scheduler needs to find enough free slots on a single machine. We advise against setting this number higher than one unless you really need it for parallel jobs.
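Putting the two pieces together, a 2-core MadGraph job (with 'nb_core' set to 2) could be submitted roughly as follows; the script name and the walltime are only placeholders:
<pre>
# run_madgraph.sh is a placeholder for your own script that launches MadGraph
qsub -q localgrid -lnodes=1:ppn=2 -l walltime=12:00:00 -o mg.stdout -e mg.stderr run_madgraph.sh
</pre>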

=== Send a job to the Old PBS local cluster: ===

 qsub -q localgrid -o script.stdout -e script.stderr [-l walltime=<HH:MM:SS>] myscript.sh
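As an illustration, a minimal job script and its submission with a one-hour walltime could look like this (the script contents and the walltime are only placeholders):
<pre>
#!/bin/bash
# myscript.sh: placeholder job that just prints the worker node and the date
echo "Running on $(hostname)"
date
</pre>
 qsub -q localgrid -o myscript.stdout -e myscript.stderr -l walltime=01:00:00 myscript.sh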