Revision as of 14:11, 23 November 2016
Direct submission to local queue on the T2_BE_IIHE cluster
Aim
- The aim of this page is to give a brief introduction on how to submit jobs to the local queue.
- The local queue allows you to send executable code to the Tier2 cluster.
- This procedure can be used to run non-CMSSW code that needs access to files on the Storage Element (SE) maite.iihe.ac.be.
- Using this procedure also avoids overloading the User Interfaces (UIs), known as the mX machines.
Procedure
- Log in to a UI mX.iihe.ac.be; replace X with a number of your choice. See the policies page for the rules that apply on the UIs.
- Make a directory and prepare an executable:
mkdir directsubmissiontest
cd directsubmissiontest/
emacs script.sh &
- Paste the script from the Attachments section below into script.sh.
- Execute the following command to submit the script to the local queue:
qsub -q localgrid@cream02 -o script.stdout -e script.stderr script.sh
- Follow the progress of your job on the UI
qstat -u $USER localgrid@cream02
- Your job is finished when it no longer appears in the qstat output. You should then find your output files in the directory you created:
/user/$USER/directsubmissiontest/script.stdout
/user/$USER/directsubmissiontest/script.stderr
/user/$USER/directsubmissiontest/
Comments and FAQ
- If you would like to access a ROOT file, first copy it to the $TMPDIR (=/scratch/jobid.cream02.ac.be/) space on the workernode.
- /scratch is the native disk of the workernode and is several hundred GB in size.
- Each job is allotted a working directory there that is cleaned automatically at the end of the job; its path is stored in the variable $TMPDIR.
- Do not read ROOT files from /user. This directory is not physically located on the workernode; it is mounted from the fileserver. Reading it from jobs puts a heavy load on the fileserver and can make the UIs slow for everyone.
****** IMPORTANT *******
If you use local submission, be aware that you can slow down the whole site. Please copy all the files your job will use to $TMPDIR to avoid this.
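The advice above amounts to a stage-in/stage-out pattern: copy inputs to $TMPDIR once, work locally, and copy results back at the end. A minimal sketch; the input and output locations below are stand-ins (created with mktemp) so the snippet runs anywhere, whereas in a real job they would be paths under /user/$USER and $TMPDIR would be set by the batch system:

```shell
#!/bin/bash
# Stage-in/stage-out sketch. INPUT and OUTDIR are stand-ins for locations
# under /user/$USER; $TMPDIR is set by the batch system on the workernode.
set -e

WORKDIR=${TMPDIR:-$(mktemp -d)}      # fall back so the sketch runs outside a job
INPUT=$(mktemp)                      # stand-in for an input file on /user
echo "dummy payload" > "$INPUT"
OUTDIR=$(mktemp -d)                  # stand-in for an output directory on /user

cd "$WORKDIR"
cp "$INPUT" input.root               # stage in: copy once, then read locally
# ... run your analysis on ./input.root here ...
echo "done" > output.log
cp output.log "$OUTDIR"/             # stage out: copy results back at the end
echo "staged out: $OUTDIR/output.log"
```

This way the fileserver sees exactly two transfers per file (one in, one out) instead of continuous random reads over the /user mount.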
How to set CMSSW environment in a batch job
Add the following lines to your script:
pwd=$PWD
source $VO_CMS_SW_DIR/cmsset_default.sh    # make scram available
cd /user/$USER/path/to/CMSSW_X_Y_Z/src/    # your local CMSSW release
eval `scram runtime -sh`                   # don't use cmsenv, won't work on batch
cd $pwd
How to make your proxy available during batch jobs
- Create a proxy with a long validity time:
voms-proxy-init --valid 192:0
- Copy it to your /user directory:
cp $X509_USER_PROXY /user/$USER/
- In the shell script you send to qsub, add the line:
export X509_USER_PROXY=/user/$USER/x509up_$(id -u $USER) # Or the name of the proxy you copied before if you changed the name
- Then, to copy a file produced by your job in the /scratch area to the storage element, run:
gfal-copy file://$TMPDIR/MYFILE srm://maite.iihe.ac.be:8443/pnfs/iihe/MY/DIR/MYFILE
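A transfer can fail (expired proxy, quota, typo in the destination path), so it is worth checking the result inside the job script. A hedged sketch around the same gfal-copy call, keeping the MYFILE and /pnfs/iihe/MY/DIR placeholders from above; the guard lets the snippet run even on a host where gfal-copy is not installed:

```shell
# Guarded stage-out sketch; MYFILE and /pnfs/iihe/MY/DIR are placeholders.
SRC="file://$TMPDIR/MYFILE"
DST="srm://maite.iihe.ac.be:8443/pnfs/iihe/MY/DIR/MYFILE"
if command -v gfal-copy >/dev/null 2>&1; then
  # report failures to stderr so they show up in script.stderr
  gfal-copy "$SRC" "$DST" || echo "stage-out FAILED for $SRC" >&2
else
  echo "gfal-copy not installed on this host"
fi
```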
Stop your jobs
If for some reason you want to stop your jobs on the server, you can use this procedure:
qstat @cream02 | grep <your user name>
This will give you a list of your running jobs with their IDs, for instance:
394402.cream02 submit.sh odevroed 0 R localgrid
Now, use the ID to kill the job with the qdel command:
qdel 394402.cream02
Your job will now be removed.
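If many jobs need to be stopped, the IDs can be extracted from the qstat output and fed to qdel one by one. A sketch, demonstrated on the sample line above; it assumes the job ID is in the first column, and the real qstat and qdel calls are commented out so the parsing can be tried anywhere:

```shell
# Parse job IDs out of qstat-style output and pass each one to qdel.
sample='394402.cream02  submit.sh  odevroed  0  R  localgrid'
# jobs=$(qstat @cream02 | grep "$USER" | awk '{print $1}')
jobs=$(echo "$sample" | awk '{print $1}')   # job ID is the first column
for id in $jobs; do
  echo "would run: qdel $id"
  # qdel "$id"
done
```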
Attachments
- script.sh
#!/bin/bash

## Some general shell commands
STR="Hello World!"
echo $STR
echo ">> script.sh is checking where it is"
pwd
echo ">> script.sh is checking how much disk space is still available"
df -h
echo ">> script.sh is listing files and directories in the current location"
ls -l
echo ">> script.sh is listing files and directories in userdir on storage element"
ls -l /pnfs/iihe/cms/store/user/$USER

## When accessing files on the storage element it is important to execute your
## code on the /scratch partition of the workernode you are running on.
## Therefore you need to copy any executable that reads or writes root files
## onto the /scratch partition and execute it there. This is illustrated below.
echo ">> go to TMPDIR"
cd $TMPDIR
echo ">> ls of TMPDIR partition"
ls -l

## Create a small root macro
echo "{
//TFile *MyFile = new TFile(\"testfile.root\",\"RECREATE\");
//MyFile->ls();
//MyFile->Close();
TFile* f=TFile::Open(\"dcap://maite.iihe.ac.be/pnfs/iihe/cms/store/user/$USER/testfile.root\");
f->ls();
f->Close();
}
" > rootScript.C
cat rootScript.C

echo ">> set root"
## Copied a root version from /user/cmssoft into /localgrid
export ROOTSYS=/localgrid/$USER/cmssoft/root_5.26.00e_iihe_default_dcap/root
export PATH=$PATH:$ROOTSYS/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ROOTSYS/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/lib

echo ">> execute root macro"
root -q -l -b -n rootScript.C
echo ">> ls of TMPDIR"
ls -l

echo "copy the file back to the /localgrid sandbox"
#cp testfile.root /localgrid/jmmaes/directsubmissiontest