Getting started with the MadGraph software
Remark: this is a preliminary page... more to come
Environment setup
- [[Python_2_6|How to source Python 2.6]]
Local submission
- If you have never submitted jobs to the cluster without CRAB, first read and execute [[localSubmission|How to submit jobs with PBS]]
How to use MadWeight on the IIHE cluster
- On your computer, get the MadWeight version you would like to use:
bzr branch lp:~maddevelopers/madgraph5/<yourpreferredmadweightversion> # to install the version of madweight you would like to use, instructions tested with madweight_mc_perm but you can use another version
- Copy it to the m-machines:
scp -r <yourpreferredmadweightversion> <yourusername>@m0.iihe.ac.be:/localgrid/<yourusername>/
- Log onto an m-machine.
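For example, using the same m0 machine as in the scp command above:
ssh <yourusername>@m0.iihe.ac.be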
- Go to your /localgrid directory:
cd /localgrid/<yourusername>/   # /localgrid is the only directory that can be seen from the nodes!
mkdir temp/                     # later in the instructions you will understand why
ls
- You should see the directory <yourpreferredmadweightversion>
- Now you can use mg5 and generate your process as usual. E.g. inside the directory madweight_mc_perm, type:
cd <yourpreferredmadweightversion>
./bin/mg5
generate p p > t t~ , ( t > b w+ , w+ > mu+ vm ), ( t~ > b~ w- , w- > u~ d )
output madweight MY_PROC
- For madweight5_interface, you need to add the following missing directory to MY_PROC:
cd MY_PROC
cp -r ../madgraph/various bin/internal/.
- First, modify MY_PROC/Cards/me5_configuration.txt:
# Default Running mode
# 0: single machine / 1: cluster / 2: multicore
run_mode = 1
# Cluster Type [pbs|sge|condor|lsf|ge] Use for cluster run only
# And cluster queue
cluster_type = pbs
cluster_queue = localgrid@cream02
# Path to a directory readable from the nodes to avoid direct writing on the central disk
# Note that the condor cluster avoids direct writing by default (therefore this option does not modify condor clusters)
cluster_temp_path = /grid_mnt/volumes_Z2Volume_localgrid/<yourusername>/temp/   # this is the directory you created earlier!
- So far these were the usual steps to use MadWeight on a cluster. Unfortunately, this is not enough, for two reasons:
1. To submit the jobs, MadWeight prepares some instruction files containing the path to the executable, the input files and the output files. This path is constructed from the current directory via os.getcwd(). But that command returns the real path, /localgrid_mnt/localgrid/..., which does not point to any existing location on the nodes. Instead, the path that needs to be given to the nodes is the symbolic link /localgrid/... (see the sketch after this list). The routine madweight_interface.py has been changed by Pierre Artoisenet to solve this problem.
2. The other problem is that the default environment on the nodes does not allow launching the Fortran executable comp_madweight that calculates the weights. I modified the routine cluster.py to solve this problem.
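A minimal Python sketch of the first issue (assuming /localgrid on the m-machines is a symbolic link to the real mount point):
import os

# os.getcwd() resolves symbolic links, so inside /localgrid/<yourusername>/ it
# returns the resolved real path (something like /localgrid_mnt/localgrid/<yourusername>/...),
# which does not exist on the worker nodes.
print(os.getcwd())

# The nodes only see the symbolic link, so the paths written into the job
# instruction files must instead start with /localgrid/<yourusername>/...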
- So you need to overwrite the two previously mentioned scripts with the ones attached at the bottom of this wiki page:
cp madweight_interface.py MY_PROC/bin/internal/.
cp cluster.py MY_PROC/bin/internal/.
- Modify the walltime in cluster.py at line 784 to a sensible value. The hard-coded default is currently 1 minute. The shorter the walltime, the higher the priority your jobs will have; however, if it is too short, your job will be killed before it finishes its calculation.
command = ['qsub', '-o', stdout,
           '-N', me_dir,
           '-e', stderr,
           '-l walltime=HH:MM:SS',  # HH:MM:SS needs to be a sensible value, a bit more than the actual running time needed, because the job will be killed once it exceeds this walltime
           '-V']
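For instance, with a 30-minute walltime (an illustrative value only; pick something slightly above the running time you expect):
command = ['qsub', '-o', stdout,
           '-N', me_dir,
           '-e', stderr,
           '-l walltime=00:30:00',  # illustrative value
           '-V']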
- In Cards/MadWeight_card.dat you can specify the number of integration points, the number of events per job (nb_events_per_node), and other options.
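For example, a fragment of that card could look roughly as follows; only nb_events_per_node is taken from the text above, while the block name and the value are assumptions, so check the card generated in your MY_PROC for the exact layout:
Block MW_Run
    ...
    nb_events_per_node   5    # assumed example value: number of events handled by each cluster job
    ...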
- Proceed with running madweight as usual
- To adjust transfer functions in the directory MY_PROC, do:
./bin/mw_options define_transfer_fct
- Run madweight as:
./bin/madweight -1   # generate all the cards for the evaluation of the weights
./bin/madweight -2   # create the phase-space generator that will be used for the evaluation of the weights
./bin/madweight -3   # compile the code
./bin/madweight -4   # parse the input event file: if an input.lhco is available in MY_PROC/Events/ it will be used; if not, you will be asked for the path to an lhco file, which will then be copied to MY_PROC/Events/. To run on a different lhco file, remove or replace the input.lhco in MY_PROC/Events/.
./bin/madweight -6   # launch the jobs on the cluster
- Check the status of your jobs at
http://mon.iihe.ac.be/jobview/overview.html
- After a while, you get a message that all jobs are done. At this stage, you can type
./bin/madweight -8   # collect the results
- If for some reason you want to stop your jobs, you can do:
qstat @cream02 | grep <yourusername>
- This will give you a list of the running jobs with their IDs, e.g.:
394402.cream02 submit.sh <yourusername> 0 R localgrid
- Kill the job:
qdel 394402.cream02
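To kill all of your jobs at once, a standard shell one-liner along these lines should work (a sketch, assuming the job ID is the first column of the qstat output, as in the example above):
qstat @cream02 | grep <yourusername> | awk '{print $1}' | xargs qdel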