CorrectWorkflow
!! THIS PAGE IS UNDER CONSTRUCTION !!
This page describes what we consider a correct workflow.
It details how to use the storage and compute resources at T2B efficiently, so that you get fast analysis turnaround while avoiding situations where a workflow puts stress on the cluster infrastructure and impacts your fellow users.
Introduction
This page covers a few different points and, for each, the way of doing things that we know to be preferred, based on our knowledge of the underlying infrastructure as well as experience and user feedback.
When you start using the cluster, you have to juggle reading files from /pnfs, loading software from /cvmfs, sending jobs to the cluster, and writing your results to either /pnfs or /user. With a small number of jobs, a badly implemented step in your workflow is unlikely to impact the cluster or affect others. That is a totally different story once you start sending O(100) or O(1000) jobs! At that scale, the way you do things can make your own jobs inefficient, but it can also have direct consequences for everyone else using the cluster resources!
You can imagine that while a few jobs reading the same file on, for instance, /pnfs will not stress the system, thousands of jobs reading the same file (which resides on one hard disk of one server) is not the same scale at all! Likewise, jobs using 3 times more memory than expected would not harm the worker nodes if there are only a few of them, but they could gobble up all free memory if sent by the thousands, with many of them landing on each worker node.
This is why some steps should be done in a scale-proof way from the moment you start designing your workflows, making sure that you can send thousands of jobs to the cluster without your jobs crashing and without forcing us to remove them all to preserve the cluster.
Please go through each of the points and check whether you should adapt your method! Note that we are always open to discussion, and if you have questions or want details on some steps before sending a big production or analysis, do not hesitate to contact us! We are here to help, and we much prefer controlled big submissions to wasting a lot of resources and cleaning up the mess afterwards.
I/ Reading files from /pnfs
Preferred way to read files
For some time now, /pnfs has been accessible for reading and writing in NFS mode using POSIX commands (ls, cp, rm, mkdir, etc.). While that is certainly very practical, it is not an efficient way to read data from /pnfs.
The best workflow when you need to work on files is to first copy them locally to the job's TMPDIR, then read them from this local disk copy rather than from /pnfs. The best way to copy files FROM /pnfs is to use (replacing <MYFILE>):
dccp dcap://maite.iihe.ac.be/pnfs/iihe/.../<MYFILE> $TMPDIR/<MYFILE>
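To make this concrete, here is a minimal job-script sketch of that pattern. The <MYPATH>, <MYFILE> and <MYUSERNAME> placeholders, as well as the my_analysis command, are hypothetical and must be replaced by your own paths and executable; only the dccp call and the use of $TMPDIR come from the recommendation above.

#!/bin/bash
# Copy the input file from /pnfs to the job's node-local scratch space ($TMPDIR).
dccp dcap://maite.iihe.ac.be/pnfs/iihe/<MYPATH>/<MYFILE> $TMPDIR/<MYFILE>

# Process the local copy instead of reading directly from /pnfs.
my_analysis --input $TMPDIR/<MYFILE> --output $TMPDIR/result.root

# Copy the result back to your own storage once the job is done (adapt the path).
cp $TMPDIR/result.root /user/<MYUSERNAME>/results/

Reading from the local copy means the file is fetched from /pnfs only once per job, and all subsequent reads hit the fast local disk of the worker node instead of the storage servers.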
Prefer big files over lots of small ones
/pnfs is a grid-accessible mass storage system. That means it is meant to be accessed with grid tools (and grid authentication), and is mainly intended for storing files of O(1-10 GB). It is not meant to store thousands of small text files efficiently.
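To stay within this recommendation, bundle many small outputs into one big file before writing them to /pnfs. The sketch below uses tar as one possible way to do that; the paths are placeholders to adapt to your own area. For ROOT files, merging them with a tool like hadd achieves the same goal.

# Bundle many small text files into a single compressed archive.
tar -czf results.tar.gz my_output_dir/*.txt

# Store the single large file on /pnfs instead of thousands of small ones
# (placeholder destination path, adapt it to your own /pnfs area).
cp results.tar.gz /pnfs/iihe/<MYPATH>/results.tar.gz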