roberto.depietri:user:tramontana (current revision: 16/02/2013 18:40 by roberto.depietri)

====== Usage of Tramontana ======

The main source of information about Tramontana is the PISA WIKI, located at http://wiki.infn.it/cn/csn4/calcolo/csn4cluster/home, while more detailed information on the cluster is available [[http://wiki.infn.it/strutture/pi/datacenter/cluster_gruppo_iv/csn4cluster/home|HERE]]. The cluster status can be quickly visualized via:

  * [[http://farmsmon.pi.infn.it/?r=hour&s=descending&c=Tramontana| USAGE]]
  * [[http://farmsmon.pi.infn.it/lsfmon/| LSF Monitoring]]

The Tramontana cluster can also be accessed through the simplified grid access provided by the web portal created by CNAF, the [[https://portal.italiangrid.it/web/guest|IGI grid portal]].

The first step is to authenticate for its usage:

  voms-proxy-init -voms theophys:/theophys/IS_OG51/Role=parallel --valid 24:00
  myproxy-init -d -n -c 3880

Easy access to the files on the cluster can be achieved using the "uberftp" GridFTP client:

  alias UBER='uberftp -D 0 gridce3.pi.infn.it '
  UBER

As "theophys" users we have three storage areas, whose current usage can be checked by looking at the following files:

  uberftp -D 0 gridce4.pi.infn.it "cat /gpfs/ddn/csn4home/.disk_usage"
  uberftp -D 0 gridce4.pi.infn.it "cat /gpfs/ddn/srm/theophys/.disk_usage"
  uberftp -D 0 gridce4.pi.infn.it "cat /gpfs/ddn/theompi/chk/.disk_usage"

  * The first storage area is for creating the local job tree; its content is usually erased after job completion.
  * The second storage area is for permanent StoRM storage; its content can also be accessed through standard POSIX I/O.
  * The third area is a scratch area where files may live forever; it can be used to save job checkpoints or partial job results. We save our work in the sub-tree "/gpfs/ddn/theompi/chk/OG51/Parma", and in its subdirectory "BarModeRuns" all the temporary files produced by the run project "BarMode". A similar convention will be used for any other project.

For projects involving Cactus runs (suppose the main directory of the project is "/gpfs/ddn/theompi/chk/OG51/Parma/BarModeRuns"), we decided that the tree will contain:

  * **./par**  a directory where the parameter files used to run the simulations are stored.
  * ./**[simname]**  a directory containing the results obtained by running the par file ./**par/[simname].par**
  * ./**[simname]**/CHECKPOINT  the checkpoints of the various restarts
  * ./**[simname]**/output-0000  the output of the first start of the run
  * ./**[simname]**/output-0001  the output of the second start of the run (from checkpoints)
  * ./**[simname]**/output-....
  * ./**[simname]**/output-N  the output of the Nth start of the run
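
The layout above can be sketched as a small shell snippet. The base path and simulation name below are illustrative stand-ins (pointing at /tmp so the sketch is harmless to run); the real base would be the project tree on GPFS:

```shell
# Sketch of the run-tree convention described above.
# BASE and SIMNAME are illustrative; the real base would be
# /gpfs/ddn/theompi/chk/OG51/Parma/BarModeRuns.
BASE=${BASE:-/tmp/BarModeRuns}
SIMNAME=${SIMNAME:-A100M15b265_r50_8}

mkdir -p "$BASE/par"                    # parameter files live here
mkdir -p "$BASE/$SIMNAME/CHECKPOINT"    # checkpoints of the various restarts
mkdir -p "$BASE/$SIMNAME/output-0000"   # output of the first start of the run
touch "$BASE/par/$SIMNAME.par"          # the par file driving the run

ls "$BASE/$SIMNAME"
```

Each later restart from checkpoint would add a further output-NNNN directory next to output-0000.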

===== Running Cactus =====

The best way to manage jobs is to use the following commands (SUBMIT / STATUS / CANCEL / GET OUTPUT):

   glite-wms-job-submit -a -o A100M15b265_r50_8.job A100M15b265_r50_8.jdl                   [SUBMIT]
   glite-wms-job-status -i A100M15b265_r50_8.job                                            [STATUS]
   glite-wms-job-cancel -i A100M15b265_r50_8.job                                            [CANCEL]
   glite-wms-job-output --noint -i A100M15b265_r50_8.job --dir ./A100M15b265_r50_8.dir      [GET OUTPUT]
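
The .jdl file passed to glite-wms-job-submit describes the job to the WMS. A minimal sketch of what such a file might contain follows; the wrapper script name, sandbox contents, and CPU count are illustrative assumptions, not the project's actual files:

```
// A100M15b265_r50_8.jdl -- illustrative sketch, not the project's actual file
Executable    = "run_cactus.sh";
Arguments     = "A100M15b265_r50_8.par";
StdOutput     = "std.out";
StdError      = "std.err";
InputSandbox  = {"run_cactus.sh", "A100M15b265_r50_8.par"};
OutputSandbox = {"std.out", "std.err"};
```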

That's all folks....

===== Compiling Cactus =====

Compilation on "Tramontana" currently proceeds in two steps. The first step is to create
a tarball of the compilation tree and transfer it to the PISA SRM (Storage Resource Manager).
The second step is to submit a compilation job to the cluster. This procedure will be revised in the near
future to allow compilation on the Pisa user interface.
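
The first step (packing the compilation tree) can be sketched as follows. The source directory here is a /tmp placeholder so the sketch runs anywhere, and the lcg-cp transfer is shown commented out since it needs a valid VOMS proxy; the destination file name on the SRM is an illustrative assumption:

```shell
# Sketch of step one: pack the compilation tree, then ship it to the SRM.
# SRCDIR and the SRM destination file name are illustrative placeholders.
SRCDIR=${SRCDIR:-/tmp/CactusTree}
TARBALL=${TARBALL:-/tmp/cactus_src.tar.gz}

mkdir -p "$SRCDIR"   # stand-in for the real compilation tree
tar -czf "$TARBALL" -C "$(dirname "$SRCDIR")" "$(basename "$SRCDIR")"

# With a valid proxy, the tarball can then be copied to the Pisa SRM:
# lcg-cp "file://$TARBALL" \
#        "srm://gridsrm.pi.infn.it/theophys/IS_OG51/Parma/SRC/cactus_src.tar.gz"

tar -tzf "$TARBALL"   # list the archive contents as a sanity check
```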

To check which executables are on the SRM, one can issue the commands:

  lcg-ls -l srm://gridsrm.pi.infn.it/theophys/IS_OG51/Parma/Cactus/exeTRAMONTANA/
  lcg-ls -l lfn:/grid/theophys/IS_OG51/Parma/Cactus/exeTRAMONTANA
  
  lcg-ls -l srm://gridsrm.pi.infn.it/theophys/IS_OG51/Parma/SRC/
  lcg-ls -l lfn:/grid/theophys/IS_OG51/Parma/SRC/

Executables whose names end with "_MP" are built with OpenMP support.

All the scripts and data needed for compilation are stored in the directory "/storageQ01/BarModeProject/TramontanaSubmitions/Compilation" on einstein.pr.infn.it.
