Instructions for the scripts to perform Mokka simulation and Marlin reconstruction on the Grid.
A prerequisite for using the scripts is to own a grid certificate. Desy Grid Web Page
An introduction to the Grid can be found in the flcwiki. Grid Start
Since the scripts are based on a mysql database, a mysql tutorial might be useful (though knowing mysql is not strictly necessary to use the scripts). Mysql tutorial
Any feedback is welcome.
Mysqlcc: accessing the database
What the jobs do
Inserting a new generator file in the database
Execution of the Mokka scripts
Execution of the Marlin scripts
Extras for big or mass productions
Create new binaries
Downloading the scripts
Have a look also here before running the scripts:
Size of the generator files
Also working in preload burdens the dcache
About Kerberos credentials and letting the scripts loop alone
Some useful MYSQL queries
Database Structure Back to the index
The scripts are based on a mysql database, provided with a web interface. The scripts automatically interact with the database to register all the information about the generator, Mokka and Marlin files. The tables of the database, called MC, are:
Input_Files. This table, provided with web interface, contains all the information about the generator files: location, number of events, cross sections, ...
Grid_jobs. This table, not provided with web interface, contains all the temporary information about the Mokka jobs to be submitted or already submitted to the Grid. Every row of this table corresponds to one job. Location of the input file used, number of events to be simulated, name of the output slcio file, ... are reported. The "Status" column tells whether the job is, for example, still to be submitted ("new"), "submitted" to the Grid, or already "checked".
MC_Data. This table, provided with web interface, contains all the information about the simulated files: grid and storage element location, number of events, process, ...
Grid_Reco_jobs. This table, not provided with web interface, contains all the temporary information about the Marlin jobs to be submitted or already submitted to the Grid. Every row of this table corresponds to one job. Location of the input simulated files, number of events to be reconstructed, name of the output slcio file, ... are reported. There is a "Status" column as for the Grid_jobs table.
RECO_Data. This table, provided with web interface, contains all the information about the Marlin files: grid and storage element location, number of events, process, ...
This extra table, provided with web interface, registers the activity of the Mokka jobs on all the Grid Computing Elements used. The scripts automatically update it whenever new jobs are run.
Mysqlcc: accessing the database Back to the index
The web interface of the database is meant for users who need to access the information about the files already produced. When using the scripts it can be useful to look at the database directly (for example to see information which is not reported in the web interface, so as not to overload it, or to inspect the two tables which hold the temporary information about the grid jobs and have no web interface). This can be done using mysqlcc (mysql control center), which provides a user friendly monitor of mysql databases. Follow these instructions:
Name: any name, not relevant.
Host Name: flcweb01.desy.de, the machine where the database is hosted.
User Name: MCRead. This role only allows reading the database, not making any change to it, and requires no password. If you need to modify something in the database, use one of the roles which allow that, and fill in the Password field as well.
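If you prefer the command line to mysqlcc, the same read-only access works with the plain mysql client (host, user and database name as above):
mysql -h flcweb01.desy.de -u MCRead MC
From the mysql prompt you can then type queries like the ones listed in Some useful MYSQL queries.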
What the jobs do Back to the index
Difference between Mokka and Marlin jobs: the ILC software grid installation. The way the scripts work is slightly different for the simulation (Mokka) jobs and the reconstruction/digitization (Marlin) jobs. The Mokka jobs are meant to be run almost everywhere, where possible, since the simulation is slow and has to be shared among all the available resources. At the link CE monitor there is a list of all the Computing Elements which have been used to run Mokka jobs. All the details are reported: number of successfully completed jobs, number of failures, reasons for the failures. This list is automatically updated while the scripts are running. The Mokka jobs do not use local Grid installations of the ILC software on the Computing Element where the job runs. This is partly due to "historical" reasons: the Mokka scripts were set up before the grid installation of the ILC software was available. Another reason is that, as explained, the Mokka jobs are meant to run everywhere, and for a new release of the ILC software an installation cannot be immediately available on all or many of the Computing Elements supporting the ilc VO. Moreover Mokka sometimes changes very quickly, for example while building and testing a new detector model, and the grid installation of the ILC software cannot be updated everywhere so often. For these reasons it was decided to keep the Mokka jobs independent of the grid installation of the ILC software.
On the other hand the reconstruction is faster. It would not be convenient to ship all the Mokka simulated files needed as input for a reconstruction to the Computing Element where the job runs, since this would no longer take a negligible time compared to the job itself (as it does for the simulation). So it was decided to run the reconstruction jobs only locally at the DESY Computing Element, working in "preload": the input Mokka files do not need to be copied to the Computing Element, but are read directly from the Storage Element. For this reason, since they are meant to be run only in our Computing Element, the Marlin jobs use the Grid ILC software installation. Note that Also working in preload burdens the dcache.
Mokka jobs: copying the input files to the Computing Elements.
So the Mokka jobs have to copy to the local Computing Element where they are running, together with the generator input file needed, also the binaries of the necessary software: Mokka, of course, but also mysql, the dump of the Mokka database (which cannot be accessed directly from the Computing Elements), and the data for Geant4. For details about these input files see the section Create new binaries. Note that the scripts can access any input file, if no problem occurs, also from storage elements other than the DESY one. When possible, it is particularly useful to replicate the files needed as input to the Mokka jobs to other Storage Elements as well. If copying a file from one Storage Element fails, the scripts also try the replicas on the other Storage Elements (in case of repeated failures all the replicas are tried several times, with sleeping intervals in between). So having the files replicated notably increases the efficiency of the Mokka jobs. Another reason to replicate the input files (in particular the generator files, which are usually the biggest input files) is not to put too much load on one single Storage Element. For example, for the first mass production the whole Slac SM generator sample was replicated to the Zeuthen and the CNRS Storage Elements as well. The access to these generator files by the Mokka jobs was shared equally between these three Storage Elements, avoiding excessive load on one single storage element. See also Size of the generator files.
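As an illustration, a file already registered in the catalogue can be replicated to another Storage Element with the standard LCG data management tools; a minimal sketch, where the destination SE host and the LFN are placeholders, not real production values:
# replicate an input file to a second Storage Element (hostname and LFN are placeholders)
lcg-rep -v --vo ilc -d dest-se.example.org lfn:/grid/ilc/generated/somefile.stdhep
# list the replicas registered in the catalogue for that file
lcg-lr --vo ilc lfn:/grid/ilc/generated/somefile.stdhep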
Output. The output of the Mokka jobs consists of one simulated .slcio file and one .tar.gz archive. The archive contains the standard error and output files of the job, the log file of the Mokka simulation, the Geant macro and the steering files used, and the Gear geometry output from Mokka.
The output of the Marlin jobs also consists of one .tar.gz archive, plus both DST and full REC .slcio reconstructed files. The full REC .slcio file is automatically split by the job if it exceeds a chosen size, so there can be several full REC .slcio files in the output of one job. The .tar.gz archive contains the standard error and output files of the job, the Marlin log file and the steering file used.
The jobs try to copy their output to the DESY storage element. If this fails, they retry several times with sleeping intervals in between. In case of repeated failures the job fails. No alternative storage element is tried. The scripts could be improved by allowing the jobs to copy the output to storage elements other than the DESY one, though this is not the main cause of job failure, as shown by the CE monitor.
As soon as the simulation/reconstruction is concluded, the job checks that all the expected events have actually been scanned by Mokka/Marlin. If this is true it creates a directory on the grid catalogue, with a specific name, which is searched for by the checking scripts once the job has finished. When the checking scripts have found this directory on the grid catalogue and checked the job, the directory is deleted. The command to create a directory (lfc-mkdir) is the last command that could possibly fail, and is extremely fast. I decided on this way of checking the quality of the simulation/reconstruction since it is the fastest while still being absolutely safe. It would be much more time consuming - and without any advantage - to copy all the tar.gz archives of the finished jobs locally to the machines where the scripts are running, open them, and look at the log files of Mokka/Marlin to check whether all the desired events have been scanned. Looking for the existence of the catalogue directory is faster, and also convenient in case the scripts should be changed to allow storing the output on storage elements other than the DESY one: in that case copying the tar archives to DESY and opening them to check them would make the checking routine even slower. The checking routine also checks for the existence, at the expected locations (both storage element and grid catalogue), of the .tar.gz archive and the .slcio files (which must have non-null size). The name of the "checking" directory also tells the Marlin checking scripts how many full REC .slcio files to expect, since their number is not fixed (the output processor splits them in a size-dependent way).
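As a rough illustration of this mechanism (a sketch only: the marker directory naming convention, the log parsing and the variable names are assumptions, not the actual code of the production scripts):
# --- at the end of the grid job (sketch) ---
EXPECTED=200                                   # events requested for this job
SCANNED=$(grep -c "Event" mokka.log)           # hypothetical way of counting the processed events
if [ "$SCANNED" -eq "$EXPECTED" ]; then
    # signal success with a marker directory in the grid catalogue (very fast, hard to fail)
    lfc-mkdir "/grid/ilc/production/checks/${JOBNAME}_OK_${NFULLREC}"
fi
# --- in the checking script on the submission machine (sketch) ---
if lfc-ls "/grid/ilc/production/checks/${JOBNAME}_OK_${NFULLREC}" > /dev/null 2>&1; then
    echo "job ${JOBNAME} scanned all events; ${NFULLREC} full REC files expected"
    # the marker directory is then removed and the job status is set to checked
fi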
In case of job failure, anything left over by the job is deleted and the job is automatically resubmitted. The database keeps track of the number of times a job has been resubmitted to the Grid. The maximum number of times that a job is allowed to be resubmitted can easily be configured in the checking scripts.
Inserting a new generator file in the database Back to the index
The scripts can be used to simulate only generator files which are present in the database (Input_Files table). Two scripts are provided which can help to enter new generator files in the Input_Files table: Fill4DESY.sh and getrealnumberDESY.sh.
First copy the generator file to the dCache storage with dccp:
dccp filename /dcachedirectory/filename
Then prepare a text file (called list.list in the following) with one row per generator file, in the format:
filename1 process tag polarization_e- polarization_e+ cross_section generator
filename2 process tag ...
each row corresponding to one file. The "tag" identifies the production to which the file belongs. The existing tags are reported in the Tag Summary.
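A purely illustrative row might look like this (all values are placeholders, not taken from a real production):
mypol_myprocess_01.stdhep myprocess mytag -0.8 +0.3 45.2 mygenerator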
For the file name the structure adopted should be polarizations_process_index,
where the first part of the name reports the beam polarizations, the second the process name, and the third the index of the file (in case there is more than one file for the process). Even if there is only one file, you should put the index _01 in the filename, since it might later be decided to simulate more events in a second file. The scripts expect this index.
the "_" means decaying to: stau stau decays to neutralino1 tau neutralino1 tau. "Electron positron decaying to" i.e. e1e1_ at the beginning has always been implied, together with the charge coniugation, so that e1, for example, is used both for positron and electron, NOT e1 and E1. The case in which the initial beam particles are replaced with a photon is instead reported in the process name which would then be:
ae1_stau1stau1_neu1e3neu1e3
if for example only the beam electron is replaced with a photon (e1a_ for the positron), or
aa_stau1stau1_neu1e3neu1e3
if both beam particles are replaced with photons.
NOTE. If the location of the files is something like /pnfs/desy.de/flc/mc-2008/generated/Desy_sps1ap/, report the location /pnfs/desy.de/ilc/mc-2008/generated/Desy_sps1ap/, i.e. with "flc" replaced by "ilc". The grid location is the catalogue directory where the file will be registered. It should mirror the storage element location, as in the example.
Then simply launch the script:
Fill4DESY.sh list.list
and the information for the new files will be filled into the Input_Files table.
Execution of the Mokka scripts Back to the index
The scripts to run a Mokka job are almost completely automated; only the first script, which schedules new jobs, needs to be configured. They are:
Every row is a request, with the name of the generator file to simulate, the number of events to simulate, and the first event of the file from which to start (in general this will simply be 0, since you usually have no reason not to start from the first event of the file, but it can be a different value - as 90 in the second example - if, for example, some events of the file have already been simulated in previous jobs and you don't want to re-simulate them). As explained in Inserting a new generator file in the database, only files present in the Input_Files table of the database can be simulated. In case of big or mass productions, where it is not possible to fill this kind of text file by hand, look at Extras for big or mass productions. In the following explanation this text file will be called example.txt.
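A hypothetical example.txt might look like this (the file names are placeholders; the format is generator file, number of events, first event):
mypol_myprocess_01.stdhep 3000 0
mypol_otherprocess_01.stdhep 500 90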
Go to the part of the script where it tells you to "Select the following factors", as shown in the picture, and choose the different variables. The ones that you will usually have to change are only the detector model to use (detectormodel) and the directory where to save the produced files. The grid catalogue path and the storage element path should always mirror each other. The script therefore expects that the storage element path starts with
/pnfs/desy.de/ilc
and that the grid catalogue path starts with
/grid/ilc
and that the remaining parts of the two paths are exactly the same, so it asks you to give in the variable "outputdirrelative" only the part of the path after /pnfs/desy.de/ilc or /grid/ilc. This is done to prevent using grid and storage element paths that do not mirror each other, which can in general be avoided. So if you want to store the output in /pnfs/desy.de/ilc/production/simulatedfiles/, set:
outputdirrelative="/production/simulatedfiles/"
The output will then be put in /pnfs/desy.de/ilc/production/simulatedfiles/ and the corresponding grid catalogue path will be /grid/ilc/production/simulatedfiles/. Note that the storage element path is recommended to be /pnfs/desy.de/ilc/something, though on our machines it will be reachable as /pnfs/desy.de/flc/something. If you prefer not to give only the relative path, just ignore what is given to the "outputdirrelative" variable and give the Storage Element path directly to the variable "storageelement" and the catalogue location to the "outputdir" variable. You can then choose:
There is just one addendum to this configuration, in case you introduce a new tag in the database. As explained in Inserting a new generator file in the database, the input files are marked with a "tag" like "Slac_SM", "Desy_sps1ap", ... for obvious reasons. The way the output filename is built differs depending on the tag of the generator file used as input for the simulation. For single particle files, for example, it is meaningless to put the beam polarizations in the filename. To avoid entering simulated files with non-meaningful names into the database, the script does not allow scheduling jobs for input files whose tag has not yet had the proper building of the filename checked. To make the script work for a new tag, first of all you should add this tag to the list of known ones.
Go to the part of the script shown in the screen shot (just search for "yourtag" to reach it), and substitute "yourtagname" with the name of the new tag. Now the script won't complain anymore about the new tag. The second and last step is to choose the proper building of the output name for this tag.
Go to the part of the script shown in the screen shot (just search for "yourtag" again to reach it), and substitute "yourtagname" with the name of the new tag. Uncomment the three lines under "#Building up the filename for a new tag" by removing the "#" at the beginning, and play with the variable stringout to obtain the proper name for the output.
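Purely as an illustration, the adapted block might end up looking roughly like this (only the comment line and the variable name stringout are taken from the real script; the tag name and the pieces used to compose the name are placeholders):
#Building up the filename for a new tag
# hypothetical composition of the output name for a new tag "mynewtag"
stringout=${process}_${detectormodel}_mynewtag_${index}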
You can now launch:
Fillgrid.sh example.txt 200 80
where example.txt is the text file with the requests as explained before, while 200 and 80 are respectively the splitting and merging values. The splitting value is the maximum number of events that you want in a job. 3000 events requested with a splitting value of 200 means, for example, 15 jobs with 200 events each. Suppose now that the request is for 3007 events instead of 3000. There would then be 15 jobs with 200 events and one with only 7 events. This is not very smart, and that's why I introduced a merging value (80 in the example). A merging value of 80 tells the script that if only 7 events are left in one job, since 7 < 80, they will be added to the previous job: there will be 14 jobs with 200 events and one job with 207 events. If the request is for 3100 events, instead, there will be 15 jobs with 200 events and 1 job with 100 events, since 100 > 80.
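The splitting/merging rule can be summarized by the following sketch (an illustration of the logic described above, not the actual code of Fillgrid.sh):
requested=3007; split=200; merge=80
njobs=$(( requested / split ))          # full jobs of "split" events
rest=$(( requested % split ))           # events left over
if [ "$rest" -gt 0 ] && [ "$rest" -lt "$merge" ]; then
    # too few events left over: merge them into the last job
    echo "$(( njobs - 1 )) jobs with $split events, 1 job with $(( split + rest )) events"
elif [ "$rest" -gt 0 ]; then
    echo "$njobs jobs with $split events, 1 job with $rest events"
else
    echo "$njobs jobs with $split events"
fi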
How should you choose the merging and splitting values? A splitting value of 200 events is reasonable for a fully hadronic final state to be simulated. Simulation of hadronic events is slow, and the Grid queues are not infinite. There are also other reasons not to increase the number of events in one job too much (increasing size of the output file, waste of CPU time if the job should fail in the end, after the simulation). For a leptonic event, or an event with only gammas and neutrinos in the final state, you can choose a higher value of the splitting factor.
The script has to be configured to copy the proper tar ball, which has to be in the tool directory specified in Fillgrid.sh. Select the name of the right tar archive to be used in the LFN variable, which appears three lines under "Select here for the Geant data" in the script.
The proper path to the correct data also has to be set. The two screen shots show the example for version 9.0 of Geant4. You can now launch:
ReadParamsAUTO.sh
and the scheduled jobs will be submitted to the grid, and their status changed to "submitted". At the beginning of the script you can change which Computing Elements to use, how many jobs to submit to a certain Computing Element, ... The submission is organized by the script in such a way that most of the jobs are submitted to a few, known, usually stable CEs (which I will call "priority" Computing Elements). A smaller amount of jobs is distributed to the remaining CEs (which I will call "non-priority" CEs). Don't trust the way the script is pre-configured, since the CEs are never stable: you will have to play from time to time with the way the jobs are distributed.
Select at the beginning of the script:
autocheck.sh
This script checks the jobs which have been submitted to the grid. If a job has failed, its status will be changed back to "new", so the next time you launch ReadParamsAUTO.sh it will be resubmitted. The job won't be resubmitted forever: after a certain number of failures it is no longer resubmitted, it is deleted from the Grid_jobs table and all its information is written to the "Problems.list" file. You can choose the limit on failures by configuring the script checkJOBS.sh (variable "maxresub" at the beginning of the script). The echo of the script on the monitor for a successfully checked job will be something like:
with exit code 0
Checking File /grid/ilc/mc-2008/simulated/LDC01_06Sc/Desy_sps1ap_ppr002/M06-06-p03_ppr002_se1rse1r_e1e1neu1neu1_500_LDC01_06Sc_LCP_ep-1.0_em+1.0_Desy_sps1ap_0389
checking the storage element /pnfs/desy.de/flc/mc-2008/simulated/LDC01_06Sc/Desy_sps1ap_ppr002/M06-06-p03_ppr002_se1rse1r_e1e1neu1neu1_500_LDC01_06Sc_LCP_ep-1.0_em+1.0_Desy_sps1ap_0389.slcio
===== Slcio file for M06-06-p03_ppr002_se1rse1r_e1e1neu1neu1_500_LDC01_06Sc_LCP_ep-1.0_em+1.0_Desy_sps1ap_0389 exists...
...Ok directory found!
...Tar ball also found!
Already 14254 successes for this CE lcgce02.gridpp.rl.ac.uk:2119/jobmanager-lcgpbs-grid500M
Everything good deleting ok directory!
ok directory properly deleted, updating the status to checked
writeMC.sh
Any comments to add in the database can be configured in this script ("Comments" variable). Note that the magnetic field to be reported in the database is also hardcoded in this script. The script is configured for a magnetic field of 4 for the LDC non-Prime detector models, 3.5 for the Prime detector models and 3 for the GLD-like detector models (if needed, change the variables "BField", "fieldprime" and "fieldgld" respectively).
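For reference, the hardcoded values just described correspond to something like the following (variable names and values as given above; the exact layout in writeMC.sh may differ):
Comments=""          # any comments to be written to the database
BField=4             # LDC non-Prime detector models
fieldprime=3.5       # Prime detector models
fieldgld=3           # GLD-like detector models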
Execution of the Marlin scripts Back to the index
The Marlin scripts are much easier to use than the Mokka ones. First of all, the input simulated files are already in the database and don't need to be entered into it, as the generator files needed by Mokka do. Moreover, the Marlin jobs are run only on the Desy CE, so there are no complications in organizing the submission of the jobs to different Computing Elements. Finally, they make use of the grid installation of the ILC software, so no binary for the software has to be created.
/pnfs/desy.de/ilc
and that the grid catalogue path starts with
/grid/ilc
and that the remaining parts of the two paths are exactly the same, so it asks you to give in the variable "directoryout" only the part of the path after /pnfs/desy.de/ilc or /grid/ilc. This is done to prevent using grid and storage element paths that do not mirror each other, which can in general be avoided. So if you want to store the output in /pnfs/desy.de/ilc/production/reconstructedfiles/, set:
directoryout="/production/reconstructedfiles/"
The output will then be put in /pnfs/desy.de/ilc/production/reconstructedfiles/ and the corresponding grid catalogue path will be /grid/ilc/production/reconstructedfiles/. Note that the storage element path is recommended to be /pnfs/desy.de/ilc/something, though on our machines it will be reachable as /pnfs/desy.de/flc/something. If you prefer not to select only the relative path of the output, just ignore what is assigned to the variable "directoryout" and assign the storage element location directly to the variable "storageelement" and the catalogue directory to "outputdir". Then choose the software version to be used ("softwareversion") and the contact data.
Extras for big or mass productions Back to the index
Select the maximum number of simulated files to reconstruct in one job ("splitreco"). Select the tag identifying the simulated files you want to reconstruct ("tag"). Select the software version for the reconstruction ("ilcsoft"). "percent" allows you to select only a percentage of the production (leave it at "1" to reconstruct the whole production). Exclude some processes from the reconstruction using the "query_exclude" query. Finally, for the Standard Model you can exclude some final states (e.g. set ee_6f=0 to exclude all the 6 fermion final states). For non Standard Model productions, leave all the final states included (set to 1). The script produces the toreco.txt text file which is required as input by the FillgridReco.sh script. Note that the script is not user-specific: if someone other than you has already reconstructed, with the same ILC software version, the same files you want to reconstruct, the script will consider these files already reconstructed and won't schedule them again for a new reconstruction.
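An illustrative configuration might look like this (the variable names follow the description above; the values, tag and software version are just examples taken from elsewhere on this page):
splitreco=10                              # maximum number of simulated files per reconstruction job (placeholder value)
tag="Slac_SM_LDCPrime_02Sc_ppr002"        # simulated files to reconstruct
ilcsoft="01-04"                           # ILC software version for the reconstruction
percent=1                                 # reconstruct the whole production
ee_6f=0                                   # Standard Model only: exclude the 6 fermion final states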
Create new binaries Back to the index
mokkadump.sh dumpname.sql
The dump of the current Mokka database will be created with the name dumpname.sql (a good rule is to include the date in the name of the dump).
Downloading the scripts Back to the index
Configuring the scripts. After the download, untar the scripts in a directory. Make the scripts executable by launching, from the directory where you have untarred them: chmod u+x *.sh
Before using them you should configure them. Since there is a single database and several users can use it, it is important that every user has an identifier, so that everybody submits, writes and checks his or her own jobs.
The identifier can be any string, for example your name, without white spaces in it. Once you have chosen it, simply run, just once, from the directory where you have untarred the tar ball with the scripts:
configure.sh identifier
After this command is executed the scripts are ready to be used. NOTE: what does the identifier do? It simply modifies the name which is given to the status of the jobs. The "submitted" jobs will be called "submittedidentifier", etc.
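Put together, the whole preparation looks roughly like this (the tar ball name, directory name and identifier are placeholders):
tar -xzf production-scripts.tar.gz     # unpack the downloaded scripts
cd production-scripts                  # directory created by the tar ball (name is a placeholder)
chmod u+x *.sh                         # make the scripts executable
./configure.sh myname                  # run once, with your personal identifier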
About Kerberos credentials and letting the scripts loop alone Back to the index
The scripts can be left running for a couple of days (e.g. during the weekend) without assistance. This might create problems with the Kerberos credentials: if you don't lock and unlock the screen, after 25 hours you will no longer be allowed to write in your afs directories or read private files from them. The easiest way to avoid this problem may simply be to move the scripts out of your afs directory, for example into your /data/ directory, and let them loop from there.
NOTE: in this case the proxy also has to be moved out of your afs! Before creating a new proxy with voms-proxy-init you have to export:
export X509_USER_PROXY=/any-not-afs-location/k5-ca-proxy.pem
You can simply add this line to your environment script. But be careful! If you work with the Grid, you probably also source the Grid environment in your environment script, with a line like:
source /afs/desy.de/project/glite/UI/etc/profile.d/grid-env.sh
In this case you have to export X509_USER_PROXY after sourcing the Grid environment, otherwise the Grid environment script will set your proxy location back to afs. So in your environment script do:
source /afs/desy.de/project/glite/UI/etc/profile.d/grid-env.sh
export X509_USER_PROXY=/any-not-afs-location/k5-ca-proxy.pem
in this precise order! After having set the X509_USER_PROXY variable in this way, run voms-proxy-init. The proxy will be created in the non-afs location specified.
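For example (the VO name ilc is assumed from the storage paths used on this page):
voms-proxy-init --voms ilc     # creates the proxy in the location given by X509_USER_PROXY
voms-proxy-info --all          # check that the proxy path points to the non-afs location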
Some useful MYSQL queries Back to the index
Learning a few basic commands from the Mysql tutorial can be very useful to check the production. The Select command alone can already be very helpful in avoiding mistakes. The syntax of this command is the following:
mysql -h flcweb01.desy.de -u MCRead -B MC -e"Select FIELD1 from TABLE where FIELD2 = 'something'"where:
mysql -h flcweb01.desy.de -u MCRead -B MC -e"Select Sum(Number_of_events_per_file) from MC_Data where Detector_Model = 'LDCPrime_02Sc'"to select the sum of the number of simulated events for the detector model LDCPrime_02Sc.
For example, to check that the reconstruction of a production (here Slac_SM_LDCPrime_02Sc_ppr002) is complete, first count its simulated events:
mysql -h flcweb01.desy.de -u MCRead -B MC -e"Select Sum(Number_of_events_per_file) from MC_Data where MC_Tag ='Slac_SM_LDCPrime_02Sc_ppr002'"
If you have reconstructed these files with version 01-04 of ilcsoft, the tag for the reconstructed files will be Rec01-04_Slac_SM_LDCPrime_02Sc_ppr002. So you can count the events of the reconstructed files with the query:
mysql -h flcweb01.desy.de -u MCRead -B MC -e "Select Sum(Number_of_Events) From RECO_Data where RECO_Tag = 'Rec01-04_Slac_SM_LDCPrime_02Sc_ppr002'"
The two queries are supposed to give the same number in output, if the reconstruction of the full sample is completed.
It is a good rule to check, before starting a reconstruction, that the events scheduled to be reconstructed correspond to those actually expected. Let's consider the production of the previous example. The number of simulated events to reconstruct is given, again, by the query:
mysql -h flcweb01.desy.de -u MCRead -B MC -e"Select Sum(Number_of_events_per_file) from MC_Data where MC_Tag ='Slac_SM_LDCPrime_02Sc_ppr002'"
Suppose this query gives you 10000 events. After you have completed the execution of the FillgridReco.sh script, you can check that you have scheduled the same total number of events in the Grid_Reco_jobs table. If you want to reconstruct these files with version 01-04 of ilcsoft, the tag for the reconstructed files will be Rec01-04_Slac_SM_LDCPrime_02Sc_ppr002. So you can count the scheduled events with the query:
mysql -h flcweb01.desy.de -u MCRead -B MC -e"Select Sum(Last_Event) from Grid_Reco_jobs where Tag ='Rec01-04_Slac_SM_LDCPrime_02Sc_ppr002'"
In the example, this query should also give you 10000; otherwise you have made some mistake.
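In the same spirit you can keep an eye on the temporary job tables, for example by counting your jobs still waiting on the grid (assuming the column is called Status as described above, and remembering that your identifier is appended to the status names, as explained in the configuration section):
mysql -h flcweb01.desy.de -u MCRead -B MC -e"Select Count(*) from Grid_jobs where Status = 'submittedidentifier'"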