This wiki is obsolete, see the NorduGrid web pages for up to date information.
Testing/ARC-CE
Coordination for each RC is tracked via the links shown on the Testing page.
Test report
To be attached here.
EMI Component Description and Version
- Savannah task: https://savannah.cern.ch/task/index.php?20927
- Modules and Components: A-REX, ARC Grid Manager, CE-Cache, CE-staging, LRMS modules, infoproviders, Janitor, JURA, nordugridmap
ARC-CE components are responsible for accepting requests containing a description of generic computational jobs and executing them in the underlying computing resource.
Code analysis
- Tester: Marek
- due: for every RC
- Sloccount:
- CCCC metrics:
Later the results will be split between components.
Unit tests
- Tester: Anders
- due: for every RC
Link to results of unit test code coverage for the entire ARC code.
The text description of results will come later.
Later the results will be split between components.
Deployment tests
CE-D1: clean installation
- Tester: Marek
- due: for every RC
CE-D2: upgrade installation
- Tester: Marek
- due: postponed
System Tests
Regression tests
- Tester: Marek
- due: for every RC
See Savannah task for RfCs
Functionality tests
CE-F1: job management with invalid input (WS interface)
For all functions/operations of the ARC-CE interface, check handling of invalid input. Invalid input should result in an exception, as documented in XXX?
CE-F2: job management with invalid credentials
For all functions/operations of the ARC-CE interface, check handling of invalid or non-authorized credentials. Use ordinary and VOMS proxies. Invalid/non-authorized credentials should result in security-related exceptions and reasonable error messages. Test both the pre-WS and WS interfaces.
CE-F3: simple job execution
Test submission of a simple job. Test/use all job description languages supported on the server side. Use the production interface.
CE-F4: data stage-in job
Test submission of a simple job with input files staged in both by the client and by the CE itself. Test cache functionality. A minimal sketch is given after the list below.
- submission of job uploading one file
- submission of job uploading many files
- submission of job staging in one file from gsiftp SE
- submission of job staging in many files from gsiftp SE
- submission of job staging in one file from http SE
- submission of job staging in many files from http SE
- submission of job staging in one file from ftp SE
- submission of job staging in many files from ftp SE
- submission of job staging in one file from srm SE
- submission of job staging in many files from srm SE
- submission of job staging in one file from lfc SE
- submission of job staging in many files from lfc SE
cache functionality
- caching of staged in/uploaded files
- caching of staged in file from Unixacl
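As a minimal sketch of such a submission (the SE hostname, file paths and CE endpoint below are placeholders, not part of the test definition), a stage-in job combining one client-uploaded file and one file staged in by the CE could look like this:

# create a small local input file that the client will upload
echo "local data" > local.dat
# job description: one uploaded file, one file staged in by the CE with caching disabled
cat > stagein.xrsl <<'EOF'
&(executable="/bin/cat")
 (arguments="local.dat" "remote.dat")
 (inputfiles=("local.dat" "")
             ("remote.dat" "gsiftp://se.example.org/path/remote.dat" "cache=no"))
 (stdout="out.txt")
 (stderr="err.txt")
 (jobname="stagein-test")
EOF
arcsub -c ARC1:https://pgs03.grid.upjs.sk:50000/arex stagein.xrsl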
CE-F5: data stage-out job
Test submission of a simple job with output files staged out both by the client (download) and by the CE itself (server uploads to the SE). Test all kinds of protocols and index service registrations. A minimal sketch follows the list below.
- job results retrieval (retrieve of job with one output file)
- job results retrieval (retrieve of job with many output files)
- submission of job staging out one file to gsiftp SE
- submission of job staging out many files to gsiftp SE
- submission of job staging out one file to http SE
- submission of job staging out many files to http SE
- submission of job staging out one file to ftp SE
- submission of job staging out many files to ftp SE
- submission of job staging out one file to srm SE
- submission of job staging out many files to srm SE
- submission of job staging out one file to lfc SE
- submission of job staging out many files to lfc SE
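A corresponding minimal stage-out sketch (the SE hostname, paths and CE endpoint are again placeholders):

cat > stageout.xrsl <<'EOF'
&(executable="/bin/sh")
 (arguments="-c" "date > result.txt")
 (outputfiles=("result.txt" "gsiftp://se.example.org/path/result.txt")
              ("out.txt" ""))
 (stdout="out.txt")
 (stderr="err.txt")
 (jobname="stageout-test")
EOF
arcsub -c ARC1:https://pgs03.grid.upjs.sk:50000/arex stageout.xrsl
# files mapped to "" stay in the session directory and are fetched by the client:
arcget <jobid>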
CE-F6: job management via pre-WS
Check the main operations of the ARC-CE production (pre-WS) interface: submission, status, cancel/kill, clean and so on. Use ordinary and VOMS proxies. A sketch of the corresponding client commands follows the list below.
- simple job submission
- simple job migration
- migration of job with input files
- job status retrieval
- job catenate retrieval
- killing job
- job cleaning
- job results retrieval (retrieve of job with one output file)
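The operations above map roughly onto the pre-WS (ng*) client commands as follows; the job description file and job id are placeholders:

ngsub -c pgs03.grid.upjs.sk hostname.xrsl   # simple job submission
ngstat <jobid>                              # job status retrieval
ngcat <jobid>                               # retrieval of the job's stdout while it runs
ngkill <jobid>                              # killing the job
ngclean <jobid>                             # job cleaning
ngget <jobid>                               # job results retrieval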
CE-F7: parallel job support
Check that more than one slot can be requested and is allocated to a job when the corresponding job description element is used, as in the sketch below.
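A minimal sketch of such a job description, assuming the xRSL count attribute and a placeholder CE endpoint:

cat > parallel.xrsl <<'EOF'
&(executable="/bin/hostname")
 (count="4")
 (stdout="out.txt")
 (jobname="parallel-test")
EOF
arcsub -c ARC1:https://pgs03.grid.upjs.sk:50000/arex parallel.xrsl
# on the CE, check in the LRMS (e.g. qstat -f for PBS) that 4 slots were allocated to the job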
CE-F9: job management through WS-interface
- simple job submission
- Submission of simple job described in JDL
arcsub -c ARC1:https://pgs03.grid.upjs.sk:50000/arex jdl_hostname.jdl

jdl_hostname.jdl:
[
  Executable = "/bin/hostname";
  StdOutput = "std.out";
  StdError = "std.err";
  OutputSandbox = {"std.out","std.err"};
  OutputSandboxDestURI = { "gsiftp://localhost/std.out", "gsiftp://localhost/std.err" };
]
- Submission of simple job described in XRSL
arcsub -c ARC1:https://pgs03.grid.upjs.sk:50000/arex xrsl_hostname.xrsl

xrsl_hostname.xrsl:
&(executable = "/bin/hostname")
 (stdout = "stdout.txt")
 (jobName = "hostname-test")
- Submission of simple job described in JSDL
arcsub -c ARC1:https://pgs03.grid.upjs.sk:50000/arex jsdl_hostname.xml

jsdl_hostname.xml:
<?xml version="1.0" encoding="UTF-8"?>
<JobDefinition xmlns="http://schemas.ggf.org/jsdl/2005/11/jsdl"
               xmlns:posix="http://schemas.ggf.org/jsdl/2005/11/jsdl-posix"
               xmlns:arc="http://www.nordugrid.org/ws/schemas/jsdl-arc">
  <JobDescription>
    <JobIdentification>
      <JobName>JSDL-TEST</JobName>
    </JobIdentification>
    <Application>
      <posix:POSIXApplication>
        <posix:Executable>/bin/hostname</posix:Executable>
        <posix:Output>out.txt</posix:Output>
        <posix:Error>err.txt</posix:Error>
      </posix:POSIXApplication>
    </Application>
    <DataStaging>
      <FileName>out.txt</FileName>
      <DeleteOnTermination>false</DeleteOnTermination>
      <DownloadToCache>false</DownloadToCache>
    </DataStaging>
    <DataStaging>
      <FileName>err.txt</FileName>
    </DataStaging>
  </JobDescription>
</JobDefinition>
- submission of job uploading one file
- Submission of simple job described in XRSL
arcsub -c ARC1:https://pgs03.grid.upjs.sk:50000/arex xrsl_shell.xrsl

xrsl_shell.xrsl:
&(executable = "shell.sh")
 (inputFiles = ("shell.sh" ""))
 (outputFiles = ("stdout.txt" "") ("stderr.txt" ""))
 (stdout = "stdout.txt")
 (stderr = "stderr.txt")
 (jobName= "shell-test")
- submission of job uploading many files
- simple job migration
- migration of job with input files
- job status retrieval
- job catenate retrieval
- killing job
- job cleaning
- job results retrieval (retrieve of job with one output file)
- job results retrieval (retrieve of job with many output files)
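For orientation, the operations in this list correspond roughly to the following arc* client commands (job ids are placeholders; exact options may differ between ARC client versions):

arcsub -c ARC1:https://pgs03.grid.upjs.sk:50000/arex xrsl_hostname.xrsl   # job submission
arcstat <jobid>                           # job status retrieval
arccat <jobid>                            # retrieval of the job's stdout
arcmigrate -c <new CE endpoint> <jobid>   # job migration
arckill <jobid>                           # killing the job
arcclean <jobid>                          # job cleaning
arcget <jobid>                            # job results retrieval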
CE-F10: LRMS support
Test pbs/maui, SGE, Condor, SLURM, fork, etc. A status cross-check sketch follows the list below.
- correct job status identification
- correct identification of running/pending/queueing jobs
- correct CE information propagation (part of Glue2 tests)
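A minimal sketch of the status cross-check (the CE endpoint is a placeholder; the LRMS command shown assumes PBS):

# submit a job that stays in the LRMS long enough to be observed
cat > sleep.xrsl <<'EOF'
&(executable="/bin/sleep")(arguments="300")(jobname="lrms-state-test")
EOF
arcsub -c ARC1:https://pgs03.grid.upjs.sk:50000/arex sleep.xrsl
# on the CE: the LRMS view of running/queued jobs
qstat
# compare with what the information system publishes for the same job
ldapsearch -x -h pgs03.grid.upjs.sk -p 2135 -b 'Mds-Vo-Name=local,o=grid' \
    '(objectclass=nordugrid-job)' nordugrid-job-status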
CE-F11: Janitor tests
- Static
- Dynamic using Janitor component
- classic RTE tests
- submission of jobs requiring different types of RTEs
CE-F12: gridmapfile
- retrieval of proper DN lists
- example authorization scenarios (vo groups)
CE-F13: infopublishing: nordugrid schema
- Purpose: Check that the CE properly publishes cluster, queue, user and job information according to nordugrid schema.
Needed files: arc.conf, testngs.pl, nsnames.txt, testldap.xrsl, pbs.conf
in Media:ARC_CE-F13-F16.tar.gz
Description of the test:
- Setup the testbed as described above using the given arc.conf, restart a-rex and grid-infosys;
- From a remote machine with ng* or arc* clients installed, submit at least 4 jobs using testldap.xrsl and wait until the jobs are in INLRMS:Q or INLRMS:R status.
- From a remote machine, run the command:
ldapsearch -h gridtest.hep.lu.se -p 2135 -x -b 'Mds-Vo-Name=local,o=grid' > nordugrid_ldif.txt
- In the same directory as the file generated above, place the attached testngs.pl and nsnames.txt
- Run:
./testngs.pl
Testbed:
Description of testbed here
Expected result:
- the output of testngs.pl should contain at least 89 published objects, and these should include at least the following (a quick manual cross-check is sketched after the list):
nordugrid-job-reqcputime nordugrid-queue-maxqueuable nordugrid-cluster-support nordugrid-job-stdin nordugrid-info-group nordugrid-job-execcluster nordugrid-queue-architecture nordugrid-cluster-lrms-type nordugrid-authuser-sn nordugrid-queue-localqueued nordugrid-cluster-architecture nordugrid-cluster-sessiondir-lifetime nordugrid-job-gmlog nordugrid-cluster-runtimeenvironment nordugrid-queue-totalcpus nordugrid-cluster-name nordugrid-queue-defaultcputime nordugrid-cluster-cache-total nordugrid-job-cpucount nordugrid-queue-name nordugrid-cluster-sessiondir-total nordugrid-cluster-prelrmsqueued nordugrid-cluster-issuerca-hash nordugrid-cluster-opsys nordugrid-queue-maxwalltime nordugrid-cluster-sessiondir-free nordugrid-cluster-comment nordugrid-queue-gridrunning nordugrid-job-completiontime nordugrid-queue-comment nordugrid-queue-prelrmsqueued nordugrid-queue-nodememory nordugrid-authuser-diskspace nordugrid-cluster-cpudistribution nordugrid-job-usedmem nordugrid-job-submissionui nordugrid-cluster-middleware nordugrid-queue nordugrid-job-status nordugrid-queue-homogeneity nordugrid-queue-defaultwalltime nordugrid-cluster-trustedca nordugrid-job-sessiondirerasetime nordugrid-job-usedwalltime nordugrid-cluster-issuerca nordugrid-queue-mincputime nordugrid-cluster-owner nordugrid-queue-gridqueued nordugrid-cluster-nodecpu nordugrid-job-reqwalltime nordugrid-cluster-contactstring nordugrid-cluster-localse nordugrid-info-group-name nordugrid-cluster-benchmark nordugrid-job-stdout nordugrid-job-executionnodes nordugrid-queue-running nordugrid-cluster-usedcpus nordugrid-job-globalid nordugrid-cluster-totaljobs nordugrid-queue-opsys nordugrid-job-stderr nordugrid-cluster nordugrid-authuser-name nordugrid-queue-maxrunning nordugrid-queue-status nordugrid-job-runtimeenvironment nordugrid-queue-nodecpu nordugrid-cluster-credentialexpirationtime nordugrid-authuser nordugrid-job-proxyexpirationtime nordugrid-job-globalowner nordugrid-cluster-totalcpus nordugrid-queue-benchmark nordugrid-job-execqueue nordugrid-cluster-cache-free nordugrid-authuser-queuelength nordugrid-cluster-homogeneity nordugrid-job-usedcputime nordugrid-job-exitcode nordugrid-job-queuerank nordugrid-queue-maxcputime nordugrid-queue-schedulingpolicy nordugrid-authuser-freecpus nordugrid-job-jobname nordugrid-cluster-lrms-version nordugrid-job nordugrid-cluster-aliasname nordugrid-job-submissiontime
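A rough manual cross-check of this list (testngs.pl does the authoritative counting against nsnames.txt) can be done directly on the LDIF dump:

# count the distinct nordugrid-* names that actually appear in the dump; expect at least 89
grep -oE 'nordugrid-[a-zA-Z-]+' nordugrid_ldif.txt | sort -u | wc -l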
Result: PASSED/FAILED
CE-F14: infopublishing: glue1.2 schema
- Purpose: Check that the CE properly publishes resource info according to glue-1.2 schema.
Attached files: arc.conf, testldap.xrsl
in Media:ARC_CE-F13-F16.tar.gz
Description of the test:
- Setup the testbed using the given arc.conf, restart a-rex and grid-infosys;
- On a remote machine, setup of the EMI glue validator that can be found here: [1]
- From a remote machine with ng* or arc* clients installed, submit at least 4 jobs using testldap.xrsl and wait until the jobs are in INLRMS:Q or INLRMS:R status.
- From a remote machine, run the command:
ldapsearch -h gridtest.hep.lu.se -p 2135 -x -b 'Mds-Vo-Name=resource,o=grid' > glue12_ldif.txt
- run the glue validator on the resulting file:
glue-validator -t glue1 -f glue12_ldif.txt
Testbed: Describe testbed here
Expected result:
glue12_ldif.txt validates with no relevant errors using the EMI validator.
Result: X out of Y tests PASSED/FAILED
Comments:
CE-F15: infopublishing: glue2 LDAP schema
- Purpose: Check that the CE properly publishes resource info according to the glue2 LDAP schema. Use EMI validator
Description of the test:
- Pick a testbed machine (e.g. testbed-emi4.grid.upjs.sk) that has a-rex running (service a-rex start) and the LDAP information system running (nordugrid-arc-slapd and nordugrid-arc-bdii running);
- Setup of the EMI glue validator that can be found here: [2]
- run the command:
ldapsearch -h testbed-emi4.grid.upjs.sk -p 2135 -x -b 'o=glue' > glue2_ldif.txt
- run the glue validator on the resulting file:
glue-validator -t glue2 -f glue2_ldif.txt
Testbed: Describe testbed here
Expected result:
glue2_ldif.txt validates with no relevant errors using the EMI validator.
Result: X out of Y tests PASSED/FAILED
CE-F16: infopublishing: glue2 xml schema
- Purpose: Check that the CE properly publishes resource info according to the glue2 xml schema. Use a suitable validator (see the sketch below).
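No specific validator is prescribed here; as a hedged sketch, once the GLUE2 XML document has been fetched from the CE and saved locally (both file names below are hypothetical), a plain XSD validation can be run with xmllint:

xmllint --noout --schema GLUE2.xsd glue2_info.xml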
Performance tests
CE-P1: service reliability
Services run by the component must maintain good performance and reliability over long periods of normal operation. This is a long-running, unattended operation test measuring the performance of the product. The service must not show performance degradation during a 3-day period.
Example sizing provided by NSC (the arithmetic is spelled out below):
- medium-size resource: ~100 nodes running ~1000 jobs of ~5 hours each, which would give around < 5000 jobs in the infosys and a peak submission rate of about 200 jobs per hour
- large-size resource: ~10k nodes running ~100k jobs of ~5 hours each, which would give around < 500k jobs in the infosys and a peak submission rate of about 20 000 jobs per hour
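The peak rates quoted above follow from simple turnover arithmetic (number of concurrently running jobs divided by job length):

# medium-size resource: ~1000 jobs of ~5 hours each
jobs=1000; hours=5
echo "peak submission rate: $((jobs / hours)) jobs per hour"     # 200
# large-size resource: ~100k jobs of ~5 hours each
jobs=100000; hours=5
echo "peak submission rate: $((jobs / hours)) jobs per hour"     # 20000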
CE-P2: load test
Stress test the CE with the following (a minimal info-query burst sketch follows the list):
- massive amount of synchronous job submission
- massive amount of synchronous info query
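For the info-query part, a minimal burst sketch (the hostname and query count are placeholders):

for i in `seq 1 100`; do
  ldapsearch -x -h pgs03.grid.upjs.sk -p 2135 -b 'o=glue' > /dev/null 2>&1 &
done
wait
echo "all concurrent queries returned"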
CE-P3: job submission failure rate
Run a properly configured CE and exercise it with e.g. a series of 100 job submissions. Check the failure rate (a quick way to do so is sketched after the scripts below).
script many_jobs.sh:
#!/bin/bash
for i in `seq 1 100`
do
  arcsub -d VERBOSE -f tasks.xrls -c ARC1:https://pgs02.grid.upjs.sk:50000/arex &
  #arcsub -d VERBOSE -f tasks.xrls -c ARC1:https://pgs02.grid.upjs.sk:50000/arex
done
script start.sh:
#!/bin/sh
gcc $1 -o prog
./prog $2 $3 $4
if test $? = 0
then
  exit 0
else
  exit 1
fi
file prog.c:
#include <stdlib.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
  int i;
  for (i = 1; i < argc; i++)
    printf("%s\n", argv[i]);
  return 0;
}
file tasks.xrls:
&(executable="start.sh") (arguments= "prog.c" "arg1" "arg2" "arg3" ) (inputfiles= ("start.sh" "start.sh") ("prog.c" "prog.c") ) (outputfiles=("/" " ") ) (stdout="out.txt") (stderr="err.txt") (jobName="prog") (cpuTime="70") (gmlog=".log") (disk="10")
Copy the files many_jobs.sh, prog.c, start.sh and tasks.xrls into one directory. To run the test, type:
./many_jobs.sh
The test can be run either sequentially or quasi-parallel (with & the submission command runs in the background). Uncomment the appropriate line in the many_jobs.sh script.
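Once all 100 jobs have reached a final state, a quick way to estimate the failure rate (the state name may vary slightly between ARC client versions):

arcstat -a | grep -ci failed     # number of failed jobs out of the 100 submitted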
Scalability tests
WILL NOT BE DONE FOR RC1
Standard compliance/conformance tests
see the XYZ functionality test for glue2
CE-ICT Inter-component tests
ARC CE integration tests defined at https://twiki.cern.ch/twiki/bin/view/EMI/EmiJra1TaskForceIntegrationTesting
CE-ICT9 Integration Test 9 for ARC
Make sure you have access to at least three compute elements running A-REX, CREAM CE, and UNICORE/X, respectively, with support for the EMI-ES. You may e.g. use the EMI Test Bed. Note that the binary ARC package is not built with direct UNICORE support, but it may work with BES, as suggested by Martin.
Acquire a proxy certificate
arcproxy -S testers.eu-emi.eu:all
or using another appropriate VO.
Prepare a job submission script ict9.jsdl
<?xml version="1.0" encoding="UTF-8"?> <JobDefinition xmlns="http://schemas.ggf.org/jsdl/2005/11/jsdl"> <JobDescription> <Application> <ApplicationName>ict9</ApplicationName> <POSIXApplication xmlns="http://schemas.ggf.org/jsdl/2005/11/jsdl-posix"> <Executable>ict9.sh</Executable> <Output>ict9.out</Output> <Error>ict9.err</Error> <WallTimeLimit>600</WallTimeLimit> </POSIXApplication> </Application> <DataStaging> <FileName>ict9.env</FileName> <Target></Target> <DeleteOnTermination>false</DeleteOnTermination> </DataStaging> </JobDescription> </JobDefinition>
and an executable job script ict9.sh:
#! /bin/sh
date --rfc-3339=seconds
date --rfc-3339=seconds >ict9.env
env >>ict9.env
Submit the job to a CE of each kind using
arcsub -S org.ogf.glue.emies.activitycreation -c ce.example.org ict9.jsdl
Follow the submitted jobs with arcstat jobid. Fetch completed jobs with arcget jobid, and check the result files.
Add sleep 900 or similar to the end of the ict9.sh script. Repeat the above, but instead of fetching the jobs, try to cancel the jobs with arckill jobid before they complete.
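A sketch of driving the same submission against all three CE types (the hostnames below are placeholders for the actual testbed endpoints):

for ce in arex-ce.example.org cream-ce.example.org unicorex.example.org; do
  arcsub -S org.ogf.glue.emies.activitycreation -c $ce ict9.jsdl
done
arcstat -a        # follow all submitted jobs until they complete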
CE-ICT10 Integration Test 10
Summary Top BDII to consume GLUE2 LDAP from ARIS
- 1) Pick any testbed machine, and note the hostname, let's call it <hostname>.
- Selected machines were testbed-emi5.grid.upjs.sk, pgs03.grid.upjs.sk
Configure it with LDAP GLUE2 enabled; that means:
- 1a) the [infosys] block must contain at least:
[infosys]
...
provider_loglevel="5"
infosys_nordugrid="enable"
infosys_glue12="enable"
infosys_glue2_ldap="enable"
...
- 1b) configure the [infosys/admindomain] block with at least this info:
[infosys/admindomain]
name="emitestbed"
- 2) Configure emir-serp to submit to some EMIR.
- 3) start the services gridftpd, arex, nordugrid-arc-slapd, nordugrid-arc-bdii, emir-serp. (in the listed order)
- 4) Choose an EMI testbed top-BDII that is bootstrapped via the EMIR you registered to.
Note: Bootstrapping with EMIR was not done.
- selected top level BDIIs where: emi3rc-sl5-bdii.cern.ch , emi3rc-sl6-bdii.cern.ch
Add the following two lines to /var/cache/glite/top-urls.conf:
pgs03.grid.upjs.sk ldap://pgs03.grid.upjs.sk:2135/GLUE2DomainID=urn:ad:emitestbed,o=glue
testbed-emi5 ldap://testbed-emi5.grid.upjs.sk:2135/GLUE2DomainID=urn:ad:emitestbed,o=glue
- 5) wait for 1 minute, then run a ldapsearch on a top-bdii:
ldapsearch -x -h <topbdiihostname> -p 2170 -b 'GLUE2GroupID=services,GLUE2DomainID=urn:ad:emitestbed,GLUE2GroupID=grid,o=glue' | grep <hostname>
- 6) If the above search returns some values, then the test is PASSED, otherwise FAILED.
- 7) test PASSED. Results:
$ ldapsearch -x -h emi3rc-sl5-bdii.cern.ch -p 2170 -b 'GLUE2GroupID=services,GLUE2DomainID=urn:ad:emitestbed,GLUE2GroupID=grid,o=glue' '(objectclass=GLUE2ComputingService)' GLUE2EntityName
# extended LDIF
#
# LDAPv3
# base <GLUE2GroupID=services,GLUE2DomainID=urn:ad:emitestbed,GLUE2GroupID=grid,o=glue> with scope subtree
# filter: (objectclass=GLUE2ComputingService)
# requesting: GLUE2EntityName
#

# urn:ogf:ComputingService:pgs03.grid.upjs.sk:arex, services, urn:ad:emitestbed, grid, glue
dn: GLUE2ServiceID=urn:ogf:ComputingService:pgs03.grid.upjs.sk:arex,GLUE2GroupID=services,GLUE2DomainID=urn:ad:emitestbed,GLUE2GroupID=grid,o=glue
GLUE2EntityName: pgs03

# urn:ogf:ComputingService:testbed-emi5.grid.upjs.sk:arex, services, urn:ad:emitestbed, grid, glue
dn: GLUE2ServiceID=urn:ogf:ComputingService:testbed-emi5.grid.upjs.sk:arex,GLUE2GroupID=services,GLUE2DomainID=urn:ad:emitestbed,GLUE2GroupID=grid,o=glue
GLUE2EntityName: testbed-emi5

# search result
search: 2
result: 0 Success

# numResponses: 3
# numEntries: 2

$ ldapsearch -x -h emi3rc-sl6-bdii.cern.ch -p 2170 -b 'GLUE2GroupID=services,GLUE2DomainID=urn:ad:emitestbed,GLUE2GroupID=grid,o=glue' '(objectclass=GLUE2ComputingService)' GLUE2EntityName
# extended LDIF
#
# LDAPv3
# base <GLUE2GroupID=services,GLUE2DomainID=urn:ad:emitestbed,GLUE2GroupID=grid,o=glue> with scope subtree
# filter: (objectclass=GLUE2ComputingService)
# requesting: GLUE2EntityName
#

# urn:ogf:ComputingService:pgs03.grid.upjs.sk:arex, services, urn:ad:emitestbed, grid, glue
dn: GLUE2ServiceID=urn:ogf:ComputingService:pgs03.grid.upjs.sk:arex,GLUE2GroupID=services,GLUE2DomainID=urn:ad:emitestbed,GLUE2GroupID=grid,o=glue
GLUE2EntityName: pgs03

# urn:ogf:ComputingService:testbed-emi5.grid.upjs.sk:arex, services, urn:ad:emitestbed, grid, glue
dn: GLUE2ServiceID=urn:ogf:ComputingService:testbed-emi5.grid.upjs.sk:arex,GLUE2GroupID=services,GLUE2DomainID=urn:ad:emitestbed,GLUE2GroupID=grid,o=glue
GLUE2EntityName: testbed-emi5

# search result
search: 2
result: 0 Success

# numResponses: 3
# numEntries: 2
Integration Test 11
Summary Execute jobs with data stage-in/stage-out using the EMI SEs and LFC.
This test submits a job which requires 3 files from LFC, whose replicas are in the three different SEs. The job generates a file which is then uploaded by the CE to the three different SEs and registered in LFC under the same LFN.
Testbed resources: From EMI-3 RC testbed: ARC CE, DPM, LFC, StoRM, dCache, ARC Clients. The 3 input files specified in the job description were created on 5/2/12 and are assumed to still exist.
Job description ARC-CE-Integration-Test-11.xrsl:
& ("jobname" = "ARC-CE-Integration-Test-11") ("executable" = "ARC-CE-Integration-Test-11.sh") ("walltime" = "30" ) ("cputime" = "30" ) ("stdout" = "stdout") ("stderr" = "stderr") ("gmlog" = "gmlog") ("rerun" = "3") ("inputfiles" = (* Caching is explicitly turned off to force downloads *) ("test.file.dcache" "lfc://cvitblfc1.cern.ch/grid/testers.eu-emi.eu/ARC-CE-IntegrationTest/test.dcache.1" "cache=no") ("test.file.dpm" "lfc://cvitblfc1.cern.ch/grid/testers.eu-emi.eu/ARC-CE-IntegrationTest/test.dpm.1" "cache=no") ("test.file.storm" "lfc://cvitblfc1.cern.ch/grid/testers.eu-emi.eu/ARC-CE-IntegrationTest/test.storm.1" "cache=no") ) (* The list of output files is created by the job *) ("outputfiles" = ("@output" ""))
Job executable ARC-CE-Integration-Test-11.sh:
#!/bin/sh
# generates a file and an output files list to upload it
/bin/dd of=test1 if=/dev/urandom count=100
GUID=`uuidgen`
cat > output <<EOF
test1 lfc://srm://emi3rc-sl6-dpm.cern.ch/dpm/cern.ch/home/testers.eu-emi.eu/ARC-CE-IntegrationTest/output/$GUID.dpm@cvitblfc1.cern.ch/grid/testers.eu-emi.eu/ARC-CE-IntegrationTest/output/$GUID
test1 lfc://srm://vm-dcache-deploy5.desy.de/data/testers.eu-emi.eu/ARC-CE-IntegrationTest/output/$GUID.dcache@cvitblfc1.cern.ch/grid/testers.eu-emi.eu/ARC-CE-IntegrationTest/output/$GUID
test1 lfc://srm://emitestbed39.cnaf.infn.it/testers.eu-emi.eu/ARC-CE-IntegrationTest/output/$GUID.storm@cvitblfc1.cern.ch/grid/testers.eu-emi.eu/ARC-CE-IntegrationTest/output/$GUID
EOF
To run the test (a combined sketch follows the steps below):
- Save the above two files in the same directory
- Generate a VOMS proxy for the VO testers.eu-emi.eu
- Submit job to ARC CE: arcsub -c pgs03.grid.upjs.sk ARC-CE-Integration-Test-11.xrsl
- Poll status with arcstat
- If the job finished successfully the output files can be checked by listing LFC: arcls -lL lfc://cvitblfc1.cern.ch/grid/testers.eu-emi.eu/ARC-CE-IntegrationTest/output
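The steps above combined into one hedged helper sequence (the VO string and endpoints are as used elsewhere on this page):

arcproxy -S testers.eu-emi.eu:all                       # VOMS proxy for testers.eu-emi.eu
arcsub -c pgs03.grid.upjs.sk ARC-CE-Integration-Test-11.xrsl
arcstat -a                                              # repeat until the job reaches a final state
arcls -lL lfc://cvitblfc1.cern.ch/grid/testers.eu-emi.eu/ARC-CE-IntegrationTest/output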
CE-ICT12 Integration Test 12
Summary Run a basic functionality test to see if BDII backend of ARIS runs and info is there
- 1) Configure any testbed machine with LDAP enabled. The following shows a sample arc.conf that tries to enable most of the information:
[common] hostname="piff.hep.lu.se" x509_user_key="/etc/grid-security/hostkey.pem" x509_user_cert="/etc/grid-security/hostcert.pem" x509_cert_dir="/etc/grid-security/certificates" gridmap="/etc/grid-security/grid-mapfile" lrms="fork" [grid-manager] daemon="yes" user="root" controldir="/tmp/jobstatus" sessiondir="/var/grid" drain debug="5" logfile="/tmp/grid-manager.log" pidfile="/tmp/grid-manager.pid" mail="admin@testbed.emi.eu" joblog="/tmp/gm-jobs.log" shared_filesystem="/var/grid" arex_mount_point="https://piff.hep.lu.se/arex" cachedir="/var/grid/cache" runtimedir="/var/grid/SOFTWARE/" enable_emies_interface="yes" # gridftp server config [gridftpd] user="root" debug="5" logfile="/tmp/gridftpd.log" logsize="100000 2" pidfile="/tmp/gridftpd.pid" port="2811" allowunknown="no" globus_tcp_port_range="9000,9300" globus_udp_port_range="9000,9300" # job submission interface via gridftp [gridftpd/jobs] path="/jobs" plugin="jobplugin.so" allownew="yes" # openldap server config [infosys] port="2135" debug="3" slapd_loglevel="0" provider_loglevel="5" bdii_debug_level=DEBUG infosys_nordugrid="enable" infosys_glue12="enable" infosys_glue2_ldap="enable" [infosys/admindomain] name="emitestbed" #################################################################### # # This block holds information that is needed by the glue 1.2 # generation. This is only necessary if infosys_glue12 is enabled. [infosys/glue12] # These three variables need to be set if infosys_glue12 is enabled. # Example: "Lund, Sweden" resource_location="Lund, Sweden" # Example: "55.75000" resource_latitude="55.7000" # Example: "12.41670" resource_longitude="13.1833" # Example 2400 cpu_scaling_reference_si00="2400" # Example Cores=3,Benchmark=9.8-HEP-SPEC06 processor_other_description="Cores=4" # Example http://www.ndgf.org glue_site_web="http://www.nordugrid.org" # Example NDGF-T1 glue_site_unique_id="LundTestSite3" # This variable decides if the GlueSite should be published. In case # you want to set up a more complicated setup with several publishers # of data to a GlueSite, then you may wish to tweak this parameter. provide_glue_site_info="true" #################################################################### # # [infosys/site/sitename] Site BDII configuration block, this block is # used to configure ARC to generate a site-bdii that can be registered # in GOCDB etc to make it a part of a gLite network. The sitename # part is to be declarative of the site-bdii being generated. 
[infosys/site/LundTestSite3] # The unique id used to identify this site, eg "NDGF-T1" unique_id="LundTestSite3" # The url is on the format: # ldap://host.domain:2170/mds-vo-name=something,o=grid and should # point to the resource-bdii url="ldap://localhost:2135/mds-vo-name=resource,o=grid" # infosys view of the computing cluster (service) [cluster] cluster_alias="Performance Test Server" comment="This server is used for infoproviders performance tests" homogeneity="True" architecture="i386" nodeaccess="inbound" nodeaccess="outbound" opsys="adotf" nodecpu="3" nodememory="256" #defaultmemory="128" #middleware= localse="gsiftp://piff.hep.lu.se/media/" localse="gsiftp://piff.hep.lu.se/media2/" lrms_config="Single job per processor" clustersupport="florido.paganelli@hep.lu.se" cluster_location="SE-22100" cluster_owner="University of Lund" benchmark="specfp2000 333" authorizedvo="ATLAS" authorizedvo="LundTesters" # infosys view of the queue behind the computing service, # every CE needs at least one queue [queue/fork] name="fork" #fork_job_limit="cpunumber" #homogeneity="True" #scheduling_policy="FIFO" #comment="This queue is nothing more than a fork host" #nodecpu="3" #architecture="i386" #authorizedvo="SPECIALQUEUEVO" #nodememory="600" [queue/batch] name="batch" #homogeneity="True" #scheduling_policy="MAUI" #comment="simple pbs batch queue" #nodecpu="adotf" # Example Cores=3,Benchmark=9.8-HEP-SPEC06 processor_other_description="Cores=4" # Example http://www.ndgf.org glue_site_web="http://www.nordugrid.org" # Example NDGF-T1 glue_site_unique_id="LundTestSite3" # This variable decides if the GlueSite should be published. In case # you want to set up a more complicated setup with several publishers # of data to a GlueSite, then you may wish to tweak this parameter. provide_glue_site_info="true" #################################################################### # # [infosys/site/sitename] Site BDII configuration block, this block is # used to configure ARC to generate a site-bdii that can be registered # in GOCDB etc to make it a part of a gLite network. The sitename # part is to be declarative of the site-bdii being generated. [infosys/site/LundTestSite3] # The unique id used to identify this site, eg "NDGF-T1" unique_id="LundTestSite3" # The url is on the format: # ldap://host.domain:2170/mds-vo-name=something,o=grid and should # point to the resource-bdii url="ldap://localhost:2135/mds-vo-name=resource,o=grid" # infosys view of the computing cluster (service) [cluster] cluster_alias="Performance Test Server" comment="This server is used for infoproviders performance tests" homogeneity="True" architecture="i386" nodeaccess="inbound" nodeaccess="outbound" opsys="adotf" nodecpu="3" nodememory="256" #defaultmemory="128" #middleware= localse="gsiftp://piff.hep.lu.se/media/" localse="gsiftp://piff.hep.lu.se/media2/" lrms_config="Single job per processor" clustersupport="florido.paganelli@hep.lu.se" cluster_location="SE-22100" cluster_owner="University of Lund" benchmark="specfp2000 333" authorizedvo="ATLAS" authorizedvo="LundTesters" # infosys view of the queue behind the computing service, # every CE needs at least one queue [queue/fork] name="fork"
- 2) start the services gridftp, arex, nordugrid-arc-slapd, nordugrid-arc-bdii (in this order)
- 3.1) Wait several minutes and check that slapd+bdii didn't die by testing the service status:
# service nordugrid-arc-bdii status
BDII Running                                               [  OK  ]
- 3.2) check that slapd is running:
[root@piff ~]# ps aux | grep slapd
ldap      1649  0.1 51.6 4665640 1059792 ?   Ssl  Feb29   2:02 /usr/sbin/slapd -f /var/run/arc/bdii/bdii-slapd.conf -h ldap://*:2135 -u ldap
root     13860  0.0  0.0   61192     764 pts/0 S+  10:47   0:00 grep slapd
- if slapd is not running, the test is FAILED.
- 4) run a ldapsearch on all the trees (can also be run on the same machine):
ldapsearch -x -h testbed-emi5.grid.upjs.sk -p 2135 -b 'mds-vo-name=local,o=grid'
ldapsearch -x -h testbed-emi5.grid.upjs.sk -p 2135 -b 'mds-vo-name=resource,o=grid'
ldapsearch -x -h testbed-emi5.grid.upjs.sk -p 2135 -b 'o=glue'
- 5) If all three searches above return some values, then the test is PASSED, otherwise FAILED.
- 6) test passed. Result summary:
$ ldapsearch -x -h testbed-emi5.grid.upjs.sk -p 2135 -b 'mds-vo-name=local,o=grid' | tail -n20
eue-name=gridlong,nordugrid-cluster-name=testbed-emi5.grid.upjs.sk,Mds-Vo-name=local,o=grid
Mds-validto: 20130201184938Z
objectClass: Mds
objectClass: nordugrid-authuser
Mds-validfrom: 20130201184838Z
nordugrid-authuser-freecpus: 1
nordugrid-authuser-sn: [REMOVED FOR PRIVACY]
nordugrid-authuser-diskspace: 6042
nordugrid-authuser-name: [REMOVED FOR PRIVACY]
nordugrid-authuser-queuelength: 0

# search result
search: 2
result: 0 Success

# numResponses: 655
# numEntries: 654

$ ldapsearch -x -h testbed-emi5.grid.upjs.sk -p 2135 -b 'mds-vo-name=resource,o=grid' | tail -n20
objectClass: GlueSchemaVersion
GlueCEStateFreeJobSlots: 0
GlueSchemaVersionMinor: 2
GlueCEStateEstimatedResponseTime: 0
GlueCEStateWorstResponseTime: 2000
GlueChunkKey: GlueCEUniqueID=testbed-emi5.grid.upjs.sk:2811/nordugrid-torque-gridlong
GlueCEInfoDefaultSE: 0
GlueSchemaVersionMajor: 1
GlueCEInfoApplicationDir: unset
GlueCEStateRunningJobs: 0
GlueCEStateWaitingJobs: 0
GlueCEStateTotalJobs: 0

# search result
search: 2
result: 0 Success

# numResponses: 9
# numEntries: 8

$ ldapsearch -x -h testbed-emi5.grid.upjs.sk -p 2135 -b 'o=glue' | tail -n20
f:ComputingService:testbed-emi5.grid.upjs.sk:arex, services, glue
dn: GLUE2ApplicationEnvironmentID=urn:ogf:ApplicationEnvironment:testbed-emi5.grid.upjs.sk:rte:0,GLUE2GroupID=ApplicationEnvironments,GLUE2ManagerID=urn:ogf:ComputingManager:testbed-emi5.grid.upjs.sk:pbs,GLUE2ServiceID=urn:ogf:ComputingService:testbed-emi5.grid.upjs.sk:arex,GLUE2GroupID=services,o=glue
objectClass: GLUE2ApplicationEnvironment
GLUE2ApplicationEnvironmentAppName: ENV/LOCALDISK
GLUE2ApplicationEnvironmentState: installednotverified
GLUE2ApplicationEnvironmentID: urn:ogf:ApplicationEnvironment:testbed-emi5.grid.upjs.sk:rte:0
GLUE2ApplicationEnvironmentComputingManagerForeignKey: urn:ogf:ComputingManager:testbed-emi5.grid.upjs.sk:pbs
GLUE2ApplicationEnvironmentAppVersion: 1000

# search result
search: 2
result: 0 Success

# numResponses: 36
# numEntries: 35
Integration Test 13
Summary Do authorization filtering based on VOMS attributes and ARGUS authorization.
The deployment includes the ARC CE (A-REX) from Kosice (testbed-emi5.grid.upjs.sk, see https://twiki.cern.ch/twiki/bin/view/EMI/EMITestbedInventory#EMI_3_RC_Testbed_resources; although the planned ARC CE for testing is testbed-emi5, the currently working ARC CE is testbed-emi4.grid.upjs.sk, because that one is auto-deployed and includes a code update (svn commit 26875, concerning the Argus plugin) which will be included in the final release) and the Argus PDP service from INFN (emitestbed45.cnaf.infn.it). The policy in the Argus service should include the VO name testers.eu-emi.eu, so that clients of the ARC CE are forced to get a VOMS proxy from EMI's testing VOMS server.
- Access control policy that needs to be added to the Argus PAP's policy repository, following Argus's policy definition syntax:
resource "https://testbed-emi4.grid.upjs.sk:60000/arex" { action ".*" { rule permit { vo="testers.eu-emi.eu" } rule permit { emi-vo="testers.eu-emi.eu" } } }
- Configuration change on the service (A-REX) side (arc.conf):
1) if there are no related *.lsc files configured, alternatively add to the [common] section:
voms_processing="standard" voms_trust_chain="/C=IT/O=INFN/OU=Host/L=CNAF/CN=emitestbed01.cnaf.infn.it"
2) in the [grid-manager] section, add:
arguspdp_endpoint="https://emitestbed45.cnaf.infn.it:8152/authz"
- Prerequisite on the client side:
The client should obtain a voms proxy from emi testbed's voms server, for which you should have the following information configured in the vomses file:
"testers.eu-emi.eu" "emitestbed07.cnaf.infn.it" "15002" "/C=IT/O=INFN/OU=Host/L=CNAF/CN=emitestbed07.cnaf.infn.it" "testers.eu-emi.eu" "testers.eu-emi.eu" "emitestbed01.cnaf.infn.it" "15002" "/C=IT/O=INFN/OU=Host/L=CNAF/CN=emitestbed01.cnaf.infn.it" "testers.eu-emi.eu"
You may use arcproxy to generate a voms proxy which is supposed to have "testers.eu-emi.eu" as the vo name in the extension part.
Then you may use the proxy to access the A-REX service via arc client utilities (e.g. arcsub).
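A sketch of that client-side part (the job description file is a placeholder; the endpoint and port are taken from the Argus policy above):

arcproxy -S testers.eu-emi.eu      # obtain a VOMS proxy carrying the testers.eu-emi.eu attribute
arcproxy -I                        # verify that the VO attribute is present in the proxy
arcsub -c ARC1:https://testbed-emi4.grid.upjs.sk:60000/arex hostname.xrsl
# a proxy without the expected VO attribute should instead be rejected by the Argus-backed authorization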
The arc.conf of testbed node (testbed-emi4.grid.upjs.sk) is shown at:
http://testbed-emi4.grid.upjs.sk/arc.conf
The fresh log of grid-manager is shown at:
http://testbed-emi4.grid.upjs.sk/logs/grid-manager.log
Integration Test 26
Summary Submit and monitor jobs to ARC CE using LCAS for authorization and LCMAPS for user mapping.
Integration Test 9 for EMI-3
Summary Execute jobs with data stage-in/stage-out using the EMI SEs and LFC.
This test submits a job which requires 3 files from LFC, whose replicas are in the three different SEs. The job generates a file which is then uploaded by the CE to the three different SEs and registered in LFC under the same LFN.
Testbed resources: From EMI-3 RC testbed: ARC CE, DPM, LFC, StoRM, dCache, ARC Clients. The 3 input files specified in the job description were created on 5/2/12 and are assumed to still exist.
Job description ARC-CE-Integration-Test-09.xrsl:
& ("jobname" = "ARC-CE-Integration-Test-09") ("executable" = "ARC-CE-Integration-Test-09.sh") ("walltime" = "30" ) ("cputime" = "30" ) ("stdout" = "stdout") ("stderr" = "stderr") ("gmlog" = "gmlog") ("rerun" = "3") ("inputfiles" = (* Caching is explicitly turned off to force downloads *) ("test.file.dcache" "lfc://emi3rc-sl5-lfc.cern.ch/grid/testers.eu-emi.eu/ARC-CE-IntegrationTest/test.dcache.1" "cache=no") ("test.file.dpm" "lfc://emi3rc-sl5-lfc.cern.ch/grid/testers.eu-emi.eu/ARC-CE-IntegrationTest/test.dpm.1" "cache=no") ("test.file.storm" "lfc://emi3rc-sl5-lfc.cern.ch/grid/testers.eu-emi.eu/ARC-CE-IntegrationTest/test.storm.1" "cache=no") ) (* The list of output files is created by the job *) ("outputfiles" = ("@output" ""))
Job executable ARC-CE-Integration-Test-09.sh:
#!/bin/sh
# generates a file and an output files list to upload it
/bin/dd of=test1 if=/dev/urandom count=100
GUID=`uuidgen`
cat > output <<EOF
test1 lfc://srm://emi3rc-sl6-dpm.cern.ch/dpm/cern.ch/home/testers.eu-emi.eu/ARC-CE-IntegrationTest/output/$GUID.dpm@emi3rc-sl5-lfc.cern.ch/grid/testers.eu-emi.eu/ARC-CE-IntegrationTest/output/$GUID
test1 lfc://srm://vm-dcache-deploy3.desy.de:8443/data/testers.eu-emi.eu/ARC-CE-IntegrationTest/output/$GUID.dcache@emi3rc-sl5-lfc.cern.ch/grid/testers.eu-emi.eu/ARC-CE-IntegrationTest/output/$GUID
test1 lfc://srm://emitestbed39.cnaf.infn.it/testers.eu-emi.eu/ARC-CE-IntegrationTest/output/$GUID.storm@emi3rc-sl5-lfc.cern.ch/grid/testers.eu-emi.eu/ARC-CE-IntegrationTest/output/$GUID
EOF
To run test:
- Save the above two files in the same directory
- Generate a VOMS proxy for the VO testers.eu-emi.eu
- Submit job to ARC CE: arcsub -c testbed-emi5.grid.upjs.sk ARC-CE-Integration-Test-09.xrsl
- Poll status with arcstat
- If the job finished successfully the output files can be checked by listing LFC: arcls -lL lfc://emi3rc-sl5-lfc.cern.ch/grid/testers.eu-emi.eu/ARC-CE-IntegrationTest/output
Integration Test 17 for EMI-3
Summary Do a third-party copy of a file between a dCache and a DPM SE.
Prerequisites:
- VOMS proxy with testers.eu-emi.eu extension
- The following packages must be installed:
- nordugrid-arc-client
- nordugrid-arc-plugins-globus
- nordugrid-arc-plugins-gfal
- gfal2-all
arccp localfile srm://emi2rc-sl5-dpm.cern.ch:8446/srm/managerv2?SFN=/dpm/cern.ch/home/testers.eu-emi.eu/ARC_test.1
arccp -3 -i srm://emi2rc-sl5-dpm.cern.ch:8446/srm/managerv2?SFN=/dpm/cern.ch/home/testers.eu-emi.eu/ARC_test.1 \
      srm://vm-dcache-deploy2.desy.de:8443/srm/managerv2?SFN=/data/testers.eu-emi.eu/ARC_test.1
arccp srm://vm-dcache-deploy2.desy.de:8443/srm/managerv2?SFN=/data/testers.eu-emi.eu/ARC_test.1 localfile2
md5sum localfile localfile2
Integration Test 20 for EMI-3
Summary AREX CAR accounting. AREX jobs executed and properly reported via JURA and CAR records in APEL.
- APEL test server's information:
- host: test-msg02.afroditi.hellasgrid.gr
- port: 6163
- destination/topic: /queue/global.accounting.cputest.CENTRAL
- 1) Enable the accounting publisher in the ARC CE
[grid-manager]
...
jobreport="APEL:https://test-msg02.afroditi.hellasgrid.gr:6163"
jobreport_publisher="jura"
jobreport_options="archiving:/tmp/archive,topic:/queue/global.accounting.cputest.CENTRAL"
- 2) Have the machine's DN (/C=SK/O=SlovakGrid/O=UPJS/CN=host/pgs03.grid.upjs.sk) allowed by the APEL server administrator (apel-admins@stfc.ac.uk).
- 3) Send job(s) to the ARC CE; the records will be sent once per hour in one aggregated message.
- 4) Check the SSM sender's log at /var/spool/arc/ssm/ssmsend.log.
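Two quick sanity checks after sending, using the paths from the configuration above:

ls /tmp/archive | wc -l                           # number of archived usage records
grep -ci error /var/spool/arc/ssm/ssmsend.log     # should be 0 if all messages were accepted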
The current output is as follows:
2013-02-07 10:50:24,392 - ssmsend - INFO - ======================================== 2013-02-07 10:50:24,398 - ssmsend - INFO - Starting sending SSM version 2.0.0. 2013-02-07 10:50:24,398 - ssmsend - INFO - No server certificate supplied. Will not encrypt messages. 2013-02-07 10:50:24,638 - stomp.py - INFO - Established connection to host test-msg02.afroditi.hellasgrid.gr, port 6163 2013-02-07 10:50:24,640 - ssm2 - INFO - Will send messages to: /queue/global.accounting.cputest.CENTRAL 2013-02-07 10:50:24,680 - ssm2 - INFO - Connected. 2013-02-07 10:50:24,741 - ssm2 - INFO - Found 83 messages. 2013-02-07 10:50:24,748 - ssm2 - INFO - Sending message: 00000000/20130130155608 2013-02-07 10:50:24,801 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:24,913 - ssm2 - INFO - Broker received message: 00000000/20130130155608 2013-02-07 10:50:25,318 - ssm2 - INFO - Sending message: 00000000/20130130165715 2013-02-07 10:50:25,327 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:25,402 - ssm2 - INFO - Broker received message: 00000000/20130130165715 2013-02-07 10:50:25,843 - ssm2 - INFO - Sending message: 00000000/20130130175901 2013-02-07 10:50:25,851 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:25,927 - ssm2 - INFO - Broker received message: 00000000/20130130175901 2013-02-07 10:50:26,362 - ssm2 - INFO - Sending message: 00000000/20130130190009 2013-02-07 10:50:26,371 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:26,446 - ssm2 - INFO - Broker received message: 00000000/20130130190009 2013-02-07 10:50:26,882 - ssm2 - INFO - Sending message: 00000000/20130130200129 2013-02-07 10:50:26,891 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:26,968 - ssm2 - INFO - Broker received message: 00000000/20130130200129 2013-02-07 10:50:27,422 - ssm2 - INFO - Sending message: 00000000/20130130210217 2013-02-07 10:50:27,430 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:27,506 - ssm2 - INFO - Broker received message: 00000000/20130130210217 2013-02-07 10:50:27,933 - ssm2 - INFO - Sending message: 00000000/20130130213628 2013-02-07 10:50:27,942 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:28,054 - ssm2 - INFO - Broker received message: 00000000/20130130213628 2013-02-07 10:50:28,449 - ssm2 - INFO - Sending message: 00000000/20130130223736 2013-02-07 10:50:28,457 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:28,532 - ssm2 - INFO - Broker received message: 00000000/20130130223736 2013-02-07 10:50:28,964 - ssm2 - INFO - Sending message: 00000000/20130130233805 2013-02-07 10:50:28,973 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:29,048 - ssm2 - INFO - Broker received message: 00000000/20130130233805 2013-02-07 10:50:29,482 - ssm2 - INFO - Sending message: 00000000/20130131003804 2013-02-07 10:50:29,491 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:29,566 - ssm2 - INFO - Broker received message: 00000000/20130131003804 2013-02-07 10:50:29,999 - ssm2 - INFO - Sending message: 00000000/20130131013804 2013-02-07 10:50:30,007 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:30,083 - ssm2 - INFO - Broker received message: 00000000/20130131013804 2013-02-07 10:50:30,526 - ssm2 - INFO - Sending message: 00000000/20130131023804 2013-02-07 10:50:30,568 - ssm2 - INFO - Waiting for broker to accept message. 
2013-02-07 10:50:30,643 - ssm2 - INFO - Broker received message: 00000000/20130131023804 2013-02-07 10:50:31,077 - ssm2 - INFO - Sending message: 00000000/20130131033804 2013-02-07 10:50:31,086 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:31,161 - ssm2 - INFO - Broker received message: 00000000/20130131033804 2013-02-07 10:50:31,595 - ssm2 - INFO - Sending message: 00000000/20130131043806 2013-02-07 10:50:31,603 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:31,678 - ssm2 - INFO - Broker received message: 00000000/20130131043806 2013-02-07 10:50:32,111 - ssm2 - INFO - Sending message: 00000000/20130131054004 2013-02-07 10:50:32,120 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:32,196 - ssm2 - INFO - Broker received message: 00000000/20130131054004 2013-02-07 10:50:32,627 - ssm2 - INFO - Sending message: 00000000/20130131064004 2013-02-07 10:50:32,643 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:32,719 - ssm2 - INFO - Broker received message: 00000000/20130131064004 2013-02-07 10:50:33,157 - ssm2 - INFO - Sending message: 00000000/20130131074004 2013-02-07 10:50:33,182 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:33,263 - ssm2 - INFO - Broker received message: 00000000/20130131074004 2013-02-07 10:50:33,690 - ssm2 - INFO - Sending message: 00000000/20130131080249 2013-02-07 10:50:33,699 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:33,774 - ssm2 - INFO - Broker received message: 00000000/20130131080249 2013-02-07 10:50:34,206 - ssm2 - INFO - Sending message: 00000000/20130131091004 2013-02-07 10:50:34,215 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:34,290 - ssm2 - INFO - Broker received message: 00000000/20130131091004 2013-02-07 10:50:34,720 - ssm2 - INFO - Sending message: 00000000/20130131101004 2013-02-07 10:50:34,728 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:34,804 - ssm2 - INFO - Broker received message: 00000000/20130131101004 2013-02-07 10:50:35,234 - ssm2 - INFO - Sending message: 00000000/20130131102803 2013-02-07 10:50:35,243 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:35,318 - ssm2 - INFO - Broker received message: 00000000/20130131102803 2013-02-07 10:50:35,746 - ssm2 - INFO - Sending message: 00000000/20130131112859 2013-02-07 10:50:35,754 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:35,829 - ssm2 - INFO - Broker received message: 00000000/20130131112859 2013-02-07 10:50:36,269 - ssm2 - INFO - Sending message: 00000000/20130131123008 2013-02-07 10:50:36,278 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:36,390 - ssm2 - INFO - Broker received message: 00000000/20130131123008 2013-02-07 10:50:36,789 - ssm2 - INFO - Sending message: 00000000/20130201090050 2013-02-07 10:50:36,797 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:36,909 - ssm2 - INFO - Broker received message: 00000000/20130201090050 2013-02-07 10:50:37,301 - ssm2 - INFO - Sending message: 00000000/20130201090142 2013-02-07 10:50:37,310 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:37,385 - ssm2 - INFO - Broker received message: 00000000/20130201090142 2013-02-07 10:50:37,816 - ssm2 - INFO - Sending message: 00000000/20130201100254 2013-02-07 10:50:37,938 - ssm2 - INFO - Waiting for broker to accept message. 
2013-02-07 10:50:37,976 - ssm2 - INFO - Broker received message: 00000000/20130201100254 2013-02-07 10:50:38,461 - ssm2 - INFO - Sending message: 00000000/20130203164305 2013-02-07 10:50:38,469 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:38,544 - ssm2 - INFO - Broker received message: 00000000/20130203164305 2013-02-07 10:50:38,974 - ssm2 - INFO - Sending message: 00000000/20130203175011 2013-02-07 10:50:38,983 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:39,096 - ssm2 - INFO - Broker received message: 00000000/20130203175011 2013-02-07 10:50:39,490 - ssm2 - INFO - Sending message: 00000000/20130203185111 2013-02-07 10:50:39,498 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:39,574 - ssm2 - INFO - Broker received message: 00000000/20130203185111 2013-02-07 10:50:40,002 - ssm2 - INFO - Sending message: 00000000/20130203195309 2013-02-07 10:50:40,011 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:40,086 - ssm2 - INFO - Broker received message: 00000000/20130203195309 2013-02-07 10:50:40,516 - ssm2 - INFO - Sending message: 00000000/20130203205417 2013-02-07 10:50:40,524 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:40,600 - ssm2 - INFO - Broker received message: 00000000/20130203205417 2013-02-07 10:50:41,033 - ssm2 - INFO - Sending message: 00000000/20130203215537 2013-02-07 10:50:41,042 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:41,118 - ssm2 - INFO - Broker received message: 00000000/20130203215537 2013-02-07 10:50:41,546 - ssm2 - INFO - Sending message: 00000000/20130203225614 2013-02-07 10:50:41,555 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:41,630 - ssm2 - INFO - Broker received message: 00000000/20130203225614 2013-02-07 10:50:42,063 - ssm2 - INFO - Sending message: 00000000/20130203235753 2013-02-07 10:50:42,072 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:42,148 - ssm2 - INFO - Broker received message: 00000000/20130203235753 2013-02-07 10:50:42,578 - ssm2 - INFO - Sending message: 00000000/20130204005848 2013-02-07 10:50:42,586 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:42,661 - ssm2 - INFO - Broker received message: 00000000/20130204005848 2013-02-07 10:50:43,093 - ssm2 - INFO - Sending message: 00000000/20130204015956 2013-02-07 10:50:43,101 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:43,177 - ssm2 - INFO - Broker received message: 00000000/20130204015956 2013-02-07 10:50:43,620 - ssm2 - INFO - Sending message: 00000000/20130204030022 2013-02-07 10:50:43,628 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:43,704 - ssm2 - INFO - Broker received message: 00000000/20130204030022 2013-02-07 10:50:44,138 - ssm2 - INFO - Sending message: 00000000/20130204040049 2013-02-07 10:50:44,147 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:44,222 - ssm2 - INFO - Broker received message: 00000000/20130204040049 2013-02-07 10:50:44,657 - ssm2 - INFO - Sending message: 00000000/20130204050246 2013-02-07 10:50:44,666 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:44,741 - ssm2 - INFO - Broker received message: 00000000/20130204050246 2013-02-07 10:50:45,168 - ssm2 - INFO - Sending message: 00000000/20130204060435 2013-02-07 10:50:45,177 - ssm2 - INFO - Waiting for broker to accept message. 
2013-02-07 10:50:45,252 - ssm2 - INFO - Broker received message: 00000000/20130204060435 2013-02-07 10:50:45,684 - ssm2 - INFO - Sending message: 00000000/20130204070450 2013-02-07 10:50:45,692 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:45,768 - ssm2 - INFO - Broker received message: 00000000/20130204070450 2013-02-07 10:50:46,201 - ssm2 - INFO - Sending message: 00000000/20130204080646 2013-02-07 10:50:46,209 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:46,285 - ssm2 - INFO - Broker received message: 00000000/20130204080646 2013-02-07 10:50:46,717 - ssm2 - INFO - Sending message: 00000000/20130205085250 2013-02-07 10:50:46,725 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:46,800 - ssm2 - INFO - Broker received message: 00000000/20130205085250 2013-02-07 10:50:47,234 - ssm2 - INFO - Sending message: 00000000/20130205095826 2013-02-07 10:50:47,244 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:47,356 - ssm2 - INFO - Broker received message: 00000000/20130205095826 2013-02-07 10:50:47,755 - ssm2 - INFO - Sending message: 00000000/20130205105921 2013-02-07 10:50:47,764 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:47,876 - ssm2 - INFO - Broker received message: 00000000/20130205105921 2013-02-07 10:50:48,272 - ssm2 - INFO - Sending message: 00000000/20130205120029 2013-02-07 10:50:48,281 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:48,356 - ssm2 - INFO - Broker received message: 00000000/20130205120029 2013-02-07 10:50:48,804 - ssm2 - INFO - Sending message: 00000000/20130205130127 2013-02-07 10:50:48,812 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:48,888 - ssm2 - INFO - Broker received message: 00000000/20130205130127 2013-02-07 10:50:49,318 - ssm2 - INFO - Sending message: 00000000/20130205140257 2013-02-07 10:50:49,327 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:49,402 - ssm2 - INFO - Broker received message: 00000000/20130205140257 2013-02-07 10:50:49,834 - ssm2 - INFO - Sending message: 00000000/20130205150322 2013-02-07 10:50:49,843 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:49,918 - ssm2 - INFO - Broker received message: 00000000/20130205150322 2013-02-07 10:50:50,349 - ssm2 - INFO - Sending message: 00000000/20130205160325 2013-02-07 10:50:50,358 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:50,434 - ssm2 - INFO - Broker received message: 00000000/20130205160325 2013-02-07 10:50:50,865 - ssm2 - INFO - Sending message: 00000000/20130205170518 2013-02-07 10:50:50,874 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:50,988 - ssm2 - INFO - Broker received message: 00000000/20130205170518 2013-02-07 10:50:51,384 - ssm2 - INFO - Sending message: 00000000/20130205180526 2013-02-07 10:50:51,392 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:51,468 - ssm2 - INFO - Broker received message: 00000000/20130205180526 2013-02-07 10:50:51,901 - ssm2 - INFO - Sending message: 00000000/20130205190723 2013-02-07 10:50:51,910 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:51,985 - ssm2 - INFO - Broker received message: 00000000/20130205190723 2013-02-07 10:50:52,428 - ssm2 - INFO - Sending message: 00000000/20130205200916 2013-02-07 10:50:52,437 - ssm2 - INFO - Waiting for broker to accept message. 
2013-02-07 10:50:52,512 - ssm2 - INFO - Broker received message: 00000000/20130205200916 2013-02-07 10:50:52,943 - ssm2 - INFO - Sending message: 00000000/20130205210916 2013-02-07 10:50:52,951 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:53,027 - ssm2 - INFO - Broker received message: 00000000/20130205210916 2013-02-07 10:50:53,457 - ssm2 - INFO - Sending message: 00000000/20130205220916 2013-02-07 10:50:53,466 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:53,541 - ssm2 - INFO - Broker received message: 00000000/20130205220916 2013-02-07 10:50:53,989 - ssm2 - INFO - Sending message: 00000000/20130205230916 2013-02-07 10:50:53,998 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:54,073 - ssm2 - INFO - Broker received message: 00000000/20130205230916 2013-02-07 10:50:54,506 - ssm2 - INFO - Sending message: 00000000/20130206000916 2013-02-07 10:50:54,515 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:54,590 - ssm2 - INFO - Broker received message: 00000000/20130206000916 2013-02-07 10:50:55,019 - ssm2 - INFO - Sending message: 00000000/20130206010916 2013-02-07 10:50:55,028 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:55,103 - ssm2 - INFO - Broker received message: 00000000/20130206010916 2013-02-07 10:50:55,534 - ssm2 - INFO - Sending message: 00000000/20130206020916 2013-02-07 10:50:55,543 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:55,618 - ssm2 - INFO - Broker received message: 00000000/20130206020916 2013-02-07 10:50:56,047 - ssm2 - INFO - Sending message: 00000000/20130206030917 2013-02-07 10:50:56,056 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:56,137 - ssm2 - INFO - Broker received message: 00000000/20130206030917 2013-02-07 10:50:56,563 - ssm2 - INFO - Sending message: 00000000/20130206040916 2013-02-07 10:50:56,572 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:56,647 - ssm2 - INFO - Broker received message: 00000000/20130206040916 2013-02-07 10:50:57,129 - ssm2 - INFO - Sending message: 00000000/20130206050916 2013-02-07 10:50:57,138 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:57,213 - ssm2 - INFO - Broker received message: 00000000/20130206050916 2013-02-07 10:50:57,652 - ssm2 - INFO - Sending message: 00000000/20130206060916 2013-02-07 10:50:57,661 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:57,736 - ssm2 - INFO - Broker received message: 00000000/20130206060916 2013-02-07 10:50:58,171 - ssm2 - INFO - Sending message: 00000000/20130206070916 2013-02-07 10:50:58,180 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:58,255 - ssm2 - INFO - Broker received message: 00000000/20130206070916 2013-02-07 10:50:58,686 - ssm2 - INFO - Sending message: 00000000/20130206080916 2013-02-07 10:50:58,695 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:58,770 - ssm2 - INFO - Broker received message: 00000000/20130206080916 2013-02-07 10:50:59,212 - ssm2 - INFO - Sending message: 00000000/20130206170057 2013-02-07 10:50:59,221 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:50:59,296 - ssm2 - INFO - Broker received message: 00000000/20130206170057 2013-02-07 10:50:59,730 - ssm2 - INFO - Sending message: 00000000/20130206180123 2013-02-07 10:50:59,739 - ssm2 - INFO - Waiting for broker to accept message. 
2013-02-07 10:50:59,852 - ssm2 - INFO - Broker received message: 00000000/20130206180123 2013-02-07 10:51:00,245 - ssm2 - INFO - Sending message: 00000000/20130206190312 2013-02-07 10:51:00,254 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:51:00,329 - ssm2 - INFO - Broker received message: 00000000/20130206190312 2013-02-07 10:51:00,765 - ssm2 - INFO - Sending message: 00000000/20130206200317 2013-02-07 10:51:00,773 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:51:00,849 - ssm2 - INFO - Broker received message: 00000000/20130206200317 2013-02-07 10:51:01,280 - ssm2 - INFO - Sending message: 00000000/20130206210324 2013-02-07 10:51:01,289 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:51:01,364 - ssm2 - INFO - Broker received message: 00000000/20130206210324 2013-02-07 10:51:01,797 - ssm2 - INFO - Sending message: 00000000/20130206220522 2013-02-07 10:51:01,806 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:51:01,881 - ssm2 - INFO - Broker received message: 00000000/20130206220522 2013-02-07 10:51:02,309 - ssm2 - INFO - Sending message: 00000000/20130206230719 2013-02-07 10:51:02,318 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:51:02,393 - ssm2 - INFO - Broker received message: 00000000/20130206230719 2013-02-07 10:51:02,826 - ssm2 - INFO - Sending message: 00000000/20130207000726 2013-02-07 10:51:02,835 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:51:02,910 - ssm2 - INFO - Broker received message: 00000000/20130207000726 2013-02-07 10:51:03,342 - ssm2 - INFO - Sending message: 00000000/20130207010732 2013-02-07 10:51:03,350 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:51:03,426 - ssm2 - INFO - Broker received message: 00000000/20130207010732 2013-02-07 10:51:03,887 - ssm2 - INFO - Sending message: 00000000/20130207020840 2013-02-07 10:51:03,895 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:51:03,971 - ssm2 - INFO - Broker received message: 00000000/20130207020840 2013-02-07 10:51:04,402 - ssm2 - INFO - Sending message: 00000000/20130207030916 2013-02-07 10:51:04,411 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:51:04,486 - ssm2 - INFO - Broker received message: 00000000/20130207030916 2013-02-07 10:51:04,912 - ssm2 - INFO - Sending message: 00000000/20130207040916 2013-02-07 10:51:04,920 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:51:04,996 - ssm2 - INFO - Broker received message: 00000000/20130207040916 2013-02-07 10:51:05,429 - ssm2 - INFO - Sending message: 00000000/20130207050916 2013-02-07 10:51:05,438 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:51:05,513 - ssm2 - INFO - Broker received message: 00000000/20130207050916 2013-02-07 10:51:05,943 - ssm2 - INFO - Sending message: 00000000/20130207060916 2013-02-07 10:51:05,951 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:51:06,026 - ssm2 - INFO - Broker received message: 00000000/20130207060916 2013-02-07 10:51:06,453 - ssm2 - INFO - Sending message: 00000000/20130207070916 2013-02-07 10:51:06,462 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:51:06,537 - ssm2 - INFO - Broker received message: 00000000/20130207070916 2013-02-07 10:51:06,964 - ssm2 - INFO - Sending message: 00000000/20130207080916 2013-02-07 10:51:06,973 - ssm2 - INFO - Waiting for broker to accept message. 
2013-02-07 10:51:07,048 - ssm2 - INFO - Broker received message: 00000000/20130207080916 2013-02-07 10:51:07,474 - ssm2 - INFO - Sending message: 00000000/20130207095023 2013-02-07 10:51:07,482 - ssm2 - INFO - Waiting for broker to accept message. 2013-02-07 10:51:07,558 - ssm2 - INFO - Broker received message: 00000000/20130207095023 2013-02-07 10:51:07,983 - ssmsend - INFO - SSM run has finished. 2013-02-07 10:51:07,983 - ssm2 - INFO - SSM connection ended. 2013-02-07 10:51:07,983 - ssmsend - INFO - SSM has shut down. 2013-02-07 10:51:07,983 - ssmsend - INFO - ========================================