
ARC1/Infosys/2011Review

This page is meant to keep track of the status of the infosystem as of November 2011. The investigation is carried out by Florido and Balazs. The page is updated regularly as we make progress, so please check the history.

Schema

The infosys startup script /etc/init.d/grid-infosys is responsible for gathering the schema locations and feeding them to the infoproviders.
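As an illustration, the lookup boils down to probing a list of candidate paths and taking the first readable one. A minimal sketch of the pattern, using the candidate locations listed for BDII below (the function name is hypothetical, not the one used in grid-infosys):

  # Probe candidate locations for a schema file; print the first one found.
  find_schema () {
      for candidate in "$@"; do
          if [ -r "$candidate" ]; then
              echo "$candidate"
              return 0
          fi
      done
      return 1
  }

  # Example: locate BDII.schema in the same order grid-infosys searches
  bdii_schema=$(find_schema \
      "/etc/bdii/BDII.schema" \
      "${bdii_location}/etc/BDII.schema" \
      "${ARC_LOCATION}/share/arc/ldap-schema/BDII.schema") \
      || echo "BDII.schema not found" >&2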


The following summarizes the schema status. For each schema we list the rendering, the schema files, the packaging information, and the shipped versions.
nordugrid (LDAP rendering)
  Files: /usr/share/arc/ldap-schema/nordugrid.schema
  Package: nordugrid-arc-aris
  Shipped versions: we have no schema version; we'll use the last-update date instead.
Glue 1.X (LDAP rendering)
  Files:
    /etc/ldap/schema/Glue-CE.schema
    /etc/ldap/schema/Glue-CESEBind.schema
    /etc/ldap/schema/Glue-CORE.schema
    /etc/ldap/schema/Glue-MDS.schema
    /etc/ldap/schema/Glue-SE.schema
  Packages:
    in EMI and EPEL: glue-schema-2.0.8.noarch.rpm
    in Ubuntu/Debian official repos: glue-schema_2.0.6-1_all.deb
  Shipped versions (these refer to version 2.0.8 of the package):
    • Glue-CE.schema - v1.3, rev 1.1, 2007/01/18
    • Glue-CESEBind.schema - v1.2, rev 1.8, 2008/12/11
    • Glue-CORE.schema - v1.2, rev 1.1, 2007/01/18
    • Glue-MDS.schema - no version. Defines the root of the BDII tree, but I didn't find any call to this in our scripts.
    • Glue-SE.schema - v1.2, rev 1.2, 2007/05/31
GLUE2 (LDAP rendering)
  Files: /etc/ldap/schema/GLUE20.schema
  Packages:
    in EMI and EPEL: glue-schema-2.0.8-1.el5.noarch.rpm
    in Ubuntu/Debian official repos: glue-schema-2.0.6.deb
  Shipped versions: no version information in the rendering, and it is not in sync with GitHub. We need proper release numbering for rendering versions; otherwise checking them is always a matter of making diffs.
GLUE2 (XML rendering)
  Files: the XML structure is defined by the module GLUE2xmlPrinter.pm
  Package: none.
  Shipped versions: currently declared:
    'xmlns' => "http://schemas.ogf.org/glue/2009/03/spec/2/0",
    'xmlns:xsi' => "http://www.w3.org/2001/XMLSchema-instance",
    'xsi:schemaLocation' => "http://schemas.ogf.org/glue/2009/03/spec/2/0 pathto/GLUE2.xsd"
  For the GitHub April 2011 version, the code should be:
    'xmlns' => "http://schemas.ogf.org/glue/2009/03/spec_2.0_r1",
    'xmlns:xsi' => "http://www.w3.org/2001/XMLSchema-instance",
    'xsi:schemaLocation' => "https://raw.github.com/OGF-GLUE/XSD/master/schema/GLUE2.xsd"
BDII (LDAP rendering)
  Files: grid-infosys searches the following locations:
    "/etc/bdii/BDII.schema"
    "${bdii_location}/etc/BDII.schema"
    "${ARC_LOCATION}/share/arc/ldap-schema/BDII.schema"
  On all the deployments I have, it is only in /etc/bdii/BDII.schema.
  Packages:
    EMI (maintainer: Laurence):
      bdii-5.2.3-1.el5.noarch in EMI-1-base (OLD)
      bdii-5.2.5-2.el5.noarch in EMI-1-updates (LATEST)
    Mattias is the maintainer for all the following:
      EPEL: bdii-5.2.5-1.el5.noarch (LATEST)
      Debian 6, official stable repos: bdii_5.1.7-1_all.deb (OLD)
      Ubuntu, official Universe repo:
        maverick (net): 5.1.7-1 (OLD)
        natty (net): 5.1.9-1 (OLD)
        oneiric (net): 5.2.3-2 (OLD)
        precise (net): 5.2.5-2 (LATEST)
  Shipped versions: the schema carries no version information.
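A quick way to check which renderings a given CE actually publishes is to query its local LDAP server for the root of each tree. A sketch, assuming the conventional ARC infosys port 2135 and the base DNs ARC normally uses (verify both against the deployment):

  # nordugrid schema root
  ldapsearch -x -h ce.example.org -p 2135 -b 'Mds-Vo-name=local,o=grid' -s base
  # GLUE2 LDAP rendering root
  ldapsearch -x -h ce.example.org -p 2135 -b 'o=glue' -s base
  # Glue 1.x rendering root
  ldapsearch -x -h ce.example.org -p 2135 -b 'Mds-Vo-name=resource,o=grid' -s base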

Tasks

  • [ON HOLD] Nordugrid schema: delayed until after GLUE2 completion
    • check completeness of information: what is published and what is not for the ng schema (check the tests I did)
    • introduce versioning
    • check whether the documentation is in sync with the schema file
    • a problem arose while playing with the integration tasks: placing the nordugrid schema in a pure BDII configuration generates errors on some fields. This is probably what Balazs and Mattias meant by incompatibility. Needs further investigation; postponed for now.
  • GLUE2 LDAP schema
    • [ONGOING] document GLUE2; tasks delayed until after GLUE2 (reminder: the old backends doc has some relevant material)
  • [ONGOING] XML schema: the latest version is not in EMI. Maybe open a GGUS ticket?
In particular, the following BDII metrics are interesting:

  • FailedDeletes: the number of delete statements that failed. Useful to spot publication problems.
  • UpdateTime: the total update time in seconds, i.e. the total time of running the bdii-update script tasks.
  • DBUpdateTime: the time taken to update the database, in seconds. The time it takes to run ldap-add, ldap-modify and ldap-delete against the slapd db and run a query for the "shadow".
  • ReadTime: the time taken to read the LDIF sources, in seconds. For some reason this is always 0: we give no static LDIFs to BDII; everything is generated by the arc-nordugrid-bdii-ldif metascript and by arc-default.ldif.pl, which generates the roots.
  • ProvidersTime: the time taken to run the information providers, in seconds. This is NOT the time our infoproviders run; it is the time it takes to execute the provider script generated by the ARC infoproviders.
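Recent BDII versions publish these metrics in a dedicated LDAP branch; assuming the deployed version exposes them under the o=infosys suffix (to be verified against the installed BDII), they can be inspected with a query like:

  # Dump the BDII update metrics (port 2135 is the ARC infosys default)
  ldapsearch -x -h ce.example.org -p 2135 -b 'o=infosys' \
      FailedDeletes UpdateTime DBUpdateTime ReadTime ProvidersTime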

grid-infosys startup script notes

The grid-infosys script can be divided into 15 conceptual blocks of code. Looking at the code, it has grown in a more or less disordered manner, which makes it quite hard to understand by reading it. In the following I'll separate the conceptual blocks, give a description of the subroutines, and explain the workflow of the start(), stop() and status() functions.

Conceptual Blocks

A longer document with line numbers referring to a specific SVN changeset can be found here.

The following is a brief description to get the idea.

  1. INIT INFO preamble
    The preamble at the beginning of the script tells the rc and LSB systems how to handle the startup script. This information comes as comments (lines starting with #); see the sketch after this list.
  2. INIT and LSB related functions
    Init and LSB system-specific routines for service startup are sourced here. A few logging functions are set.
  3. Some default variables
    Default variables are set, such as this script's name and RETVAL, which is used to hold exit values.
  4. sysconfig (RedHat) or /etc/default (Debian) settings
    sysconfig only exists on RedHat-based systems; Debian systems have /etc/default instead. The two do not work the same way, so the script has to set the relevant information here.
  5. Definition of several helper functions
    Here the functions debug_echo, error_echo, std_header, printregldif and check_cwd are defined.
  6. set ARC_LOCATION
    The ARC standard location is configured. This depends on the build, but in most cases it is /usr (the ./configure prefix).
  7. Load configuration parser
    the arc.conf configuration parser routines are sourced.
  8. set ARC_CONFIG
    sets the path to arc.conf
  9. check and fix for an OpenLDAP bug in RHEL4
  10. definition of config_set_default()
    the subroutine sets defaults for the infosys REGARDLESS of arc.conf
  11. export pkgdatadir, parses arc.conf
    pkgdatadir contains the path to the parser. The configuration parser is then called on arc.conf.
  12. Settings for infosys:
    1. Parses [common] section, parses [infosys] section
    2. Defines check_ownership and get_ldap_user
    3. loads some slapd-related values from arc.conf
    4. creates log dirs
    5. sets the LDAP user (using get_ldap_user) and sets BDII-related values from arc.conf
    6. parses enabled schemas
    7. sets some timing for infoprovider updates
    8. defines pid files for slapd and bdii-update
    9. copes with Debian's lack of /var/lock/subsys
    10. checks bdii/slapd runtime dir permissions (logs, /var/run, /var/tmp)
    11. does some checks depending on whether the old or new infosys scripts are used (infosys_compat)
    12. searches for the location of the LDAP core and Glue schemas
    13. searches for the system LDAP
    14. if the gris/giis modules are not compiled into LDAP, some variables will be added
    15. clears, sets and performs checks for Glue 1.x
    16. sets BDII config file location and exports it; creates giis-fifo
  13. Defines several subroutines.
    • create_bdii_conf
      will create the bdii.conf file. This is the BDII file that sets BDII-related variables: where to get LDIFs, which user runs bdii-update, logfiles...
      usually in /var/run/arc/infosys/bdii.conf
    • create_arc_slapd_conf
      will create bdii-slapd.conf. This is the slapd configuration file; it is filled with schema inclusions, slapd module paths, and other parameters (maybe some of those should be rechecked)
      usually located in /var/run/arc/bdii/bdii-slapd.conf
    • add_info_service
      adds information to slapd configuration created above: references to databases for each root dn.
    • create_default_ldif
      will create a perl script that generates part of the LDIF files.
      This script generates the LDIF root structure and adds the validfrom and validto attributes.
      usually located in /var/tmp/arc/bdii/provider/arc-default.ldif.pl
    • create_arc_ldif_generator_compat
      creates a perl script that generates ldif trees in compat mode, by running cluster.pl and se.pl
      usually located in /var/tmp/arc/bdii/provider/arc-nordugrid-bdii-ldif
    • create_arc_ldif_generator
      creates a perl script that generates ldif trees in A-REX infoproviders mode. It waits for A-REX infoproviders to generate data, collects it, and runs se.pl if there is any SE.
      usually located in /var/tmp/arc/bdii/provider/arc-nordugrid-bdii-ldif
    • create_registration_config_file
      Creates the registration config file
      usually located in /var/run/arc/infosys/grid-info-resource-register.conf
    • add_index_services
      generates the BDII config file information for index services.
      Uses printregldif and creates/appends to the /var/run/arc/infosys/grid-info-resource-register.conf file above.
    • create_glue_ldif_generator
      will create the Glue 1.x LDIF generator script
      usually located in /var/tmp/arc/bdii/provider/arc-glue-bdii-ldif (compat)
      or in /var/run/arc/infosys/arc-glue-bdii-ldif (A-REX)
    • create_directory
      subroutine to create a directory in a smart way: removes it if it exists and checks permissions.
    • create_bdii_config_files
      runs the previously defined subroutines to create all the BDII-related config files.
      It also creates the site-bdii block if the option is present in arc.conf.
      It calls, in order: create_bdii_conf, create_arc_slapd_conf, create_default_ldif; if compat is enabled it runs create_arc_ldif_generator_compat and create_arc_ldif_generator, otherwise it calls create_arc_ldif_generator (A-REX infoproviders). It then creates the site-bdii info and calls create_registration_config_file, add_index_services, add_info_service.
    • notify_about_bdii
      prints some info about where to find the BDII logs
    • check_clean_status
      checks the status of the infosys; if an unclean shutdown occurred, cleans up
  14. defines start(), stop(), status()
  15. the main case statement dispatching to the above functions, then exit with RETVAL
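For reference, a minimal sketch of what the block-1 preamble typically looks like in an LSB init script (the exact fields and dependencies declared by grid-infosys may differ):

  ### BEGIN INIT INFO
  # Provides:          grid-infosys
  # Required-Start:    $network $remote_fs
  # Required-Stop:     $network $remote_fs
  # Default-Start:     2 3 4 5
  # Default-Stop:      0 1 6
  # Short-Description: ARC LDAP information system
  ### END INIT INFO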

Main script workflows

At every call the script always executes everything up to point 15.

At point 15 of the above conceptual description, the case statement captures the argument and executes one of the start(), stop() and status() functions.
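A minimal sketch of such a dispatch block (the function names are the ones described above; RETVAL handling is simplified):

  case "$1" in
      start)
          start
          ;;
      stop)
          stop
          ;;
      status)
          status
          ;;
      *)
          echo "Usage: $0 {start|stop|status}"
          RETVAL=1
          ;;
  esac
  exit $RETVAL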

start()

  1. check_clean_status
  2. notify_about_bdii
  3. check_cwd
  4. create the /var/tmp/arc directories with create_directory
  5. create_bdii_config_files
  6. create slapd db directory
  7. create db directory structure in /var/run/arc/bdii/db
  8. create archive directory /var/run/arc/bdii/archive
  9. chown above directories to slapd/bdii user
  10. create password for the slapd db
  11. start the infoindex server
  12. start slapd
  13. start bdii-update
  14. start the registration scripts

stop()

  1. check_cwd
  2. stop bdii-update
  3. stop slapd
  4. stop infoindex (sends a STOP command to pipe)
  5. clean the /var/tmp/arc and /var/run/arc dirs

status()

  1. check slapd lockfile/pidfile
  2. check bdii-update lockfile/pidfile
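A sketch of the usual shape of such a check (the pidfile paths here are hypothetical; the real ones are set earlier in the script):

  # Return success if the process recorded in a pidfile is alive
  check_pidfile () {
      pidfile="$1"
      [ -f "$pidfile" ] || return 1
      kill -0 "$(cat "$pidfile")" 2>/dev/null
  }

  check_pidfile /var/run/arc/bdii/db/slapd.pid    && echo "slapd is running"
  check_pidfile /var/run/arc/bdii/bdii-update.pid && echo "bdii-update is running"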

Tasks

  • [ONGOING] BDII doesn't clean up old IDs; dead objects are still there. Why? For now, only restarting grid-infosys solves this problem. Maybe a BDII issue?
  • [NOT DONE] add some logic to validate the tree (validating the content of the ldif-provider.sh script) before bdii-update starts
Clues: some validation is already performed on arc.conf values and on some data gathered by the infoproviders; the latter is mostly XML checks.
A dry run of CEinfo.pl might be the way to do this, but CEinfo.pl must be run with the same uid/gid as A-REX, or it will generate files that cannot be accessed by the infosys later.
  • [NOT DONE] delayed; we must split the startup script into separate local and index startup scripts. This has become more important as it triggers dependencies.
  • [NOT DONE] cleanup for BDII4. Note: the BDII team has changed directories once again, so this must be done carefully.

ARC Endpoints and Services

See ARC GLUE2 LDAP Tree.

  • Missing/don't know if needed: Index services
    • LDAP EGIIS endpoint/interface

Interface information and jobs

  • Interface information is stored in the controldir file job.#.local
  • one job can be queried via many interfaces: the JobID depends on the interface. The infosystem is already aware of which interface the jobs came from.

Attribute completeness

Naming Conventions

Local information system

ServiceTypes:

AREX Computing ServiceType: org.nordugrid.execution.arex is discontinued; changed to org.nordugrid.arex

ARIS Information ServiceType: org.nordugrid.information.aris is discontinued; ARIS will no longer be shown as a Service.

ServiceCapability: calculated as the union of the endpoints' capabilities (must be calculated at runtime)

Endpoints:

AREX GRIDFTP job management interface (formerly ARC0):

InterfaceName: org.nordugrid.gridftpjob
Capability: executionmanagement.jobexecution, executionmanagement.jobmanager, executionmanagement.jobdescription

AREX XBES (a-rex WSRF and eXtended BES interface, formerly ARC1): note that for backward compatibility we kept the BES name; a client should check InterfaceExtension to know which specific extension is supported.

InterfaceName: org.ogf.bes
InterfaceExtension: urn:org.nordugrid.xbes (GLUE2 mandates this MUST be a URI)
Capability: executionmanagement.jobexecution, executionmanagement.jobmanager, executionmanagement.jobdescription

AREX EMIES:

Please see EMI-ES specification: https://twiki.cern.ch/twiki/pub/EMI/EmiExecutionService/EMI-ES-Specification_v1.15.odt

ARIS LDAP GLUE2:

InterfaceName: org.nordugrid.ldapglue2
Capability: information.discovery.resource

ARIS LDAP GLUE1.2/1.3:

InterfaceName: org.nordugrid.ldapglue1
Capability: information.discovery.resource

ARIS LDAP nordugrid schema:

InterfaceName: org.nordugrid.ldapng
Capability: information.discovery.resource

ARIS WSRF:

InterfaceName: org.nordugrid.wsrfglue2
Capability: information.discovery.resource

ARIS EMIES:

InterfaceName: org.ogf.emies
Capability: information.discovery.resource

Index/Registry level

ServiceTypes:

EGIIS ServiceType: org.nordugrid.information.egiis

EMIR ServiceType: org.nordugrid.information.emir

Note: a decision on which namespace to assign to EMIR has not been taken yet. We will assume it is in the org.nordugrid.* namespace for the time being.

ServiceCapability: calculated as the union of the endpoints' capabilities (must be calculated at runtime)

Endpoints:

EGIIS ldap custom nordugrid interface (formerly part of ARC0 targetRetriever):

InterfaceName: org.nordugrid.ldapegiis
Capability: information.discovery.registry

EMIR RESTful interface:

InterfaceName: org.nordugrid.emir
Note: this is going to change to InterfaceName: org.ogf.glue.emir
Capability: information.discovery.registry

Problems: uniqueness and persistence of IDs

  • Uniqueness: universal uniqueness is somewhat addressed by using the FQDN in the IDs.
  • Persistence: since the infoproviders run every quantum of time, ID creation is performed on every run. This makes persistence hard to achieve. We don't want to use any file/database for persistent objects, and we don't want the infoproviders to waste time creating IDs.
A simple but unsatisfying solution is to assign sequence numbers to multiple entities sharing the exact same ID prefix (i.e. benchmarks or contacts).

Note: problems with this approach: if a service goes down or is modified for some reason, persistence is lost with sequence numbers, because they don't depend on anything and are dynamically assigned by the infosys scripts on each run. The name must refer to something unique, not to sequential numbers. Sequential numbers are not unique by themselves; a combination of strings is more likely to be. (Maybe the ordering could help here?)

Notation

  • <serviceTypeName> is the last part of the GLUE2 ServiceType string (see the one-liner after this list). Some examples:
Example: GLUE2ServiceType: org.nordugrid.information.aris ⇒ <serviceTypeName> is aris
Example: GLUE2ServiceType: org.nordugrid.execution.arex ⇒ <serviceTypeName> is arex


  • temporary means that the solution is not satisfying but is a good compromise right now.
  • <execenvName> is execenv<sequential number>
  • <rteName> is rte<sequential number>
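A one-liner illustrating the <serviceTypeName> extraction mentioned in the first item above (pure string manipulation, no ARC specifics):

  # Take the last dot-separated component of a GLUE2 ServiceType string
  echo "org.nordugrid.execution.arex" | awk -F. '{print $NF}'   # -> arex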

ID Conventions adopted

  • AdminDomain: urn:ogf:<ObjectClass>:<DomainName> (the DomainName is taken from the config file)


  • Services: urn:ogf:<ObjectClass>:<FQDN>:<serviceTypeName> temporary


  • Endpoints: urn:ogf:<ObjectClass>:<FQDN>:<GLUE2EndpointInterfaceName>:<endpointURL>|<port> temporary


  • RTEs (ApplicationEnvironments): urn:ogf:<ObjectClass>:<FQDN>:<serviceTypeName>:rte<sequential number>


  • Jobs: urn:ogf:<ObjectClass>:<FQDN>:<job ID taken from A-REX controldir>


  • Manager: urn:ogf:<ObjectClass>:<FQDN>:<managerName>


  • ExecutionEnvironments: urn:ogf:<ObjectClass>:<FQDN>:execenv<sequential number>


  • Contact: urn:ogf:<ObjectClass>:<FQDN>:<Service|ComputingService|AdminDomain>:<serviceTypeName|DomainName>:con<sequential number>


  • Location: urn:ogf:<ObjectClass>:<FQDN>:<Service|ComputingService|AdminDomain>:<serviceTypeName|domainName> (there can be at most one Location record, so this is safe here, unless domainName and serviceName are the same...)


  • Shares: urn:ogf:<ObjectClass>:<FQDN>:<share name>


  • Benchmark: urn:ogf:<ObjectClass>:<FQDN>:<managerName|execenvName>:<benchmark type>


  • UserDomain: urn:ogf:<ObjectClass>:<domainName>:<sequential number>


  • AccessPolicy: urn:ogf:<ObjectClass>:<FQDN>:<endpointType>:<endpointURL>:<sequential number>


  • MappingPolicy: urn:ogf:<ObjectClass>:<FQDN>:<shareName>:<sequential number>


  • ApplicationHandle: urn:ogf:<ObjectClass>:<FQDN>:<rteName>:ah<sequential number>
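As a concrete illustration of these conventions, a tiny helper that assembles such URNs from their parts (the function name and the sample values are hypothetical):

  # Join the given parts into a GLUE2 ID: urn:ogf:<ObjectClass>:<part>:<part>...
  make_glue2_id () {
      local IFS=':'
      echo "urn:ogf:$*"
  }

  make_glue2_id ComputingService host.example.org arex
  # -> urn:ogf:ComputingService:host.example.org:arex
  make_glue2_id ApplicationEnvironment host.example.org arex rte1
  # -> urn:ogf:ApplicationEnvironment:host.example.org:arex:rte1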


Further choices: GLUE2ComputingActivity.IDFromEndpoint

The IDFromEndpoint attribute used to be filled with a URL similar to the one used by clients. It has been decided that we will simply use the job ID string that A-REX writes in the controldir. GLUE2 mandates a URI for this value; my suggestion is to use:

  • IDFromEndpoint: urn:idfe:<whatever the ID from endpoint is>


Further choices: GLUE2ComputingActivity Submission interface

We decided to enrich the ComputingActivity record with an interface tag, to highlight which interface the Activity was originally submitted through.

  • OtherInfo: SubmittedVia=(org.nordugrid.gridftpjob|org.nordugrid.xbes|org.ogf.emies)
Example: OtherInfo: SubmittedVia=org.nordugrid.gridftpjob


Tasks

  • [DONE] Policies: authorizedvo= content for the EMI2 initial release.
  • [DONE] Create a configuration block for domain information. It was initially proposed as [domain]; we called it [infosys/admindomain].
Example section:
[infosys/admindomain]
name="TestDomainName"
otherinfo=Test Other info
description="this is a parsing test for the AdminDomain block"
www="http://www.lu.se/"
distributed=no
owner=florido.paganelliEMAILhep.lu.se
  • [DONE] Update attribute publishing status after GLUE2 redesign
  • [ONGOING] Laurence asked for a tree restructuring: communicate the differences in the LDAP tree to other people and come to an agreement
  • [ONGOING] decide on a schema for unique IDs
  • [ONGOING] Florido gathers examples of job states in EMI-ES
State attributes are NOT substates; they are fine-grained information that lives together with the job state.
Here's an example of how EMI-ES states and attributes can live together: a job in PROCESSING can have both the attributes CLIENT-STAGEIN and SERVER-PENDING at the same time.
Hence, if we follow the namespace:state:substate model that GLUE2 mandates, we have several choices:
1) consider attributes as substates. Then we would have, at the same time, in the GLUE2ComputingActivityState attribute:
Example:
emies:PROCESSING:CLIENT-STAGEIN
emies:PROCESSING:SERVER-PENDING

Note: Aleksandr's comment on this is that we would have repetition of values and painful parsing to do.
Plus an open enumeration of all the possible state:attribute combinations, which is also very bad and might change.

2) CURRENTLY ADOPTED, NOT YET PUBLISHED BY INFOSYS. Aleksandr's suggestion: consider the attributes to belong to another namespace, emiesattr. Then the record attribute would contain (see the LDIF sketch after this task list):
Example:
emies:PROCESSING
emiesattr:CLIENT-STAGEIN
emiesattr:SERVER-PENDING

Note: The bad thing here is that emiesattr is not a job state, yet it is contained in the job state record.

3) Florido's suggestion: create a GLUE2 extension object just to integrate the attributes into the computing activity. The downside is that each EMI-ES job would have an additional object just for the status.
4) best solution: ask the GLUE2 WG to modify the GLUE2 spec to include job attributes.
  • [ON HOLD] UserDomain: messy situation in ConfigCentral.pm, as Adrian expected this information to be in the XML config file as well.
  • [ON HOLD] Check what GOCDB publishes and see if we can use that to fill the AdminDomain Location and Contacts. GOCDB will have its own interface.
  • [ON HOLD] investigate the use case for multiple domain ownership: does it make sense for a cluster to register to multiple AdminDomains?
Clues: the GLUE2 errata contains the explicit association between a Domain and a Service; it says the association must be exactly 1. [1]
  • [NOT DONE] ToStorageElement: how is it filled? Understand: it needs a StorageServiceID, so there MUST be a storage element with such data accessible.
Clues:
  • Might be attached to a gridftpd service. Would that be the stage-in service? Should it be in the computing element? BAD: it needs a real StorageService, not a stage-in service.
  • Let the sysadmin decide what to publish by entering the StorageServiceID. It would be cool if the infoproviders could fetch the information by themselves by querying the storage element.
  • [NOT DONE] write an information provider for the gridftpd data service and fill GLUE2 LDAP with its storage data. Might be related to the above.
  • [NOT DONE] Balazs checks job states in EMI-ES
  • [NOT DONE] check what happens with GLUE2 when parsing arched XML
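A minimal sketch of how the adopted option 2 could look in a published ComputingActivity entry, in LDIF (the DN layout and values are illustrative only; the attribute names follow the GLUE2 LDAP rendering):

  dn: GLUE2ActivityID=urn:idfe:example-job-id,GLUE2ServiceID=urn:ogf:ComputingService:host.example.org:arex,o=glue
  objectClass: GLUE2ComputingActivity
  GLUE2ActivityID: urn:idfe:example-job-id
  GLUE2ComputingActivityState: emies:PROCESSING
  GLUE2ComputingActivityState: emiesattr:CLIENT-STAGEIN
  GLUE2ComputingActivityState: emiesattr:SERVER-PENDING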

Related Documents

Registrations

  • to EGIIS: done by the grid-infosys script
  • to EMIR: done by arched for services running in the container. Not decided yet for other services. Choices: emird
  • to ISIS

Tasks

  • [ONGOING] find out how registration is configured and performed for EMIR and ISIS
  • arched is being modified to support serviceIDs
  • emird is not in a good state; multiple endpoint registration is not possible. Ivan is fixing it.
  • Shiraz is verifying that multiple endpoints are accepted
  • Aleksandr doesn't like yet another binary

Indexes/Registries

It is clear that there will be a transition period during which EMIR and EGIIS will coexist.

EMIR

  • EMIR registration configuration must be as simple as possible: only one block, with validity and period values applying to all EMIR URLs and all endpoints, at cluster level.
  • By default, publish all possible endpoint information. The sysadmin can then disable this registration per endpoint.
  • if NO [registration/emir] block exists, DO NOT start the registration process

a sample configuration block:

  [registration/emir]
  emirurls= url1, url2 .... # list of urls separated by commas
  validity=                 # number. format must be checked
  period=                   # number. format must be checked
  # list of endpoints follow
  disablereg_gridftpjob=yes|no
  disablereg_xbes=yes|no
  disablereg_emies=yes|no
  disablereg_aris=yes|no
  ... eventually more disablereg ... 
  • Who performs the registration? arched can do it, but eventually emird can be used for services not running in arched.
  • arched can only register services running in the container; that is, ARIS and the gridftp interface are not registered.
  • emird needs configuration files generated by the infoproviders

EGIIS

  • Transition period: keep the registration-to-EGIIS configuration as it is, so as not to invalidate sysadmins' existing knowledge.
  • Future: change the registration to EGIIS to be as simple as EMIR's, and eventually phase it out.


Questions

  • should we have different startup scripts for EGIIS and ARIS? NO for the time being; this is too much work.
  • should we enable running it on a different port?

Tasks

Integration with other middlewares/infosys

  • The CE is not yet visible in the top-BDII.

Tasks

  • [ONGOING] Make our CE visible in a top-BDII, with no site-BDII service but some Site concept.
  1. [ON HOLD] restructure the tree with the suggestions from Laurence
  2. [NOT DONE] Send the package with the modifications to Ulf (managing the Finnish NGI CEs), who is waiting for a green light to go on with CE GLUE2 publishing and direct inclusion in the top-BDII.

Future: ERIS

  • a stand-alone storage element with se.pl

Infosys Documentation

Existing docs:

Tasks

  • [DONE] add information on GLUE2 to sysadmin guide
  • [ONGOING] Abstract submitted; write an article about the GLUE2 implementation status in ARC
  • [NOT DONE] review/create developer docs
    • [NOT DONE] update the infoproviders README

Other relevant info

Information indexes and bdii stuff for integration: https://www.egi.eu/indico/materialDisplay.py?contribId=1&materialId=slides&confId=654

Pictures of LDAP/XML trees

PDF: Media:Trees.pdf.tar.gz

VYM: Media:Trees.vym.tar.gz VYM is a mind-mapping drawing tool; it's available for all major distros. [2]

If anybody has better suggestions for easily drawing trees, please tell.

Notes that didn't fit anywhere else

  • Our glue12 publishing completely lacks services. See

https://bugzilla.nordugrid.org/show_bug.cgi?id=2581

  • grid-infosys uses config_parser_compat.sh to parse arc.conf. This file differs from config_parser.sh, which is used by the a-rex startup script. Both differ from the infosys ConfigCentral.pm parser, which is able to parse arc.conf, INI and XML configurations. Moreover, in the ldap-infosys dir in SVN there is some unknown ConfigParser.pm that is not called by anyone but seems to do the same things as the other scripts. Is there a way to unify this mess???

SE

  • write SE infoproviders for GLUE2 LDAP

Tasks (General)

  • [NOT DONE] logs must show whether the code is being run to generate XML or LDIF
  • [NOT DONE] ConfigCentral must output logs in such a way that it is clear whether it's parsing XML, INI or arc.conf configuration items