This wiki is obsolete, see the NorduGrid web pages for up to date information.
NOX/SysAdminManual
Note: Nox 1.0.0 was released on 30 November 2009.
This release integrates and distributes the results of three years of ARC development carried out within the KnowARC project. The release contains the new ARC components developed in the arc1/trunk branch of the ARC code repository.
Get it
Source
SVN tag
Source tarball
Binaries
Binaries for Linux, MS Windows and Mac OS:
- http://download.nordugrid.org/software/nordugrid-arc-nox/releases/1.0.0/
- For Mac OS X, check also the instructions
Solaris
Please use the source code tarball. Solaris-specific instructions can be found in the INSTALL.Solaris file in the top-level directory.
Installation
Dependencies
The core part of the middleware is written in C/C++. Building the software from source or installing a pre-compiled binary requires different external packages; furthermore, the client and server packages have different dependencies. The explicit requirements are listed below:
Mandatory dependencies (and their versions) have been chosen carefully and should for the most part be available as part of the operating system distribution. This is especially true for the Linux platforms. In particular, it should not be necessary to install a special version of a software package on the various platforms: the versions distributed as part of the operating system should be sufficient. In some rare cases this may result in decreased functionality, but this should be weighed against having special versions of software components that are already present as part of the operating system or provided by other community-recognised third-party vendors.
Mandatory
o GNU make, autotools (autoconf>=2.56) (automake>=1.8) (build)
o gettext (build)
o e2fsprogs (build, run)
o C++ compiler and library (build)
o libtool (build)
o pkg-config (build)
o gthread-2.0 version 2.4.7 or later (build, run)
o glibmm-2.4 version 2.4.7 or later (build, run)
o glib2
o libxml-2.0 version 2.4.0 or later (build, run)
o openssl version 0.9.7a or later (build, run)
o doxygen (build)
o GNU gettext (build, run)
Optional
o GNU time (run) (A-REX)
o Perl, libxml-simple-perl package (run) (A-REX)
o gsoap 2.7.2 (build, run) (HED)
o swig version 1.3.28 or later (build) (Chelonia, bindings)
o python 2.4 or higher (build, run) (Chelonia, bindings)
o Berkeley DB C++ interface (build, run) (ISIS)
o xmlsec1 1.2.4 or higher (build, run) (Security)
o LHC File Catalog (build, run) (LFC DMC)
o VOMS (run) (LFC DMC)
o open-ldap (build, run) (LDAP DMC)
o Grid Packaging Tools (GPT) (build) (arclib)
o globus-common 4 (build, run) (arclib)
o globus-gssapi-gsi 4 (build, run) (arclib)
o globus-rls-client 4 (build, run) (arclib)
o globus-ftp-client 4 (build, run) (arclib)
o globus-ftp-control 4 (build, run) (arclib)
o globus-io 4 (build, run) (arclib)
o globus-openssl (build, run) (arclib)
o CppUnit for unit testing (build)
o librdf-perl (run) (Janitor)
o Log4perl (run) (Janitor)
o wget (run) (Janitor)
Please note that, depending on the operating system, development versions of the above-mentioned packages may be required as well.
Notes
- If your Python version is older than 2.4, it will not be possible to run the Storage service
- Note: RHEL4 only has Python 2.3
- If your OpenSSL version is older than 0.9.7g, A-REX, CHARON and ECHO cannot be configured with TLS; use http instead of https (see the version check below)
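You can check which versions your system provides before installing (standard commands; the versions in the comments are only the thresholds mentioned above):
python -V         # 2.4 or newer is needed for the Storage service
openssl version   # 0.9.7g or newer is needed to configure A-REX, CHARON and ECHO with TLS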
Installation from source
After downloading the tarball, unpack it and cd into the created directory
tar -zxvf nordugrid-arc-nox-1.0.0.tar.gz
cd nordugrid-arc-nox-1.0.0
If the code was obtained from the Subversion repository, just cd into the trunk directory. Then run the autogen script
./autogen.sh
and configure the code
./configure --prefix=PLACE_TO_INSTALL_ARC
Choose the installation prefix wisely, according to the requirements of your OS and/or your personal preferences. ARC should function properly from any location. By default the installation goes into /usr/local. For some modules of ARC to work properly, one may need to set the ARC_LOCATION environment variable after installation
export ARC_LOCATION=PLACE_TO_INSTALL_ARC
The configuration step allows you to specify in detail which components of ARC are to be built. Please check ./configure --help for details.
On some systems autogen.sh may produce warnings. Ignore them as long as configure passes without errors. In case of problems during configure or compilation, collect the messages and include them when reporting the problem at http://bugzilla.nordugrid.org. If the previous commands finish without errors, compile the code
make
optionally, check the code via the implemented unit tests
make check
and install ARC
make install
On some systems gmake may be needed instead of make. Depending on the chosen installation location, the last command may need to be run as root.
Non Linux platforms
OS-specific instructions on how to build ARC Nox on Windows, Mac OS X and Solaris can be found in the OS-specific INSTALL files in the top level of the source code directory.
Installation from Binaries
Windows
For Windows, a native installer containing all the necessary run-time dependencies is available
Mac OS X
Mac users are welcome to try out
- http://download.nordugrid.org/software/nordugrid-arc-nox/releases/1.0.0/macosx/nordugrid-nox-1.0.0-leopard.mpkg.zip
- http://download.nordugrid.org/software/nordugrid-arc-nox/releases/1.0.0/macosx/nordugrid-nox-1.0.0-snow-leopard.mpkg.zip
Linux
The Linux binaries (relocatable RPMs and DEB packages) are divided into the following modules:
o nordugrid-arc-nox - Shared libraries
o nordugrid-arc-nox-hed - Hosting Environment Daemon (HED)
o nordugrid-arc-nox-arex - A-REX service
o nordugrid-arc-nox-client - Client programs
o nordugrid-arc-nox-dev - Development files
o nordugrid-arc-nox-doc - Documentation
o nordugrid-arc-nox-plugins-base - Base plugins
o nordugrid-arc-nox-plugins-globus - Globus dependent plugins
o nordugrid-arc-nox-python - Python wrapper and the Chelonia storage system
o nordugrid-arc-nox-isis - Information system service
o nordugrid-arc-nox-hopi - Simple HTTP service
o nordugrid-arc-nox-janitor - A-REX plugin for dynamic RTE management
o nordugrid-arc-nox-charon - Policy decision service
- Yum and apt repositories are available; see the repository configuration information
- Code tarballs, source RPMs and binaries can be downloaded from:
Repositories
The preferred way to install ARC is via the NorduGrid yum or apt repositories; in this case the installation is simpler. See the configuration instructions at:
http://download.nordugrid.org/repos.html
Yum
The group installation indicated on the repository setup page is not yet available for the NOX release, but the installation is still simple. On Red Hat and Fedora one can use yum and specify a list of packages, or simply install everything:
yum install nordugrid-arc-nox nordugrid-arc-nox-hed nordugrid-arc-nox-arex nordugrid-arc-nox-client nordugrid-arc-nox-isis nordugrid-arc-nox-charon nordugrid-arc-nox-hopi
or
yum install nordugrid-arc-nox*
Apt
The commands for the Debian/Ubuntu tool apt-get are similar:
apt-get install nordugrid-arc-nox nordugrid-arc-nox-hed nordugrid-arc-nox-arex nordugrid-arc-nox-client nordugrid-arc-nox-isis nordugrid-arc-nox-charon nordugrid-arc-nox-hopi
or
apt-get install nordugrid-arc-nox*
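Whichever route was taken, the installed set can be verified with the standard package tools (a quick check, nothing ARC-specific):
rpm -qa 'nordugrid-arc-nox*'    # RedHat/Fedora
dpkg -l 'nordugrid-arc-nox*'    # Debian/Ubuntu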
Setup and Usage
Examples
Using the following examples you can quickly set up your freshly installed ARC Nox. Most of the examples rely on XML-based configuration. When the configuration is provided via an XML file, arched is started as follows
arched -c <xml_file>
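As a quick sanity check, one can point arched at one of the example configurations shipped with the release and verify that the daemon came up (a sketch only; the config file name and its location under the installation prefix are assumptions, adjust them to your setup):
arched -c $ARC_LOCATION/share/doc/arc/echo.xml   # example config location is an assumption
ps -ef | grep arched                             # the daemon should now appear in the process list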
ECHO
Echo is a simple test service that comes as part of HED. It is meant mostly for testing purposes. Two clients come with Nox: arcecho and perftest. For more info, please see the corresponding man pages; a quick test is sketched below.
ECHO service configuration
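Once an arched hosting the Echo service is running, a quick round-trip test can be done with arcecho (a sketch; the endpoint is the example one used later in this manual, see man arcecho for the exact options):
arcecho https://example.org:60000/Echo 'hello ARC'   # the service should return the same string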
ISIS
ISIS stands for information and indexing service. It comes with p2p capabilities and provides the information backbone of ARC Nox.
ISIS service configuration
CHARON
CHARON is a policy decision service. It accepts SOAP queries and returns authorization decisions based on local policy.
charon service example policy configuration
A-REX
A-REX is the flagship service of the WS-based next-generation ARC middleware. A-REX implements a general-purpose Computing Element (CE) offering standards-compliant interfaces. It supports multiple LRMSs. Here are examples for fork, pbs/torque and condor. The fourth example configuration enables A-REX with Janitor, the ARC plugin for dynamic RTE management.
A-REX with fork LRMS
A-REX with fork LRMS
A-REX with pbs LRMS
A-REX with torque LRMS
Chelonia storage server
The Chelonia storage cloud is a distributed system for storing replicated files on several storage nodes and managing them in a global namespace. The cloud exhibits self-healing capability because the system has a built-in automatic replication mechanism which ensures that a file managed by Chelonia always has the requested number of copies even if a storage node is lost or a replica becomes corrupt.
Chelonia service with centralized A-Hash database. Corresponding Chelonia profile
This example uses ini-style configuration: there is one ini file and one .xml file which serves as a profile. With ini-style configuration, arched is started with the -i option, i.e.
arched -i <inifile>
Configuring A-REX
- Download two configuration files:
- A-REX, ECHO and CHARON service
- A-REX, ECHO and CHARON service (when OpenSSL v. 0.9.7g or older is installed)
- HOPI and STORAGE service.
- In all files, check that the path to the modules (<ModuleManager><Path></Path></ModuleManager>) is set correctly:
<ModuleManager>
  <Path>XXX</Path>
</ModuleManager>
(The default is /usr/local/lib/arc/ when PREFIX was not set; otherwise it should be @PREFIX@/lib/arc/, or @PREFIX@/lib64/arc/ for a 64-bit installation, ...)
- In both files, change 'localhost' to the fully qualified hostname of your cluster
- In the configuration file related to A-REX:
- change 'nobody' (it appears twice in the config) to the user account that should be used for Unix mapping
- set in <charon:Location Type="file">XXX</charon:Location> the absolute location of your charon_policy.xml. The example policy can be found in the $ARC_LOCATION/share/doc/arc directory (the location of the example config files may differ between distributions)
- For the HOPI and STORAGE services:
- Make sure that the ports 60000 and 50000 are open for incoming connections in your firewall
- set the PYTHONPATH env variable:
export PYTHONPATH=$PYTHONPATH:/usr/local/lib/python2.4/site-packages/:/usr/lib/python2.4/site-packages/ # (set python version to yours )
- Create a directory /tmp/httpd and create some random file(s) there - this is your storage element
mkdir /tmp/httpd
echo "<html><head><title>welcome to hopi</title></head><body><h1>Welcome to Hopi</h1></body></html>" > /tmp/httpd/index.html
chmod -R go+r /tmp/httpd/index.html
chmod 755 /tmp/httpd
- After you have installed ARC1, a file named arc_arex.conf was created in the /etc directory. In this file update the [common] block with the proper information related to your LRMS; similarly for [cluster] and [queue/fork] (an illustrative sketch is given after this list). All configuration options can be found here.
- Note: Do not remove or change the existing [grid-manager] block!
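As an illustration only, the blocks mentioned above could look roughly like the following for a simple fork-based setup (the option names shown are the commonly used arc.conf ones and are an assumption here; the configuration options reference linked above is authoritative):
[common]
hostname="example.org"
lrms="fork"
[cluster]
cluster_alias="NOX test cluster"
comment="ARC Nox test installation"
[queue/fork]
name="fork"
comment="single fork queue"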
Start HED with the services
arched -c arex-charon-echo.xml
arched -c storage-hopi.xml
Services endpoints
If everything went well you should now be running the A-REX, CHARON, HOPI, ECHO and STORAGE services. Their endpoints should be:
A-REX: https://example.org:60000/arex
CHARON: https://example.org:60000/Charon
ECHO: https://example.org:60000/Echo
HOPI: http://example.org:50000/hopi/
STORAGE: http://example.org:50000/Bartender
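A quick way to verify that the endpoints respond at all is a plain connectivity check with curl (-k skips certificate verification, acceptable for this first test only; endpoints as listed above):
curl -k https://example.org:60000/Echo   # any HTTP/SOAP response means arched is listening on the HTTPS port
curl http://example.org:50000/hopi/      # HOPI should return the index.html created earlier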
Clients
Job submission and management
- generate your proxy certificate with the arcproxy utility (see --help for usage). The following certificate/proxy environment setup may be useful if the command-line options are not preferred:
export X509_USER_CERT=$HOME/.globus/usercert.pem
export X509_USER_KEY=$HOME/.globus/userkey.pem
export X509_USER_PROXY=`mktemp /tmp/x509up.XXXXXX`
- in any case make sure that the X509_USER_PROXY env variable points to the location of your proxy certificate. grid-proxy-init requires the following setting:
export X509_USER_PROXY=/tmp/x509up.u`id -u`
- create a job description in JSDL form (e.g. http://vls.grid.upjs.sk/testing/job_descriptions/get_hostname.html) and save it as .xml; a minimal hand-written sketch is given after this list. Please note that the current release requires that the executable (if locally staged) is explicitly listed as an input file.
- submit a job to your A-REX service (you have to know your A-REX endpoint) using arcsub command (see man arcsub for more details)
lynx --dump http://vls.grid.upjs.sk/testing/job_descriptions/get_hostname.html | sed '1,5d' > your_job.xml
arcsub -c ARC1:https://example.org:60000/arex your_job.xml # (a jobID will be returned)
- monitor the status of your job with arcstat (see man arcstat for more details)
arcstat -a
arcstat jobID
- get the results of a finished job using the arcget command (see man arcget for more details)
arcget -a
arcget jobID
- clean your job on a cluster using arcclean command (see man arcclean for more details)
arcclean jobID
- kill your running job using arckill command (see man arckill for more details)
arckill jobID
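For reference, instead of downloading the example, an equivalent minimal JSDL description can be written by hand (a sketch only: element names follow the standard JSDL and JSDL-POSIX schemas; since /bin/hostname is not staged from the client, it does not need to be listed as an input file):
cat > your_job.xml <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<JobDefinition xmlns="http://schemas.ggf.org/jsdl/2005/11/jsdl">
  <JobDescription>
    <JobIdentification>
      <JobName>get_hostname</JobName>
    </JobIdentification>
    <Application>
      <POSIXApplication xmlns="http://schemas.ggf.org/jsdl/2005/11/jsdl-posix">
        <Executable>/bin/hostname</Executable>
        <Output>stdout.txt</Output>
      </POSIXApplication>
    </Application>
  </JobDescription>
</JobDefinition>
EOF
arcsub -c ARC1:https://example.org:60000/arex your_job.xml   # submit it as shown above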
Storage service
- To be added
HOPI service (simple HTTP server)
- point your favourite web browser to your HOPI endpoint URL (e.g. http://example.org:50000/httpd). You should see the content of the /tmp/httpd directory.
- use wget or curl to PUT and GET files to/from your HOPI service (again, use the same endpoint URL here), as sketched below
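For example, with curl (-T uploads a file with an HTTP PUT, a plain URL fetch is a GET; the file name below is arbitrary, the endpoint is the example one from above):
echo "hello hopi" > testfile.txt
curl -T testfile.txt http://example.org:50000/hopi/testfile.txt   # upload via HTTP PUT
curl http://example.org:50000/hopi/testfile.txt                   # fetch it back via HTTP GET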
CHARON service
- To be added
Notes
- Certificates. To test most of the functionality of ARC you need a valid X509 host certificate. If you don't have one already, you can generate one using the KnowARC instant certificate authority (CA) running at https://vls.grid.upjs.sk/CA. You may find this short video demonstrating the use of the instant CA useful; it shows the generation of certificates for both user and server using the web interface.
- LRMS. To test the A-REX service you have to set up one of the supported Local Resource Management Systems (PBS/Torque, SGE, Condor, fork, ...) and have your host certificates in the default location (/etc/grid-security).