This release integrates and distributes the results of three years of ARC development done within the KnowARC project. The release contains the new ARC components developed in the arc1/trunk branch of the ARC code repository.
Current status: RC7 was released on 25 November
- 1 Get it
- 2 Installation
- 3 Setup and Usage
- 4 Notes
Mac OS X
Instructions are here: http://wiki.nordugrid.org/index.php/NOX/MacOSX
Packages are here:
WARNING: Installing this package copies several libraries into /opt/local, possibly overwriting existing files there, so if you have ever used MacPorts, this could break things!
The Leopard version for some reason installs the Python libraries and services to /Library/Python/2.5. Both versions put a file called a-rex into /Library/LaunchDaemons - I don't know what this file is.
These are the MPKG files created by MacPorts (in ZIP archive):
- for Mac OS X Leopard (tested with 10.5.8): nordugrid-nox-1.0.0-rc7-leopard.mpkg.zip (50 MB)
- for Mac OS X Snow Leopard (tested with 10.6.2): nordugrid-nox-1.0.0-rc7.mpkg.zip (40 MB)
Solaris
Please use the source code tarball. Solaris-specific instructions can be found in the README.Solaris file in the top-level directory.
The core part of the middleware is written in C/C++. Building the software from source or installing a pre-compiled binary requires different external packages; furthermore, the client and server packages have different dependencies. The explicit requirements are listed below:
Mandatory dependencies (and their versions) have been chosen carefully and should for the most part be available as part of the operating system distribution. This is especially true for the Linux platforms; in particular, it should not be necessary to install a special version of a software package. The versions distributed as part of the operating system should be sufficient. In some rare cases this may result in decreased functionality, but this should be weighed against having to maintain special versions of software components that are already present as part of the operating system or provided by other community-recognised third-party vendors.
o GNU make, autotools (autoconf>=2.56, automake>=1.8) (build)
o gettext (build)
o e2fsprogs (build, run)
o C++ compiler and library (build)
o libtool (build)
o pkg-config (build)
o gthread-2.0 version 2.4.7 or later (build, run)
o glibmm-2.4 version 2.4.7 or later (build, run)
o glib2
o libxml-2.0 version 2.4.0 or later (build, run)
o openssl version 0.9.7a or later (build, run)
o doxygen (build)
o GNU gettext (build, run)
o GNU time (run) (A-REX)
o Perl, libxml-simple-perl package (run) (A-REX)
o gsoap 2.7.2 (build, run) (HED)
o swig version 1.3.28 or later (build) (Chelonia, bindings)
o python 2.4 or higher (build, run) (Chelonia, bindings)
o Berkeley DB C++ interface (build, run) (ISIS)
o xmlsec1 1.2.4 or higher (build, run) (Security)
o LHC File Catalog (build, run) (LFC DMC)
o VOMS (run) (LFC DMC)
o open-ldap (build, run) (LDAP DMC)
o Grid Packaging Tools (GPT) (build) (arclib)
o globus-common 4 (build, run) (arclib)
o globus-gssapi-gsi 4 (build, run) (arclib)
o globus-rls-client 4 (build, run) (arclib)
o globus-ftp-client 4 (build, run) (arclib)
o globus-ftp-control 4 (build, run) (arclib)
o globus-io 4 (build, run) (arclib)
o globus-openssl (build, run) (arclib)
o CppUnit for unit testing (build)
o librdf-perl (run) (Janitor)
o Log4perl (run) (Janitor)
o wget (run) (Janitor)
Please note that, depending on the operating system, development versions of the above-mentioned packages may be required as well.
- If you have a Python version older than 2.4, it will not be possible to run the Storage service
- Note: RHEL4 only has Python 2.3
- If you have an OpenSSL version older than 0.9.7g, A-REX, CHARON and ECHO cannot be configured with TLS
- Use http instead of https in that case
Installation from source
After downloading the tarball, unpack it and cd into the created directory:
tar -zxvf nordugrid-arc-nox-1.0.0.tar.gz
cd nordugrid-arc-nox-1.0.0
If the code was obtained from the Subversion repository, just cd into the trunk directory. Then run the autogen script
and configure the code
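The two steps above typically amount to the following commands (a sketch assuming you are in the source directory; the prefix shown is only an example):

```shell
# generate the configure script from the autotools templates
./autogen.sh
# configure the build; pick a prefix suiting your system
./configure --prefix=/usr/local
```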
Choose the installation prefix wisely, according to the requirements of your OS and/or personal preferences. ARC should function properly from any location. By default, installation goes into /usr/local. For some modules of ARC to work properly, one may need to set the ARC_LOCATION environment variable after installation.
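For example, the variable can be set as follows (the path is the default prefix; substitute your actual installation prefix):

```shell
# point ARC_LOCATION at the installation prefix chosen at configure time
export ARC_LOCATION=/usr/local
```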
The configuration step allows you to specify in great detail which components of ARC are to be built. Please check ./configure --help for details.
On some systems autogen.sh may produce warnings. Ignore them as long as configure passes without errors. In case of problems during configure or compilation, collect the messages and include them when reporting problems at http://bugzilla.nordugrid.org. If the previous commands finish without errors, compile the code
optionally, check the code via the implemented unit tests,
and install ARC
On some systems, gmake may be needed instead of make. Depending on the chosen installation location, the last command may need to be run as root.
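Put together, the compile, test and install steps above are typically the standard autotools sequence:

```shell
make           # compile the code (use gmake instead on some systems)
make check     # optional: run the implemented unit tests
make install   # may need to be run as root, depending on the prefix
```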
Non-Linux platforms
OS-specific instructions on how to build ARC Nox on Windows, Mac OS X and Solaris can be found in the OS-specific READMEs in the top level of the source code directory.
Installation from Binaries
For Windows, a native installer coming with all the necessary run-time dependencies is available
Mac OS X
Mac users are welcome to try out
The Linux binaries (relocatable RPMs and DEB packages) are divided into the following modules:
o nordugrid-arc-nox - Shared libraries
o nordugrid-arc-nox-hed - Hosting Environment Daemon (HED)
o nordugrid-arc-nox-arex - A-REX service
o nordugrid-arc-nox-client - Client programs
o nordugrid-arc-nox-dev - Development files
o nordugrid-arc-nox-doc - Documentation
o nordugrid-arc-nox-plugins-base - Base plugins
o nordugrid-arc-nox-plugins-globus - Globus dependent plugins
o nordugrid-arc-nox-python - Python wrapper and the Chelonia storage system
o nordugrid-arc-nox-isis - Information system service
o nordugrid-arc-nox-hopi - Simple http service
o nordugrid-arc-nox-janitor - A-REX plugin for Dynamic RTE management
o nordugrid-arc-nox-charon - Policy decision service
The Linux distribution tarballs, source RPMs and binaries can be downloaded from
An alternative way to install ARC is via the NorduGrid yum or apt repositories, in which case installation is simpler. After correctly configuring the yum repository at
The group installation indicated on the repository setup page is not yet available for the NOX release, but installation is still simple. On RedHat and Fedora one can use yum and specify a list of packages, or simply install all:
yum install nordugrid-arc-nox nordugrid-arc-nox-hed nordugrid-arc-nox-arex nordugrid-arc-nox-client nordugrid-arc-nox-isis nordugrid-arc-nox-charon nordugrid-arc-nox-hopi
yum install nordugrid-arc-nox*
The commands for the Debian/Ubuntu tool apt-get are similar:
apt-get install nordugrid-arc-nox nordugrid-arc-nox-hed nordugrid-arc-nox-arex nordugrid-arc-nox-client nordugrid-arc-nox-isis nordugrid-arc-nox-charon nordugrid-arc-nox-hopi
apt-get install nordugrid-arc-nox*
Setup and Usage
- Download two configuration files:
- In all files, check that the path to modules (<ModuleManager><Path></Path></ModuleManager>) is correct:
<ModuleManager>
    <Path>XXX</Path>
</ModuleManager>
(the default is /usr/local/lib/arc/ when no PREFIX was set; otherwise it shall be @PREFIX@/lib/arc/, or @PREFIX@/lib64/arc/ for a 64-bit installation, ...)
- Change 'localhost' to the fully qualified hostname of your cluster in both files
- In config file related to A-REX:
- change 'nobody' (it appears twice in the config) to the user which shall be used for unix account mapping
- set in <charon:Location Type="file">XXX</charon:Location> the absolute location of your charon_policy.xml. An example policy can be found in the $ARC_LOCATION/share/doc/arc directory (the location of example config files may differ between distributions)
- For the HOPI and STORAGE services:
- Make sure that the ports 60000 and 50000 are open for incoming connections in your firewall
- set the PYTHONPATH env variable:
export PYTHONPATH=$PYTHONPATH:/usr/local/lib/python2.4/site-packages/:/usr/lib/python2.4/site-packages/ # (set python version to yours )
- Create a directory /tmp/httpd and create some random file(s) there - this is your storage element
mkdir /tmp/httpd
echo "<html><head><title>welcome to hopi</title></head><body><h1>Welcome to Hopi</h1></body></html>" > /tmp/httpd/index.html
chmod -R go+r /tmp/httpd/index.html
chmod 755 /tmp/httpd
- After you have installed ARC1, a file named arc_arex.conf was created in the /etc directory. In this file, update the [common] block with proper information related to your LRMS; similarly for [cluster] and [queue/fork]. All configuration options can be found here.
- Note: Do not remove or change the existing [grid-manager] block!
Start HED with the services
arched -c arex-charon-echo.xml
arched -c storage-hopi.xml
If everything went well, you should now be running the A-REX, CHARON, HOPI, ECHO and STORAGE services. Their endpoints should be:
A-REX: https://example.org:60000/arex
CHARON: https://example.org:60000/Charon
ECHO: https://example.org:60000/Echo
HOPI: http://example.org:50000/hopi/
STORAGE: http://example.org:50000/Bartender
Job submission and management
- generate your proxy certificate with the arcproxy utility (see --help for usage). The following certificate/proxy environment setup may be useful if command-line options are not preferred:
export X509_USER_CERT=$HOME/.globus/usercert.pem
export X509_USER_KEY=$HOME/.globus/userkey.pem
export X509_USER_PROXY=`mktemp /tmp/x509up.XXXXXX`
- in any case, make sure that the X509_USER_PROXY environment variable points to the location of your proxy certificate. grid-proxy-init requires the following setting:
export X509_USER_PROXY=/tmp/x509up.u`id -u`
- create a job description in JSDL form (e.g. http://vls.grid.upjs.sk/testing/job_descriptions/get_hostname.html) and save it as an .xml file. Please note that the current release requires that the executable (if locally staged) is explicitly listed as an input file.
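For reference, a minimal get-hostname description of the kind linked above can be sketched as follows (a hand-written example following the GGF JSDL schema, not the exact file from the link; /bin/hostname is pre-installed on the cluster, so no input file staging is needed here):

```shell
# write a minimal JSDL job description to your_job.xml
cat > your_job.xml <<'EOF'
<JobDefinition xmlns="http://schemas.ggf.org/jsdl/2005/11/jsdl">
  <JobDescription>
    <Application>
      <POSIXApplication xmlns="http://schemas.ggf.org/jsdl/2005/11/jsdl-posix">
        <Executable>/bin/hostname</Executable>
        <Output>stdout.txt</Output>
      </POSIXApplication>
    </Application>
  </JobDescription>
</JobDefinition>
EOF
```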
- submit a job to your A-REX service (you have to know your A-REX endpoint) using arcsub command (see man arcsub for more details)
lynx --dump http://vls.grid.upjs.sk/testing/job_descriptions/get_hostname.html | sed '1,5d' > your_job.xml
arcsub -c ARC1:https://example.org:60000/arex your_job.xml # (a jobID will be returned)
- monitor the status of your job with arcstat (see man arcstat for more details)
arcstat -a
arcstat jobID
- get the results of finished job using arcget command (see man arcget for more details)
arcget -a
arcget jobID
- clean your job on a cluster using arcclean command (see man arcclean for more details)
- kill your running job using arckill command (see man arckill for more details)
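The two management commands above take the jobID returned by arcsub; for example (jobID is a placeholder for your actual job identifier):

```shell
arcclean jobID   # remove a finished job from the cluster
arckill jobID    # cancel a running job
```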
- To be added
HOPI service (simple HTTP server)
- point your favourite web browser to your HOPI endpoint URL (e.g. http://example.org:50000/httpd). You should see the content of the /tmp/httpd directory.
- use wget or curl to PUT or GET files to/from your HOPI service (again, use the same endpoint URL here)
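For example (the endpoint URL is the example one used above, and the file name is arbitrary; adjust host, port and path to your setup):

```shell
# upload a local file to the Hopi service with HTTP PUT
curl -T localfile.txt http://example.org:50000/httpd/localfile.txt
# download it again with HTTP GET
wget http://example.org:50000/httpd/localfile.txt
```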
- To be added
- Certificates. To test most of the functionality of ARC, you need a valid X509 host certificate. If you don't have one already, you can generate one using the KnowARC instant certificate authority (CA) running at https://vls.grid.upjs.sk/CA. You may find useful this short video demonstrating the use of the instant CA; it presents the generation of certificates for both user and server using the web interface.
- LRMS. To test the A-REX service, you have to set up one of the supported Local Resource Management Systems (PBS/Torque, SGE, Condor, fork, ...) and have your host certificates in the default location (/etc/grid-security).