This wiki is obsolete, see the NorduGrid web pages for up to date information.

ARC1/Install and Setup

From NorduGrid


NOTE: This page is out of date and is kept for purely historical reasons.

ARC1 installation on a system which already runs ARC0

Here we assume that you are already successfully running ARC0 (this should be the case for PGS sites) and that your host certificates are in the default location (/etc/grid-security).

Prerequisites:

  • If your Python version is older than 2.4, it will not be possible to run the Storage service
    • Note: RHEL4 only has Python 2.3
  • If your OpenSSL version is older than 0.9.7g, A-REX, CHARON and ECHO cannot be configured with TLS
    • Use http instead of https in that case
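The two checks above can be scripted. A minimal sketch, assuming a POSIX shell and GNU 'sort -V' for the version comparison (the version_ge helper is an illustration, not part of ARC1):

```shell
#!/bin/sh
# version_ge A B: succeeds if version A >= version B (relies on GNU "sort -V")
version_ge() {
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | tail -n1)" = "$1" ]
}

# Detected versions; fall back gracefully if python/openssl are absent
pyver=$(python -c 'import sys; print(".".join(map(str, sys.version_info[:2])))' 2>/dev/null || echo 0)
sslver=$(openssl version 2>/dev/null | awk '{print $2}')

version_ge "$pyver" 2.4 || \
    echo "Python $pyver is too old for the Storage service (need >= 2.4)"
version_ge "${sslver:-0}" 0.9.7g || \
    echo "OpenSSL ${sslver:-unknown} is too old for TLS (need >= 0.9.7g); use http instead of https"
```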

1. step - shutdown ARC0 cluster

/etc/init.d/gridftpd stop       #(optional)
/etc/init.d/grid-manager stop
/etc/init.d/grid-infosys stop   #(optional)

2. step - install ARC1 components and clients

yum install nordugrid-arc1*

or

apt-get install nordugrid-arc1*

The group installation does not install the nordugrid-arc1-plugins-globus package, which is necessary for staging in/out to/from an SE; you need to install that package separately:

 yum groupinstall "ARC1 Server"
 yum groupinstall "ARC1 Client"

Alternatively, download the appropriate packages and issue:

 rpm -Uvh nordugrid-arc1*.rpm
 dpkg -i nordugrid-arc1*.deb

If you downloaded the tarball, unpack it and cd into the created directory.

 tar -zxvf nordugrid-arc1-0.9.1-snapshot.tar.gz
 cd nordugrid-arc1-0.9.1

If you obtained the code from the Subversion repository, use the 'trunk' directory.

 cd trunk

Now configure the obtained code with

 ./autogen.sh
 ./configure --prefix=PLACE_TO_INSTALL_ARC1

Choose the installation prefix wisely, according to the requirements of your OS and your personal preferences; ARC1 should function properly from any location. If you omit the '--prefix' option, the installation goes into /usr/local by default. For some modules of ARC1 to work properly you may need to set the following environment variable after installation:

 export ARC_LOCATION=PLACE_TO_INSTALL_ARC

On some systems 'autogen.sh' may produce a few warnings. Ignore them as long as 'configure' passes without errors; in case of problems during configure or compilation, collect the messages and include them when reporting the problem. If the previous commands finish without errors, compile and install ARC1:

 make
 make install

On some systems you may need to use gmake instead of make.

In all three cases the prerequisites must be installed (some of them are needed only when installing from source):

 Mandatory (on client as well as server side):
   o GNU make, autotools (autoconf>=2.56) (automake>=1.8) (build)
   o C++ compiler and library (build)
   o libtool (build)
   o pkg-config (build)
   o gthread-2.0 version 2.4.7 or later (build, run)
   o glibmm-2.4 version 2.4.7 or later (build, run)
   o libxml-2.0 version 2.4.0 or later (build, run)
   o openssl version 0.9.7a or later (build, run)
   o e2fsprogs (build, run)
   o doxygen (build)
   o GNU gettext (build, run)
 Optional (mainly applicable on server side):
   o swig version 1.3.28 or later (build)
   o java sdk 1.4 or later for Java bindings (build, run)
   o python 2.4 or higher for Python bindings (build, run)
   o Grid Packaging Tools (GPT) (http://www.gridpackagingtools.org/) (build)
   o Globus Toolkit 4 (http://www.globus.org/) which contains (build, run)
     - Globus RLS client
     - Globus FTP client
     - Globus RSL
   o LHC File Catalog (LFC) (https://savannah.cern.ch/projects/jra1mdw/) (build, run)
   o CppUnit for unit testing (build)
   o Berkeley DB C++ interface (build, run)

Please note that, depending on your operating system distribution, you may need to install the development versions of the mentioned packages in order to build ARC1.
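Some of the run-time dependencies listed above can be verified with pkg-config. A sketch covering a few of them (module names as they appear in the list; adjust versions and names to your distribution):

```shell
#!/bin/sh
# check_dep MODULE MINVERSION: report whether a pkg-config module meets the minimum version
check_dep() {
    if pkg-config --atleast-version="$2" "$1" 2>/dev/null; then
        echo "$1: OK ($(pkg-config --modversion "$1"))"
    else
        echo "$1: missing or older than $2"
    fi
}

check_dep gthread-2.0 2.4.7
check_dep glibmm-2.4 2.4.7
check_dep libxml-2.0 2.4.0
check_dep openssl 0.9.7a
```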

3. step - configuring A-REX

  • In both service configuration files, check that the path to the modules (<ModuleManager><Path></Path></ModuleManager>) is set correctly:
<ModuleManager>
  <Path>XXX</Path> (typically /usr/lib/arc/ for installation from RPMs on 32-bit systems, or /usr/lib64/arc/ on 64-bit systems, ...)
</ModuleManager>

If you are using the pre-built packages there should be no need to change the path unless you are on a 64-bit platform.

  • Change 'localhost' to the fully qualified hostname of your cluster in both files
  • In the first file:
    • change 'nobody' (it occurs twice in the config) to the user specified in your arc.conf file under gridftpd/unixmap
    • set the absolute location of your charon_policy.xml in <charon:Location Type="file">XXX</charon:Location>. An example policy can be found in the $ARC_LOCATION/share/doc/nordugrid-arc1-server-0.9.1 directory (the location of the example config files may differ between distributions)
  • For the HOPI and STORAGE services:
    • Make sure that the ports 60000 and 50000 are open for incoming connections in your firewall
    • set the PYTHONPATH environment variable:
 export PYTHONPATH=$PYTHONPATH:/usr/local/lib/python2.4/site-packages/:/usr/lib/python2.4/site-packages/  # (adjust the Python version to yours)
    • Create a directory /tmp/httpd and create some random file(s) there - this is your storage element
 mkdir /tmp/httpd
 echo "<html><head><title>welcome to hopi</title></head><body><h1>Welcome to Hopi</h1></body></html>" > /tmp/httpd/index.html
 chmod -R go+r /tmp/httpd/index.html
 chmod 755 /tmp/httpd
  • When ARC1 was installed, a file named arc_arex.conf was created in the /etc directory. In this file, replace the [common] block with the [common] block from your old arc.conf, and replace the [cluster] and [queue/fork] blocks with the corresponding blocks from your arc.conf
    • Note: Do not remove the existing [grid-manager] block!
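Carrying the blocks over can be done with a small helper, sketched here under the assumption that arc.conf uses ini-style [block] headers (the extract_block function is an illustration, not an ARC1 tool):

```shell
#!/bin/sh
# extract_block '[name]' FILE: print one ini-style block (its header line plus body)
extract_block() {
    awk -v want="$1" '/^\[/ { f = ($0 == want) } f' "$2"
}

# Example: pull the blocks to be carried over into /etc/arc_arex.conf
# extract_block '[common]'     /etc/arc.conf
# extract_block '[cluster]'    /etc/arc.conf
# extract_block '[queue/fork]' /etc/arc.conf
```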

4. step - start HED with the services

On some systems you will have to export the LD_LIBRARY_PATH environment variable before starting arched:

arched -c arc1_services-arex-charon-echo.xml
arched -c arc1_services-storage-hopi.xml
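A quick way to confirm that the daemons actually came up is to look for the arched process (a sketch; the log path is the default mentioned in the comments at the end of this page):

```shell
#!/bin/sh
# Report whether an arched process is running after the start commands above
check_arched() {
    if pgrep -x arched >/dev/null 2>&1; then
        echo "arched is running"
    else
        echo "arched is NOT running - check /var/log/arched*.log for errors"
    fi
}

check_arched
```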

5. step - if everything went well, you should now be running the A-REX, CHARON, HOPI, ECHO and STORAGE services. Their endpoints should be:

A-REX: https://example.org:60000/arex
CHARON: https://example.org:60000/Charon
ECHO: https://example.org:60000/Echo
HOPI: http://example.org:50000/hopi/
STORAGE: http://example.org:50000/Bartender

The PGS sites endpoints can be found here.

6. step - test your services
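As a first smoke test, you can probe the endpoints listed above with curl. A sketch, assuming your host replaces example.org; -k is needed because the host certificate will usually not be in curl's trust store:

```shell
#!/bin/sh
# probe URL: report whether a service endpoint answers at all;
# the timeouts keep the check fast when a service is down
probe() {
    if curl -ks --connect-timeout 3 -m 10 -o /dev/null "$1"; then
        echo "$1: reachable"
    else
        echo "$1: NOT reachable"
    fi
}

probe https://example.org:60000/Echo
probe http://example.org:50000/hopi/
```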

Comments:

  • There might be some backend scripts warnings visible in your /var/log/arched*.log files, but they should be harmless.
  • If you are trying to run A-REX on a machine with an OpenSSL version older than 0.9.7g, you will have to configure A-REX, CHARON and ECHO without the TLS layer (and change 'https' to 'http'). Take this config as an example.
  • If staging in/out does not work with your A-REX service, then you do not have the proper Globus packages installed or your OpenSSL version is too old.