Feb 02 2010

Here’s a list of the pieces involved in moving data to/from PDSF/HPSS:

HSI (Hierarchical Storage Interface) command line interface to HPSS

Your ~/.netrc needs to be set up for auth. Then you can do things like:
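A minimal ~/.netrc entry looks roughly like this; the archive hostname and the password field are assumptions (NERSC issues an HPSS token rather than using your login password, if I recall), so check the NERSC HPSS docs for the specifics:

```
# ~/.netrc must be mode 600 or hsi will refuse to use it
machine archive.nersc.gov
  login ksb
  password <your-HPSS-token-here>
```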

$ hsi -q "ls -l"

More on the hsi command.
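Beyond listing, typical transfers use hsi’s “local : remote” argument order (the paths below are made up):

```shell
$ hsi "mkdir -p mydata"                      # create a directory in HPSS
$ hsi put myfile.data : mydata/myfile.data   # store: local file first
$ hsi get myfile.data : mydata/myfile.data   # retrieve: still local first
```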

CHOS (Change OS)

This is an early approach to VM-like environments, and it is still in use on PDSF.

You select the OS you want by putting a short string in your ~/.chos file and/or setting the CHOS env var, then running the chos command (which happens automatically for login shells on pdsf). You can start a new “OS shell” by running the chos command yourself if you want. For the STAR project, which has settled on 32-bit Scientific Linux 4, use “32sl44”. With no ~/.chos you get the value “default”, which gives you a default OS, while “local” gives you the base OS.

Things to investigate related to your chos env are:

$ cat ~/.chos
$ echo $CHOS
$ ls -l /proc/chos/link  # points to /home/os/<chos selected OS>
$ cat /etc/redhat-release
$ lsb_release -a
$ uname -a  # doesn’t seem to get fooled by chos

$ ls -l /chos/home/os/ # list the available choices for $CHOS
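Putting those together, switching a session over to the STAR environment looks roughly like this (assuming “32sl44” shows up in the /chos/home/os/ listing):

```shell
$ echo 32sl44 > ~/.chos    # persist the choice for future login shells
$ chos                     # start a new OS shell with that choice now
$ cat /etc/redhat-release  # should now report Scientific Linux
```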

Grid Proxy Certs

I know of two ways to do this:

Short-lived NERSC
For easy use within NERSC and/or if you don’t have or want to bother with a DOE Grid Cert.

$ module load globus
$ myproxy-logon -s nerscca.nersc.gov
Enter MyProxy pass phrase:  # <-- Enter NIM password!
A credential has been received for user ksb in /tmp/x509up_uXXXXX

That file (the cert in it) will then be good for 12 hours.  More details here.
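To see how much of that lifetime remains, grid-proxy-info works on the retrieved credential too (-timeleft prints the seconds remaining; point X509_USER_PROXY at the file if it isn’t in the default /tmp location):

```shell
$ module load globus
$ grid-proxy-info -timeleft  # seconds left on the proxy
$ grid-proxy-info            # subject, type, and expiry details
```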

Using Your DOE Grid Cert
On a grid submit host:

$ source /opt/osg/setup.sh  # Set up env
$ grid-proxy-init  # Initialize a new cert
$ grid-proxy-info  # Look at it
$ myproxy-init -s myproxy.nersc.gov # generate and put proxy cert on nersc server

Now on the NERSC machine:

$ myproxy-get-delegation -s myproxy.nersc.gov
$ grid-proxy-info  # Look at it


Normally you’d do a ‘module load’ command to set your env to pick up BeStMan’s srm commands, but that’s in the process of getting fixed. The workaround is to source the proper setup.sh file directly:

$ . /usr/local/pkg/OSG-1.2/setup.sh # See note about qsub and ~/.sge_request settings

srm-ls, srm-copy, etc. should now be on your PATH.

Here are some sample commands looking at HPSS files:

$ export X509_USER_PROXY=/path/to/my/cert # srm will use that env var
$ srm-ls "srm://pdsfsrm.nersc.gov:62443/srm/v2/server?SFN=/garchive.nersc.gov/path/to/file/of/interest.data" -storageinfo
$ srm-copy "srm://pdsfsrm.nersc.gov:62443/srm/v2/server?SFN=/garchive.nersc.gov/path/to/file/to/copy.data" file:////path/of/target/file.data -storageinfo

Using that X509_USER_PROXY env var will come in handy when this all needs to be wrapped up in a batch job script to run on some unknown batch node. Alternatively, I think the srm commands have options for identifying the cert file.
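A sketch of what that batch wrapping might look like. The staged proxy path, the SFN, and the target location are all placeholders, and I’m assuming the proxy gets copied somewhere the batch node can read, such as your home directory:

```shell
#!/bin/sh
# job.sh: hypothetical SGE job that pulls one file out of HPSS via SRM
. /usr/local/pkg/OSG-1.2/setup.sh           # puts the srm-* commands on PATH
export X509_USER_PROXY=$HOME/.batch-proxy   # proxy staged here before qsub
srm-copy "srm://pdsfsrm.nersc.gov:62443/srm/v2/server?SFN=/garchive.nersc.gov/path/to/input.data" \
    file:////tmp/input.data
```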


To submit a job into the SGE queue use the qsub command:

$ qsub job.sh

Then monitor it with one of the following:

$ qstat -u <username>
$ qstat | grep <username>

When your job is done you’ll get “job.sh.oXXXXXXX” and “job.sh.eXXXXXXX” files with stdout and stderr, respectively.

A couple of other tricks: the ‘qsh’ command will pop open an xterm logged into the batch node you would have gotten had you run qsub. This requires your DISPLAY var to be set. I found this troublesome, as when some of my jobs died they took out the xterm with them, along with any job output that might have helped me debug the problem. The ‘qlogin’ command is the same as qsh but doesn’t open a new xterm; it just uses your tty. (Though I discovered that qlogin doesn’t appear to properly set up the ~/.chos env.)

Put flags to qsub (and its kin qsh and qlogin) in your ~/.sge_request file. I learned I needed the following in that file: “-l 64bit=1”. This says: “only send my job to 64bit machines”.
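For reference, a ~/.sge_request along those lines; the wall-clock line is a common SGE option I’d consider adding, not a PDSF requirement:

```
# ~/.sge_request: default flags applied to qsub, qsh, and qlogin
-l 64bit=1         # only send my jobs to 64bit machines
-l h_rt=04:00:00   # hypothetical wall-clock limit
```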