Here’s a list of the pieces involved in moving data to/from PDSF/HPSS:
HSI (Hierarchical Storage Interface), the command line interface to HPSS.
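A few sample hsi commands, sketched with illustrative paths (hsi’s get/put take the local name first, then a colon, then the HPSS name):
$ hsi ls -l /path/in/hpss                        # list what’s archived
$ hsi put local.data : /path/in/hpss/local.data  # archive a local file
$ hsi get local.data : /path/in/hpss/local.data  # pull it back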
CHOS (Change OS)
You set the OS you want by putting a short string in your ~/.chos file and/or setting the CHOS env var, then running the chos command (which happens automatically for login shells on pdsf). You can start a new “OS shell” by running the chos command yourself if you want. For the STAR project, which has settled on 32bit Scientific Linux 4, use “32sl44”. Having no ~/.chos file is equivalent to a value of “default”, which gives you the default OS, while “local” gives you the base OS.
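For example, to select the STAR OS (restating the above as commands):
$ echo 32sl44 > ~/.chos   # picked up by future login shells
$ chos                    # or start an OS shell in the current session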
Things to investigate related to your chos env are:
$ echo $CHOS
$ ls -l /proc/chos/link # points to /home/os/<chos selected OS>
$ cat /etc/redhat-release
$ lsb_release -a
$ uname -a # doesn’t seem to get fooled by chos
$ ls -l /chos/home/os/ # list the available choices for $CHOS
Grid Proxy Certs
Short-Lived NERSC Certs
$ myproxy-logon -s nerscca.nersc.gov
Enter MyProxy pass phrase: # <-- Enter NIM password!
A credential has been received for user ksb in /tmp/x509up_uXXXXX
That file (the cert in it) will then be good for 12 hours.
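The grid tools locate that file through the X509_USER_PROXY env var (used again below), so you can point them at it explicitly; a sketch reusing the path above:
$ export X509_USER_PROXY=/tmp/x509up_uXXXXX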
Using Your DOE Grid Cert
$ grid-proxy-init # Initialize a new cert
$ grid-proxy-info # Look at it
$ myproxy-init -s myproxy.nersc.gov # generate and put proxy cert on nersc server
Now on the NERSC machine:
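Presumably you first pull the proxy back down there (my assumption, based on the usual myproxy round trip):
$ myproxy-logon -s myproxy.nersc.gov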
$ grid-proxy-info # Look at it
BeStMan
srm-ls, srm-copy, etc. should now be on your PATH.
Here’s some sample commands looking at HPSS files:
$ srm-ls srm://pdsfsrm.nersc.gov:62443/srm/v2/server?SFN=/garchive.nersc.gov/path/to/file/of/interest.data -storageinfo
$ srm-copy srm://pdsfsrm.nersc.gov:62443/srm/v2/server?SFN=/garchive.nersc.gov/path/to/file/to/copy.data file:////path/of/target/file.data -storageinfo
Using that X509_USER_PROXY env var will come in handy when this all needs to be wrapped up in a batch job script to run on some unknown batch node. Alternatively, I think the srm commands have options for identifying the cert file.
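A minimal sketch of such a wrapper script, assuming the proxy has first been copied somewhere the batch node can read (/tmp on the login node won’t be visible there; the $HOME/x509up path is my invention):
$ cat job.sh
#!/bin/bash
# runs on some unknown batch node
export X509_USER_PROXY=$HOME/x509up   # proxy copied here beforehand
srm-copy "srm://pdsfsrm.nersc.gov:62443/srm/v2/server?SFN=/garchive.nersc.gov/path/to/file/to/copy.data" file:////tmp/file.data -storageinfo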
SGE
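Submit your job script first (job.sh here matches the output-file names mentioned below):
$ qsub job.sh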
Then monitor it with one of the following:
$ qstat | grep <username>
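or have qstat do the filtering itself:
$ qstat -u <username>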
When your job is done you’ll get “job.sh.oXXXXXXX” and “job.sh.eXXXXXXX” files with stdout and stderr respectively.
A couple of other tricks are to use the ‘qsh’ command, which will pop open an xterm logged into the batch node you would have gotten if you had run qsub. This requires your DISPLAY var to be set. I found this troublesome, as when some of my jobs died they took out the xterm with them, along with any job output that might have helped me debug the problem. The ‘qlogin’ command is the same as qsh but doesn’t open a new term, it just uses your tty. (Though I discovered that qlogin doesn’t appear to properly set up the ~/.chos env.)
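In command form:
$ qsh     # pops an xterm on a batch node; needs DISPLAY
$ qlogin  # same, but stays in your current tty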
Put flags to qsub (and its kin qsh and qlogin) in your ~/.sge_request file. I learned I needed the following in that file: “-l 64bit=1”. This says: “only send my job to 64bit machines”.
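The same flag also works directly on the command line (job.sh is illustrative):
$ qsub -l 64bit=1 job.sh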