Local scratch on nodes
Data can be stored locally on every node. Remember that you are free to log into any node manually (via rsh or ssh; ssh is now the preferred way) and work with the data locally. The head and compute nodes have a directory
with the following hierarchy:
- This directory is created automatically, and every user is free to use her/his own directory to dump output locally on the node, e.g. intermediate data products or status information. Please note that data stored here is never backed up, and we do not guarantee that old data will not be deleted once this partition fills up. That has rarely happened so far, but please be prepared!
- This special directory is read-only for users, since it is an exact replica of a master directory on a head node.
- Another special directory where data can be distributed over the cluster to gain bandwidth. The exact layout under this directory is still in flux and is custom-made for each job/project. The data here is also read-only.
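Typical use of the per-user area can be sketched as follows. This is a runnable illustration, not cluster commands: the `SCRATCH` default and the `run42` name are made up, and on a node you would point `SCRATCH` at your own directory in the local scratch area after logging in via ssh.

```shell
#!/bin/sh
# Sketch: dump intermediate output into a node-local scratch directory.
# SCRATCH and the run name are illustrative; on the cluster you would
# set SCRATCH to your per-user directory on the node you logged into.
SCRATCH="${SCRATCH:-$(mktemp -d)}"
RUN="$SCRATCH/${USER:-joe}/run42"
mkdir -p "$RUN"
echo "intermediate data product" > "$RUN/output.dat"
cat "$RUN/output.dat"
```

Since nothing here is backed up, treat everything under such a directory as disposable.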
The local areas above are directly cross-mounted under /atlas/node/n<1234> on every machine. Lately, we added a new generic mount point to the zoo:
refers, on each node, to that node's own local area. The beauty here is that autofs automatically detects when a local directory is referenced and will not go through NFS.
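You can check for yourself what kind of filesystem a given path is served from; GNU `stat -f` prints the filesystem type. The snippet below is a sketch: the default argument of `/` is only so it runs anywhere, and on the cluster you would pass one of the cross-mounted paths instead.

```shell
#!/bin/sh
# Sketch: report whether a directory lives on a local filesystem or on NFS.
# On the cluster you would pass a cross-mounted path as the first argument;
# the default of / is only so this example runs anywhere.
DIR="${1:-/}"
FSTYPE=$(stat -f -c %T "$DIR")
case "$FSTYPE" in
  nfs*) echo "$DIR is NFS-mounted ($FSTYPE)" ;;
  *)    echo "$DIR is local ($FSTYPE)" ;;
esac
```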
Example: you (joe) have code which needs a lot of disk I/O for Condor jobs in the local universe (i.e. on head nodes), but the directory needs to be the very same on all machines:
Log into a head node where Condor local universe jobs can run, e.g. atlas1. Create your job directory under
/atlas/user/atlas1/joe and verify that it magically appears under
/local/user/joe. The directory
/atlas/user/atlas1/joe is now valid on any machine and will be accessed via NFS (with the usual overhead), but on atlas1 it behaves just like a local directory.
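The mapping can be pictured with a small self-contained toy: a symlink in a temporary directory stands in for what autofs does on the cluster, and no cluster paths are touched (the atlas1/joe names are taken from the example above).

```shell
#!/bin/sh
# Toy model of the /atlas/user/<node> <-> /local/user mapping.
# A symlink stands in for the autofs magic; everything lives in a temp dir.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/atlas/user/atlas1"                   # cross-mounted per-node area
mkdir -p "$ROOT/local"
ln -s "$ROOT/atlas/user/atlas1" "$ROOT/local/user"   # generic mount point
mkdir -p "$ROOT/atlas/user/atlas1/joe"               # create the job directory ...
ls -d "$ROOT/local/user/joe"                         # ... and it appears here too
```

Both paths name the same directory, which is exactly why a job can use the universal /atlas path while still getting local-disk speed on its home node.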
-- CarstenAulbert - 13 Oct 2008