BIAC’s Linux cluster has 30 blades with a total of 720 Intel Xeon processor cores and 5 TB of memory. The cluster is based on Scientific Linux 7.4 and is running Son of Grid Engine for job scheduling. The nodes (not including login nodes) are diskless and operate with NFS mounted /root and /home directories managed via oneSIS. Each node is running the same disk image, with individual node differences handled through NFS mounts and ram-disk elements.
The steps below give an overview of how BIAC software interacts with SGE during job submission:
- User logs in to the head node via ssh
- User requests an interactive session to access data and scripts saved at the Experiment level
- Scripts saved in the home directory are accessible anywhere, without an interactive session
- User submits a job with qsub -v EXPERIMENT=Usertest.01
- findexp returns an experiment path, which can be saved back to the $EXPERIMENT variable from within the submission script
- Any access to experiment paths returned by findexp is picked up by the BIAC proxy mounter; access is granted based on the user's experiment privileges
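The workflow above can be sketched as a submission script. This is a minimal, hedged sketch: the script name (submit.sh), the exact findexp invocation, and the analysis paths are assumptions for illustration, not confirmed site conventions.

```shell
#!/bin/bash
#$ -S /bin/bash
# Submitted with: qsub -v EXPERIMENT=Usertest.01 submit.sh
# (-v passes the EXPERIMENT variable into the job's environment;
#  submit.sh is a hypothetical script name.)

# Resolve the experiment name to its filesystem path. The exact
# findexp invocation here is an assumption based on the description above.
EXPERIMENT=$(findexp $EXPERIMENT)

# Any access under this path is picked up by the BIAC proxy mounter,
# which grants access according to the user's experiment privileges.
cd $EXPERIMENT/Analysis        # hypothetical subdirectory
./myscript.sh                  # hypothetical script saved at the Experiment level
```

Because the node environment is identical on every diskless blade, the same script behaves the same wherever SGE schedules it; only the NFS-mounted experiment path and home directory differ per user.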