  
====== Job restrictions ======
Each node has a finite amount of memory installed, and due to the disk-less nature of the nodes there are restrictions on the amount of RAM used.  Currently, the default is to assign 8G of RAM per job that is submitted.  If your job requires more than 8GB, you may request a higher limit with the **"-l h_vmem"** directive; otherwise you don't have to do anything.  This is done to prevent memory over-subscription and to better distribute the load across the available machines.
  
  > qrsh -q interact.q -V -verbose -N interact -l h_vmem=10G bash
  
The above example will request/reserve 10G of available memory.  Your job will not go to a node unless it has the required amount available.  Also, if you exceed the requested amount, the grid engine will terminate the job and you will receive notice.  In most cases you will not have to do anything, since 8G is a significant amount.  The amount of RAM used by your jobs is listed as **"Max vmem"** in the emails sent from the cluster.  The restriction is put in place to prevent memory being over-allocated and jobs crashing an entire node, which would kill other users' jobs.
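If you want to check memory use while a job is still running, rather than waiting for the summary email, one option is to ask the grid engine for the job's usage details (the job id below is only an illustration):
<code bash>
# Show scheduler details for a running job; the "usage" line reports
# current vmem and maxvmem (replace 12345 with your actual job id)
qstat -j 12345
</code>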
  
The maximum available is 187GB on any node, so if you request more than that, the job will just sit in the queue waiting indefinitely.
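To see how much memory the nodes actually have and how much is currently in use, the grid engine's host summary can help (a sketch, assuming the standard **qhost** utility is available on the node you submit from):
<code bash>
# List execution hosts with total (MEMTOT) and in-use (MEMUSE) memory
qhost
</code>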
//Please do not request additional resources unless you absolutely need them.  If additional resources are requested, they are deducted from the amount available to everyone else.  If unneeded resources are requested, this reduces the capacity on a given node for other potential usage.//
  
-There is a 1GB cumulative quota on all HOME directories and a 32GB cumulative quota for space in /tmp ( or $TMP ) across all nodes combined.+There is a 5GB cumulative quota on all $HOME directories ( shared with your window's home directory ) and a 32GB cumulative quota for space in /tmp ( or $TMP ) across all nodes combined.
====== Accessing experiments ======
Experiments can be reached in multiple ways:
  
There is an automounter running on each node that can mount experiments when they are accessed through its proxy filesystem **"/mnt/BIAC"**.
A call to a valid path will be intercepted by the proxy and mounted.  Paths look like /mnt/server/share/Experiment.01
  
If you are unsure, you can call the helper function **findexp** in various ways to return a valid path:
<code bash>
cd /mnt/munin/BIAC/Dummy.01
cd `findexp Dummy.01`
ls `findexp Dummy.01`
</code>
  
All of those instances would mount the experiment Dummy.01 within the proxy filesystem.  The experiment paths within the proxy filesystem are consistent across all of the nodes, therefore you can access the data with the same path on any batch or interactive job.
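Because the paths are the same everywhere, the same **findexp** call works inside a batch script as well.  A minimal sketch (the experiment name is illustrative):
<code bash>
#!/bin/bash
# Resolve the experiment path with the findexp helper; the same path is
# valid on whichever node the scheduler or qrsh session puts you on
EXP=`findexp Dummy.01`
ls $EXP
</code>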
  
  
<code bash>
matlab -nodesktop < myscript.m
</code>

You can run matlab without the desktop and without the Java virtual machine if you continue to have "out of memory" errors; note that some functions that require Java may no longer be accessible:
<code bash>
matlab -nodesktop -nojvm -nosplash
</code>

Also, if you continue having Java memory errors, you can create a java.opts file to increase the Java memory that matlab uses.
In the directory where you launch matlab, create a file named "java.opts" containing the following lines:
<code bash>
-Xms128m
-Xmx1g
</code>

This will increase the initial Java virtual machine heap to 128 megabytes from the default of 64, and allow it to grow to 1 gigabyte from the previous default of 128MB.  This is only relevant if NOT using the "-nojvm" flag.
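To confirm the new limit took effect, one quick check (only meaningful when the JVM is enabled, i.e. without "-nojvm"; this is an illustrative sketch) is to ask the JVM for its maximum heap from a MATLAB one-liner:
<code bash>
# Print the JVM's maximum heap size in MB, then quit; run this from the
# directory containing java.opts so the file is picked up
matlab -nodesktop -nosplash -r "disp(java.lang.Runtime.getRuntime.maxMemory/2^20); exit"
</code>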

You can also set the heap space preference through the graphical desktop in matlab:
[[http://blogs.mathworks.com/community/2010/04/26/controlling-the-java-heap-size/]]

Just a word of caution: setting it to the maximum available space in the GUI caused matlab to not open (for me).

  
====== Ending a job ======
  
  
====== OpenGL ======

For programs that require OpenGL rendering, like fsleyes, make sure your local X11 client supports OpenGL and has it enabled.

On macOS, make sure XQuartz is installed, then check its settings from the Terminal:
<code>
defaults read org.xquartz.X11

{
    "NSWindow Frame x11_apps" = "330 482 454 299 0 0 1792 1095 ";
    "NSWindow Frame x11_prefs" = "291 401 484 336 0 0 1792 1095 ";
    SUHasLaunchedBefore = 1;
    SULastCheckTime = "2023-03-02 13:55:55 +0000";
    "app_to_run" = "/opt/X11/bin/xterm";
    "cache_fonts" = 1;
    "done_xinit_check" = 1;
    "enable_iglx" = 1;
    "login_shell" = "/bin/sh";
    "no_auth" = 0;
    "nolisten_tcp" = 1;
    "startx_script" = "/opt/X11/bin/startx -- /opt/X11/bin/Xquartz";
}
</code>

If enable_iglx is not 1, enable OpenGL, then reboot the computer:
<code>
defaults write org.xquartz.X11 enable_iglx -bool true
</code>
  
Versions of XQuartz before 2.8.0 used the configuration domain org.macosforge.xquartz.X11 instead.
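If you are on one of those older XQuartz releases, the same toggle should apply under that older domain name (untested sketch; adjust to whatever domain your installed version actually uses):
<code>
defaults write org.macosforge.xquartz.X11 enable_iglx -bool true
</code>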
  
  