biac:cluster:interactive [2013/12/04 19:29]
cmp12 [Job restrictions]
biac:cluster:interactive [2023/03/26 20:39] (current)
cmp12 [Accessing experiments]
  
====== Job restrictions ======
Each node has a finite amount of memory installed, and due to the disk-less nature of the nodes there are restrictions on the amount of RAM used.  Currently, the default is to assign 8G of RAM per submitted job.  If your job requires more than 8GB, you may request a higher limit with the **"-l h_vmem"** directive; otherwise you don't have to do anything.  This is done to prevent memory oversubscription and to better distribute the load across the available machines.
  
  > qrsh -q interact.q -V -verbose -N interact -l h_vmem=10G bash
  
The above example will request/reserve 10G of available memory.  Your job will not be sent to a node unless that node has the required amount available.  Also, if you exceed the requested amount the grid engine will terminate the job and you will receive notice.  In most cases you will not have to do anything, since 8G is a significant amount.  The amount of RAM used by your jobs is listed as **"Max vmem"** in the emails sent from the cluster.  The restriction is in place to prevent memory from being over-allocated and jobs crashing an entire node, which would kill other users' jobs.
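While a job is still running you can also check its current memory footprint yourself from a login node (this assumes standard grid engine tooling; 12345 is a placeholder job ID):

<code bash>
# Show the scheduler's view of a running job; the "usage" line
# reports current vmem and the peak maxvmem.  12345 is a placeholder.
qstat -j 12345 | grep usage
</code>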
  
The maximum available is 187GB on any node, so if you request more than that, the job will just sit in the queue waiting indefinitely.
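The same directive works for batch jobs. A minimal submission-script sketch (the job name and the 16G figure are illustrative, not a site requirement):

<code bash>
#!/bin/bash
# Hypothetical SGE batch script: request 16G of RAM for this job
#$ -N myjob
#$ -l h_vmem=16G

# ... your analysis commands go here ...
</code>

Submit it with qsub as usual; the job will only be scheduled on a node with that much memory free.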
  
There is an automounter running on each node that can mount experiments when they are accessed through its proxy filesystem **"/mnt/BIAC"**.
A call to a valid path will be intercepted by the proxy and mounted.  Paths take the form /mnt/server/share/Experiment.01
  
If you are unsure, you can call the helper function **findexp** in various ways to return a valid path:
<code bash>
-cd /mnt/BIAC/munin.dhe.duke.edu/BIAC/Dummy.01+cd /mnt/munin/BIAC/Dummy.01
cd `findexp Dummy.01`
ls `findexp Dummy.01`
</code>
  
All of those instances would mount the experiment Dummy.01 within the proxy filesystem.  The experiment paths within the proxy filesystem are consistent across all of the nodes, therefore you can access the data with the same path on any batch or interactive job.
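In a script it can be convenient to capture the resolved path once and reuse it. A small sketch (Dummy.01 is the example experiment from above; the Data subdirectory is hypothetical):

<code bash>
# Resolve the experiment path once, then reuse it
EXPDIR=`findexp Dummy.01`
ls "$EXPDIR"
ls "$EXPDIR/Data"   # hypothetical subdirectory
</code>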
  
  
  
This will increase the initial Java virtual machine heap to 128 megabytes from the default of 64; it will also allow it to grow to 1 gigabyte from the previous default of 128 MB.  This is only relevant if NOT using the "-nojvm" flag.

You can also set the heap space preference through the graphical desktop in MATLAB:
[[http://blogs.mathworks.com/community/2010/04/26/controlling-the-java-heap-size/]]

Just a word of caution: setting it to the maximum available space in the GUI caused MATLAB to fail to open (for me).

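A persistent way to apply these JVM flags is a java.opts file in the directory from which MATLAB is started; MATLAB reads this file at startup. A minimal sketch, using the 128 MB / 1 GB figures from the example above:

<code bash>
# Write a java.opts file; MATLAB reads JVM flags from this file
# in its startup directory when it launches.
cat > java.opts <<'EOF'
-Xms128m
-Xmx1024m
EOF
</code>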
====== Ending a job ======
  
  
  
====== OpenGL ======

For programs that require OpenGL rendering, like fsleyes, make sure your local X11 client supports OpenGL and has it enabled.

On a Mac, make sure XQuartz is installed.  Via Terminal:
<code>
defaults read org.xquartz.X11

{
    "NSWindow Frame x11_apps" = "330 482 454 299 0 0 1792 1095 ";
    "NSWindow Frame x11_prefs" = "291 401 484 336 0 0 1792 1095 ";
    SUHasLaunchedBefore = 1;
    SULastCheckTime = "2023-03-02 13:55:55 +0000";
    "app_to_run" = "/opt/X11/bin/xterm";
    "cache_fonts" = 1;
    "done_xinit_check" = 1;
    "enable_iglx" = 1;
    "login_shell" = "/bin/sh";
    "no_auth" = 0;
    "nolisten_tcp" = 1;
    "startx_script" = "/opt/X11/bin/startx -- /opt/X11/bin/Xquartz";
}
</code>

If enable_iglx is not 1, enable OpenGL and then reboot the computer:
<code>
defaults write org.xquartz.X11 enable_iglx -bool true
</code>
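After the reboot you can confirm the setting took effect, and then check that GLX rendering works end to end from a forwarded cluster session (this assumes the glxinfo utility is installed on the node):

<code bash>
# On the Mac: should print 1 once indirect GLX is enabled
defaults read org.xquartz.X11 enable_iglx

# From a cluster session with X forwarding: report the OpenGL
# renderer in use (glxinfo must be installed -- an assumption)
glxinfo | grep -i "opengl renderer"
</code>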
  
Versions of XQuartz before 2.8.0 used the configuration domain org.macosforge.xquartz.X11.
  
  
biac/cluster/interactive.1386185367.txt.gz · Last modified: 2014/08/04 16:03 (external edit)