BIAC's Processing Pipeline is a software system designed to automatically transfer data off of the scanner, run a series of automated processes on the dataset, and transfer the processed results to a directory where the user can access them. This process depends on proper assignment of the dataset to a valid Experiment. Once assigned, data is processed according to Experiment-based preferences, and the results are transferred to the appropriate Experiment directory, where users with data access privileges can retrieve them.
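
As a rough illustration of the Experiment-based routing described above (the names, paths, and preference fields here are hypothetical; the real lookup lives in BIAC's pipeline software), assignment amounts to looking up a destination and a set of processing preferences by Experiment:

<code python>
from pathlib import Path

# Hypothetical Experiment registry: id -> processing preferences and
# destination directory.  The real pipeline reads these from Experiment
# records, not a hard-coded dict.
EXPERIMENTS = {
    "MemStudy.01": {
        "steps": ["bxhwrap", "nifti", "qa"],
        "dest": Path("/experiments/MemStudy.01/Data/Func"),
    },
}

def route_dataset(dataset: Path, experiment_id: str) -> Path:
    """Look up the dataset's Experiment and return its destination path."""
    try:
        prefs = EXPERIMENTS[experiment_id]
    except KeyError:
        # Unassigned or invalid Experiment: the dataset cannot be
        # processed until it is properly assigned.
        raise ValueError(f"{dataset} is not assigned to a valid Experiment")
    return prefs["dest"] / dataset.name
</code>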
  
===== AIMS Overview =====
  
The Automated Imaging Management System (AIMS) is a multi-threaded Python daemon that runs on a desktop connected to the internal GE switch. AIMS mounts the scanner console with sshfs to monitor its hard drive for raw k-space data, adding new files to a copy-queue thread that removes them from the scanner for reconstruction and archiving. AIMS also launches a DICOM receiver to accept automated DICOM pushes from the console; a separate thread monitors the receive location for new data and schedules processing and archiving. During processing, reconstructed or DICOM images are wrapped with BXH headers, converted to nii.gz, and copied to individual experiment storage locations. Where appropriate, a full BIRN QA is run on functional data and the results are uploaded to our webserver as well as copied to the experiment storage location. GE physio files are also transferred if they were collected. Imaging metadata and QA numbers are added to our imaging database for use by various web services.
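
The watcher/copy-queue design can be sketched roughly as follows. This is a minimal illustration, not AIMS source code: the mount point, archive path, ''*.7'' glob for GE P-files, and polling interval are all assumptions.

<code python>
import queue
import shutil
import threading
import time
from pathlib import Path

# Hypothetical locations -- the real mount point and archive paths are
# site-specific configuration.
SCANNER_MOUNT = Path("/mnt/scanner")   # sshfs mount of the console disk
ARCHIVE_DIR = Path("/data/archive")

copy_queue: "queue.Queue[Path]" = queue.Queue()

def watch_scanner(poll_seconds: float = 5.0) -> None:
    """Poll the sshfs mount for new raw k-space files and enqueue them."""
    seen = set()
    while True:
        for f in SCANNER_MOUNT.glob("*.7"):   # GE P-files hold raw k-space
            if f not in seen:
                seen.add(f)
                copy_queue.put(f)
        time.sleep(poll_seconds)

def copy_worker() -> None:
    """Drain the queue: copy each file off the scanner, then delete it."""
    while True:
        src = copy_queue.get()
        shutil.copy2(src, ARCHIVE_DIR / src.name)
        src.unlink()   # free space on the console for the next exam
        copy_queue.task_done()

threading.Thread(target=watch_scanner, daemon=True).start()
threading.Thread(target=copy_worker, daemon=True).start()
threading.Event().wait()   # keep the main thread alive
</code>

The same pattern (a monitor thread feeding a worker queue) applies to the DICOM receive location, with the worker scheduling processing instead of copying raw files.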
  
{{ :biac:aims_flowchart.png?600 |}}
  
  
===== Backup Daemon Flowchart & Backup Admin Workflow =====
  * [[biac:pipeline:versions|Processing Daemon v1.7]] - Manual intervention required for SENSE and b0-corrected data. Why? SENSE and b0-corrected recons expect the calibration run immediately preceding the run to be used. If the transfer daemon transfers to multiple nodes, those nodes must coordinate per exam, per run. We will wait until the cluster processing daemon is implemented to devise a solution, since sequential run processing is easier to coordinate when there is only one datastream (see the sketch after this list).
  * [[biac:pipeline:softwaretodo|Software ToDo List & Change Log]] - A constantly updated list of changes that need to be made to pipeline software, in priority order, and a change log for systems.
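
The calibration coordination problem above is easier to see in a single-datastream sketch (hypothetical names; not the daemon's actual code): with one ordered stream of runs per exam, the calibration run "immediately preceding" a SENSE or b0-corrected run is simply the last calibration run seen for that exam.

<code python>
# Hypothetical single-datastream scheduler.  With multiple transfer
# nodes, no single node is guaranteed to have seen the preceding
# calibration run, which is why manual intervention is needed today.
last_calibration = {}   # exam id -> most recent calibration run id

def pair_calibration(exam, run, is_calibration, needs_calibration):
    """Return the calibration run to use for this run, or None."""
    if is_calibration:
        last_calibration[exam] = run
        return None
    if needs_calibration:
        cal = last_calibration.get(exam)
        if cal is None:
            # No preceding calibration run seen on this node: the case
            # that currently requires manual intervention.
            raise RuntimeError(f"exam {exam}, run {run}: no calibration run")
        return cal
    return None
</code>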