biac:pipeline

BIAC's Processing Pipeline is a software system designed to automatically transfer data off of the scanner, run a series of automated processes on the dataset, and transfer the processed results to a directory where the user can get to it. This process depends on proper assignment of the dataset to a valid Experiment. Once assigned, data is processed according to Experiment-based preferences, and the results are transferred to the appropriate Experiment directory, where users who have data access privileges can get to it.
  
===== AIMS Overview =====
  
The Automated Imaging Management System (AIMS) is a multi-threaded Python daemon that runs on a desktop connected to the internal GE switch. AIMS connects to the scanner console with sshfs to monitor the hard drive for raw k-space data, adding new files to a copy queue thread that removes them from the scanner for reconstruction and archiving. AIMS also launches a DICOM receiver to accept automated DICOM pushes from the console; a separate thread monitors the receive location for new data to schedule processing and archiving. During processing, reconstructed or DICOM images are wrapped with BXH headers, converted to nii.gz, and copied to individual experiment storage locations. If appropriate, a full BIRN QA is run on functional data and uploaded to our webserver as well as copied to the experiment storage location. GE physio files are also transferred if they were collected. Imaging metadata and QA numbers are added to our imaging database for use with various web services.
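The monitor-and-queue pattern above can be sketched in Python. This is a minimal illustration, not AIMS's actual code: the function names, the single-pass scan (the real daemon polls an sshfs mount in a loop), and the plain directory copy are all assumptions.

```python
import queue
import shutil
import threading
from pathlib import Path

def watch_once(src_dir, seen, copy_queue):
    """Scan src_dir once and enqueue any file not seen before.

    The real daemon would repeat this scan over an sshfs mount of
    the scanner console.
    """
    for path in Path(src_dir).iterdir():
        if path.is_file() and path not in seen:
            seen.add(path)
            copy_queue.put(path)

def copy_worker(copy_queue, dest_dir):
    """Drain the queue, copying each file into dest_dir."""
    while True:
        path = copy_queue.get()
        if path is None:              # sentinel: shut down
            break
        shutil.copy2(path, Path(dest_dir) / path.name)
        copy_queue.task_done()

def transfer(src_dir, dest_dir):
    """Run one scan-and-copy cycle with a background copy thread."""
    copy_queue = queue.Queue()
    worker = threading.Thread(target=copy_worker,
                              args=(copy_queue, dest_dir))
    worker.start()
    watch_once(src_dir, set(), copy_queue)
    copy_queue.join()                 # wait until every file is copied
    copy_queue.put(None)
    worker.join()
```

In the daemon proper, the copy step would also remove the file from the scanner; the sketch leaves the source intact.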
  
{{ :biac:aims_flowchart.png?600 |}}
  
  
===== Backup Workflow =====
  
A nightly cron job copies locally processed and raw data from the AIMS computers to a network attached storage location. This includes all k-space data, pfile headers, nii.gz data, physio files, etc.
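Conceptually, the nightly job is an incremental mirror: copy anything missing from the backup or newer than the backed-up copy. The sketch below is illustrative only; the real job is a cron script, and the path layout here is an assumption.

```python
import shutil
from pathlib import Path

def mirror(src_root, backup_root):
    """Copy files that are missing from, or newer than, the backup.

    A stand-in for the nightly cron job, which covers k-space data,
    pfile headers, nii.gz files, physio files, and so on.
    """
    src_root, backup_root = Path(src_root), Path(backup_root)
    copied = []
    for src in src_root.rglob("*"):
        if not src.is_file():
            continue
        dest = backup_root / src.relative_to(src_root)
        if not dest.exists() or src.stat().st_mtime > dest.stat().st_mtime:
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dest)   # copy2 preserves mtime, so an
            copied.append(dest)       # unchanged file is skipped next run
    return copied
```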
  
On a monthly basis the archives are moved to a cloud-based S3 cold storage location hosted by Duke.
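A sketch of how a monthly sweep might select archives for cold storage. The 30-day cutoff and the boto3 call in the docstring are assumptions for illustration; only the idea (pick aged archives, upload with a cold storage class) comes from the text above.

```python
import time
from pathlib import Path

# Hypothetical cutoff; the actual schedule and bucket are Duke-internal.
AGE_SECONDS = 30 * 24 * 3600

def cold_storage_candidates(archive_root, now=None):
    """Return archive files old enough to move to S3 cold storage.

    The upload itself could then use boto3, for example:
        boto3.client("s3").upload_file(str(path), bucket, key,
            ExtraArgs={"StorageClass": "GLACIER"})
    where bucket and key are placeholders, not BIAC's real values.
    """
    now = time.time() if now is None else now
    return [p for p in sorted(Path(archive_root).rglob("*"))
            if p.is_file() and now - p.stat().st_mtime > AGE_SECONDS]
```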
  
===== User Frequently Asked Questions =====
  
=== How does the pipeline assign a dataset to an Experiment? ===
For imaging data, the PatientID and ExperimentID entered on the scanner console are primarily used for identification. Our software takes what is entered on the console, strips out the punctuation, and lowercases all text before comparing it against a list of valid Experiments.
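The normalization step described above (strip punctuation, lowercase, then compare) can be sketched as follows. The dictionary of valid Experiments is a hypothetical stand-in for the pipeline's real lookup.

```python
import string

def normalize_id(raw):
    """Strip punctuation and lowercase, mirroring the pipeline's
    cleanup of what the tech types on the console."""
    return raw.translate(str.maketrans("", "", string.punctuation)).lower()

def match_experiment(console_entry, valid_experiments):
    """Return the matching Experiment name, or None.

    valid_experiments maps normalized IDs to Experiment names;
    this dictionary shape is an assumption for illustration.
    """
    return valid_experiments.get(normalize_id(console_entry))
```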
  
===The tech entered the correct Experiment ID on the scanner console, but the dataset was not properly assigned to an Experiment. What happened? ===
If the ExperimentID entered on the scanner does not match the list of valid Experiments, or that Experiment's storage location is full, then our processing daemon will temporarily transfer the data to our Salvage.01 experiment location. You can file a trouble ticket to get assistance retrieving the salvage data.
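The fallback behavior can be sketched as a small routing function. The parameter names and the `storage_full` callback are assumptions; only the Salvage.01 fallback comes from the answer above.

```python
def route_dataset(experiment_id, valid_experiments, storage_full):
    """Decide which experiment location a dataset lands in.

    valid_experiments: set of known Experiment IDs (illustrative);
    storage_full(exp) -> bool: stand-in for the daemon's quota check.
    Unmatched or full experiments fall back to Salvage.01.
    """
    if experiment_id not in valid_experiments or storage_full(experiment_id):
        return "Salvage.01"
    return experiment_id
```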
  
===How do I check the status of my data?===
You can look at our status monitor page and select Exams -> Exam Number under the appropriate scanner, which will display which series have been processed. You can also check your experiment location on the BIAC 'Exam Tracker' page to see similar information and various QA measures.
===I see errors on the Exam Tracker page and my data did not show up. What do I need to do?===
If the error you see is 'Experiment Not Found', please submit a trouble ticket so our data administrators can know where the data was supposed to be sent. Our data admins will respond as soon as possible to the trouble ticket.
  
Looking for the [[biac:pipeline:adminfaq|Pipeline/Data Administration FAQ]]? Restricted. Request access by sending an email to Josh or Chris.
  
  
  * [[biac:pipeline:versions|Processing Daemon v1.7]] - Manual intervention required for SENSE and b0 corrected data. Why? SENSE and b0 corrected recons expect the calibration run immediately preceding the run to be used. In the case that the transfer daemon transfers to multiple nodes, there needs to be coordination among these nodes per exam, per run. We will wait until implementation of the cluster processing daemon to devise a solution, as it's easier to coordinate sequential run processing when there is only one datastream.
  * [[biac:pipeline:softwaretodo|Software ToDo List & Change Log]] - A constantly updated list of changes that need to be made to pipeline software in priority order and a change log for systems.
biac/pipeline.txt · Last modified: 2019/05/07 15:03 by cmp12