BIAC's Processing Pipeline is a software system designed to automatically transfer data off of the scanner, run a series of automated processes on the dataset, and transfer the processed results to a directory where the user can retrieve it. This process depends on proper assignment of the dataset to a valid Experiment. Once assigned, data is processed according to Experiment-based preferences.
===== AIMS Overview =====
The Automated Imaging Management System (AIMS) is a multi-threaded Python daemon that runs on a desktop connected to the internal network.
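The page describes AIMS only at a high level; as a rough illustration, a multi-threaded daemon of this kind typically runs a pool of worker threads that pull datasets off a shared queue. The sketch below is a minimal, self-contained example of that pattern — the `handle_dataset` function, dataset names, and thread count are hypothetical, not AIMS's actual code.

```python
import queue
import threading

def handle_dataset(name):
    """Placeholder for the real per-dataset work (transfer, processing)."""
    return f"processed {name}"

def worker(inbox, results):
    # Each worker thread pulls dataset names off the queue until it
    # sees the None sentinel, then exits.
    while True:
        item = inbox.get()
        if item is None:
            break
        results.append(handle_dataset(item))

inbox = queue.Queue()
results = []
threads = [threading.Thread(target=worker, args=(inbox, results))
           for _ in range(2)]
for t in threads:
    t.start()

for name in ["exam_001", "exam_002", "exam_003"]:
    inbox.put(name)
for _ in threads:
    inbox.put(None)   # one shutdown sentinel per worker
for t in threads:
    t.join()

print(sorted(results))
```

The sentinel-per-worker shutdown is one common convention; a production daemon would also add logging, error handling, and restart-on-failure behavior.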
===== Backup Workflow =====
A nightly cron job copies locally processed and raw data from the AIMS computers to a network-attached storage location. This includes all k-space data, pfile headers, and nii.gz files.
On a monthly basis, the archives are moved to a cloud-based S3 cold storage location hosted by Duke.
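The nightly copy step above can be sketched as a small script that selects the named file types and copies them into a dated backup directory. This is a minimal illustration only — the source and destination paths, and the `P*.7` / `*.7.hdr` naming patterns for pfiles and their headers, are assumptions rather than BIAC's actual configuration; the example uses a temporary directory so it is runnable anywhere.

```python
import datetime
import pathlib
import shutil
import tempfile

# Hypothetical source and NAS destination; the real paths on the
# AIMS hosts and the storage mount are not documented on this page.
root = pathlib.Path(tempfile.mkdtemp())
src = root / "aims"
nas = root / "nas"
src.mkdir()

# Stand-ins for the file types the backup covers: k-space pfiles,
# pfile headers, and nii.gz results (plus one file to be skipped).
for name in ["P12345.7", "P12345.7.hdr", "run1.nii.gz", "scratch.tmp"]:
    (src / name).touch()

# Nightly copy into a dated directory, mirroring the cron behavior.
stamp = datetime.date.today().strftime("%Y%m%d")
dest = nas / stamp
dest.mkdir(parents=True)
for pattern in ("*.7", "*.7.hdr", "*.nii.gz"):
    for f in src.glob(pattern):
        shutil.copy2(f, dest / f.name)

backed_up = sorted(p.name for p in dest.iterdir())
print(backed_up)
```

In production this logic would run from cron on each AIMS host; the monthly move to S3 cold storage would then operate on the dated directories this step produces.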
===== User Frequently Asked Questions =====
=== How does the pipeline assign a dataset to an Experiment? ===
For imaging data, the pipeline matches the Experiment ID entered on the scanner console against the list of valid Experiments and assigns the dataset to the matching Experiment.
=== The tech entered the correct Experiment ID on the scanner console, but the dataset was not properly assigned to an Experiment. What happened? ===
If the Experiment ID entered on the scanner does not match the list of valid Experiments, the dataset is transferred to the Salvage directories.
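The assignment rule described in this answer amounts to a simple lookup with a fallback. A minimal sketch, assuming a flat directory layout (the `assign_experiment` function, paths, and Experiment IDs below are illustrative, not the pipeline's actual code):

```python
def assign_experiment(experiment_id, valid_experiments):
    """Route a dataset: a recognized Experiment ID goes to that
    Experiment's directory; anything else goes to Salvage."""
    if experiment_id in valid_experiments:
        return f"/experiments/{experiment_id}"
    return "/salvage"

valid = {"Memory.01", "Vision.02"}
print(assign_experiment("Memory.01", valid))  # recognized ID
print(assign_experiment("Mmeory.01", valid))  # typo, falls through to Salvage
```

This is why a mistyped or unrecognized Experiment ID lands in Salvage rather than failing outright: the data is preserved and can be reassigned once the correct Experiment is identified.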
Looking for the [[biac: