Journal of Digital Imaging

eISSN: 1618-727X

ISSN: 0897-1889

Publisher: Springer New York, SPRINGER

Fields:
Computer Science Applications; Radiological and Ultrasound Technology; Radiology, Nuclear Medicine and Imaging


Featured Articles

Building Large-Scale Quantitative Imaging Databases with Multi-Scale Deep Reinforcement Learning: Initial Experience with Whole-Body Organ Volumetric Analyses
Volume 34, Pages 124-133, 2021
David J. Winkel, Hanns-Christian Breit, Thomas J. Weikert, Bram Stieltjes
To explore the feasibility of a fully automated workflow for whole-body volumetric analyses based on deep reinforcement learning (DRL) and to investigate the influence of contrast phase (CP) and slice thickness (ST) on the calculated organ volumes. This retrospective study included 431 multiphasic CT datasets (including three CP and two ST reconstructions for abdominal organs), totaling 10,508 organ volumes (10,344 abdominal organ volumes: liver, spleen, and kidneys; 164 lung volumes). Whole-body organ volumes were determined using multi-scale DRL for 3D anatomical landmark detection and 3D organ segmentation. Total processing time for all volumes and mean calculation time per case were recorded. Repeated-measures analyses of variance (ANOVA) were conducted to test for robustness with respect to CP and ST. The algorithm calculated organ volumes for the liver, spleen, and right and left kidney (mean volumes in milliliters (interquartile range), portal venous CP, 5 mm ST: 1868.6 (1426.9, 2157.8), 350.19 (45.46, 395.26), 186.30 (147.05, 214.99), and 181.91 (143.22, 210.35), respectively), and for the right and left lung (2363.1 (1746.3, 2851.3) and 1950.9 (1335.2, 2414.2)). We found no statistically significant effect of contrast phase or slice thickness on the calculated organ volumes. Mean computational time per case was 10 seconds. The evaluated approach, using state-of-the-art DRL, enables fast processing of substantial amounts of imaging data irrespective of CP and ST, allowing organ-specific volumetric databases to be built up. The volumes derived in this way may serve as a reference for quantitative imaging follow-up.
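As a sketch of the robustness test, a repeated-measures ANOVA with contrast phase and slice thickness as within-subject factors can be run with statsmodels; the column names and toy volumes below are illustrative, not study data:

```python
# Minimal sketch of a repeated-measures ANOVA testing whether contrast phase
# (CP) or slice thickness (ST) shifts per-case organ volumes. Toy data only.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format table: one row per case x contrast phase x slice thickness.
data = pd.DataFrame({
    "case":   [1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3],
    "cp":     ["native", "native", "portal", "portal"] * 3,
    "st":     [1.5, 5.0] * 6,
    "volume": [1801, 1805, 1798, 1810, 1650, 1642, 1660, 1655, 1920, 1915, 1928, 1931],
})

# Two within-subject factors: contrast phase and slice thickness.
res = AnovaRM(data, depvar="volume", subject="case", within=["cp", "st"]).fit()
print(res)  # F statistics and p-values per factor; large p suggests robustness to CP/ST
```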
Malignancy Detection on Mammography Using Dual Deep Convolutional Neural Networks and Genetically Discovered False Color Input Enhancement
Volume 30, Pages 499-505, 2017
Philip Teare, Michael Fishman, Oshra Benzaquen, Eyal Toledano, Eldad Elnekave
Breast cancer is the most prevalent malignancy in the US and the third highest cause of cancer-related mortality worldwide. Regular mammography screening has been credited with doubling the rate of early cancer detection over the past three decades, yet estimates of mammographic accuracy in the hands of experienced radiologists remain suboptimal, with sensitivity ranging from 62 to 87% and specificity from 75 to 91%. Advances in machine learning (ML) in recent years have demonstrated capabilities of image analysis which often surpass those of human observers. Here we present two novel techniques to address inherent challenges in the application of ML to the domain of mammography. We describe the use of genetic search of image enhancement methods, leading us to a novel form of false color enhancement through contrast limited adaptive histogram equalization (CLAHE), as a method to optimize mammographic feature representation. We also utilize dual deep convolutional neural networks at different scales for classification of full mammogram images and derivative patches, combined with a random forest gating network, as a novel architectural solution capable of discerning malignancy with a sensitivity of 0.91 and a specificity of 0.80. To our knowledge, this represents the first automatic stand-alone mammography malignancy detection algorithm with sensitivity and specificity performance similar to that of expert radiologists.
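The false-color idea can be sketched by applying CLAHE at several clip limits and stacking the renderings as color channels; the clip limits and input file below are hypothetical (the paper chose its enhancement parameters by genetic search):

```python
# Minimal sketch: CLAHE at several clip limits, stacked as a 3-channel input
# so one image encodes multiple contrast renderings for a standard CNN.
import cv2
import numpy as np

def false_color_clahe(gray: np.ndarray) -> np.ndarray:
    """Map a single-channel 8-bit image to a 3-channel false-color image."""
    channels = []
    for clip in (1.0, 2.0, 4.0):  # hypothetical clip limits, not the paper's searched values
        clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=(8, 8))
        channels.append(clahe.apply(gray))
    return np.dstack(channels)  # shape (H, W, 3)

gray = cv2.imread("mammogram.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input file
rgb = false_color_clahe(gray)
```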
Datafish Multiphase Data Mining Technique to Match Multiple Mutually Inclusive Independent Variables in Large PACS Databases
Volume 29, Issue 3, Pages 331-336, 2016
Brendan Kelley, Chad Klochko, Safwan Halabi, Daniel Siegal
Losing Images in Digital Radiology: More than You Think
Volume 28, Pages 264-271, 2014
Catherine Oglevee, Oleg Pianykh
It is commonly believed that the shift to digital imaging some 20 years ago improved medical image exchange and eliminated the image loss that occurred with printed film. Unfortunately, this is not the case: despite the most recent advances in digital imaging, most hospitals still keep losing their imaging data, with these losses going completely unnoticed. As a result, image loss not only undermines confidence in digital imaging but also affects patient diagnosis and the daily quality of clinical work. This paper identifies the origins of invisible image loss, provides methods and procedures to detect it, and demonstrates actions that can be taken to stop the problem from happening.
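One detection procedure in the spirit of the paper is a simple count reconciliation between what a modality reports sending and what the archive stored; the sketch below assumes both counts were already collected from DICOM metadata, and shows only the reconciliation logic:

```python
# Minimal sketch: flag studies where fewer images were archived than sent.
# The two count dictionaries are hypothetical stand-ins for PACS queries.
def find_incomplete_studies(expected_counts: dict, stored_counts: dict) -> list:
    """expected_counts: StudyInstanceUID -> images sent by the modality.
    stored_counts:   StudyInstanceUID -> images present in the archive."""
    missing = []
    for uid, expected in expected_counts.items():
        stored = stored_counts.get(uid, 0)
        if stored < expected:
            missing.append((uid, expected - stored))  # silently lost images
    return missing

# Example: study "1.2.3" sent 120 images but only 118 arrived.
print(find_incomplete_studies({"1.2.3": 120}, {"1.2.3": 118}))  # [('1.2.3', 2)]
```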
An Embedded Multi-branch 3D Convolution Neural Network for False Positive Reduction in Lung Nodule Detection
Volume 33, Pages 846-857, 2020
Wangxia Zuo, Fuqiang Zhou, Yuzhu He
Numerous lung nodule candidates can be produced by an automated lung nodule detection system. Classifying these candidates to reduce false positives is an important step in the detection process. The objective of this paper is to identify real nodules among a large number of pulmonary nodule candidates. To address this classification task, we propose a novel 3D convolutional neural network (CNN) to reduce false positives in lung nodule detection. The novel 3D CNN embeds multiple branches in its structure. Each branch processes a feature map from a layer of a different depth. All of these branches are cascaded at their ends; thus, features from layers of different depths are combined to predict the categories of candidates. The proposed method obtains a competitive score in lung nodule candidate classification on the LUNA16 dataset, with an accuracy of 0.9783, a sensitivity of 0.8771, a precision of 0.9426, and a specificity of 0.9925. Moreover, good performance on the competition performance metric (CPM) is also obtained, with a score of 0.830. As a 3D CNN, the proposed model can learn complete, three-dimensional discriminative information about nodules and non-nodules, avoiding misidentification problems caused by the lack of spatial correlation information in traditional methods or 2D networks. As an embedded multi-branch structure, the model is also more effective at recognizing nodules of various shapes and sizes. As a result, the proposed method achieves a competitive score on false positive reduction in lung nodule detection and can serve as a reference for classifying nodule candidates.
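A minimal PyTorch sketch of the embedded multi-branch idea follows, with side branches tapping feature maps at three depths and their pooled features concatenated for the final prediction; channel counts and depths are illustrative, not the paper's exact architecture:

```python
# Minimal sketch: a 3D CNN whose branches tap shallow, mid, and deep feature
# maps; each branch is globally pooled, and the pooled vectors are fused for
# the nodule/non-nodule prediction.
import torch
import torch.nn as nn

class MultiBranch3DCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2))
        self.block2 = nn.Sequential(nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2))
        self.block3 = nn.Sequential(nn.Conv3d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2))
        self.pool = nn.AdaptiveAvgPool3d(1)           # one branch head per depth
        self.classifier = nn.Linear(16 + 32 + 64, 2)  # fused shallow+mid+deep features

    def forward(self, x):
        f1 = self.block1(x)
        f2 = self.block2(f1)
        f3 = self.block3(f2)
        # Cascade the branch outputs: global-average-pool each depth, then concatenate.
        feats = [self.pool(f).flatten(1) for f in (f1, f2, f3)]
        return self.classifier(torch.cat(feats, dim=1))

logits = MultiBranch3DCNN()(torch.randn(2, 1, 32, 32, 32))  # batch of 3D candidate patches
```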
IoT in Radiology: Using Raspberry Pi to Automatically Log Telephone Calls in the Reading Room
Volume 31, Pages 371-378, 2018
Po-Hao Chen, Nathan Cross
The work environment for medical imaging (factors such as distractions, ergonomics, distance, temperature, humidity, and lighting conditions) generates a paucity of data and is difficult to analyze. The emergence of the Internet of Things (IoT) and the decreasing cost of single-board computers like the Raspberry Pi put creating customized hardware to collect data from the clinical environment within the reach of a clinical imaging informaticist. This article walks the reader through a series of basic projects using a variety of sensors and devices in conjunction with a Pi to gather data, culminating in a more complex example designed to automatically detect and log telephone calls.
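A minimal sketch of the culminating idea, assuming a sensor that pulls a GPIO pin high while a call is in progress; the pin number and CSV format are illustrative, and the article builds to call detection through several simpler sensor projects:

```python
# Minimal sketch: watch a Raspberry Pi GPIO input and append a timestamp to a
# CSV file whenever the line goes active (hypothetically, during a call).
import csv
import time
from datetime import datetime

import RPi.GPIO as GPIO  # available on Raspberry Pi OS

PIN = 17  # hypothetical BCM pin wired to the phone-line/audio sensor

GPIO.setmode(GPIO.BCM)
GPIO.setup(PIN, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)

def log_event(channel):
    with open("call_log.csv", "a", newline="") as f:
        csv.writer(f).writerow([datetime.now().isoformat(), "call_detected"])

# Fire the callback on a rising edge; debounce to ignore contact chatter.
GPIO.add_event_detect(PIN, GPIO.RISING, callback=log_event, bouncetime=500)

try:
    while True:
        time.sleep(1)  # main thread idles; logging happens in the callback
finally:
    GPIO.cleanup()
```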
HEARTBEAT4D: An Open-source Toolbox for Turning 4D Cardiac CT into VR/AR
Volume 35, Pages 1759-1767, 2022
M. Bindschadler, S. Buddhe, M. R. Ferguson, T. Jones, S. D. Friedman, R. K. Otto
Four-dimensional data sets are increasingly common in MRI and CT. While clinical visualization often focuses on individual temporal phases capturing the tissue(s) of interest, additional insight may be gained by exploring animated 3D reconstructions of physiological motion made possible by augmented or virtual reality representations of 4D patient imaging. Cardiac CT acquisitions can provide sufficient spatial resolution and temporal data to support advanced visualization; however, there are no open-source tools readily available to facilitate the transformation from raw medical images to dynamic and interactive augmented or virtual reality representations. To address this gap, we developed a workflow using free and open-source tools to process 4D cardiac CT imaging, starting from raw DICOM data and ending with dynamic AR representations viewable on a phone, tablet, or computer. In addition to assembling the workflow from existing platforms (3D Slicer and Unity), we contribute two new features: (1) custom software which propagates a segmentation created for one cardiac phase to all others and exports surface files in a fully automated fashion, and (2) a user interface and linked code for the animation and interactive review of the surfaces in augmented reality. Validation of the surface-based areas demonstrated excellent correlation with radiologists' image-based areas (R > 0.99). While our tools were developed specifically for 4D cardiac CT, the open framework allows them to serve as a blueprint for similar applications applied to 4D imaging of other tissues and with other modalities. We anticipate this and related workflows will be useful both clinically and for educational purposes.
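One step of such a workflow, turning a binary segmentation of a single cardiac phase into a surface file an AR viewer can load, can be sketched with common Python tools; the input file and naming are hypothetical, and the authors' own pipeline uses 3D Slicer and custom export code:

```python
# Minimal sketch: extract an isosurface from a binary organ mask and export it
# as an STL mesh that Unity (or any AR viewer) can load. One file per phase.
import numpy as np
from skimage import measure
import trimesh

seg = np.load("phase_00_seg.npy")  # hypothetical binary mask, shape (Z, Y, X)

# Extract the isosurface at the mask boundary.
verts, faces, normals, _ = measure.marching_cubes(seg.astype(float), level=0.5)

# Export one surface file per cardiac phase; a set of these can be animated.
trimesh.Trimesh(vertices=verts, faces=faces, vertex_normals=normals).export("phase_00.stl")
```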
Framework for Extracting Critical Findings in Radiology Reports
Volume 33, Pages 988-995, 2020
Thusitha Mabotuwana, Christopher S. Hall, Nathan Cross
Critical results reporting guidelines demand that certain critical findings be communicated to the responsible provider within a specific period of time. In this paper, we discuss a generic report processing pipeline to extract critical findings from dictated reports, allowing automation of quality and compliance oversight, using a production dataset containing 1,210,858 radiology exams. Algorithm accuracy on an annotated dataset of 327 sentences was 91.4% (95% CI 87.6–94.2%). Our results show that most critical findings are diagnosed on CT and MR exams and that intracranial hemorrhage and fluid collection are the most prevalent at our institution. Overall, 1.6% of the exams were found to have at least one of the ten critical findings we focused on. This methodology can enable detailed analysis of critical results reporting for research, workflow management, compliance, and quality assurance.
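A minimal sketch of the core extraction step, splitting a dictated report into sentences and flagging critical-finding mentions with a naive negation check; the patterns and negation cues are illustrative stand-ins for the paper's fuller pipeline:

```python
# Minimal sketch: sentence-level pattern matching for critical findings with a
# crude negation filter. Patterns and negation list are illustrative only.
import re

CRITICAL = [r"intracranial hemorrhage", r"fluid collection", r"pulmonary embol\w+"]
NEGATIONS = re.compile(r"\b(no|without|negative for)\b", re.IGNORECASE)

def extract_critical_sentences(report: str) -> list:
    hits = []
    for sentence in re.split(r"(?<=[.!?])\s+", report):
        for pattern in CRITICAL:
            if re.search(pattern, sentence, re.IGNORECASE) and not NEGATIONS.search(sentence):
                hits.append(sentence.strip())
                break
    return hits

report = "There is a small intracranial hemorrhage. No pulmonary embolism is seen."
print(extract_critical_sentences(report))  # ['There is a small intracranial hemorrhage.']
```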
DicomAnnotator: a Configurable Open-Source Software Program for Efficient DICOM Image Annotation
Volume 33, Pages 1514-1526, 2020
Qifei Dong, Gang Luo, David Haynor, Michael O’Reilly, Ken Linnau, Ziv Yaniv, Jeffrey G. Jarvik, Nathan Cross
Modern supervised machine learning approaches to medical image classification, image segmentation, and object detection usually require many annotated images. As manual annotation is labor-intensive and time-consuming, a well-designed software program can aid and expedite the annotation process. Ideally, this program should be configurable for various annotation tasks, enable efficient placement of several types of annotations on an image or a region of an image, attribute annotations to individual annotators, and be able to display Digital Imaging and Communications in Medicine (DICOM)-formatted images. No current open-source software program fulfills all of these requirements. To fill this gap, we developed DicomAnnotator, a configurable open-source software program for DICOM image annotation. This program fulfills the above requirements and provides user-friendly features to aid the annotation process. In this paper, we present the design and implementation of DicomAnnotator. Using spine image annotation as a test case, our evaluation showed that annotators with various backgrounds can use DicomAnnotator to annotate DICOM images efficiently. DicomAnnotator is freely available at https://github.com/UW-CLEAR-Center/DICOM-Annotator under the GPLv3 license.
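The display-and-record core such a tool needs can be sketched with pydicom and matplotlib; the filenames and JSON schema below are hypothetical, not DicomAnnotator's actual format:

```python
# Minimal sketch: load a DICOM image, display it, and save a point annotation
# attributed to a specific annotator. Schema and filenames are illustrative.
import json

import matplotlib.pyplot as plt
import pydicom

ds = pydicom.dcmread("spine.dcm")  # hypothetical input image
plt.imshow(ds.pixel_array, cmap="gray")
plt.title(str(ds.get("SOPInstanceUID", "unknown")))
plt.show()

annotation = {
    "sop_instance_uid": str(ds.get("SOPInstanceUID", "")),
    "annotator": "reader_01",   # attribute every mark to its annotator
    "label": "L1_vertebra",
    "point_xy": [212, 348],     # hypothetical pixel coordinates
}
with open("annotations.json", "w") as f:
    json.dump([annotation], f, indent=2)
```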
Determining Follow-Up Imaging Study Using Radiology Reports
Volume 33, Pages 121-130, 2019
Sandeep Dalal, Vadiraj Hombal, Wei-Hung Weng, Gabe Mankovich, Thusitha Mabotuwana, Christopher S. Hall, Joseph Fuller, Bruce E. Lehnert, Martin L. Gunn
Radiology reports often contain follow-up imaging recommendations. Failure to comply with these recommendations in a timely manner can lead to delayed treatment, poor patient outcomes, complications, unnecessary testing, lost revenue, and legal liability. The objective of this study was to develop a scalable approach to automatically identify the completion of a follow-up imaging study recommended by a radiologist in a preceding report. We selected imaging reports containing 559 follow-up imaging recommendations and all subsequent reports from a multi-hospital academic practice. Three radiologists identified appropriate follow-up examinations among the subsequent reports for the same patient, if any, to establish a ground-truth dataset. We then trained an Extremely Randomized Trees classifier that uses recommendation attributes, study metadata, and text similarity of the radiology reports to determine the most likely follow-up examination for a preceding recommendation. Pairwise inter-annotator F-score ranged from 0.853 to 0.868; the corresponding F-score of the classifier in identifying follow-up exams was 0.807. Our study describes a methodology to automatically determine the most likely follow-up exam after a follow-up imaging recommendation. The accuracy of the algorithm suggests that automated methods can be integrated into a follow-up management application to improve adherence to follow-up imaging recommendations. Radiology administrators could use such a system to monitor follow-up compliance rates and proactively send reminders to primary care providers and/or patients to improve adherence.
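A minimal sketch of the matching step, assuming per-pair features that include TF-IDF cosine similarity of the report texts; the features and toy data are illustrative, not the study's feature set:

```python
# Minimal sketch: score (recommendation, candidate follow-up) pairs with an
# Extremely Randomized Trees classifier over simple hand-built features.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

pairs = [  # (recommendation text, candidate report text, days between, same modality?)
    ("Recommend follow-up CT chest in 6 months", "CT chest shows stable nodule", 182, 1),
    ("Recommend follow-up CT chest in 6 months", "MRI brain unremarkable", 14, 0),
]
labels = [1, 0]  # 1 = candidate is the recommended follow-up exam

tfidf = TfidfVectorizer().fit([t for p in pairs for t in p[:2]])

def features(rec, cand, days, same_modality):
    # TF-IDF cosine similarity between recommendation and candidate report.
    sim = cosine_similarity(tfidf.transform([rec]), tfidf.transform([cand]))[0, 0]
    return [sim, days, same_modality]

X = np.array([features(*p) for p in pairs])
clf = ExtraTreesClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(clf.predict(X))  # predicted follow-up matches for each pair
```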