Results 1 - 7 of 7
1.
J Digit Imaging ; 33(4): 838-845, 2020 08.
Article in English | MEDLINE | ID: mdl-32043178

ABSTRACT

The purpose of this research is to exploit a weak and semi-supervised deep learning framework to segment prostate cancer in TRUS images, alleviating the time-consuming work of radiologists in drawing lesion boundaries and allowing the network to be trained on data without complete annotations. A histologically proven benchmarking dataset of 102 case images was built, and 22 images were randomly selected for evaluation. A portion of the training images was strongly supervised, i.e., annotated pixel by pixel, and a deep learning neural network was trained on these images. The remaining training images, which carried only weak supervision (just the location of the lesion), were fed to the trained network to produce intermediate pixelwise labels. The network was then retrained on all training images with the original and intermediate labels, and the training images were fed to the retrained network to produce refined labels. Comparing the distances from the centers of mass of the refined and intermediate labels to the weak supervision location, the closer one replaced the previous label; this can be considered a label update. After the label updates, the test set images were fed to the retrained network for evaluation. The proposed method shows better results with weak and semi-supervised data than a method using only a small portion of strongly supervised data, although the improvement may not be as large as when a fully strongly supervised dataset is used. In terms of mean intersection over union (mIoU), the proposed method reached about 0.6 when the ratio of strongly supervised data was 40%, about a 2% decrease in performance compared to the 100% strongly supervised case. The proposed method can help alleviate the time-consuming work of radiologists in drawing lesion boundaries and allows a neural network to be trained on data that do not have complete annotations.
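The label-update rule described in this abstract can be sketched as follows. This is an illustrative Python/numpy sketch, not the authors' code; the function names, array shapes, and the L2 distance rule are assumptions based on the description above.

```python
import numpy as np

def center_of_mass(mask):
    """Center of mass (row, col) of a binary segmentation mask."""
    ys, xs = np.nonzero(mask)
    return np.array([ys.mean(), xs.mean()])

def update_label(intermediate, refined, weak_point):
    """Keep whichever label's center of mass lies closer to the
    weakly supervised lesion location (the label-update rule as
    described in the abstract)."""
    d_int = np.linalg.norm(center_of_mass(intermediate) - weak_point)
    d_ref = np.linalg.norm(center_of_mass(refined) - weak_point)
    return refined if d_ref <= d_int else intermediate

# Toy example: weak supervision says the lesion is near (2, 2).
weak = np.array([2.0, 2.0])
inter = np.zeros((6, 6), bool); inter[4:6, 4:6] = True   # far from weak point
refin = np.zeros((6, 6), bool); refin[1:4, 1:4] = True   # close to weak point
chosen = update_label(inter, refin, weak)
print(np.array_equal(chosen, refin))  # True: the refined label is kept
```

In a training loop this update would be applied after each retraining pass, so each weakly supervised image gradually accumulates the pixelwise label whose lesion location best matches the weak annotation.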


Subject(s)
Image Processing, Computer-Assisted , Prostatic Neoplasms , Humans , Male , Prostatic Neoplasms/diagnostic imaging , Supervised Machine Learning
2.
J Digit Imaging ; 32(4): 638-643, 2019 08.
Article in English | MEDLINE | ID: mdl-31098732

ABSTRACT

In this research, we exploit an image-based deep learning framework to distinguish three major subtypes of renal cell carcinoma (clear cell, papillary, and chromophobe) using images acquired with computed tomography (CT). A biopsy-proven benchmarking dataset was built from 169 renal cancer cases. In each case, images were acquired at three phases (phase 1, before injection of the contrast agent; phase 2, 1 min after injection; phase 3, 5 min after injection). After image acquisition, a rectangular ROI (region of interest) in each phase image was marked by radiologists. After cropping, the three-phase ROI images were multiplied by combination weights, and the linearly combined images were concatenated and fed into a deep learning neural network. The network was trained to classify the subtypes of renal cell carcinoma, using the drawn ROIs as inputs and the biopsy results as labels. It showed about 0.85 accuracy, 0.64-0.98 sensitivity, 0.83-0.93 specificity, and 0.9 AUC. The proposed framework, based on deep learning and radiologist-provided ROIs, showed promising results in renal cell subtype classification. We hope it will aid future research on this subject and that it can work alongside radiologists in classifying lesion subtypes in real clinical situations.
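The weighted combination and concatenation of the three-phase ROIs can be sketched as below. The weight values and the exact channel layout are assumptions for illustration; the paper does not specify them here.

```python
import numpy as np

# Hypothetical combination weights for the three CT phases
# (pre-contrast, 1 min, 5 min); the actual values would be tuned.
W = np.array([0.2, 0.5, 0.3])

def combine_phases(rois, weights=W):
    """Linearly combine the per-phase ROI crops, then concatenate the
    weighted phases with the combined image as network input channels."""
    rois = np.stack(rois)                        # (3, H, W)
    weighted = weights[:, None, None] * rois     # scale each phase
    combined = weighted.sum(axis=0)              # linear combination
    net_input = np.concatenate([weighted, combined[None]], axis=0)
    return net_input                             # (4, H, W)

phases = [np.full((32, 32), v, float) for v in (10.0, 40.0, 25.0)]
x = combine_phases(phases)
print(x.shape)       # (4, 32, 32)
print(x[3, 0, 0])    # 0.2*10 + 0.5*40 + 0.3*25 = 29.5
```

Stacking the weighted phases as channels lets a standard 2D network see both the individual contrast phases and their combination in a single input tensor.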


Subject(s)
Carcinoma, Renal Cell/diagnostic imaging , Deep Learning , Kidney Neoplasms/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Humans , Kidney/diagnostic imaging , Reproducibility of Results , Republic of Korea , Sensitivity and Specificity
3.
Phys Med Biol ; 62(19): 7714-7728, 2017 Sep 15.
Article in English | MEDLINE | ID: mdl-28753132

ABSTRACT

In this research, we exploited a deep learning framework to differentiate distinctive types of lesions and nodules in the breast acquired with ultrasound imaging. A biopsy-proven benchmarking dataset was built from 5151 patient cases containing a total of 7408 ultrasound breast images, representative of semi-automatically segmented lesions associated with masses. The dataset comprised 4254 benign and 3154 malignant lesions. The developed method includes histogram equalization, image cropping, and margin augmentation. The GoogLeNet convolutional neural network was trained on the database to differentiate benign and malignant tumors. Networks were trained on the data with and without augmentation; both showed an area under the curve of over 0.9. The networks showed an accuracy of about 0.9 (90%), a sensitivity of 0.86, and a specificity of 0.96. Although target regions of interest (ROIs) were selected by radiologists, meaning that radiologists still have to point out the location of the ROI, the classification of malignant lesions showed promising results. If used by radiologists in clinical situations, this method can classify malignant lesions in a short time and support the radiologist's diagnosis in discriminating malignant lesions. Therefore, the proposed method can work in tandem with human radiologists to improve performance, which is a fundamental purpose of computer-aided diagnosis.
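The preprocessing steps named above (histogram equalization, cropping, margin augmentation) can be sketched roughly as follows. This is a generic Python/numpy sketch under assumed 8-bit grayscale inputs, not the authors' pipeline.

```python
import numpy as np

def hist_equalize(img, levels=256):
    """Histogram equalization of an 8-bit grayscale image via the
    normalized cumulative histogram used as a lookup table."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())
    lut = np.round(cdf * (levels - 1)).astype(np.uint8)
    return lut[img]

def crop_with_margin(img, box, margin):
    """Crop an ROI box (y0, y1, x0, x1) with an extra margin, clipped
    to the image; varying `margin` yields the margin augmentation."""
    y0, y1, x0, x1 = box
    h, w = img.shape
    return img[max(0, y0 - margin):min(h, y1 + margin),
               max(0, x0 - margin):min(w, x1 + margin)]

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64), dtype=np.uint8)
roi = crop_with_margin(hist_equalize(img), (16, 48, 16, 48), margin=4)
print(roi.shape)  # (40, 40)
```

Generating several crops of the same lesion with different margins gives the network multiple views of each ROI, which is the usual motivation for margin augmentation.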


Subject(s)
Breast Neoplasms/classification , Diagnosis, Computer-Assisted/methods , Image Interpretation, Computer-Assisted/methods , Machine Learning , Ultrasonography, Mammary/methods , Breast Neoplasms/diagnostic imaging , Breast Neoplasms/pathology , Female , Humans , Middle Aged , Neural Networks, Computer , ROC Curve
4.
J Healthc Eng ; 2017: 2193635, 2017.
Article in English | MEDLINE | ID: mdl-29576861

ABSTRACT

The purpose of this research is to achieve uniform spatial resolution in CT (computed tomography) images without hardware modification. The main idea is to use a geometric optics model, which provides an approximate blurring PSF (point spread function) kernel that varies according to the distance from the X-ray tube to each pixel. The FOV (field of view) was divided into several band regions based on distance from the X-ray source, and each region was deconvolved with a different deconvolution kernel. Although more precise calculation of the PSF for deconvolution is possible as the number of subbands increases, we set the number of subbands to 11, which appears to be a balancing point that limits noise boost while retaining the MTF (modulation transfer function) increase. As the results show, subband-wise deconvolution makes image resolution (in terms of MTF) relatively uniform across the FOV, so spatial resolution in CT images can be made uniform without additional equipment. The beauty of this method is that it can be applied to any CT system, as long as the specific system parameters are known and the appropriate PSF deconvolution maps of the system are determined. The proposed algorithm shows promising results in improving spatial resolution uniformity while avoiding excessive noise boost.
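The subband-wise deconvolution scheme can be sketched as below. The Gaussian PSF, the sigma-versus-distance model, and the Wiener-like regularized inverse filter are illustrative assumptions standing in for the paper's geometric-optics PSF; only the band-splitting structure follows the description above.

```python
import numpy as np

def gaussian_psf(shape, sigma):
    """Isotropic Gaussian PSF, centered, unit sum (a stand-in for the
    distance-dependent blur kernel from a geometric-optics model)."""
    h, w = shape
    y, x = np.mgrid[:h, :w]
    g = np.exp(-((y - h // 2) ** 2 + (x - w // 2) ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def deconvolve(img, psf, eps=1e-2):
    """Regularized (Wiener-like) inverse filtering in Fourier space."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    F = np.fft.fft2(img)
    return np.real(np.fft.ifft2(F * np.conj(H) / (np.abs(H) ** 2 + eps)))

def subband_deconvolve(img, dist_map, n_bands=11):
    """Split the FOV into n_bands regions by distance from the X-ray
    source and deconvolve each band with its own PSF (here sigma grows
    with distance purely for illustration)."""
    edges = np.linspace(dist_map.min(), dist_map.max(), n_bands + 1)
    out = np.zeros_like(img, dtype=float)
    for i in range(n_bands):
        band = (dist_map >= edges[i]) & (dist_map <= edges[i + 1])
        sigma = 1.0 + 0.2 * i                    # assumed distance model
        out[band] = deconvolve(img, gaussian_psf(img.shape, sigma))[band]
    return out

img = np.zeros((64, 64)); img[32, 32] = 1.0      # point-like test object
dist = np.hypot(*np.mgrid[:64, :64])             # toy source-distance map
restored = subband_deconvolve(img, dist)
print(restored.shape)  # (64, 64)
```

Only the pixels inside each band are taken from that band's deconvolved image, so each region of the FOV ends up restored with the kernel matched to its distance from the source.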


Subject(s)
Algorithms , Image Processing, Computer-Assisted , Models, Statistical , Tomography, X-Ray Computed , Humans
5.
J Digit Imaging ; 28(5): 594-603, 2015 Oct.
Article in English | MEDLINE | ID: mdl-25708894

ABSTRACT

The purpose of contrast-enhanced digital mammography (CEDM) is to facilitate detection and characterization of lesions in the breast using intravenous injection of an iodinated contrast agent. CEDM produces iodine images with gray levels proportional to the iodine concentration at each pixel, which can be considered a quantification of iodine. While dual-energy CEDM requires accurate knowledge of the compressed breast thickness for this quantification, the accuracy of the built-in thickness measurement is known to be unsatisfactory. Triple-energy CEDM, which provides a third image, can alleviate this limitation of dual-energy CEDM; however, if a triple-exposure technique is applied, the risk of motion artifact increases. An energy-resolving photon-counting detector (PCD), which can acquire multispectral X-ray images, can reduce this risk. In this research, an easily implementable method for iodine quantification in breast imaging was proposed and applied to images of a breast phantom with various iodine concentrations, simulating lesions filled with different iodine concentrations in the breast. The results show that the proposed method can accurately quantify the iodine concentrations in the breast phantom.
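The multispectral quantification idea can be illustrated with a basis-material decomposition sketch: with three energy bins, the per-bin log-attenuations form a linear system in three unknown material thicknesses. The attenuation coefficients below are invented for illustration and are not from the paper; the least-squares solve is a generic technique, not necessarily the authors' method.

```python
import numpy as np

# Assumed effective linear attenuation coefficients per energy bin,
# rows: energy bins, cols: [glandular, adipose, iodine].
# Illustrative values only.
MU = np.array([
    [0.80, 0.50, 30.0],
    [0.55, 0.40, 12.0],
    [0.40, 0.32,  6.0],
])

def decompose(log_atten):
    """Solve log-attenuation = MU @ thickness for the three basis
    materials at one pixel (a least-squares sketch of multispectral
    material decomposition)."""
    t, *_ = np.linalg.lstsq(MU, log_atten, rcond=None)
    return t

truth = np.array([2.0, 2.5, 0.05])   # cm glandular, cm adipose, cm iodine
measured = MU @ truth                # ideal noiseless measurement
est = decompose(measured)
print(np.allclose(est, truth))       # True
```

With a photon-counting detector all three bins come from one acquisition, which is why this approach avoids the motion-artifact risk of sequential triple exposures.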


Subject(s)
Absorptiometry, Photon/methods , Contrast Media , Iodine/administration & dosage , Mammography , Phantoms, Imaging , Radiographic Image Enhancement , Artifacts , Female , Humans , Reproducibility of Results
6.
IEEE Trans Med Imaging ; 33(1): 74-84, 2014 Jan.
Article in English | MEDLINE | ID: mdl-24043372

ABSTRACT

An easily implementable tissue cancellation method for dual-energy mammography is proposed to reduce anatomical noise and enhance lesion visibility. For dual-energy calibration, the images of the imaged object are directly mapped onto the images of a customized calibration phantom: each pixel pair of the low- and high-energy images of the object is compared to the pixel pairs of the low- and high-energy images of the calibration phantom, with correspondence measured by the absolute difference between the pixel values of the imaged object and those of the phantom, and the closest pixel pair of the calibration phantom images is selected. After calibration using this direct mapping, regions with lesions yielded a different thickness from the background tissues. Taking advantage of this thickness difference, the visibility of cancerous lesions was enhanced with increased contrast-to-noise ratio, depending on lesion size and breast thickness. However, some tissue near the edge of the imaged object still remained after tissue cancellation. These residuals appear to be due to the heel effect, scattering, nonparallel X-ray beam geometry, and the Poisson distribution of photons. To improve performance further, scattering and the heel effect should be compensated.
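The direct-mapping step can be sketched as a nearest-neighbor lookup over calibration pixel pairs. The brute-force search and the toy thickness values are illustrative; the abstract's absolute-difference criterion is the only part taken from the description.

```python
import numpy as np

def direct_map(low, high, calib_low, calib_high, calib_thickness):
    """For each (low, high) pixel pair of the imaged object, find the
    calibration-phantom pixel pair with the smallest absolute
    difference and return that phantom pixel's known thickness."""
    obj = np.stack([low.ravel(), high.ravel()], axis=1)       # (N, 2)
    cal = np.stack([calib_low.ravel(), calib_high.ravel()], axis=1)
    # Summed absolute difference from every object pair to every
    # calibration pair.
    d = np.abs(obj[:, None, :] - cal[None, :, :]).sum(axis=2)
    nearest = d.argmin(axis=1)
    return calib_thickness.ravel()[nearest].reshape(low.shape)

# Toy calibration: phantom thickness grows with the (low, high) values.
cal_lo = np.array([[10., 20.], [30., 40.]])
cal_hi = np.array([[ 5., 10.], [15., 20.]])
cal_t  = np.array([[ 1.,  2.], [ 3.,  4.]])   # cm
obj_lo = np.array([[21., 39.]]); obj_hi = np.array([[11., 19.]])
print(direct_map(obj_lo, obj_hi, cal_lo, cal_hi, cal_t))  # [[2. 4.]]
```

In practice the exhaustive pairwise search would be replaced by a lookup table or a k-d tree, but the nearest-pair selection is the same.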


Subject(s)
Breast Neoplasms/diagnostic imaging , Image Enhancement/instrumentation , Mammography/instrumentation , Phantoms, Imaging/standards , Radiography, Dual-Energy Scanned Projection/instrumentation , Calibration , Equipment Design , Equipment Failure Analysis , Female , Humans , Image Enhancement/methods , Image Enhancement/standards , Mammography/standards , Radiography, Dual-Energy Scanned Projection/standards , Reproducibility of Results , Sensitivity and Specificity
7.
Eur Radiol ; 20(6): 1476-84, 2010 Jun.
Article in English | MEDLINE | ID: mdl-20016902

ABSTRACT

PURPOSE: We developed a multiple logistic regression model, an artificial neural network (ANN), and a support vector machine (SVM) model to predict the outcome of a prostate biopsy, and compared the accuracy of each model. METHOD: One thousand seventy-seven consecutive patients who had undergone transrectal ultrasound (TRUS)-guided prostate biopsy were enrolled in the study. Clinical decision models were constructed from input data of age, digital rectal examination findings, prostate-specific antigen (PSA), PSA density (PSAD), PSAD in the transitional zone, and TRUS findings. The patients were randomly divided into training and test groups. Areas under the receiver operating characteristic (ROC) curve (AUC, Az) were calculated to summarize the overall performance of each decision model for the task of prostate cancer prediction. RESULTS: The Az values of the ROC curves for multiple logistic regression analysis, the ANN, and the SVM were 0.768, 0.778, and 0.847, respectively. Pairwise comparison of the ROC curves determined that the performance of the SVM was superior to that of the ANN or the multiple logistic regression model. CONCLUSION: Image-based clinical decision support models allow patients to be informed of the actual probability of having prostate cancer.
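The AUC (Az) metric used to compare the three models above can be computed directly from risk scores via the rank-sum identity. The toy labels and scores below are invented for illustration; only the metric itself corresponds to the abstract.

```python
import numpy as np

def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U identity:
    the probability that a random positive case scores higher than a
    random negative case (ties count half)."""
    labels = np.asarray(labels, bool)
    scores = np.asarray(scores, float)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Toy biopsy outcomes (1 = cancer) and hypothetical model risk scores.
y = [0, 0, 1, 0, 1, 1]
s = [0.1, 0.4, 0.35, 0.8, 0.65, 0.9]
print(round(auc(y, s), 3))  # 0.667
```

Computing Az for each model's scores on the same test set and comparing the curves pairwise is exactly the evaluation protocol the abstract describes.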


Subject(s)
Artificial Intelligence , Decision Support Systems, Clinical , Image Interpretation, Computer-Assisted/methods , Logistic Models , Pattern Recognition, Automated/methods , Prostatic Neoplasms/diagnostic imaging , Ultrasonography/methods , Adult , Aged , Aged, 80 and over , Decision Support Techniques , Humans , Image Enhancement/methods , Male , Middle Aged , Rectum/diagnostic imaging , Regression Analysis , Reproducibility of Results , Sensitivity and Specificity