Results 1 - 12 of 12
1.
Bioinformatics; 39(6). 2023 Jun 1.
Article in English | MEDLINE | ID: mdl-37252823

ABSTRACT

MOTIVATION: Bone marrow (BM) examination is one of the most important procedures for diagnosing hematologic disorders and is typically performed under the microscope with an oil-immersion objective lens at 100× objective magnification. Mitotic figure detection and identification, in turn, are critical not only for accurate cancer diagnosis and grading but also for predicting therapy success and survival. Fully automated BM examination and mitotic figure examination from whole-slide images (WSIs) are in high demand but remain challenging and poorly explored. First, microscopic image examination is complex and poorly reproducible owing to cell-type diversity, subtle intralineage differences along the multitype cell maturation process, overlapping cells, lipid interference, and stain variation. Second, manual annotation of whole-slide images is tedious, laborious, and subject to intraobserver variability, which restricts the supervised information to a limited number of easily identifiable, scattered cells annotated by humans. Third, when the training data are sparsely labeled, many unlabeled objects of interest are wrongly treated as background, which severely confuses AI learners. RESULTS: This article presents an efficient and fully automatic CW-Net approach that addresses the three issues mentioned above and demonstrates superior performance on both BM examination and mitotic figure examination. The experimental results demonstrate the robustness and generalizability of the proposed CW-Net on a large BM WSI dataset with 16,456 annotated cells of 19 BM cell types and a large-scale WSI dataset for mitotic figure assessment with 262,481 annotated cells of five cell types. AVAILABILITY AND IMPLEMENTATION: An online web-based system of the proposed method has been created for demonstration (see https://youtu.be/MRMR25Mls1A).
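The third issue above (sparsely labeled training data in which unlabeled objects masquerade as background) is commonly handled by excluding unannotated regions from the loss. The following is a minimal sketch of that idea, not the authors' CW-Net; the function and tensor names are illustrative.

```python
import torch
import torch.nn.functional as F

def masked_cross_entropy(logits, target, annotated):
    """logits: (N, C, H, W); target: (N, H, W) class indices;
    annotated: (N, H, W) bool mask, True where a human actually labeled."""
    loss = F.cross_entropy(logits, target, reduction="none")  # per-pixel loss
    loss = loss[annotated]               # unlabeled pixels contribute nothing
    return loss.mean() if loss.numel() > 0 else loss.sum()

# Toy usage: 2 classes, one 8x8 image, only the top-left quadrant annotated.
logits = torch.randn(1, 2, 8, 8, requires_grad=True)
target = torch.zeros(1, 8, 8, dtype=torch.long)
annotated = torch.zeros(1, 8, 8, dtype=torch.bool)
annotated[:, :4, :4] = True
masked_cross_entropy(logits, target, annotated).backward()
```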


Subject(s)
Image Processing, Computer-Assisted; Microscopy; Humans; Bone Marrow Examination; Reproducibility of Results; Image Processing, Computer-Assisted/methods
2.
J Med Biol Eng; 41(6): 826-843, 2021.
Article in English | MEDLINE | ID: mdl-34744547

ABSTRACT

PURPOSE: Image registration is important in medical applications and has advanced with improvements in healthcare technology in recent years. It has been applied to tasks such as the clinical tracking of events and the updating of treatment plans for radiotherapy and surgery. This study presents a fully automatic registration system for chest X-ray images that generates fusion results for difference analysis. Given the accurate alignment produced by the proposed system, the fusion result indicates the differences in the thoracic area during the treatment process. METHODS: The proposed method consists of a data normalization method, a hybrid L-SVM model that detects lungs, ribs, and clavicles for object recognition, a landmark matching algorithm, two-stage transformation approaches, and a fusion method for difference analysis that highlights the differences in the thoracic area. In the evaluation, a preliminary test was performed to compare three transformation models, followed by a full evaluation comparing the proposed method with two existing elastic registration methods. RESULTS: The results show that the proposed method produces significantly better results than the two benchmark methods (P-value ≤ 0.001). The proposed system achieves the lowest mean registration error distance (MRED) (8.99 mm, 23.55 pixels) and the lowest mean registration error ratio (MRER) with respect to the length of the image diagonal (1.61%), compared to the two benchmark approaches with MREDs of (15.64 mm, 40.97 pixels) and (180.5 mm, 472.69 pixels) and MRERs of 2.81% and 32.51%, respectively. CONCLUSIONS: The experimental results show that the proposed method is capable of accurately aligning chest X-ray images acquired at different times, helping doctors trace individual health status, evaluate treatment effectiveness, and monitor patient recovery for thoracic diseases.
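For readers reproducing the evaluation, here is a minimal sketch of the two reported error metrics, under the assumption that they are computed on matched landmark pairs: MRED as the mean Euclidean distance between corresponding landmarks, and MRER as MRED normalized by the image diagonal. Function and argument names are illustrative.

```python
import numpy as np

def mred_mrer(moved_pts, fixed_pts, image_shape, pixel_spacing_mm=1.0):
    """moved_pts, fixed_pts: (K, 2) arrays of matched (row, col) landmarks;
    image_shape: (height, width) in pixels."""
    dists = np.linalg.norm(moved_pts - fixed_pts, axis=1)  # per-landmark error, px
    mred_px = dists.mean()
    diag_px = np.hypot(*image_shape)                       # image diagonal, px
    mred_mm = mred_px * pixel_spacing_mm
    mrer_pct = mred_px / diag_px * 100.0                   # % of diagonal
    return mred_mm, mrer_pct

# E.g., an error of 23.55 px at ~0.38 mm/px corresponds to roughly 8.99 mm,
# matching the numbers reported above.
```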

3.
Artif Intell Med; 141: 102568, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37295903

ABSTRACT

Overexpression of the human epidermal growth factor receptor 2 (HER2) is a predictive biomarker of therapeutic effect in metastatic breast cancer. Accurate HER2 testing is critical for determining the most suitable treatment for patients. Fluorescence in situ hybridization (FISH) and dual in situ hybridization (DISH) are FDA-approved methods for determining HER2 overexpression. However, analysis of HER2 overexpression is challenging. First, the boundaries of cells are often unclear and blurry, with large variations in cell shapes and signals, making it difficult to identify the precise areas of HER2-related cells. Second, the use of sparsely labeled data, where some unlabeled HER2-related cells are classified as background, can significantly confuse fully supervised AI learning and result in unsatisfactory model outcomes. In this study, we present a weakly supervised Cascade R-CNN (W-CRCNN) model to automatically detect HER2 overexpression in HER2 DISH and FISH images acquired from clinical breast cancer samples. The experimental results demonstrate that the proposed W-CRCNN achieves excellent results in identifying HER2 amplification in three datasets, comprising two DISH datasets and a FISH dataset. For the FISH dataset, the proposed W-CRCNN achieves an accuracy of 0.970±0.022, precision of 0.974±0.028, recall of 0.917±0.065, F1-score of 0.943±0.042, and Jaccard index of 0.899±0.073. For the DISH datasets, the proposed W-CRCNN achieves an accuracy of 0.971±0.024, precision of 0.969±0.015, recall of 0.925±0.020, F1-score of 0.947±0.036, and Jaccard index of 0.884±0.103 for dataset 1, and an accuracy of 0.978±0.011, precision of 0.975±0.011, recall of 0.918±0.038, F1-score of 0.946±0.030, and Jaccard index of 0.884±0.052 for dataset 2. In comparison with the benchmark methods, the proposed W-CRCNN significantly outperforms all benchmark approaches in identifying HER2 overexpression in the FISH and DISH datasets (p<0.05). With its high degree of accuracy, precision, and recall, the results show that the proposed method for DISH analysis of HER2 overexpression in breast cancer patients has significant potential to assist precision medicine.
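The five figures of merit quoted above follow directly from true-positive (TP), false-positive (FP), false-negative (FN), and true-negative (TN) counts once detections have been matched to ground-truth cells. A minimal sketch, illustrative rather than the paper's evaluation code:

```python
def detection_metrics(tp, fp, fn, tn=0):
    """Compute the metrics reported above from matched detection counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    jaccard = tp / (tp + fp + fn)                  # Jaccard index over detections
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1, "jaccard": jaccard}

# Toy counts, purely for illustration:
print(detection_metrics(tp=90, fp=3, fn=8, tn=99))
```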


Subject(s)
Breast Neoplasms; Humans; Female; In Situ Hybridization, Fluorescence/methods; Breast Neoplasms/genetics; Breast Neoplasms/pathology; In Situ Hybridization; Receptor, ErbB-2/genetics; Receptor, ErbB-2/analysis; Receptor, ErbB-2/metabolism
4.
Diagnostics (Basel); 12(6). 2022 Jun 17.
Article in English | MEDLINE | ID: mdl-35741298

ABSTRACT

Precision oncology, which ensures optimized cancer treatment tailored to the unique biology of each patient's disease, has developed rapidly and is of great clinical importance. Deep learning has become the main method for precision oncology. This paper summarizes recent deep-learning approaches relevant to precision oncology and reviews over 150 articles from the last six years. First, we survey the deep-learning approaches categorized by precision oncology task, including estimation of dose distribution for treatment planning, survival analysis and risk estimation after treatment, prediction of treatment response, and patient selection for treatment planning. Second, we provide an overview of the studies by anatomical area, including the brain, bladder, breast, bone, cervix, esophagus, stomach, head and neck, kidneys, liver, lung, pancreas, pelvis, prostate, and rectum. Finally, we highlight the challenges and discuss potential solutions for future research directions.

5.
Diagnostics (Basel); 12(4). 2022 Apr 14.
Article in English | MEDLINE | ID: mdl-35454038

ABSTRACT

Breast cancer is the leading cause of cancer death for women globally. In clinical practice, pathologists visually scan enormous gigapixel microscopic tissue slide images, which is a tedious and challenging task. In breast cancer diagnosis, micro-metastases and especially isolated tumor cells are extremely difficult to detect and easily overlooked, because tiny metastatic foci can be missed in visual examination by medical doctors. However, the literature poorly explores the detection of isolated tumor cells, which could serve as a viable marker for determining the prognosis of T1N0M0 breast cancer patients. To address these issues, we present a deep-learning-based framework for efficient and robust lymph node metastasis segmentation in routinely used, hematoxylin and eosin (H&E)-stained histopathological whole-slide images (WSIs) within minutes. A quantitative evaluation is conducted using 188 WSIs, containing 94 pairs of H&E-stained WSIs and immunohistochemical CK(AE1/AE3)-stained WSIs, the latter used to produce a reliable and objective reference standard. The quantitative results demonstrate that the proposed method achieves 89.6% precision, 83.8% recall, 84.4% F1-score, and 74.9% mIoU, and that it performs significantly better in precision, recall, F1-score, and mIoU (p<0.001) than eight deep-learning approaches, including two recently published models (v3_DCNN and Xception-65), three variants of Deeplabv3+ with three different backbones, and U-Net, SegNet, and FCN. Importantly, the proposed system is shown to be capable of identifying tiny metastatic foci in challenging cases with high probabilities of misdiagnosis in visual inspection, whereas the baseline approaches tend to fail to detect them. For computational time comparison, the proposed method takes 2.4 min to process a WSI using four NVIDIA GeForce GTX 1080 Ti GPU cards and 9.6 min using a single NVIDIA GeForce GTX 1080 Ti GPU card, and is notably faster than the baseline methods (4 times faster than U-Net and SegNet, 5 times faster than FCN, 2 times faster than the three variants of Deeplabv3+, 1.4 times faster than v3_DCNN, and 41 times faster than Xception-65).
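The mIoU figure above is the intersection-over-union averaged over classes. A minimal sketch of how it can be computed from integer label maps (illustrative; not the authors' evaluation code):

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """pred, gt: integer label maps of identical shape."""
    ious = []
    for c in range(num_classes):
        p, g = (pred == c), (gt == c)
        union = np.logical_or(p, g).sum()
        if union == 0:                      # class absent from both maps: skip
            continue
        ious.append(np.logical_and(p, g).sum() / union)
    return float(np.mean(ious))

# Toy usage on random 64x64 binary masks:
pred = np.random.randint(0, 2, (64, 64))
gt = np.random.randint(0, 2, (64, 64))
print(mean_iou(pred, gt, num_classes=2))
```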

6.
Sci Rep; 12(1): 11623, 2022 Jul 8.
Article in English | MEDLINE | ID: mdl-35803996

ABSTRACT

Joint analysis of multiple protein expressions and tissue morphology patterns is important for disease diagnosis, treatment planning, and drug development, and it requires cross-staining alignment of multiple immunohistochemical and histopathological slides. However, cross-staining alignment of enormous gigapixel whole-slide images (WSIs) at single-cell precision is difficult. Beyond the gigantic data dimensions of WSIs, there are large variations in cell appearance and tissue morphology across different stains, together with morphological deformations caused by slide preparation. The goal of this study is to build an image registration framework for cross-staining alignment of gigapixel WSIs of histopathological and immunohistochemical microscopic slides and to assess its clinical applicability. To the authors' best knowledge, this is the first study to perform real-time, fully automatic cross-staining alignment of WSIs at 40× and 20× objective magnification. The proposed WSI registration framework consists of a rapid global image registration module, a real-time interactive field-of-view (FOV) localization model, and a real-time propagated multi-level image registration module. In this study, the proposed method is evaluated on two cancer datasets from two hospitals using different digital scanners: a dual-staining breast cancer dataset with 43 hematoxylin and eosin (H&E) WSIs and 43 immunohistochemical (IHC) CK(AE1/AE3) WSIs, and a triple-staining prostate cancer dataset containing 30 H&E WSIs, 30 IHC CK18 WSIs, and 30 IHC HMCK WSIs. In the evaluation, registration performance is measured not only by registration accuracy but also by computational time. The results show that the proposed method achieves high accuracy of 0.833 ± 0.0674 on the triple-staining prostate cancer dataset and 0.931 ± 0.0455 on the dual-staining breast cancer dataset, and takes only 4.34 s per WSI registration on average. Moreover, for 30.23% of the data, the proposed method takes less than 1 s for WSI registration. In comparison with the benchmark methods, the proposed method demonstrates superior registration accuracy and computational time, showing great potential for assisting medical doctors in identifying cancerous tissues and determining the cancer stage in clinical practice.
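As a rough illustration of the global-then-local idea behind such frameworks, the following sketch estimates a translation on a downsampled thumbnail and refines it at full resolution using phase correlation. It assumes a pure translation and in-memory images, a deliberate simplification of the paper's multi-level, deformation-aware pipeline.

```python
import numpy as np
from skimage.registration import phase_cross_correlation
from skimage.transform import rescale

def coarse_to_fine_shift(fixed, moving, coarse_scale=0.05):
    # Stage 1: global shift estimated on small thumbnails.
    shift_c, _, _ = phase_cross_correlation(
        rescale(fixed, coarse_scale), rescale(moving, coarse_scale))
    shift = shift_c / coarse_scale           # propagate to full resolution
    # Stage 2: refine the residual on the coarsely re-aligned image.
    moved = np.roll(moving, shift.astype(int), axis=(0, 1))
    residual, _, _ = phase_cross_correlation(fixed, moved)
    return shift + residual                  # translation re-aligning moving to fixed

fixed = np.random.rand(512, 512)
moving = np.roll(fixed, (17, -9), axis=(0, 1))   # synthetic offset
print(coarse_to_fine_shift(fixed, moving))       # ≈ (-17, 9)
```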


Subject(s)
Breast Neoplasms; Prostatic Neoplasms; Breast Neoplasms/diagnostic imaging; Eosine Yellowish-(YS); Hematoxylin; Humans; Male; Prostatic Neoplasms/diagnostic imaging; Prostatic Neoplasms/pathology; Staining and Labeling
7.
Cancers (Basel); 14(21). 2022 Oct 28.
Article in English | MEDLINE | ID: mdl-36358732

ABSTRACT

According to the World Health Organization Report 2022, cancer is a leading cause of death, contributing to nearly one out of six deaths worldwide. Early cancer diagnosis and prognosis have become essential for reducing the mortality rate. On the other hand, cancer detection is a challenging task in cancer pathology. Trained pathologists can detect cancer, but their decisions are subject to high intra- and inter-observer variability, which can lead to poor patient care owing to false-positive and false-negative results. In this study, we present a soft-label fully convolutional network (SL-FCN) to assist in breast cancer targeted therapy and thyroid cancer diagnosis, using four datasets. To aid breast cancer targeted therapy, the proposed method automatically segments human epidermal growth factor receptor 2 (HER2) amplification in fluorescence in situ hybridization (FISH) and dual in situ hybridization (DISH) images. To help in thyroid cancer diagnosis, the proposed method automatically segments papillary thyroid carcinoma (PTC) on Papanicolaou-stained fine-needle-aspiration and ThinPrep whole-slide images (WSIs). In the evaluation of segmentation of HER2 amplification in FISH and DISH images, we compare the proposed method with thirteen deep-learning approaches (U-Net; U-Net with InceptionV5; ensembles of U-Net with Inception-v4, Inception-ResNet-v2, and ResNet-34 encoders; SegNet; FCN; modified FCN; YOLOv5; CPN; SOLOv2; BCNet; and DeepLabv3+ with three different backbones: MobileNet, ResNet, and Xception) on three clinical datasets: two DISH datasets at two different magnification levels and a FISH dataset. On DISH breast dataset 1, the proposed method achieves high accuracy of 87.77 ± 14.97%, recall of 91.20 ± 7.72%, and F1-score of 81.67 ± 17.76%; on DISH breast dataset 2, it achieves high accuracy of 94.64 ± 2.23%, recall of 83.78 ± 6.42%, and F1-score of 85.14 ± 6.61%; and on the FISH breast dataset, it achieves high accuracy of 93.54 ± 5.24%, recall of 83.52 ± 13.15%, and F1-score of 86.98 ± 9.85%. Furthermore, the proposed method outperforms most of the benchmark approaches by a significant margin (p < 0.001). In the evaluation of segmentation of PTC on Papanicolaou-stained WSIs, the proposed method is compared with three deep-learning methods: modified FCN, U-Net, and SegNet. The experimental results demonstrate that the proposed method achieves high accuracy of 99.99 ± 0.01%, precision of 92.02 ± 16.6%, recall of 90.90 ± 14.25%, and F1-score of 89.82 ± 14.92%, and it significantly outperforms the baseline methods U-Net and FCN (p < 0.001). With its high degree of accuracy, precision, and recall, the results show that the proposed method could assist breast cancer targeted therapy and thyroid cancer diagnosis with faster evaluation and fewer human judgment errors.
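The "soft label" in SL-FCN refers to per-pixel targets that are probability distributions rather than hard class indices. A minimal sketch of a soft-label cross-entropy loss is shown below; it illustrates the general technique, not the SL-FCN's exact formulation.

```python
import torch
import torch.nn.functional as F

def soft_label_loss(logits, soft_target):
    """logits: (N, C, H, W); soft_target: (N, C, H, W), sums to 1 over C."""
    log_p = F.log_softmax(logits, dim=1)
    return -(soft_target * log_p).sum(dim=1).mean()  # cross-entropy vs. soft target

# Toy usage with synthetic soft labels:
logits = torch.randn(2, 3, 32, 32, requires_grad=True)
soft_target = torch.softmax(torch.randn(2, 3, 32, 32), dim=1)
soft_label_loss(logits, soft_target).backward()
```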

8.
Diagnostics (Basel); 12(9). 2022 Sep 16.
Article in English | MEDLINE | ID: mdl-36140635

ABSTRACT

Lung cancer is the leading cause of cancer-related death worldwide. Accurate nodal staging is critical for determining the treatment strategy for lung cancer patients. Endobronchial-ultrasound-guided transbronchial needle aspiration (EBUS-TBNA) has revolutionized the field of pulmonology and is considered extremely sensitive, specific, and safe for lung cancer staging through rapid on-site evaluation (ROSE), but manual visual inspection of entire slides of EBUS smears is challenging, time-consuming and, worse, subject to large interobserver variability. To satisfy ROSE's needs, a rapid, automated, and accurate diagnosis system using EBUS-TBNA whole-slide images (WSIs) is highly desirable to improve diagnostic accuracy and speed, minimize workload and labor costs, and ensure reproducibility. We present a fast, efficient, and fully automatic deep-convolutional-neural-network-based system for advanced lung cancer staging on gigapixel EBUS-TBNA cytological WSIs. Each WSI was converted into a patch-based hierarchical structure and examined by the proposed deep convolutional neural network, which generates the segmentation of metastatic lesions in EBUS-TBNA WSIs. To the best of the authors' knowledge, this is the first research on fully automated enlarged mediastinal lymph node analysis using EBUS-TBNA cytological WSIs. We evaluated the robustness of the proposed framework on a dataset of 122 WSIs; the proposed method achieved a high precision of 93.4%, sensitivity of 89.8%, DSC of 82.2%, and IoU of 83.2% in the first experiment (37.7% training and 62.3% testing), and a high precision of 91.8 ± 1.2%, sensitivity of 96.3 ± 0.8%, DSC of 94.0 ± 1.0%, and IoU of 88.7 ± 1.8% in the second experiment using three-fold cross-validation. Furthermore, the proposed method significantly outperformed three state-of-the-art baseline models (U-Net, SegNet, and FCN) in precision, sensitivity, DSC, and Jaccard index, based on Fisher's least significant difference (LSD) test (p < 0.001). In a computational time comparison on a WSI, the proposed method was 2.5 times faster than U-Net, 2.3 times faster than SegNet, and 3.4 times faster than FCN, using a single GeForce GTX 1080 Ti. With its high precision and sensitivity, the proposed method demonstrated the potential to reduce the workload of pathologists in their routine clinical practice.
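Patch-based processing of a gigapixel WSI typically means tiling the slide into CNN-sized patches and stitching per-patch predictions back into a slide-level segmentation. A minimal sketch using the OpenSlide library is given below; the file name, tile size, and model_predict call are hypothetical placeholders, not the paper's hierarchy or model.

```python
import numpy as np
import openslide

def iter_patches(slide_path, tile=512, level=0):
    """Yield ((x, y), RGB array) tiles covering the slide at the given level."""
    slide = openslide.OpenSlide(slide_path)
    w, h = slide.level_dimensions[level]
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            # read_region expects level-0 coordinates; fine here for level=0.
            region = slide.read_region((x, y), level, (tile, tile)).convert("RGB")
            yield (x, y), np.asarray(region)

# Hypothetical usage, stitching per-patch predictions into a slide-level mask:
# for (x, y), patch in iter_patches("ebus_tbna_slide.svs"):
#     mask[y:y + 512, x:x + 512] = model_predict(patch)
```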

9.
Sci Data; 9(1): 25, 2022 Jan 27.
Article in English | MEDLINE | ID: mdl-35087101

ABSTRACT

Ovarian cancer is the leading cause of gynecologic cancer death among women. Despite the progress made in ovarian cancer surgery and chemotherapy over the past two decades, most advanced-stage patients develop recurrent cancer and die. The conventional treatment for ovarian cancer is to remove cancerous tissue by surgery followed by chemotherapy; however, patients receiving such treatment remain at great risk of tumor recurrence and progressive resistance. Nowadays, new treatments with molecular-targeted agents have become accessible. Bevacizumab, as a monotherapy and in combination with chemotherapy, has recently been approved by the FDA for the treatment of epithelial ovarian cancer (EOC). Predicting therapeutic effects and individualizing therapeutic strategies are critical, but to the authors' best knowledge, there are no effective biomarkers that can be used to predict patient response to bevacizumab treatment for EOC and peritoneal serous papillary carcinoma (PSPC). This dataset helps researchers explore and develop methods to predict the therapeutic effect of bevacizumab in patients with EOC and PSPC.


Subject(s)
Carcinoma, Ovarian Epithelial; Ovarian Neoplasms; Antineoplastic Agents/therapeutic use; Bevacizumab/therapeutic use; Carcinoma, Ovarian Epithelial/diagnostic imaging; Carcinoma, Ovarian Epithelial/pathology; Carcinoma, Ovarian Epithelial/therapy; Female; Humans; Ovarian Neoplasms/diagnostic imaging; Ovarian Neoplasms/drug therapy; Ovarian Neoplasms/pathology; Ovarian Neoplasms/therapy
10.
Magn Reson Imaging; 80: 58-70, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33905834

ABSTRACT

Magnetic resonance imaging (MRI) uses non-ionizing radiation and is safer than CT and X-ray imaging. MRI is used broadly around the globe for medical diagnostics. One main limitation of MRI is its long data acquisition time. Parallel MRI (pMRI) was introduced in the late 1990s to reduce the MRI data acquisition time. In pMRI, data are acquired by under-sampling the phase-encoding (PE) steps, which introduces aliasing artefacts in the MR images. SENSitivity Encoding (SENSE) is a pMRI-based method that reconstructs a fully sampled MR image from the acquired under-sampled data using the sensitivity information of the receiver coils. In SENSE, precise estimation of the receiver coil sensitivity maps is vital for obtaining good-quality images. The Eigen-value method, recently proposed in the literature for estimating receiver coil sensitivity information, does not require a pre-scan image, unlike other conventional methods of sensitivity estimation. However, the Eigen-value method is computationally intensive and takes a significant amount of time to estimate the receiver coil sensitivity maps. This work proposes a parallel framework for the Eigen-value method of receiver coil sensitivity estimation that exploits its inherent parallelism using graphics processing units (GPUs). We evaluated the performance of the proposed algorithm on in-vivo and simulated MRI datasets (human head and simulated phantom) with peak signal-to-noise ratio (PSNR) and artefact power (AP) as evaluation metrics. The results show that the proposed GPU implementation reduces the execution time of the Eigen-value method of receiver coil sensitivity estimation (providing up to a 30-times speed-up in our experiments) without degrading the quality of the reconstructed image.
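Once the coil sensitivity maps are available, SENSE unfolding itself reduces to a small least-squares solve per aliased pixel. The sketch below illustrates this for uniform undersampling by a factor R along the phase-encoding (row) direction; it is a didactic reconstruction under simplifying assumptions, not the paper's GPU implementation.

```python
import numpy as np

def sense_unfold(aliased, sens, R):
    """aliased: (Nc, H//R, W) complex aliased coil images;
    sens: (Nc, H, W) complex coil sensitivity maps. Returns (H, W) image."""
    nc, hr, w = aliased.shape
    H = hr * R
    out = np.zeros((H, w), dtype=complex)
    for y in range(hr):
        rows = [(y + k * hr) % H for k in range(R)]  # the R overlapped rows
        for x in range(w):
            S = sens[:, rows, x]                     # (Nc, R) encoding matrix
            # Solve S @ pixels = aliased coil values in the least-squares sense.
            out[rows, x] = np.linalg.lstsq(S, aliased[:, y, x], rcond=None)[0]
    return out
```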


Subject(s)
Algorithms; Image Processing, Computer-Assisted; Artifacts; Humans; Magnetic Resonance Imaging; Phantoms, Imaging; Signal-to-Noise Ratio