Results 1 - 10 of 10
1.
J Pathol Inform ; 13: 100093, 2022.
Article in English | MEDLINE | ID: mdl-36268061

ABSTRACT

Background: Renal cell carcinoma is the most common type of malignant kidney tumor and is responsible for 14,830 deaths per year in the United States. Among the four most common subtypes of renal cell carcinoma, clear cell renal cell carcinoma has the worst prognosis, whereas clear cell papillary renal cell carcinoma appears to have no malignant potential. Distinguishing between these two subtypes can be difficult due to morphologic overlap on examination of histopathological preparations stained with hematoxylin and eosin. Ancillary techniques, such as immunohistochemistry, can be helpful, but they are not universally available. We propose and evaluate a new deep learning framework for tumor classification that distinguishes clear cell renal cell carcinoma from clear cell papillary renal cell carcinoma. Methods: Our deep learning framework is composed of three convolutional neural networks. We divided whole-slide kidney images into patches of three different sizes, with each network processing a specific patch size. Our framework provides both patchwise and pixelwise classification. The histopathological kidney dataset comprises 64 whole-slide images belonging to 4 categories: fat, parenchyma, clear cell renal cell carcinoma, and clear cell papillary renal cell carcinoma. The final output of our framework is an image map in which each pixel is assigned to one class. To maintain spatial consistency, we processed the map with Gauss-Markov random field smoothing. Results: Our framework succeeded in classifying the four classes and showed superior performance compared to well-established state-of-the-art methods (pixel accuracy: 0.89 ResNet18, 0.92 proposed). Conclusions: Deep learning techniques have significant potential for cancer diagnosis.
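The final map-consistency step can be illustrated with a deliberately simplified stand-in: the framework applies Gauss-Markov random field smoothing, which the 3x3 majority filter below only approximates (the function name and neighbourhood size are illustrative, not the paper's implementation).

```python
from collections import Counter

def smooth_label_map(labels, n_iter=1):
    """Simplified proxy for MRF-style smoothing: replace each pixel's
    class with the majority class of its 3x3 neighbourhood."""
    h, w = len(labels), len(labels[0])
    for _ in range(n_iter):
        out = [row[:] for row in labels]
        for i in range(h):
            for j in range(w):
                votes = Counter()
                for di in (-1, 0, 1):
                    for dj in (-1, 0, 1):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < h and 0 <= nj < w:
                            votes[labels[ni][nj]] += 1
                out[i][j] = votes.most_common(1)[0][0]
        labels = out
    return labels
```

More iterations remove larger isolated islands of misclassified pixels, at the cost of eroding thin structures.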

2.
Bioengineering (Basel) ; 9(10)2022 Oct 09.
Article in English | MEDLINE | ID: mdl-36290506

ABSTRACT

In this paper, a machine learning-based system for predicting the required level of respiratory support in COVID-19 patients is proposed. The level of respiratory support is divided into three classes: class 0 (minimal support), class 1 (non-invasive support), and class 2 (invasive support). A two-stage classification system is built: first, class 0 is separated from the other classes; then, class 1 is separated from class 2. The system is built using a dataset collected retrospectively from 3491 patients admitted to tertiary care hospitals at the University of Louisville Medical Center. Feature selection based on analysis of variance (ANOVA) is demonstrated, and principal component analysis (PCA) is used for dimensionality reduction. The XGBoost classifier achieved the best classification accuracy in both stages: 84% in the first stage and 83% in the second.
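The two-stage decision logic can be sketched as below; the threshold rules stand in for the trained classifiers, and the feature names and cut-offs (`spo2`, `resp_rate`) are purely hypothetical placeholders, not values from the paper.

```python
def two_stage_predict(x, stage1, stage2):
    """Stage 1 separates minimal support (class 0) from the rest;
    stage 2 separates non-invasive (1) from invasive (2) support."""
    if stage1(x) == 0:
        return 0
    return stage2(x)

# Hypothetical threshold rules standing in for the trained classifiers
# (the study uses XGBoost on ANOVA-selected, PCA-reduced features).
def stage1_rule(x):   # 0 = minimal support, 1 = needs support
    return 0 if x["spo2"] >= 94 else 1

def stage2_rule(x):   # 1 = non-invasive, 2 = invasive
    return 1 if x["resp_rate"] < 30 else 2
```

Cascading two binary classifiers this way lets each stage specialize on its own decision boundary instead of forcing one model to learn all three classes at once.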

3.
Commun Biol ; 5(1): 934, 2022 09 09.
Article in English | MEDLINE | ID: mdl-36085302

ABSTRACT

There is a need for a reliable in vitro system that can accurately replicate the cardiac physiological environment for drug testing. The limited availability of human heart tissue culture systems has led to inaccurate interpretations of cardiac-related drug effects. Here, we developed a cardiac tissue culture model (CTCM) that can electro-mechanically stimulate heart slices with physiological stretches during systole and diastole of the cardiac cycle. After 12 days in culture, this approach partially improved the viability of heart slices but did not completely maintain their structural integrity. Therefore, following a small-molecule screen, we found that incorporating 100 nM tri-iodothyronine (T3) and 1 µM dexamethasone (Dex) into our culture media preserved the microscopic structure of the slices for 12 days. When combined with T3/Dex treatment, the CTCM system maintained the transcriptional profile, viability, metabolic activity, and structural integrity at the same levels as fresh heart tissue for 12 days. Furthermore, overstretching the cardiac tissue induced cardiac hypertrophic signaling in culture, providing proof of concept for the ability of the CTCM to emulate cardiac stretch-induced hypertrophic conditions. In conclusion, the CTCM can emulate cardiac physiology and pathophysiology in culture for an extended time, thereby enabling reliable drug screening.


Subject(s)
Biomimetics , Heart , Cardiomegaly , Culture Media , Humans , Systole
4.
Sensors (Basel) ; 22(6)2022 Mar 18.
Article in English | MEDLINE | ID: mdl-35336513

ABSTRACT

Diabetic retinopathy (DR) refers to the ophthalmological complications of diabetes mellitus. It is primarily a disease of the retinal vasculature that can lead to vision loss. Optical coherence tomography angiography (OCTA) can detect changes in the retinal vascular system, which can help in the early detection of DR. In this paper, we describe a novel framework that detects DR from OCTA by capturing the appearance and morphological markers of the retinal vascular system. The framework consists of two main steps: (1) extracting the retinal vascular system from OCTA images using a joint Markov-Gibbs random field (MGRF) model of OCTA image appearance, and (2) estimating the distance map inside the extracted vascular system, which serves as an imaging marker describing the morphology of the retinal vascular (RV) system. The OCTA image, the extracted vascular system, and the estimated RV distance map are then composed into a three-dimensional matrix used as input to a convolutional neural network (CNN). The motivation for this data representation is that it combines low-level data with high-level processed data, allowing the CNN to capture significant features and better distinguish DR from the normal retina. This was applied at multiple scales, including the original full-dimension images as well as sub-images extracted from them. The proposed approach was tested on in vivo data from 91 patients, qualitatively graded by retinal experts, and quantitatively validated using three metrics: sensitivity, specificity, and overall accuracy. The results demonstrate the capability of the proposed approach, which outperformed current deep learning and feature-based DR detection approaches.


Subject(s)
Diabetic Retinopathy , Tomography, Optical Coherence , Diabetic Retinopathy/diagnostic imaging , Fluorescein Angiography/methods , Humans , Machine Learning , Retinal Vessels/diagnostic imaging , Tomography, Optical Coherence/methods
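Step (2) above, the distance map inside the extracted vascular system, can be sketched as a multi-source breadth-first search from the background into the vessels. This simplified version uses 4-connectivity (Manhattan distance), which need not match the paper's exact implementation.

```python
from collections import deque

def vessel_distance_map(mask):
    """For each vessel pixel (mask == 1), the distance to the nearest
    background pixel (mask == 0), via multi-source BFS, 4-connectivity.
    Larger values mark the centerlines of thicker vessels."""
    h, w = len(mask), len(mask[0])
    INF = float("inf")
    dist = [[INF] * w for _ in range(h)]
    q = deque()
    for i in range(h):               # seed BFS from every background pixel
        for j in range(w):
            if mask[i][j] == 0:
                dist[i][j] = 0
                q.append((i, j))
    while q:
        i, j = q.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and dist[ni][nj] == INF:
                dist[ni][nj] = dist[i][j] + 1
                q.append((ni, nj))
    return dist
```

Stacking the OCTA image, the binary vessel mask, and this distance map channel-wise then yields the three-dimensional input matrix described in the abstract.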
5.
Cardiovasc Eng Technol ; 13(1): 170-180, 2022 02.
Article in English | MEDLINE | ID: mdl-34402037

ABSTRACT

PURPOSE: Drug-induced cardiac toxicity is a disruption of cardiomyocyte function that is highly correlated with the organization of subcellular structures. Cellular structures can be analyzed using microscopy imaging data, but conventional image analysis methods may miss structural deteriorations that are difficult to perceive. Here, we propose an image-based deep learning pipeline for the automated quantification of drug-induced structural deterioration using a 3D heart slice culture model. METHODS: In our deep learning pipeline, we quantify the structural deterioration induced by three anticancer drugs (doxorubicin, sunitinib, and Herceptin) with known adverse cardiac effects. The proposed deep learning framework is composed of three convolutional neural networks that process three different image sizes. The results of the three networks are combined to produce a classification map showing the locations of structural deterioration in the input cardiac image. RESULTS: The technique produces classification maps that accurately detect drug-induced structural deterioration at the pixel level. CONCLUSION: This technology could be widely applied for unbiased quantification of the structural effects of cardiotoxins on heart slices.


Subject(s)
Artificial Intelligence , Myocytes, Cardiac , Image Processing, Computer-Assisted/methods , Neural Networks, Computer
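The fusion of the three networks' outputs into one classification map can be sketched as follows, assuming each network emits a per-class probability map resampled to a common pixel grid. The averaging rule is an assumption for illustration; the abstract does not specify how the results are combined.

```python
def fuse_scale_maps(maps):
    """Average per-class probability maps from several scale-specific
    networks (each an H x W x C nested list on the same pixel grid)
    and take the argmax class per pixel."""
    h, w, c = len(maps[0]), len(maps[0][0]), len(maps[0][0][0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            avg = [sum(m[i][j][k] for m in maps) / len(maps)
                   for k in range(c)]
            out[i][j] = max(range(c), key=avg.__getitem__)
    return out
```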
7.
Sci Rep ; 11(1): 20189, 2021 10 12.
Article in English | MEDLINE | ID: mdl-34642404

ABSTRACT

Renal cell carcinoma is the most common type of kidney cancer. There are several subtypes of renal cell carcinoma with distinct clinicopathologic features. Among the subtypes, clear cell renal cell carcinoma is the most common and tends to portend a poor prognosis. In contrast, clear cell papillary renal cell carcinoma has an excellent prognosis. These two subtypes are primarily classified based on histopathologic features. However, a subset of cases can have a significant degree of histopathologic overlap. In cases with ambiguous histologic features, the correct diagnosis depends on the pathologist's experience and the use of immunohistochemistry. We propose a new method to address this diagnostic task based on a deep learning pipeline for automated classification. The model can detect tumor and non-tumoral portions of the kidney and classify the tumor as either clear cell renal cell carcinoma or clear cell papillary renal cell carcinoma. Our framework consists of three convolutional neural networks; whole-slide kidney images were divided into patches of three different sizes for input into the networks. Our approach provides both patchwise and pixelwise classification. The kidney histology dataset consists of 64 whole-slide images. Our framework produces an image map that classifies the slide image at the pixel level. Furthermore, we applied generalized Gauss-Markov random field smoothing to maintain consistency in the map. Our approach classified the four classes accurately and surpassed other state-of-the-art methods (pixel accuracy: 0.89 ResNet18, 0.92 proposed). We conclude that deep learning has the potential to augment the pathologist's capabilities by providing automated classification of histopathological images.


Subject(s)
Carcinoma, Renal Cell/diagnosis , Image Interpretation, Computer-Assisted/methods , Kidney Neoplasms/diagnosis , Carcinoma, Renal Cell/pathology , Deep Learning , Diagnosis, Differential , Humans , Kidney Neoplasms/pathology , Markov Chains , Neural Networks, Computer , Prognosis
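The patch pipeline described above (three patch sizes, one per network) can be sketched as follows. The default sizes are hypothetical placeholders, since the abstract does not state them.

```python
def multi_scale_patches(image, center, sizes=(32, 64, 128)):
    """Crop square patches of several sizes centered on the same pixel;
    each size would feed one of the three networks. Patches near the
    image border are truncated rather than padded."""
    i, j = center
    h, w = len(image), len(image[0])
    patches = []
    for s in sizes:
        half = s // 2
        top, left = max(0, i - half), max(0, j - half)
        bottom, right = min(h, i - half + s), min(w, j - half + s)
        patches.append([row[left:right] for row in image[top:bottom]])
    return patches
```

Co-centered patches give each network the same pixel's local, intermediate, and wide context, which is what lets patchwise predictions be combined into a pixelwise map.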
8.
Sensors (Basel) ; 21(16)2021 Aug 13.
Article in English | MEDLINE | ID: mdl-34450898

ABSTRACT

Uveitis is one of the leading causes of severe vision loss and can lead to blindness worldwide. Clinical records show that early and accurate detection of vitreous inflammation can potentially reduce the blindness rate. In this paper, a novel framework is proposed for automatic quantification of the vitreous on optical coherence tomography (OCT), with particular application to the grading of vitreous inflammation. The proposed pipeline consists of two stages: vitreous region segmentation followed by a neural network classifier. In the first stage, the vitreous region is automatically segmented using a U-net convolutional neural network (U-CNN). For the input to the U-CNN, we utilized three image descriptors to account for the visual similarity between the vitreous region and other tissues. Namely, we developed an adaptive appearance-based approach that utilizes prior shape information derived from a labeled dataset of manually segmented images. This descriptor is adaptively updated during segmentation and is integrated with the original greyscale image and a distance-map descriptor to construct a fused input image for the U-net segmentation stage. In the second stage, a fully connected neural network (FCNN) is proposed as a classifier to assess vitreous inflammation severity. To achieve this, a discriminatory feature of the segmented vitreous region is extracted: the signal intensities of the vitreous are represented by a cumulative distribution function (CDF). The constructed CDFs are then used to train and test the FCNN classifier for grading (grades 0 to 3). The performance of the proposed pipeline was evaluated on a dataset of 200 OCT images. Our segmentation approach achieved higher performance than related methods, as evidenced by a Dice coefficient of 0.988 ± 0.01 and a Hausdorff distance of 0.0003 ± 0.001 mm. The FCNN classifier achieved an average accuracy of 86%, supporting the benefits of the proposed pipeline as an aid for early and objective diagnosis of uveal inflammation.


Subject(s)
Image Processing, Computer-Assisted , Uveitis , Humans , Neural Networks, Computer , Tomography, Optical Coherence , Uveitis/diagnostic imaging
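The CDF feature described above can be sketched as follows for 8-bit grey levels; the fixed 256-level sampling is an assumption, chosen so every segmented region yields a feature vector of the same length regardless of its pixel count.

```python
def intensity_cdf(pixels, levels=256):
    """Empirical cumulative distribution function of a segmented
    region's integer grey levels, sampled at every intensity level.
    Returns a fixed-length feature vector suitable for a classifier."""
    n = len(pixels)
    counts = [0] * levels
    for p in pixels:
        counts[p] += 1
    cdf, running = [], 0
    for c in counts:
        running += c
        cdf.append(running / n)
    return cdf
```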
9.
Comput Med Imaging Graph ; 81: 101717, 2020 04.
Article in English | MEDLINE | ID: mdl-32222684

ABSTRACT

Cardiac MRI has been widely used for noninvasive assessment of cardiac anatomy and function as well as heart diagnosis. Estimating physiological heart parameters for diagnosis requires accurate segmentation of the left ventricle (LV) from cardiac MRI. Therefore, we propose a novel deep learning approach for the automated segmentation and quantification of the LV from cardiac cine MR images, aiming for lower errors in the estimated heart parameters than previous studies. Our framework starts with accurate localization of the LV blood-pool center point using a fully convolutional neural network (FCN) architecture called FCN1. Then, a region of interest (ROI) containing the LV is extracted from all heart sections. The extracted ROIs are used to segment the LV cavity and myocardium via a novel FCN architecture called FCN2. The FCN2 network has several bottleneck layers and a smaller memory footprint than conventional architectures such as U-net. Furthermore, a new loss function called the radial loss, which minimizes the distance between the predicted and true contours of the LV, is introduced into our model. Following myocardial segmentation, functional and mass parameters of the LV are estimated. The Automated Cardiac Diagnosis Challenge (ACDC-2017) dataset was used to validate our framework, which achieved better segmentation, more accurate estimation of cardiac parameters, and lower error than other methods applied to the same dataset. Furthermore, we showed that our segmentation approach generalizes well across datasets by testing its performance on a locally acquired dataset. In summary, we propose a deep learning approach that can be translated into a clinical tool for heart diagnosis.


Subject(s)
Deep Learning , Heart Ventricles/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging, Cine , Humans
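The radial loss can be sketched as a symmetric mean nearest-point distance between contours. This is a simplified reading of "minimizes the distance between the predicted and true contours," not the paper's exact formulation.

```python
import math

def radial_loss(pred_contour, true_contour):
    """Symmetric mean nearest-point distance between two contours,
    each given as a list of (x, y) points. Zero iff the point sets
    coincide; penalizes predicted boundaries that drift from truth."""
    def mean_nearest(a, b):
        return sum(min(math.hypot(x - u, y - v) for u, v in b)
                   for x, y in a) / len(a)
    return 0.5 * (mean_nearest(pred_contour, true_contour)
                  + mean_nearest(true_contour, pred_contour))
```

Unlike a plain pixel-overlap loss, a contour-distance term directly penalizes boundary error, which is what the LV functional parameters depend on.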
10.
Sci Rep ; 9(1): 5948, 2019 04 11.
Article in English | MEDLINE | ID: mdl-30976081

ABSTRACT

This paper introduces a deep-learning-based computer-aided diagnostic (CAD) system for the early detection of acute renal transplant rejection. For noninvasive detection of kidney rejection at an early stage, the proposed CAD system is based on the fusion of imaging markers and clinical biomarkers. The former are derived from diffusion-weighted magnetic resonance imaging (DW-MRI) by estimating apparent diffusion coefficients (ADCs) representing blood perfusion and water diffusion inside the transplanted kidney. The clinical biomarkers, namely creatinine clearance (CrCl) and serum plasma creatinine (SPCr), are integrated into the proposed CAD system as kidney functionality indexes to enhance its diagnostic performance. The ADC maps are estimated for a user-defined region of interest (ROI) that encompasses the whole kidney. The estimated ADCs are fused with the clinical biomarkers, and the fused data are then used to train and test a convolutional neural network (CNN)-based classifier. The CAD system was tested on DW-MRI scans collected from 56 subjects from geographically diverse populations and different scanner types/image collection protocols. The overall accuracy of the proposed system is 92.9%, with 93.3% sensitivity and 92.3% specificity in distinguishing non-rejected kidney transplants from rejected ones. These results demonstrate the potential of the proposed system for reliable noninvasive diagnosis of renal transplant status for any DW-MRI scan, regardless of geographic differences and/or imaging protocol.


Subject(s)
Algorithms , Diagnosis, Computer-Assisted/methods , Graft Rejection/diagnosis , Image Interpretation, Computer-Assisted/methods , Kidney Transplantation/adverse effects , Neural Networks, Computer , Postoperative Complications/diagnosis , Adolescent , Adult , Aged , Diffusion Magnetic Resonance Imaging , Female , Follow-Up Studies , Glomerular Filtration Rate , Graft Rejection/etiology , Graft Rejection/pathology , Graft Survival , Humans , Kidney Function Tests , Male , Middle Aged , Postoperative Complications/etiology , Postoperative Complications/pathology , Prognosis , Risk Factors , Young Adult
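The ADC estimation step can be sketched with the standard mono-exponential diffusion model relating the b=0 signal to a diffusion-weighted signal. The use of a single b-value and its default of 800 s/mm² are illustrative assumptions; the study's acquisition details are not given here.

```python
import math

def adc_map(s0, sb, b=800.0):
    """Per-pixel mono-exponential ADC estimate from a b=0 image (s0)
    and a diffusion-weighted image (sb) acquired at b-value b:
        S_b = S_0 * exp(-b * ADC)  =>  ADC = ln(S_0 / S_b) / b
    Inputs are 2D nested lists of positive signal intensities."""
    return [[math.log(s0[i][j] / sb[i][j]) / b
             for j in range(len(s0[0]))]
            for i in range(len(s0))]
```

The resulting ADC maps (units mm²/s) are the imaging markers that get fused with CrCl and SPCr before the CNN classifier.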