Results 1 - 15 of 15
1.
J Med Syst; 41(9): 144, 2017 Aug 10.
Article in English | MEDLINE | ID: mdl-28799130

ABSTRACT

This paper introduces a near-set based segmentation method for the extraction and quantification of mucin regions for detecting mucinous carcinoma (MC), a subtype of invasive ductal carcinoma (IDC). From a histological point of view, the presence of mucin is one of the indicators of this carcinoma. To detect MC, the proposed method comprises pre-processing by colour correction and colour transformation, followed by near-set based segmentation and post-processing to delineate only the mucin regions in histological images at 40× magnification. The segmentation step works in two phases, Learn and Run. In the pre-processing step, a white-balance method is used for colour correction of the microscopic images (RGB format). These images are transformed into the HSI (Hue, Saturation, and Intensity) colour space, and the H-plane is extracted to obtain better visual separation of the different histological regions (background, mucin and tissue). Thereafter, the histogram of the H-plane is optimally partitioned to find a set representation for each region. In the Learn phase, features of typical mucin pixels and unlabeled pixels are learnt in terms of the coverage of observed sets in the sample space surrounding the pixel under consideration. In the Run phase, the unlabeled pixels are clustered as mucin or non-mucin based on their indiscernibility with ideal mucin, i.e. whether their feature values differ within a tolerance limit. The experiment is performed on grade-I and grade-II MC, and the average segmentation accuracy for extracting mucin areas lies within the confidence interval [97.36, 97.70]%. In addition, the percentage of mucin present in a histological image is computed to track the alteration of this diagnostic indicator in MC detection.
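The colour transformation step, extracting the H-plane after moving from RGB to HSI, can be sketched for a single pixel with the standard HSI hue formula. This is a generic textbook formula, not the authors' code; the function name and normalized [0, 1] RGB inputs are assumptions:

```python
import math

def rgb_to_hsi_hue(r, g, b):
    """Hue (degrees) of an RGB pixel per the standard HSI conversion."""
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den == 0:
        return 0.0  # achromatic pixel: hue is undefined, return 0 by convention
    theta = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
    return theta if b <= g else 360.0 - theta
```

Applied per pixel, this yields the H-plane in which background, mucin, and tissue regions separate more cleanly than in RGB.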


Subject(s)
Adenocarcinoma, Mucinous , Color , Humans , Mucins
2.
ArXiv; 2024 Apr 11.
Article in English | MEDLINE | ID: mdl-38903738

ABSTRACT

Whole Slide Images (WSI), obtained by high-resolution digital scanning of microscope slides at multiple scales, are the cornerstone of modern Digital Pathology. However, they pose a particular challenge to AI-based analysis because pathology labeling is typically done at the slide level rather than the tile level. Not only are medical diagnoses recorded at the specimen level; oncogene mutation status is also determined experimentally, and recorded by initiatives like The Cancer Genome Atlas (TCGA), at the slide level. This creates a dual challenge: a) accurately predicting the overall cancer phenotype and b) finding out which cellular morphologies are associated with it at the tile level. To address these challenges, a weakly supervised Multiple Instance Learning (MIL) approach was explored for two prevalent cancer types, Invasive Breast Carcinoma (TCGA-BRCA) and Lung Squamous Cell Carcinoma (TCGA-LUSC), for tumor detection at low magnification levels and for TP53 mutations at various levels. Our results show that a novel additive implementation of MIL matched the performance of the reference implementation (AUC 0.96) and was only slightly outperformed by Attention MIL (AUC 0.97). More interestingly from the perspective of the molecular pathologist, these AI architectures exhibit distinct sensitivities to morphological features (through the detection of Regions of Interest, RoI) at different magnification levels. Tellingly, TP53 mutation was most sensitive to features at the higher magnifications, where cellular morphology is resolved.
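The additive flavour of MIL described above can be sketched in a few lines: each tile gets its own logit, the logits are summed into a slide-level logit, and so each tile's additive contribution is directly inspectable, which is what makes RoI attribution straightforward. This is an illustrative linear scorer under assumed shapes, not the paper's model:

```python
import numpy as np

def additive_mil(tile_feats, w, b):
    """Slide-level probability from per-tile logits (hypothetical linear scorer).

    tile_feats: (n_tiles, n_features) array of tile embeddings.
    Returns the bag probability and the per-tile contributions.
    """
    tile_logits = tile_feats @ w + b       # one logit per tile
    bag_logit = tile_logits.sum()          # additive pooling over the bag
    return 1.0 / (1.0 + np.exp(-bag_logit)), tile_logits
```

Attention MIL replaces the plain sum with a learned weighted sum; the additive form trades a little AUC for tile-level interpretability.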

3.
IEEE Trans Biomed Circuits Syst; 17(2): 312-322, 2023 04.
Article in English | MEDLINE | ID: mdl-37028013

ABSTRACT

This work presents an artificial intelligence (AI) framework for real-time, personalized sepsis prediction four hours before onset through fusion of electrocardiogram (ECG) data and the patient's electronic medical record. An on-chip classifier combines an analog reservoir computer and an artificial neural network to perform prediction without a front-end data converter or feature extraction, which reduces energy by 13× compared to a digital baseline at a normalized power efficiency of 528 TOPS/W, and by 159× compared to RF transmission of all digitized ECG samples. The proposed AI framework predicts sepsis onset with 89.9% and 92.9% accuracy on patient data from Emory University Hospital and MIMIC-III, respectively. The framework is non-invasive and requires no lab tests, which makes it suitable for at-home monitoring.
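The analog reservoir computer in the classifier can be approximated in software by an echo state network: a fixed random recurrent layer projects the incoming ECG samples into a rich state from which a small trained readout predicts. A minimal sketch, with the reservoir size and spectral radius chosen arbitrarily (the paper's analog circuit implementation differs):

```python
import numpy as np

rng = np.random.default_rng(1)

class Reservoir:
    """Minimal echo state reservoir, a software stand-in for an
    analog reservoir computer; sizes and spectral radius are assumptions."""

    def __init__(self, n_in, n_res=100, rho=0.9):
        self.w_in = rng.uniform(-1, 1, (n_res, n_in))
        w = rng.uniform(-1, 1, (n_res, n_res))
        # Rescale so the spectral radius is rho, keeping dynamics stable.
        w *= rho / np.max(np.abs(np.linalg.eigvals(w)))
        self.w = w
        self.x = np.zeros(n_res)

    def step(self, u):
        # Nonlinear state update driven by the input sample(s) u.
        self.x = np.tanh(self.w_in @ u + self.w @ self.x)
        return self.x
```

Only the readout on top of the state needs training, which is what keeps the on-chip energy budget small.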


Subject(s)
Artificial Intelligence , Sepsis , Humans , Signal Processing, Computer-Assisted , Electronic Health Records , Electrocardiography
4.
Phys Imaging Radiat Oncol; 28: 100520, 2023 Oct.
Article in English | MEDLINE | ID: mdl-38077272

ABSTRACT

Background and purpose: Contouring of organs at risk is important for studying health effects following breast radiotherapy. However, manual contouring is time-consuming and subject to variability. The purpose of this study was to develop a deep learning-based method to automatically segment multiple structures on breast radiotherapy planning computed tomography (CT) images. Materials and methods: We used data from 118 patients, including 90 diagnostic CT scans with expert structure delineations for training and 28 breast radiotherapy planning CT images for testing. The radiotherapy CT images also had expert delineations for evaluating performance. We targeted a total of eleven organs at risk including five heart substructures. Segmentation performance was evaluated using the metrics of Dice similarity coefficient (DSC), overlap fraction, volume similarity, Hausdorff distance, mean surface distance, and dose. Results: The average DSC achieved on the radiotherapy planning images was 0.94 ± 0.02 for the whole heart, 0.96 ± 0.02 and 0.97 ± 0.01 for the left and right lung, 0.61 ± 0.10 for the esophagus, 0.81 ± 0.04 and 0.86 ± 0.04 for left and right atrium, 0.91 ± 0.02 and 0.84 ± 0.04 for left and right ventricle, and 0.21 ± 0.11 for the left anterior descending artery (LAD), respectively. Except for the LAD, the median difference in mean dose to these structures was small with absolute (relative) differences < 0.1 Gy (6 %). Conclusions: Except for the LAD, our method demonstrated excellent performance and can be generalized to segment additional structures of interest.
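The Dice similarity coefficient used throughout the evaluation above compares a predicted mask against the expert delineation; a minimal implementation:

```python
import numpy as np

def dice(pred, truth):
    # Dice similarity coefficient between two binary masks:
    # 2 * |A ∩ B| / (|A| + |B|), ranging from 0 (disjoint) to 1 (identical).
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom
```

The very low DSC for the LAD (0.21) against 0.94 for the whole heart reflects how strongly this metric penalizes small, thin structures, where a few misplaced voxels dominate the overlap.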

5.
Comput Biol Med; 143: 105298, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35220076

ABSTRACT

The COVID-19 (coronavirus disease 2019) pandemic had affected more than 186 million people, with over 4 million deaths worldwide, by June 2021, a magnitude that strained global healthcare systems. Chest Computed Tomography (CT) scans have a potential role in the diagnosis and prognostication of COVID-19. A diagnostic system that is cost-efficient and convenient to operate on resource-constrained devices like mobile phones would enhance the clinical use of chest CT scans and provide swift, mobile, and accessible diagnostic capabilities. This work proposes a novel Android application that detects COVID-19 infection from chest CT scans using a highly efficient and accurate deep learning algorithm. It further creates an attention heatmap, overlaid on the segmented lung parenchyma in the chest CT scans, that shows the regions of infection in the lungs; the heatmap algorithm was developed as part of this work and verified by radiologists. We propose a novel selection approach combined with multi-threading for faster generation of heatmaps on a mobile device, which reduces the processing time by about 93%. The neural network trained to detect COVID-19 achieves an F1 score and accuracy, both of 99.58%, and a sensitivity of 99.69%, better than most results in the domain of COVID-19 diagnosis from CT scans. This work will be beneficial in high-volume practices and will help doctors triage patients quickly and efficiently for early diagnosis of COVID-19.
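The multi-threaded heatmap generation can be sketched with a thread pool that processes tiles of the scan in parallel. The per-tile attention function here is a hypothetical stand-in (simple min-max normalization) for the app's CNN-derived attention, and the worker count is an assumption:

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def tile_heatmap(tile):
    # Placeholder per-tile "attention": min-max normalization to [0, 1].
    # The real app derives attention from the network's activations.
    t = tile.astype(float)
    span = t.max() - t.min()
    return (t - t.min()) / span if span else np.zeros_like(t)

def heatmap_parallel(tiles, workers=4):
    # Map the per-tile computation over a thread pool; tiles are
    # independent, so the work parallelizes cleanly.
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return list(ex.map(tile_heatmap, tiles))
```

Because the tiles share no state, the speedup comes almost for free; the paper's reported 93% reduction also includes their tile-selection step.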

6.
Sci Rep; 12(1): 5711, 2022 04 05.
Article in English | MEDLINE | ID: mdl-35383233

ABSTRACT

The objective of this work is to develop a fusion artificial intelligence (AI) model that combines patient electronic medical record (EMR) and physiological sensor data to accurately predict early risk of sepsis. The fusion AI model has two components: an on-chip AI model that continuously analyzes patient electrocardiogram (ECG) data, and a cloud AI model that combines the EMR with prediction scores from the on-chip model to produce a fused sepsis onset score. The on-chip AI model is designed using analog circuits for high energy efficiency, enabling integration with a resource-constrained wearable device. Combining EMR and physiological sensor data improves prediction performance over either alone, and the late-fusion model achieves 93% accuracy in predicting sepsis 4 h before onset. The key differentiation of this work from the existing sepsis prediction literature is the use of a single-modality patient vital sign (ECG) and simple demographic information instead of comprehensive laboratory test results and multiple vital signs. This simple configuration and high accuracy make our solution well suited for real-time, at-home self-monitoring.
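The late-fusion step, combining the on-chip ECG score with an EMR-based score, can be sketched as a weighted blend of the two modality scores. The logistic EMR scorer and the blending weight alpha below are illustrative assumptions, not the paper's cloud model:

```python
import numpy as np

def late_fusion(ecg_score, emr_features, w_emr, bias, alpha=0.5):
    """Blend the on-chip ECG risk score with a logistic score computed
    from EMR features. alpha weights the two modalities (an assumption).
    """
    emr_score = 1.0 / (1.0 + np.exp(-(emr_features @ w_emr + bias)))
    return alpha * ecg_score + (1 - alpha) * emr_score
```

Late fusion keeps the two models independent, so the heavy ECG analysis can stay on the wearable while only a scalar score is sent to the cloud.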


Subject(s)
Artificial Intelligence , Sepsis , Electronic Health Records , Humans , Machine Learning , Sepsis/diagnosis , Vital Signs
7.
PLoS One; 17(3): e0263916, 2022.
Article in English | MEDLINE | ID: mdl-35286309

ABSTRACT

OBJECTIVES: Ground-glass opacity (GGO), a hazy, gray-appearing density on computed tomography (CT) of the lungs, is one of the hallmark features of SARS-CoV-2 infection in COVID-19 patients. This AI-driven study focuses on the segmentation, morphology, and distribution patterns of GGOs. METHOD: We use an AI-driven unsupervised machine learning approach called PointNet++ to detect and quantify GGOs in CT scans of COVID-19 patients and to assess the severity of the disease. We conducted our study on "MosMedData", which contains CT lung scans of 1110 patients with or without COVID-19 infection. We quantify the morphologies of GGOs using Minkowski tensors and compute the abnormality score of individual regions of the segmented lung and GGOs. RESULTS: PointNet++ detects GGOs with the highest evaluation accuracy (98%), average class accuracy (95%), and intersection over union (92%) using only a fraction of the 3D data. On average, the shapes of GGOs in the COVID-19 datasets deviate from sphericity by 15%, and the anisotropies of GGOs are dominated by dipole and hexapole components. These anisotropies may help quantitatively delineate the GGOs of COVID-19 from those of other lung diseases. CONCLUSION: PointNet++ and the Minkowski tensor-based morphological approach, together with abnormality analysis, provide radiologists and clinicians with a valuable set of tools for interpreting CT lung scans of COVID-19 patients. Implementation would be particularly useful in countries severely devastated by COVID-19, such as India, where the number of cases has outstripped available resources, creating delays or even breakdowns in patient care. This AI-driven approach synthesizes both the unique GGO distribution pattern and the severity of the disease to allow for more efficient diagnosis, triaging, and conservation of limited resources.
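The reported 15% deviation from sphericity can be made concrete with Wadell's sphericity, one standard scalar measure of how sphere-like a segmented GGO is given its volume and surface area. The paper's Minkowski-tensor measures are richer; this is a simplified illustration:

```python
import math

def sphericity(volume, area):
    """Wadell sphericity: surface area of the equal-volume sphere divided
    by the shape's actual surface area. 1 for a perfect sphere, <1 otherwise."""
    return math.pi ** (1 / 3) * (6 * volume) ** (2 / 3) / area
```

A GGO deviating from sphericity by 15% in this sense would score about 0.85, closer to a sphere than, say, a unit cube (about 0.806).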


Subject(s)
COVID-19/diagnostic imaging , Lung/pathology , Radiographic Image Interpretation, Computer-Assisted/methods , Artificial Intelligence , COVID-19/pathology , Female , Humans , India , Lung/diagnostic imaging , Male , Patient Acuity , Retrospective Studies , Tomography, X-Ray Computed/methods , Unsupervised Machine Learning
8.
IEEE Access; 9: 79829-79840, 2021.
Article in English | MEDLINE | ID: mdl-34178560

ABSTRACT

Tumor-infiltrating lymphocytes (TILs) act as immune cells against cancer tissues. Manual assessment of TILs is usually error-prone, tedious, costly, and subject to inter- and intra-observer variability. Machine learning approaches can solve these issues, but they require a large amount of labeled data for model training, which is expensive and not readily available. In this study, we present an efficient generative adversarial network, TilGAN, to generate high-quality synthetic pathology images, followed by classification of TIL and non-TIL regions. The proposed architecture consists of a generator network and a discriminator network; the novelty lies in the TilGAN architecture, loss functions, and evaluation techniques. Our TilGAN-generated images achieved a higher Inception score than the real images (2.90 vs. 2.32), a lower kernel Inception distance (1.44), and a lower Fréchet Inception distance (0.312). The synthetic images also passed a Turing test performed by experienced pathologists and clinicians. We further extended our evaluation and used almost one million synthetic images generated by TilGAN to train a classification model, which achieved 97.83% accuracy, a 97.37% F1-score, and a 97% area under the curve. Our extensive experiments and superior outcomes show the efficiency and effectiveness of the proposed TilGAN architecture, which can also be applied to image synthesis for other types of images.
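The Inception score used above to compare TilGAN's synthetic images to real ones is the exponential of the average KL divergence between each image's class distribution p(y|x) and the marginal p(y). A minimal sketch taking a precomputed probability matrix (in practice the rows come from an Inception network's softmax):

```python
import numpy as np

def inception_score(p_yx):
    """IS = exp(mean_x KL(p(y|x) || p(y))) for a (n_images, n_classes)
    matrix of class probabilities; higher means sharper, more diverse samples."""
    p_y = p_yx.mean(axis=0, keepdims=True)               # marginal class distribution
    kl = (p_yx * (np.log(p_yx + 1e-12) - np.log(p_y + 1e-12))).sum(axis=1)
    return float(np.exp(kl.mean()))
```

Identical, uninformative predictions give a score of 1; confident predictions spread evenly over k classes give a score of k, which is why a higher score indicates both per-image confidence and overall diversity.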

9.
medRxiv; 2021 Jul 08.
Article in English | MEDLINE | ID: mdl-34268519

ABSTRACT

OBJECTIVES: Ground-glass opacity (GGO), a hazy, gray-appearing density on computed tomography (CT) of the lungs, is one of the hallmark features of SARS-CoV-2 infection in COVID-19 patients. This AI-driven study focuses on the segmentation, morphology, and distribution patterns of GGOs. METHOD: We use an AI-driven unsupervised machine learning approach called PointNet++ to detect and quantify GGOs in CT scans of COVID-19 patients and to assess the severity of the disease. We conducted our study on "MosMedData", which contains CT lung scans of 1110 patients with or without COVID-19 infection. We quantify the morphologies of GGOs using Minkowski tensors and compute the abnormality score of individual regions of the segmented lung and GGOs. RESULTS: PointNet++ detects GGOs with the highest evaluation accuracy (98%), average class accuracy (95%), and intersection over union (92%) using only a fraction of the 3D data. On average, the shapes of GGOs in the COVID-19 datasets deviate from sphericity by 15%, and the anisotropies of GGOs are dominated by dipole and hexapole components. These anisotropies may help quantitatively delineate the GGOs of COVID-19 from those of other lung diseases. CONCLUSION: PointNet++ and the Minkowski tensor-based morphological approach, together with abnormality analysis, provide radiologists and clinicians with a valuable set of tools for interpreting CT lung scans of COVID-19 patients. Implementation would be particularly useful in countries severely devastated by COVID-19, such as India, where the number of cases has outstripped available resources, creating delays or even breakdowns in patient care. This AI-driven approach synthesizes both the unique GGO distribution pattern and the severity of the disease to allow for more efficient diagnosis, triaging, and conservation of limited resources.
KEY POINTS: Our approach to GGO analysis has four distinguishing features: (1) We combine an unsupervised computer vision approach with convex hull and convex points algorithms to segment and preserve the actual structure of the lung. (2) To the best of our knowledge, we are the first group to use the PointNet++ architecture for 3D visualization, segmentation, classification, and pattern analysis of GGOs. (3) We make abnormality predictions using a deep network and a Cox proportional hazards model on lung CT images of COVID-19 patients. (4) We quantify the shapes and sizes of GGOs using Minkowski tensors to understand the morphological variations of GGOs within the COVID-19 cohort.

10.
IEEE Trans Image Process; 27(5): 2189-2200, 2018 May.
Article in English | MEDLINE | ID: mdl-29432100

ABSTRACT

We present an efficient deep learning framework for identifying, segmenting, and classifying cell membranes and nuclei from human epidermal growth factor receptor-2 (HER2)-stained breast cancer images with minimal user intervention. This addresses a long-standing issue for pathologists, because manual quantification of HER2 is error-prone, costly, and time-consuming. Hence, we propose a deep learning-based HER2 deep neural network (Her2Net). The convolutional and deconvolutional parts of the proposed Her2Net framework consist mainly of multiple convolution layers, max-pooling layers, spatial pyramid pooling layers, deconvolution layers, up-sampling layers, and trapezoidal long short-term memory (TLSTM). A fully connected layer and a softmax layer are used for classification and error estimation, and HER2 scores are calculated from the classification results. The main contributions of the proposed Her2Net framework are the implementation of TLSTM and a deep learning framework for cell membrane and nucleus detection, segmentation, classification, and HER2 scoring. Her2Net achieved 96.64% precision, 96.79% recall, a 96.71% F-score, 93.08% negative predictive value, 98.33% accuracy, and a 6.84% false-positive rate. Our results demonstrate the high accuracy and wide applicability of the proposed Her2Net in the context of HER2 scoring for breast cancer evaluation.
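The reported F-score is the harmonic mean of precision and recall; plugging in the paper's precision and recall reproduces the reported value:

```python
def f_score(precision, recall):
    # Harmonic mean of precision and recall (the F1 measure).
    return 2 * precision * recall / (precision + recall)
```

For example, `f_score(96.64, 96.79)` gives approximately 96.71, matching the F-score reported in the abstract.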


Subject(s)
Breast Neoplasms/diagnostic imaging , Cell Membrane/classification , Cell Nucleus/classification , Image Interpretation, Computer-Assisted/methods , Breast/chemistry , Breast/cytology , Breast/diagnostic imaging , Breast Neoplasms/chemistry , Cell Membrane/chemistry , Cell Nucleus/chemistry , Deep Learning , Female , Histocytochemistry , Humans , Receptor, ErbB-2
11.
Comput Med Imaging Graph; 64: 29-40, 2018 03.
Article in English | MEDLINE | ID: mdl-29409716

ABSTRACT

Mitosis detection is one of the critical factors in cancer prognosis, carrying significant diagnostic information for breast cancer grading. It provides vital clues for estimating the aggressiveness and proliferation rate of the tumour. Manual mitosis quantification from whole slide images is a very labor-intensive and challenging task. The aim of this study is to propose a supervised model to detect mitosis signatures in breast histopathology whole slide images (WSI). The model combines a deep learning architecture with handcrafted features drawn from previous medical challenges (MITOS @ ICPR 2012, AMIDA-13) and project expertise (MICO ANR TecSan). The deep learning architecture mainly consists of five convolution layers, four max-pooling layers, four rectified linear units (ReLU), and two fully connected layers. ReLU is used after each convolution layer as the activation function, and a dropout layer is included after the first fully connected layer to avoid overfitting. The handcrafted features comprise morphological, textural and intensity features. The proposed architecture achieves 92% precision, 88% recall and a 90% F-score. Prospectively, the proposed model will be beneficial in routine examination, providing pathologists with an efficient and effective second opinion for breast cancer grading from whole slide images. Last but not least, this model could lead junior and senior pathologists, as well as medical researchers, to a better understanding and evaluation of breast cancer stage and genesis.
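The spatial size flowing through the described stack of five convolution and four max-pooling layers can be traced with the standard output-size formulas. The 3×3 kernels, unit padding, and 64×64 input below are assumptions; the abstract gives only the layer counts:

```python
def conv_out(size, kernel=3, stride=1, pad=1):
    # Standard convolution output-size formula.
    return (size + 2 * pad - kernel) // stride + 1

def pool_out(size, kernel=2, stride=2):
    # Standard max-pooling output-size formula.
    return (size - kernel) // stride + 1

size = 64                          # hypothetical input patch side length
for _ in range(4):                 # conv then 2x2 pool, repeated four times
    size = pool_out(conv_out(size))
size = conv_out(size)              # fifth convolution, no pooling after it
# With these assumptions the 64x64 patch shrinks to 4x4 feature maps,
# which would then feed the two fully connected layers.
```

Tracing shapes like this is a quick sanity check that a proposed stack of layers actually produces a sensible input for its fully connected head.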


Subject(s)
Breast Neoplasms/diagnostic imaging , Breast Neoplasms/pathology , Mitosis , Supervised Machine Learning , Algorithms , Coloring Agents , Eosine Yellowish-(YS) , Female , Fluorescent Dyes , Hematoxylin , Humans , Image Processing, Computer-Assisted
12.
Sci Rep; 7(1): 3213, 2017 06 12.
Article in English | MEDLINE | ID: mdl-28607456

ABSTRACT

Being a non-histone protein, Ki-67 is one of the essential biomarkers for immunohistochemical assessment of the proliferation rate in breast cancer screening and grading. The Ki-67 signature is sensitive to radiotherapy and chemotherapy. Owing to random morphological, color and intensity variations of cell nuclei (immunopositive and immunonegative), manual assessment of the Ki-67 score is error-prone and time-consuming. Several machine learning approaches have been reported, but none has addressed deep learning-based hotspot detection and proliferation scoring. In this article, we propose an advanced deep learning model for computerized recognition of candidate hotspots and subsequent proliferation-rate scoring by quantifying Ki-67 appearance in breast cancer immunohistochemical images. Unlike existing Ki-67 scoring techniques, our methodology uses a Gamma mixture model (GMM) with Expectation-Maximization for seed-point detection and patch selection, and deep learning with a decision layer for hotspot detection and proliferation scoring. Experimental results show 93% precision, 88% recall and a 91% F-score. The model's performance has also been compared with pathologists' manual annotations and recently published articles. The proposed deep learning framework should prove highly reliable and beneficial to junior and senior pathologists for fast and efficient Ki-67 scoring.
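The seed-point step fits a mixture model with Expectation-Maximization. As a simplified stand-in for the paper's Gamma mixture (whose M-step has no closed form), here is EM for a two-component Gaussian mixture on 1-D intensities; the structure of the E- and M-steps is the same:

```python
import numpy as np

def em_two_gaussians(x, iters=50):
    """Fit a two-component 1-D Gaussian mixture with EM.
    Returns mixing weights, means, and standard deviations."""
    mu = np.array([x.min(), x.max()], dtype=float)   # spread the initial means
    sigma = np.array([x.std(), x.std()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        pdf = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        r = pi * pdf
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and spreads from responsibilities.
        n = r.sum(axis=0)
        pi = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n) + 1e-6
    return pi, mu, sigma
```

In the Ki-67 setting, the fitted components would separate immunopositive from immunonegative intensity populations, and the component boundary supplies candidate seed points.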


Subject(s)
Biomarkers, Tumor/analysis , Breast Neoplasms/metabolism , Cell Proliferation , Deep Learning , Immunohistochemistry/methods , Ki-67 Antigen/analysis , Breast Neoplasms/diagnosis , Female , Humans , Pathology, Clinical/methods , Reproducibility of Results , Sensitivity and Specificity
13.
Tissue Cell; 48(5): 461-74, 2016 Oct.
Article in English | MEDLINE | ID: mdl-27528421

ABSTRACT

Cytological evaluation by microscopic image-based characterization [imprint cytology (IC) and fine needle aspiration cytology (FNAC)] plays an integral role in the primary screening/detection of breast cancer. The sensitivity of IC and FNAC as screening tools depends on image quality and the pathologist's level of expertise. Computer-aided diagnosis (CAD), built on machine learning and image processing algorithms, is used to assist pathologists. This study reviews the manual and computer-aided techniques used so far in breast cytology, and surveys diagnostic applications to estimate the role of CAD in breast cancer diagnosis. This paper presents an overview of the image processing and pattern recognition techniques that have been used to address several issues in breast cytology-based CAD, including slide preparation, staining, microscopic imaging, pre-processing, segmentation, feature extraction and diagnostic classification. The review gives readers better insight into the state-of-the-art knowledge on CAD-based breast cancer diagnosis to date.


Subject(s)
Breast Neoplasms/diagnostic imaging , Cytodiagnosis , Diagnosis, Computer-Assisted/methods , Image Processing, Computer-Assisted/methods , Biopsy, Fine-Needle , Breast Neoplasms/diagnosis , Breast Neoplasms/pathology , Diagnosis, Computer-Assisted/trends , Female , Humans , Image Processing, Computer-Assisted/trends
14.
Tissue Cell; 48(3): 265-73, 2016 Jun.
Article in English | MEDLINE | ID: mdl-26971129

ABSTRACT

Mucinous carcinoma (MC) of the breast is a very rare (∼1-7% of all breast cancers) form of invasive ductal carcinoma. The presence of pools of extracellular mucin is one of its most important histological features. This paper develops a quantitative computer-aided methodology for the automated identification of mucin areas and their percentage in tissue histological images. The proposed method includes pre-processing (colour space transformation and colour normalization), mucin region segmentation, post-processing, and performance evaluation. The proposed algorithm achieved 97.74% segmentation accuracy against ground truths. In addition, the percentage of mucin present in the tissue regions is expressed as the mucin index (MI), which is used for grading MC (pure, moderately, or minimally mucinous).
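A plausible reading of the mucin index, the percentage of the segmented tissue occupied by mucin, can be written directly; the paper's exact definition and its grading cut-offs may differ:

```python
import numpy as np

def mucin_index(mucin_mask, tissue_mask):
    # Percentage of tissue-region pixels labeled as mucin by the segmentation.
    tissue = np.count_nonzero(tissue_mask)
    if tissue == 0:
        return 0.0
    return 100.0 * np.count_nonzero(np.logical_and(mucin_mask, tissue_mask)) / tissue
```

A grading rule would then compare this percentage against thresholds separating pure, moderately, and minimally mucinous tumours.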


Subject(s)
Adenocarcinoma, Mucinous/diagnostic imaging , Breast Neoplasms/diagnostic imaging , Image Processing, Computer-Assisted/methods , Mucins/biosynthesis , Adenocarcinoma, Mucinous/metabolism , Adenocarcinoma, Mucinous/pathology , Breast Neoplasms/metabolism , Breast Neoplasms/pathology , Female , Humans , Neoplasm Grading
15.
IEEE Trans Nanobioscience; 14(6): 625-33, 2015 Sep.
Article in English | MEDLINE | ID: mdl-25935044

ABSTRACT

Erythrocytes (red blood cells, RBCs), the most common type of blood cell in humans, are well known for their ability to transport oxygen throughout the body via hemoglobin. Alterations in their membrane skeletal proteins modify their shape and mechanical properties, resulting in several diseases. Atomic force microscopy (AFM), an emerging technique, allows non-invasive imaging of a cell and its membrane, and characterization of surface roughness at micrometer/nanometer resolution with minimal sample preparation. AFM imaging provides direct measurement of single-cell morphology and its alterations, as well as quantitative data on surface properties; hence, AFM studies of human RBCs have picked up pace over the last decade. The aim of this paper is to review the various applications of AFM for characterizing the topology of human RBCs. AFM has been used to study surface characteristics such as the nanostructure of membranes, the cytoskeleton, microstructure, fluidity, and the vascular endothelium. Various modes of AFM imaging have been used to measure surface properties such as stiffness, roughness, and elasticity, and topological alterations of erythrocytes under different pathological conditions have also been investigated. Thus, AFM-based studies combined with image processing techniques can provide detailed insight into the morphology and membrane properties of human erythrocytes at the nanoscale.
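The roughness measures such AFM studies typically report can be computed from a height map as the average (Ra) and root-mean-square (Rq) deviation from the mean plane; a minimal sketch (a real analysis would first remove tilt and scanner bow):

```python
import numpy as np

def roughness(height):
    """Average (Ra) and RMS (Rq) roughness of an AFM height map,
    measured relative to the mean height."""
    z = height - height.mean()
    return np.abs(z).mean(), np.sqrt((z ** 2).mean())
```

Ra and Rq computed over a membrane region give the nanometer-scale numbers used to compare healthy and pathological erythrocyte surfaces.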


Subject(s)
Erythrocyte Membrane/ultrastructure , Microscopy, Atomic Force/methods , Humans , Nanotechnology , Surface Properties