1 - 20 of 30
1.
Med Image Anal ; 89: 102888, 2023 10.
Article En | MEDLINE | ID: mdl-37451133

Formalizing surgical activities as triplets of the instruments used, actions performed, and target anatomies is becoming a gold-standard approach for surgical activity modeling. The benefit is that this formalization helps to obtain a more detailed understanding of tool-tissue interaction, which can be used to develop better Artificial Intelligence assistance for image-guided surgery. Earlier efforts and the CholecTriplet challenge introduced in 2021 have put together techniques aimed at recognizing these triplets from surgical footage. Estimating also the spatial locations of the triplets would offer more precise intraoperative context-aware decision support for computer-assisted intervention. This paper presents the CholecTriplet2022 challenge, which extends surgical action triplet modeling from recognition to detection. It includes weakly supervised bounding-box localization of every visible surgical instrument (or tool), as the key actor, and the modeling of each tool activity in the form of a triplet. The paper describes a baseline method and 10 new deep learning algorithms presented at the challenge to solve the task. It also provides thorough methodological comparisons of the methods, an in-depth analysis of the results across multiple metrics and across visual and procedural challenges, a discussion of their significance, and useful insights for future research directions and applications in surgery.


Artificial Intelligence , Surgery, Computer-Assisted , Humans , Endoscopy , Algorithms , Surgery, Computer-Assisted/methods , Surgical Instruments
2.
Med Image Anal ; 86: 102803, 2023 05.
Article En | MEDLINE | ID: mdl-37004378

Context-aware decision support in the operating room can foster surgical safety and efficiency by leveraging real-time feedback from surgical workflow analysis. Most existing works recognize surgical activities at a coarse-grained level, such as phases, steps or events, leaving out fine-grained interaction details about the surgical activity; yet those are needed for more helpful AI assistance in the operating room. Recognizing surgical actions as triplets of instrument, action, and target delivers more comprehensive detail about the activities taking place in surgical videos. This paper presents CholecTriplet2021: an endoscopic vision challenge organized at MICCAI 2021 for the recognition of surgical action triplets in laparoscopic videos. The challenge granted private access to the large-scale CholecT50 dataset, which is annotated with action triplet information. In this paper, we present the challenge setup and the assessment of the state-of-the-art deep learning methods proposed by the participants during the challenge. A total of 4 baseline methods from the challenge organizers and 19 new deep learning algorithms from the competing teams are presented to recognize surgical action triplets directly from surgical videos, achieving mean average precision (mAP) ranging from 4.2% to 38.1%. This study also analyzes the significance of the results obtained by the presented approaches, performs a thorough methodological comparison between them and an in-depth result analysis, and proposes a novel ensemble method for enhanced recognition. Our analysis shows that surgical workflow analysis is not yet solved, and also highlights interesting directions for future research on fine-grained surgical activity recognition, which is of utmost importance for the development of AI in surgery.
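The mAP figures quoted above can be made concrete with a small sketch. This is an illustrative computation of ranking-based per-class average precision and its mean over classes, not the challenge's official evaluation code; the function names are ours:

```python
def average_precision(scores, labels):
    """Average precision for one triplet class: mean of the precision
    values at each rank where a true positive occurs, with predictions
    ranked by descending confidence score."""
    ranked = sorted(zip(scores, labels), key=lambda p: -p[0])
    hits, precisions = 0, []
    for rank, (_, is_positive) in enumerate(ranked, start=1):
        if is_positive:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(precisions) if precisions else 0.0

def mean_average_precision(per_class):
    """mAP: mean of per-class APs; per_class is a list of
    (scores, labels) pairs, one pair per triplet class."""
    aps = [average_precision(s, l) for s, l in per_class]
    return sum(aps) / len(aps)
```

With this definition, a class whose positives all rank above its negatives gets AP 1.0, and the 4.2%-38.1% spread reported above corresponds to averaging such per-class values over the triplet vocabulary.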


Benchmarking , Laparoscopy , Humans , Algorithms , Operating Rooms , Workflow , Deep Learning
3.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 3419-3422, 2021 11.
Article En | MEDLINE | ID: mdl-34891974

Magnetic resonance imaging (MRI) is widely used in clinical applications due to its ability to image a wide variety of soft tissues using multiple pulse sequences. Each sequence provides information that generally complements the others. However, factors like increased scan time or contrast allergies impede imaging with numerous sequences. Synthesizing images of such non-acquired sequences is a challenging proposition that can compensate for corrupted acquisitions, serve as a prior for fast reconstruction, support super-resolution, etc. This manuscript employs a deep convolutional neural network (CNN) to synthesize multiple missing pulse sequences of brain MRI with tumors. The CNN is an encoder-decoder-like network trained to minimize a reconstruction mean square error (MSE) loss while maximizing the adversarial loss it inflicts on a relativistic Visual Turing Test discriminator (rVTT). The approach is evaluated through experiments performed with the Brats2018 dataset and quantitative metrics, viz. MSE, Structural Similarity Measure (SSIM), and Peak Signal-to-Noise Ratio (PSNR). A radiologist and an MR physicist performed the Turing test with 76% accuracy, demonstrating our approach's performance superiority over the prior art. We can synthesize MR images of missing pulse sequences at an inference cost of 350.71 GFlops/voxel through this approach.
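The generator objective described above (reconstruction MSE plus an adversarial term against a relativistic discriminator) can be sketched roughly as follows. The exact relativistic formulation and the adversarial weight are assumptions for illustration, since the abstract does not give them; `real_critic`/`fake_critic` stand for the discriminator's raw scores on the true and synthesized images:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def generator_loss(real_critic, fake_critic, mse, adv_weight=0.01):
    """Reconstruction MSE plus a relativistic adversarial term: the
    generator is rewarded when the discriminator judges the synthesized
    image more 'real' than the true one (assumed weighting)."""
    adv = -math.log(sigmoid(fake_critic - real_critic))
    return mse + adv_weight * adv
```

When the discriminator cannot tell the two apart (equal critic scores), the adversarial term settles at -log(0.5), mirroring the 76%-accuracy Turing test reported above approaching chance as synthesis improves.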


Image Processing, Computer-Assisted , Neural Networks, Computer , Brain , Magnetic Resonance Imaging , Signal-To-Noise Ratio
4.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 3961-3964, 2021 11.
Article En | MEDLINE | ID: mdl-34892098

Segmentation of COVID-19 infection in the lung tissue and its quantification in individual lobes is pivotal to understanding the disease's effect. It helps to determine the disease progression and gauge the extent of medical support required. Automation of this process is challenging due to the lack of a standardized dataset with voxel-wise annotations of the lung field, lobes, and infections like ground-glass opacity (GGO) and consolidation. However, multiple datasets have been found to contain one or more classes of the required annotations. Typical deep learning-based solutions overcome such challenges by training neural networks under adversarial and multi-task constraints. We propose to train a convolutional neural network to solve the challenge while it learns from multiple data sources, each of which is annotated for only a few classes. We have experimentally verified our approach by training the model on three publicly available datasets and evaluating its ability to segment the lung field, lobes and COVID-19 infected regions. Additionally, eight scans that previously had annotations for infection and lung have been annotated for lobes. Our model quantifies infection per lobe in these scans with an average error of 4.5%.
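One common way to train a single network on several partially annotated sources, in the spirit described above, is to mask the loss so that only the classes a sample's source dataset actually annotates contribute to the gradient. A minimal sketch (the paper's full adversarial and multi-task constraints are more involved than this):

```python
def masked_loss(per_class_losses, annotated):
    """Average per-class losses over only the classes that this
    sample's source dataset annotates; classes without labels
    (e.g. lobes in a lung-only dataset) contribute nothing."""
    kept = [loss for loss, has_label in zip(per_class_losses, annotated) if has_label]
    return sum(kept) / len(kept) if kept else 0.0
```

A batch can then mix samples from the lung-field, lobe, and infection datasets, each supervising its own subset of output channels.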


COVID-19 , Humans , Lung/diagnostic imaging , SARS-CoV-2 , Tomography
5.
J Oral Biol Craniofac Res ; 11(4): 628-637, 2021.
Article En | MEDLINE | ID: mdl-34603951

PURPOSE: Swept-source optical coherence tomography (SS-OCT) permits non-invasive cross-sectional observation of surface/subsurface characteristics of enamel, including early carious lesions (ECL) or remineralization. This study aimed to visually compare the cross-sectional remineralizing efficacy of various agents on ICDAS-II scores-1&2 by using SS-OCT and histology. METHODS: Baseline SS-OCT (grey-scale/false-colour) and histology were performed on two randomly selected samples with scores-1&2. Four remineralizing agents [fluoride varnish (FV), CPP-ACP, nanohydroxy paste (NHP) and silver diamine fluoride (SDF)] were evaluated at 2 or 6 weeks post-remineralization using SS-OCT and histology. RESULTS: Score-1 and score-2 baseline SS-OCT images showed linear-shaped demineralization with the dentino-enamel junction (DEJ) visible, and bowl-shaped demineralization with the DEJ invisible, respectively. Remineralizing agents were assessed on the basis of their ability to remineralize the surface and subsurface and to make the DEJ visible in score-2. SS-OCT showed an outer growth layer in post-remineralization score-1 samples at 2 weeks with FV and NHP. All the agents showed progressive subsurface remineralization by 6 weeks. Active lesions showed rapid uptake of minerals at the surface. Subsurface mineralization in pigmented score-2 matched sound enamel with NHP and SDF. Surface remineralization was comparable with FV and SDF, followed by NHP. SDF demonstrated deeper subsurface remineralization, followed by NHP and CPP-ACP. CONCLUSION: SS-OCT images correlated with histology. SS-OCT could monitor surface/subsurface in-situ de-/remineralization activity non-invasively.

6.
Med Image Anal ; 69: 101950, 2021 04.
Article En | MEDLINE | ID: mdl-33421920

Segmentation of abdominal organs has been a comprehensive, yet unresolved, research field for many years. In the last decade, intensive developments in deep learning (DL) introduced new state-of-the-art segmentation systems. Despite outperforming the overall accuracy of existing systems, the effects of DL model properties and parameters on the performance are hard to interpret. This makes comparative analysis a necessary tool towards interpretable studies and systems. Moreover, the performance of DL for emerging learning approaches such as cross-modality and multi-modal semantic segmentation tasks has been rarely discussed. In order to expand the knowledge on these topics, the CHAOS - Combined (CT-MR) Healthy Abdominal Organ Segmentation challenge was organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI), 2019, in Venice, Italy. Abdominal organ segmentation from routine acquisitions plays an important role in several clinical applications, such as pre-surgical planning or morphological and volumetric follow-ups for various diseases. These applications require a certain level of performance on a diverse set of metrics such as maximum symmetric surface distance (MSSD) to determine surgical error-margin or overlap errors for tracking size and shape differences. Previous abdomen related challenges are mainly focused on tumor/lesion detection and/or classification with a single modality. Conversely, CHAOS provides both abdominal CT and MR data from healthy subjects for single and multiple abdominal organ segmentation. Five different but complementary tasks were designed to analyze the capabilities of participating approaches from multiple perspectives. The results were investigated thoroughly, compared with manual annotations and interactive methods. 
The analysis shows that DL models can deliver reliable volumetric analysis for single-modality (CT / MR) tasks (DICE: 0.98 ± 0.00 / 0.95 ± 0.01), but the best MSSD performance remains limited (21.89 ± 13.94 / 20.85 ± 10.63 mm). The performance of participating models decreases dramatically for cross-modality tasks, e.g. for the liver (DICE: 0.88 ± 0.15, MSSD: 36.33 ± 21.97 mm). Despite contrary examples in other applications, multi-tasking DL models designed to segment all organs are observed to perform worse than organ-specific ones (a performance drop of around 5%). Nevertheless, some of the successful models show better performance with their multi-organ versions. We conclude that the exploration of those pros and cons in both single vs. multi-organ and cross-modality segmentation is poised to have an impact on further research for developing effective algorithms that would support real-world clinical applications. Finally, having more than 1500 participants and receiving more than 550 submissions, another important contribution of this study is the analysis of shortcomings of challenge organization, such as the effects of multiple submissions and the peeking phenomenon.


Algorithms , Tomography, X-Ray Computed , Abdomen/diagnostic imaging , Humans , Liver
7.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 1128-1131, 2020 07.
Article En | MEDLINE | ID: mdl-33018185

Mammograms are commonly employed in the large-scale screening of breast cancer, which is primarily characterized by the presence of malignant masses. However, automated image-level detection of malignancy is a challenging task given the small size of the mass regions and the difficulty of discriminating between malignant masses, benign masses and healthy dense fibro-glandular tissue. To address these issues, we explore a two-stage Multiple Instance Learning (MIL) framework. A Convolutional Neural Network (CNN) is trained in the first stage to extract local candidate patches in the mammograms that may contain either a benign or malignant mass. The second stage employs a MIL strategy for an image-level benign vs. malignant classification. A global image-level feature is computed as a weighted average of patch-level features learned using a CNN. Our method performed well on the task of localization of masses with an average Precision/Recall of 0.76/0.80 and achieved an average AUC of 0.91 on the image-level classification task using five-fold cross-validation on the INbreast dataset. Restricting the MIL to only the candidate patches extracted in Stage 1 led to a significant improvement in classification performance in comparison to a dense extraction of patches from the entire mammogram.
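The image-level feature described above, a weighted average of patch-level features, is the core of attention-style MIL pooling. A minimal sketch with softmax weights over learned per-patch scores; the actual network that produces the scores is not specified in the abstract, so this only illustrates the pooling step:

```python
import math

def softmax(xs):
    m = max(xs)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def mil_pool(patch_features, patch_scores):
    """Image-level feature as a weighted average of patch features,
    with weights from a softmax over learned patch scores."""
    weights = softmax(patch_scores)
    dim = len(patch_features[0])
    return [sum(w * f[d] for w, f in zip(weights, patch_features))
            for d in range(dim)]
```

A single image-level classifier is then applied to the pooled vector, so the bag label (benign vs. malignant) supervises all patches jointly.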


Breast Neoplasms , Breast Neoplasms/diagnostic imaging , Humans , Machine Learning , Mammography , Neural Networks, Computer
8.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 1225-1228, 2020 07.
Article En | MEDLINE | ID: mdl-33018208

Chest radiographs are primarily employed for the screening of pulmonary and cardio-/thoracic conditions. Being undertaken at primary healthcare centers, they require the presence of an on-premise reporting Radiologist, which is a challenge in low- and middle-income countries. This has inspired the development of machine learning based automation of the screening process. While recent efforts demonstrate a performance benchmark using an ensemble of deep convolutional neural networks (CNN), our systematic search over multiple standard CNN architectures identified single candidate CNN models whose classification performance was found to be on par with ensembles. Over 63 experiments spanning 400 hours, executed on an 11.3 FP32 TensorTFLOPS compute system, we found the Xception and ResNet-18 architectures to be consistent performers in identifying co-existing disease conditions with an average AUC of 0.87 across nine pathologies. We assess the reliability of the models through their saliency maps, generated using the randomized input sampling for explanation (RISE) method and qualitatively validated against manual annotations locally sourced from an experienced Radiologist. We also draw a critical note on the limitations of the publicly available CheXpert dataset, primarily on account of the disparity in class distribution between training and testing sets and the unavailability of sufficient samples for a few classes, which hampers quantitative reporting.


Lung , Neural Networks, Computer , Radiography , Reproducibility of Results , Research
9.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 1234-1237, 2020 07.
Article En | MEDLINE | ID: mdl-33018210

Chest radiographs are primarily employed for the screening of cardio, thoracic and pulmonary conditions. Machine learning based automated solutions are being developed to reduce the burden of routine screening on Radiologists, allowing them to focus on critical cases. While recent efforts demonstrate the use of ensembles of deep convolutional neural networks (CNN), they do not take disease comorbidity into consideration, thus lowering their screening performance. To address this issue, we propose a Graph Neural Network (GNN) based solution to obtain ensemble predictions that models the dependencies between different diseases. A comprehensive evaluation of the proposed method demonstrated its potential by improving performance over the standard ensembling technique across a wide range of ensemble constructions. The best performance was achieved using the GNN ensemble of DenseNet121, with an average AUC of 0.821 across thirteen disease comorbidities.


Machine Learning , Neural Networks, Computer , Comorbidity , Radiography , Research
10.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 1331-1334, 2020 07.
Article En | MEDLINE | ID: mdl-33018234

Lung cancer is the most common form of cancer found worldwide, with a high mortality rate. Early detection of pulmonary nodules by screening with a low-dose computed tomography (CT) scan is crucial for its effective clinical management. Nodules which are symptomatic of malignancy occupy about 0.0125 - 0.025% of the volume in a CT scan of a patient. Manual screening of all slices is a tedious task and presents a high risk of human error. To tackle this problem, we propose a computationally efficient two-stage framework. In the first stage, a convolutional neural network (CNN) trained adversarially using a Turing test loss segments the lung region. In the second stage, patches sampled from the segmented region are classified to detect the presence of nodules. The proposed method is experimentally validated on the LUNA16 challenge dataset with a dice coefficient of 0.984 ± 0.0007 for 10-fold cross-validation.
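The dice coefficient used to report segmentation quality here has a standard form; a minimal sketch on flat binary masks, with a small epsilon to keep the empty-mask case well defined:

```python
def dice_coefficient(pred, target, eps=1e-7):
    """Dice overlap between two binary masks given as flat 0/1 lists:
    2*|A∩B| / (|A| + |B|), smoothed by eps."""
    intersection = sum(p * t for p, t in zip(pred, target))
    return (2.0 * intersection + eps) / (sum(pred) + sum(target) + eps)
```

Identical masks score 1.0 and disjoint masks score (near) 0.0, so the 0.984 reported above indicates near-perfect lung-field overlap with the reference masks.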


Radiographic Image Interpretation, Computer-Assisted , Tomography, X-Ray Computed , Humans , Lung/diagnostic imaging , Neural Networks, Computer , Radionuclide Imaging
11.
Med Image Anal ; 59: 101561, 2020 01.
Article En | MEDLINE | ID: mdl-31671320

Diabetic Retinopathy (DR) is the most common cause of avoidable vision loss, predominantly affecting the working-age population across the globe. Screening for DR, coupled with timely consultation and treatment, is a globally trusted policy to avoid vision loss. However, implementation of DR screening programs is challenging due to the scarcity of medical professionals able to screen a growing global diabetic population at risk for DR. Computer-aided disease diagnosis in retinal image analysis could provide a sustainable approach for such large-scale screening efforts. Recent scientific advances in computing capacity and machine learning approaches provide an avenue for biomedical scientists to reach this goal. Aiming to advance the state of the art in automatic DR diagnosis, a grand challenge on "Diabetic Retinopathy - Segmentation and Grading" was organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI - 2018). In this paper, we report the set-up and results of this challenge, which is primarily based on the Indian Diabetic Retinopathy Image Dataset (IDRiD). There were three principal sub-challenges: lesion segmentation, disease severity grading, and localization and segmentation of retinal landmarks. The multiple tasks in this challenge allow testing the generalizability of algorithms, and this is what distinguishes it from existing challenges. It received a positive response from the scientific community, with 148 submissions from 495 registrations effectively entered in this challenge. This paper outlines the challenge, its organization, the dataset used, the evaluation methods and the results of the top-performing participating solutions. The top-performing approaches utilized a blend of clinical information, data augmentation, and an ensemble of models. These findings have the potential to enable new developments in retinal image analysis and image-based DR screening in particular.


Deep Learning , Diabetic Retinopathy/diagnostic imaging , Diagnosis, Computer-Assisted/methods , Image Interpretation, Computer-Assisted/methods , Photography , Datasets as Topic , Humans , Pattern Recognition, Automated
12.
Annu Int Conf IEEE Eng Med Biol Soc ; 2019: 1010-1013, 2019 Jul.
Article En | MEDLINE | ID: mdl-31946064

Ischaemic stroke is a medical condition caused by occlusion of the blood supply to brain tissue, thus forming a lesion. A lesion is zoned into a core associated with irreversible necrosis, typically located at the center of the lesion, while the reversible hypoxic changes in the outer regions of the lesion are termed the penumbra. Early estimation of core and penumbra in ischaemic stroke is crucial for timely intervention with thrombolytic therapy to reverse the damage and restore normalcy. Multisequence magnetic resonance imaging (MRI) is commonly employed for clinical diagnosis. However, no single sequence has been found sufficient to differentiate between core and penumbra, while a combination of sequences is required to determine the extent of the damage. The challenge, however, is that with an increase in the number of sequences, it cognitively taxes the clinician to discover symptomatic biomarkers in these images. In this paper, we present a data-driven, fully automated method for estimation of core and penumbra in ischaemic lesions using diffusion-weighted imaging (DWI) and perfusion-weighted imaging (PWI) sequence maps of MRI. The method employs recent developments in convolutional neural networks (CNN) for semantic segmentation in medical images. In the absence of a large amount of labeled data, the CNN is trained using an adversarial approach employing cross-entropy as a segmentation loss along with losses aggregated from three discriminators, of which two employ a relativistic visual Turing test. This method is experimentally validated on the ISLES-2015 dataset through three-fold cross-validation, obtaining an average Dice score of 0.82 and 0.73 for segmentation of the penumbra and core, respectively.


Brain Ischemia , Stroke , Brain Ischemia/diagnostic imaging , Humans , Magnetic Resonance Imaging , Neural Networks, Computer , Semantics
13.
IEEE J Biomed Health Inform ; 23(3): 1110-1118, 2019 05.
Article En | MEDLINE | ID: mdl-30113902

Ultrasound (US) is widely used as a low-cost alternative to computed tomography or magnetic resonance and primarily for preliminary imaging. Since speckle intensity in US images is inherently stochastic, readers are often challenged in their ability to identify the pathological regions in a volume of a large number of images. This paper introduces a generalized approach for volumetric segmentation of structures in US images and volumes. We employ an iterative random walks (IRW) solver, a random forest learning model, and a gradient vector flow (GVF) based interframe belief propagation technique for achieving cross-frame volumetric segmentation. At the start, a weak estimate of the tissue structure is obtained using estimates of parameters of a statistical mechanics model of US tissue interaction. Ensemble learning of these parameters further using a random forest is used to initialize the segmentation pipeline. IRW is used for correcting the contour in various steps of the algorithm. Subsequently, a GVF-based interframe belief propagation is applied to adjacent frames based on the initialization of contour using information in the current frame to segment the complete volume by frame-wise processing. We have experimentally evaluated our approach using two different datasets. Intravascular ultrasound (IVUS) segmentation was evaluated using 10 pullbacks acquired at 20 MHz and thyroid US segmentation is evaluated on 16 volumes acquired at [Formula: see text] MHz. Our approach obtains a Jaccard score of [Formula: see text] for IVUS segmentation and [Formula: see text] for thyroid segmentation while processing each frame in [Formula: see text] for the IVUS and in [Formula: see text] for thyroid segmentation without the need of any computing accelerators such as GPUs.


Image Processing, Computer-Assisted/methods , Models, Statistical , Ultrasonography/methods , Abdomen/diagnostic imaging , Algorithms , Humans , Phantoms, Imaging , Stochastic Processes , Thyroid Gland/diagnostic imaging
14.
Annu Int Conf IEEE Eng Med Biol Soc ; 2018: 5085-5088, 2018 Jul.
Article En | MEDLINE | ID: mdl-30441484

Motor imagery (MI) based brain-computer interfaces (BCI) play a crucial role in various scenarios ranging from post-traumatic rehabilitation to the control of prosthetics. Computer-aided interpretation of MI has augmented the aforementioned scenarios for decades but has failed to address interpersonal variability. Such variability escalates further in the case of multi-class MI, which is currently common practice. The failures due to interpersonal variability can be attributed to handcrafted features, which fail to generalize. The proposed approach employs a convolutional neural network (CNN) based model with both filtering (through axis shuffling) and feature extraction to enable end-to-end training. Axis shuffling is adopted in the initial blocks of the model for 1D preprocessing and to reduce the number of parameters required. This avoids overfitting, resulting in a better-generalized model. The publicly available BCI Competition-IV 2a dataset is considered to evaluate the proposed model. The proposed model has demonstrated the capability to identify subject-specific frequency bands with an average and highest accuracy of 70.5% and 83.6%, respectively. The proposed CNN model can classify in real time without relying on an accelerated computing device such as a GPU.


Brain-Computer Interfaces , Algorithms , Electroencephalography , Imagery, Psychotherapy , Neural Networks, Computer
15.
J Healthc Eng ; 2018: 8087624, 2018.
Article En | MEDLINE | ID: mdl-30344990

The thyroid is one of the largest endocrine glands in the human body and is involved in several body mechanisms, like controlling protein synthesis, the body's sensitivity to other hormones, and the use of energy sources. Hence, it is of prime importance to track the shape and size of the thyroid over time in order to evaluate its state. Thyroid segmentation and volume computation are important tools that can be used for thyroid state tracking and assessment. Most of the proposed approaches are not automatic and require a long time to correctly segment the thyroid. In this work, we compare three different non-automatic segmentation algorithms (i.e., active contours without edges, graph cut, and a pixel-based classifier) in freehand three-dimensional ultrasound imaging in terms of accuracy, robustness, ease of use, level of human interaction required, and computation time. We found that these methods lack automation and machine intelligence and are not highly accurate. Hence, we implemented two machine learning approaches (i.e., random forest and convolutional neural network) to improve the accuracy of segmentation as well as to provide automation. This comparative study intends to discuss and analyse the advantages and disadvantages of the different algorithms. In the last step, the volume of the thyroid is computed using the segmentation results, and a performance analysis of all the algorithms is carried out by comparing the segmentation results with the ground truth.


Algorithms , Image Processing, Computer-Assisted/methods , Machine Learning , Thyroid Neoplasms/diagnostic imaging , Ultrasonography , Automation , Decision Trees , Diagnosis, Computer-Assisted , Humans , Imaging, Three-Dimensional , Neural Networks, Computer , Reproducibility of Results , Software , Thyroid Gland/diagnostic imaging
16.
Biomed Opt Express ; 8(8): 3627-3642, 2017 Aug 01.
Article En | MEDLINE | ID: mdl-28856040

Optical coherence tomography (OCT) is used for non-invasive diagnosis of diabetic macular edema by assessing the retinal layers. In this paper, we propose a new fully convolutional deep architecture, termed ReLayNet, for end-to-end segmentation of retinal layers and fluid masses in eye OCT scans. ReLayNet uses a contracting path of convolutional blocks (encoders) to learn a hierarchy of contextual features, followed by an expansive path of convolutional blocks (decoders) for semantic segmentation. ReLayNet is trained to optimize a joint loss function comprising weighted logistic regression and Dice overlap losses. The framework is validated on a publicly available benchmark dataset with comparisons against five state-of-the-art segmentation methods, including two deep learning based approaches, to substantiate its effectiveness.
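The joint objective named above, a pixel-wise weighted logistic (cross-entropy) loss plus a Dice overlap loss, can be sketched for a single foreground class as follows. The relative weight `dice_w` is an assumption for illustration; the paper's multi-class formulation works per layer:

```python
import math

def joint_loss(probs, targets, weights, dice_w=0.5):
    """Weighted logistic loss plus (1 - soft Dice), on per-pixel
    foreground probabilities in (0, 1) and 0/1 targets; `weights`
    are per-pixel weights (e.g. boosted at layer boundaries)."""
    ce = -sum(w * (t * math.log(p) + (1 - t) * math.log(1 - p))
              for p, t, w in zip(probs, targets, weights)) / len(probs)
    intersection = sum(p * t for p, t in zip(probs, targets))
    dice = (2 * intersection + 1e-7) / (sum(probs) + sum(targets) + 1e-7)
    return ce + dice_w * (1 - dice)
```

The cross-entropy term drives per-pixel calibration while the Dice term directly optimizes region overlap, which is what the framework's Dice-based evaluation measures.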

17.
IEEE Pulse ; 7(6): 34-37, 2016.
Article En | MEDLINE | ID: mdl-27875116

How would you provide effective and affordable health care in a country of more than 1.25 billion where there are only 0.7 physicians for every 1,000 people [1]? The Revised National Tuberculosis Control Program (RNTCP) and the Karnataka Internet-Assisted Diagnosis of Retinopathy of Prematurity (KIDROP) service are two notable efforts designed to deliver care across India, in both urban and rural areas and from the country's flat plains to its rugged mountainous and desert regions.


Comprehensive Health Care , Telemedicine , Cell Phone , Communication , Humans , India , Tuberculosis/therapy
18.
Microvasc Res ; 107: 6-16, 2016 09.
Article En | MEDLINE | ID: mdl-27131831

Laser speckle contrast imaging (LSCI) provides a noninvasive and cost-effective solution for in vivo monitoring of blood flow. So far, most research has considered changes in the speckle pattern (i.e., the correlation time of speckle intensity fluctuations) to account for relative changes in blood flow under abnormal conditions. This paper introduces an application of LSCI for monitoring wound progression and characterizing cutaneous wound regions in a mouse model. Speckle images of a tumor wound region on the mouse leg are captured at periodic intervals. Initially, raw speckle images are converted to their corresponding contrast images. Functional characterization begins by segmenting the affected area using k-means clustering, taking wavelet energies in a local region as the feature set. In the next stage, different regions in the wound bed are clustered based on the progressive and non-progressive nature of the tissue properties. Changes in contrast due to heterogeneity in tissue structure and functionality are modeled using LSCI speckle statistics. Final characterization is achieved through supervised learning of these speckle statistics using a support vector machine. On cross-evaluation with the mouse model experiment, the proposed approach classifies progressive and non-progressive wound regions with an average sensitivity of 96.18% and 97.62% and an average specificity of 97.24% and 96.42%, respectively. The clinical information yielded by this approach is validated against conventional immunohistochemistry results of the wound to justify the ability of LSCI for in vivo, noninvasive and periodic assessment of wounds.
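The first processing step above, converting raw speckle images to contrast images, conventionally computes the local speckle contrast K = σ/μ over a small pixel window; a minimal sketch of that per-window statistic (window shape and size are implementation details not given in the abstract):

```python
import math

def speckle_contrast(window):
    """Local speckle contrast K = sigma/mu over a pixel window;
    lower K implies faster intensity fluctuation, i.e. higher flow."""
    mu = sum(window) / len(window)
    var = sum((x - mu) ** 2 for x in window) / len(window)
    return math.sqrt(var) / mu
```

Sliding this statistic over the raw speckle image yields the contrast image from which the wavelet-energy features are then extracted.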


Image Interpretation, Computer-Assisted/methods , Laser-Doppler Flowmetry/methods , Microcirculation , Perfusion Imaging/methods , Sarcoma 180/blood supply , Sarcoma 180/diagnostic imaging , Skin/blood supply , Supervised Machine Learning , Animals , Area Under Curve , Blood Flow Velocity , Data Interpretation, Statistical , Disease Models, Animal , Immunohistochemistry , Laser-Doppler Flowmetry/statistics & numerical data , Male , Mice , Perfusion Imaging/statistics & numerical data , Predictive Value of Tests , ROC Curve , Regional Blood Flow , Reproducibility of Results , Sarcoma 180/pathology , Skin/pathology , Time Factors , Wound Healing
19.
Med Image Anal ; 32: 1-17, 2016 08.
Article En | MEDLINE | ID: mdl-27035487

In this paper, we propose a supervised domain adaptation (DA) framework for adapting decision forests in the presence of distribution shift between training (source) and testing (target) domains, given few labeled examples. We introduce a novel method for DA through an error-correcting hierarchical transfer relaxation scheme with domain alignment, feature normalization, and leaf posterior reweighting to correct for the distribution shift between the domains. For the first time, we apply DA to the challenging problem of extending in vitro trained forests (source domain) to in vivo applications (target domain). The proof of concept is provided for in vivo characterization of atherosclerotic tissues using intravascular ultrasound signals, where the presence of flowing blood is a source of distribution shift between the two domains. This potentially leads to misclassification upon direct deployment of the in vitro trained classifier, thus motivating the need for DA, as obtaining reliable in vivo training labels is often challenging if not infeasible. Exhaustive validations and parameter sensitivity analysis substantiate the reliability of the proposed DA framework and demonstrate improved tissue characterization performance for scenarios where adaptation is conducted in the presence of only a few examples. The proposed method can thus be leveraged to reduce annotation costs and improve computational efficiency over conventional retraining approaches.
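Leaf posterior reweighting, one of the correction steps named above, can be illustrated as rescaling each leaf's class histogram by the ratio of target-domain to source-domain class priors and renormalizing. The exact scheme used in the paper may differ; this is only a sketch of the general idea:

```python
def reweight_leaf_posterior(leaf_counts, target_prior, source_prior):
    """Correct a leaf's class histogram for the shift in class priors
    between the source (in vitro) and target (in vivo) domains, then
    renormalize it into a posterior."""
    weighted = [leaf_counts[c] * target_prior[c] / source_prior[c]
                for c in range(len(leaf_counts))]
    total = sum(weighted)
    return [w / total for w in weighted]
```

A leaf that over-represents a class common in vitro but rare in vivo thus has its posterior pulled back toward the in vivo prior without retraining the tree structure.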


Coronary Circulation , Heart/diagnostic imaging , Image Processing, Computer-Assisted/methods , Supervised Machine Learning , Ultrasonography/methods , Humans , Reproducibility of Results , Sensitivity and Specificity
20.
IEEE J Biomed Health Inform ; 20(2): 606-14, 2016 Mar.
Article En | MEDLINE | ID: mdl-25700476

Intravascular imaging using ultrasound or optical coherence tomography (OCT) is predominantly used as an adjunct to clinical information in interventional cardiology. OCT provides high-resolution images for detailed investigation of atherosclerosis-induced thickening of the lumen wall resulting in arterial blockage and triggering acute coronary events. However, the stochastic uncertainty of speckles limits effective visual investigation over large volumes of pullback data, and clinicians are challenged by their inability to investigate subtle variations in the lumen topology associated with plaque vulnerability and onset of necrosis. This paper presents a lumen segmentation method using an OCT imaging physics-based graph representation of signals and random walks image segmentation approaches. The edge weights in the graph are assigned incorporating OCT signal attenuation physics models. The optical backscattering maximum is tracked along each A-scan of the OCT and is subsequently refined using global graylevel statistics and used to initialize seeds for the random walks image segmentation. Accuracy of lumen versus tunica segmentation has been measured on 15 in vitro and 6 in vivo pullbacks, each with 150-200 frames, using 1) Cohen's kappa coefficient (0.9786 ± 0.0061) measured with respect to the cardiologist's annotation and 2) divergence of the histograms of the segments computed with Kullback-Leibler (5.17 ± 2.39) and Bhattacharya (0.56 ± 0.28) measures. The high segmentation accuracy and consistency substantiate the ability of this method to reliably segment the lumen across pullbacks in the presence of vulnerability cues and necrotic pools, with a deterministic finite time-complexity. More generally, this paper also illustrates the development of methods and frameworks for tissue classification and segmentation incorporating cues of tissue-energy interaction physics in imaging.
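Cohen's kappa, the agreement measure reported above, corrects raw pixel-wise agreement for agreement expected by chance; a minimal sketch for binary lumen/tunica labels:

```python
def cohens_kappa(a, b):
    """Cohen's kappa between two binary label sequences, e.g. the
    algorithm's pixel labels vs. the cardiologist's annotation:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n
    pa1, pb1 = sum(a) / n, sum(b) / n
    p_chance = pa1 * pb1 + (1 - pa1) * (1 - pb1)
    return (p_obs - p_chance) / (1 - p_chance)
```

A kappa of 0.9786, as reported, indicates near-perfect agreement well beyond what overlapping class frequencies alone would produce.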


Coronary Vessels/diagnostic imaging , Image Processing, Computer-Assisted/methods , Tomography, Optical Coherence/methods , Ultrasonography, Interventional/methods , Humans , Scattering, Radiation
...