Results 1 - 20 of 39,697
1.
J Med Syst ; 44(5): 96, 2020 Mar 20.
Article in English | MEDLINE | ID: mdl-32193703

ABSTRACT

Optic disc (OD) and optic cup (OC) segmentation are important steps in the automatic screening and diagnosis of optic nerve head abnormalities such as glaucoma. Many recent works formulate OD and OC segmentation as a pixel classification task. However, it is hard for these methods to explicitly model the spatial relations between the labels in the output mask. Furthermore, the proportions of the background, OD, and OC are unbalanced, which may result in a biased model and introduce more noise. To address these problems, we developed an approach that follows a coarse-to-fine segmentation process. First, we use a U-Net to obtain a rough segmentation boundary and then crop the area around this boundary to form a boundary-contour-centered image. Second, inspired by sequence labeling tasks in natural language processing, we regard OD and OC segmentation as a sequence labeling task and propose a novel fully convolutional network called SU-Net, combining it with the Viterbi algorithm to jointly decode the segmentation boundary. We also introduce a geometric parameter-based data augmentation method that generates more training samples in order to minimize the differences between the training and test sets and reduce overfitting. Experimental results show that our method achieved state-of-the-art results on 2 datasets for both OD and OC segmentation, and it outperforms most of the 6 ophthalmologists in terms of agreement on the MESSIDOR dataset for both OD and OC segmentation. In terms of glaucoma screening, we achieved the best cup-to-disc ratio (CDR) error and area under the ROC curve (AUC) for glaucoma classification on the Drishti-GS dataset.
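The SU-Net architecture cannot be reconstructed from the abstract alone, but the Viterbi decoding step it mentions can be sketched. Assuming per-column emission scores for candidate boundary radii and a linear smoothness penalty between adjacent columns (both illustrative choices, not taken from the paper), dynamic programming recovers the jointly most likely boundary:

```python
def viterbi_boundary(emissions, smooth_penalty=1.0):
    """Decode the most likely boundary radius per column (angle).

    emissions[t][r]: score for the boundary lying at radius r in column t.
    Adjacent columns pay smooth_penalty * |r - r_prev|, which favours
    spatially coherent (smooth) boundaries over per-column argmax.
    """
    n_cols, n_pos = len(emissions), len(emissions[0])
    score = list(emissions[0])
    back = []
    for t in range(1, n_cols):
        ptr, new_score = [], []
        for r in range(n_pos):
            # best previous radius given the smoothness penalty
            best_prev = max(range(n_pos),
                            key=lambda p: score[p] - smooth_penalty * abs(p - r))
            ptr.append(best_prev)
            new_score.append(score[best_prev]
                             - smooth_penalty * abs(best_prev - r)
                             + emissions[t][r])
        score, back = new_score, back + [ptr]
    # backtrack from the best final position
    r = max(range(n_pos), key=lambda p: score[p])
    path = [r]
    for ptr in reversed(back):
        r = ptr[r]
        path.append(r)
    return path[::-1]
```

With a strong smoothness penalty the decoded path trades per-column score for coherence, which is the point of joint decoding over independent pixel classification.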


Subject(s)
Glaucoma , Image Processing, Computer-Assisted , Optic Disk/diagnostic imaging , Fundus Oculi , Glaucoma/diagnosis , Humans , Image Processing, Computer-Assisted/methods , Natural Language Processing
2.
Ultrasonics ; 103: 106097, 2020 Apr.
Article in English | MEDLINE | ID: mdl-32078843

ABSTRACT

Speed of sound (SoS) maps from ultrasound tomography (UST) provide valuable quantitative information for soft-tissue characterization and lesion identification, making this technique attractive for breast cancer detection. However, due to the complexity of the processes that characterize the interaction of ultrasonic waves with matter, classic and fast tomographic algorithms such as back-projection are not suitable. Consequently, image reconstruction in UST is generally slow compared with more conventional medical tomography modalities. To facilitate the translation of this technique into clinical practice, several reconstruction algorithms have been proposed to make image reconstruction in UST fast and accurate. The geometrical acoustics approximation is often used to reconstruct SoS with a lower computational burden than full-wave inversion methods. In this work, we propose a simple formulation for on-the-fly UST reconstruction using geometrical acoustics with refraction correction based on quadratic Bézier polynomials. We demonstrate that the trajectories created with these polynomials accurately approximate the refracted acoustic paths connecting the emitter and receiver transducers. The method is faster than typical acquisition times in UST; it can therefore be considered a step towards real-time reconstruction, which may contribute to future clinical translation.
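As a hedged illustration of the core geometric idea, the sketch below evaluates a quadratic Bézier path between emitter and receiver and numerically integrates the acoustic travel time t = ∫ ds / c along it. The control point and the sound-speed map are placeholders for whatever the paper's refraction-correction step would supply:

```python
import math

def bezier_point(p0, p1, p2, t):
    """Quadratic Bézier curve B(t) = (1-t)^2 P0 + 2(1-t)t P1 + t^2 P2."""
    u = 1.0 - t
    return tuple(u * u * a + 2 * u * t * b + t * t * c
                 for a, b, c in zip(p0, p1, p2))

def travel_time(p0, p1, p2, sos, n=200):
    """Approximate acoustic travel time along the Bézier path.

    sos(x, y) returns the local speed of sound; the integral
    t = integral of ds / c is evaluated by sampling n segments.
    """
    total = 0.0
    prev = bezier_point(p0, p1, p2, 0.0)
    for i in range(1, n + 1):
        cur = bezier_point(p0, p1, p2, i / n)
        ds = math.dist(prev, cur)
        mid = ((prev[0] + cur[0]) / 2, (prev[1] + cur[1]) / 2)
        total += ds / sos(*mid)
        prev = cur
    return total
```

In a reconstruction loop, the control point P1 would be adjusted so the computed travel time matches the measured time of flight; with P1 on the straight line, the path degenerates to the unrefracted ray.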


Subject(s)
Image Processing, Computer-Assisted/methods , Ultrasonography, Mammary , Algorithms , In Vitro Techniques , Models, Statistical , Phantoms, Imaging
3.
Medicine (Baltimore) ; 99(8): e19157, 2020 Feb.
Article in English | MEDLINE | ID: mdl-32080093

ABSTRACT

INTRODUCTION: Peritoneal metastasis (PM) is a frequent condition in patients presenting with gastric cancer, especially in younger patients with advanced tumor stages. Computed tomography (CT) is the most common noninvasive modality for preoperative staging in gastric cancer. However, CT's limited soft-tissue contrast results in poor depiction of small peritoneal tumors, and the sensitivity for detecting PM remains low: about 16% of PM cases go undetected. Deep learning, a branch of artificial intelligence, has demonstrated impressive results in medical image analysis. So far, there has been no deep learning study based on CT images for the diagnosis of PM in gastric cancer. HYPOTHESIS: CT images of the primary tumor region in gastric cancer contain valuable information that can predict occult PM and that can be extracted effectively through deep learning. OBJECTIVE: To develop a deep learning model for accurate preoperative diagnosis of PM in gastric cancer. METHOD: Patients with gastric cancer who were initially diagnosed as PM-negative by CT and later confirmed as PM-positive through surgery or laparoscopy were retrospectively enrolled. The dataset was randomly split into a training cohort (70% of patients) and a testing cohort (30%). To develop deep convolutional neural network (DCNN) models with high generalizability, 5-fold cross-validation and model ensembling were used. The area under the receiver operating characteristic curve, sensitivity, and specificity were used to evaluate the DCNN models on the testing cohort. DISCUSSION: This study will help determine whether deep learning can improve the performance of CT in diagnosing PM in gastric cancer.
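The 5-fold cross-validation and model-ensemble protocol described in METHOD can be sketched generically. The fold-splitting and probability-averaging helpers below are illustrative stand-ins, not the authors' pipeline:

```python
import random

def kfold_indices(n, k=5, seed=0):
    """Shuffle sample indices and deal them into k disjoint folds.

    Each fold serves once as the validation set while the remaining
    k-1 folds train one member of the ensemble.
    """
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def ensemble_predict(models, x):
    """Average the predicted probability of the k fold models."""
    probs = [m(x) for m in models]
    return sum(probs) / len(probs)
```

Averaging fold models in this way is a common recipe for reducing the variance of a single trained network, which is the generalizability argument the abstract makes.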


Subject(s)
Image Processing, Computer-Assisted/methods , Machine Learning , Peritoneal Neoplasms/diagnostic imaging , Peritoneal Neoplasms/secondary , Stomach Neoplasms/pathology , Artificial Intelligence , Humans , Retrospective Studies , Sensitivity and Specificity , Tomography, X-Ray Computed/methods
4.
Lancet Haematol ; 7(3): e259-e269, 2020 Mar.
Article in English | MEDLINE | ID: mdl-32109406

ABSTRACT

Understanding the subclinical pathway to cellular engraftment following haemopoietic stem cell transplantation (HSCT) has historically been limited by infrequent marrow biopsies, which increase the risk of infections and might poorly represent the health of the marrow space. Nuclear imaging could represent an opportunity to evaluate the entire medullary space non-invasively, yielding information about cell number, proliferation, or metabolism. Because imaging is not associated with infectious risk, it permits assessment of neutropenic timepoints that were previously inaccessible. This Viewpoint summarises the data regarding the use of nuclear medicine techniques to assess the phases of HSCT: pre-transplant homoeostasis, induced aplasia, early settling and engraftment of infused cells, and later recovery of lymphocytes that target cancers or mediate tolerance. Although these data are newly emerging and preliminary, nuclear medicine imaging approaches might advance our understanding of HSCT events and lead to novel recommendations to enhance outcomes.


Subject(s)
Graft vs Host Disease/pathology , Hematologic Neoplasms/therapy , Hematopoietic Stem Cell Transplantation/adverse effects , Hematopoietic Stem Cells/pathology , Image Processing, Computer-Assisted/methods , Positron-Emission Tomography/methods , Stem Cell Niche , Graft vs Host Disease/diagnostic imaging , Graft vs Host Disease/etiology , Hematologic Neoplasms/pathology , Humans
5.
Br J Radiol ; 93(1108): 20190948, 2020 Apr.
Article in English | MEDLINE | ID: mdl-32101448

ABSTRACT

Historically, medical imaging has been a qualitative or semi-quantitative modality: it is difficult to quantify what can be seen in an image and to turn it into valuable predictive outcomes. As a result of advances in both computational hardware and machine learning algorithms, computers are making great strides in extracting quantitative information from images and correlating it with outcomes. Radiomics, in its two forms, handcrafted and deep, is an emerging field that translates medical images into quantitative data to yield biological information and enable radiologic phenotypic profiling for diagnosis, theragnosis, decision support, and monitoring. Handcrafted radiomics is a multistage process in which features based on shape, pixel intensities, and texture are extracted from radiographs. Within this review, we describe the steps of that process: how quantitative imaging data are extracted, how they are correlated with clinical and biological outcomes, and how the resulting models can be used for predictions such as survival, or for detection and classification in diagnostics. The application of deep learning, the second arm of radiomics, and its place in the radiomics workflow is discussed, along with its advantages and disadvantages. To better illustrate the technologies in use, we provide real-world clinical applications of radiomics in oncology, showcasing current research as well as its limitations and future directions.
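The handcrafted feature-extraction stage can be illustrated with a few first-order statistics (mean, standard deviation, histogram entropy) computed over the intensities of a region of interest. Real radiomics toolkits compute many more shape and texture features, so treat this as a minimal sketch:

```python
import math

def first_order_features(intensities, n_bins=8):
    """Compute a few handcrafted first-order radiomics features
    from a list of pixel intensities inside a region of interest."""
    n = len(intensities)
    mean = sum(intensities) / n
    var = sum((v - mean) ** 2 for v in intensities) / n
    lo, hi = min(intensities), max(intensities)
    width = (hi - lo) / n_bins or 1.0  # avoid zero width for flat regions
    counts = [0] * n_bins
    for v in intensities:
        counts[min(int((v - lo) / width), n_bins - 1)] += 1
    # Shannon entropy of the intensity histogram, in bits
    entropy = -sum((c / n) * math.log2(c / n) for c in counts if c)
    return {"mean": mean, "std": math.sqrt(var), "entropy": entropy}
```

Feature vectors like this one, computed per lesion, are what downstream survival or classification models in the handcrafted-radiomics pipeline consume.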


Subject(s)
Deep Learning/trends , Diagnostic Imaging/trends , Image Processing, Computer-Assisted/trends , Technology, Radiologic/trends , Brain Neoplasms/diagnostic imaging , Breast Neoplasms/diagnostic imaging , Diagnostic Imaging/methods , Female , Forecasting , Humans , Image Processing, Computer-Assisted/methods , Lung Neoplasms/diagnostic imaging , Male , Radiography/methods , Technology, Radiologic/methods , Workflow
6.
Adv Exp Med Biol ; 1213: 23-44, 2020.
Article in English | MEDLINE | ID: mdl-32030661

ABSTRACT

Medical images are widely used in clinics, providing visual representations of the tissues beneath the skin. By applying different imaging protocols, diverse modalities of medical images with unique visualization characteristics can be produced. Given the cost of scanning high-quality single-modality images or homogeneous multiple modalities, medical image synthesis methods have been extensively explored for clinical applications. Among them, deep learning approaches, especially convolutional neural networks (CNNs) and generative adversarial networks (GANs), have rapidly become dominant for medical image synthesis in recent years. In this chapter, after a general review of medical image synthesis methods, we focus on typical CNN and GAN models for medical image synthesis. In particular, we elaborate on our recent work on low-dose to high-dose PET image synthesis and on cross-modality MR image synthesis using these models.


Subject(s)
Deep Learning , Image Processing, Computer-Assisted/methods , Humans
7.
Adv Exp Med Biol ; 1213: 135-147, 2020.
Article in English | MEDLINE | ID: mdl-32030668

ABSTRACT

This chapter focuses on modern deep learning techniques for automatically recognizing and segmenting multiple organ regions in three-dimensional (3D) computed tomography (CT) images. CT images are widely used in clinical medicine to visualize 3D anatomical structures composed of multiple organ regions inside the human body. Automatic recognition and segmentation of multiple organs in CT images is a fundamental processing step for computer-aided diagnosis, surgery, and radiation therapy systems, which aim to achieve precision and personalized medicine. In this chapter, we introduce our recent work addressing multiple-organ segmentation in 3D CT images using deep learning, instead of conventional segmentation methods that originated from traditional digital image processing techniques. We evaluated and compared the segmentation performance of two deep learning approaches based on 2D and 3D deep convolutional neural networks (CNNs), without and with a pre-processing step. A conventional method based on a probabilistic atlas algorithm, which showed the best performance among conventional approaches, was adopted as a baseline for comparison. A dataset containing 240 CT scans of different portions of the human body was used for training the CNNs and validating the segmentation performance. Up to 17 types of organ regions in each CT scan were segmented automatically and validated against human annotations using the intersection-over-union (IoU) ratio as the criterion. Our experimental results showed mean IoUs of 79% and 67%, averaged over the 17 organ types, for the proposed 3D and 2D deep CNNs, respectively. All results using the deep learning approaches showed better accuracy and robustness than the conventional segmentation method based on the probabilistic atlas algorithm, demonstrating the effectiveness and usefulness of deep learning for multiple-organ segmentation in 3D CT images.
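The IoU criterion used for validation is straightforward to state in code. Representing each organ's segmentation as a set of voxel coordinates (an illustrative choice of data structure), per-organ and mean IoU are:

```python
def iou(pred, truth):
    """Intersection over union between two sets of voxel coordinates."""
    inter = len(pred & truth)
    union = len(pred | truth)
    return inter / union if union else 1.0  # both empty: perfect agreement

def mean_iou(pred_by_organ, truth_by_organ):
    """Average IoU over all annotated organ labels; a label missing
    from the prediction counts as an empty segmentation."""
    labels = truth_by_organ.keys()
    return sum(iou(pred_by_organ.get(k, set()), truth_by_organ[k])
               for k in labels) / len(labels)
```

Averaging per-organ IoUs in this way is how the 79% (3D CNN) and 67% (2D CNN) figures in the abstract are formed.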


Subject(s)
Deep Learning , Image Processing, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Humans
8.
BMC Bioinformatics ; 21(1): 44, 2020 Feb 05.
Article in English | MEDLINE | ID: mdl-32024462

ABSTRACT

BACKGROUND: The localization of objects of interest is a key initial step in most image analysis workflows. For biomedical image data, classical image-segmentation methods like thresholding or edge detection are typically used. While those methods perform well for labelled objects, they reach a limit when samples are poorly contrasted against the background or when only parts of larger structures should be detected. Furthermore, the development of such pipelines requires substantial engineering of analysis workflows and often results in case-specific solutions. We therefore propose a new, straightforward, and generic approach to object localization by template matching that uses multiple template images to improve detection capacity. RESULTS: We provide a new implementation of template matching that offers higher detection capacity than the single-template approach by enabling the use of multiple template images. To provide an easy-to-use method for the automatic localization of objects of interest in microscopy images, we implemented multi-template matching as a Fiji plugin, a KNIME workflow, and a Python package. We demonstrate its application to the localization of entire, partial, and multiple biological objects in zebrafish and medaka high-content screening datasets. The Fiji plugin can be installed by activating the Multi-Template-Matching and IJ-OpenCV update sites. The KNIME workflow is available on nodepit and KNIME Hub. Source code and documentation are available on GitHub (https://github.com/multi-template-matching). CONCLUSION: Multi-template matching is a simple yet powerful object-localization algorithm that requires no data pre-processing or annotation. Our implementation can be used out of the box by non-expert users on any type of 2D image. It is compatible with a large variety of applications, including analysis of large-scale datasets from automated microscopy, detection and tracking of objects in time-lapse assays, or as a general image-analysis step in custom processing pipelines. Using different templates corresponding to distinct object categories, the tool can also be used to classify the detected regions.
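The underlying matching idea can be sketched without OpenCV: slide each template over the image, score every location with normalized cross-correlation, and keep the best hit across all templates. The pure-Python version below is a didactic stand-in for the plugin's OpenCV-based implementation (no rotation handling, no non-maximum suppression):

```python
def ncc(patch, tmpl):
    """Normalized cross-correlation between two equal-size flat patches."""
    n = len(patch)
    mp = sum(patch) / n
    mt = sum(tmpl) / n
    num = sum((a - mp) * (b - mt) for a, b in zip(patch, tmpl))
    den = (sum((a - mp) ** 2 for a in patch) *
           sum((b - mt) ** 2 for b in tmpl)) ** 0.5
    return num / den if den else 0.0

def multi_template_match(image, templates):
    """Slide every template over the image (2D lists of intensities)
    and return (score, (row, col), template_index) of the best hit."""
    H, W = len(image), len(image[0])
    best = (-2.0, None, None)
    for ti, t in enumerate(templates):
        h, w = len(t), len(t[0])
        flat_t = [v for row in t for v in row]
        for r in range(H - h + 1):
            for c in range(W - w + 1):
                patch = [image[r + i][c + j]
                         for i in range(h) for j in range(w)]
                score = ncc(patch, flat_t)
                if score > best[0]:
                    best = (score, (r, c), ti)
    return best
```

Returning the winning template index alongside the location is what lets distinct templates double as class labels, as the conclusion notes.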


Subject(s)
Image Processing, Computer-Assisted/methods , Microscopy/methods , Software , Algorithms , Animals , Oryzias/anatomy & histology , Zebrafish/anatomy & histology
9.
BMC Bioinformatics ; 21(1): 8, 2020 Jan 08.
Article in English | MEDLINE | ID: mdl-31914944

ABSTRACT

BACKGROUND: Cell nuclei segmentation is a fundamental task in microscopy image analysis on which multiple biology-related analyses can be based. Although deep learning (DL) techniques have achieved state-of-the-art performance in image segmentation tasks, these methods are usually complex and require powerful computing resources. In addition, considering the cost of medical exams, it is impractical to allocate advanced computing resources to every dark- or bright-field microscope in the many clinical institutions where they are employed. It is therefore essential to develop accurate DL-based segmentation algorithms that work under resource-constrained computing. RESULTS: An enhanced, lightweight U-Net (called U-Net+) with a modified encoder branch is proposed to work with low-resource computing. In strictly controlled experiments, the average IoU and precision of U-Net+ predictions outperform other prevalent competing methods by 1.0% to 3.0% on the first-stage test set of the 2018 Kaggle Data Science Bowl cell nuclei segmentation contest, with shorter inference time. CONCLUSIONS: Our results preliminarily demonstrate the potential of the proposed U-Net+ for correctly spotting microscopy cell nuclei under resource-constrained computing.


Subject(s)
Cell Nucleus/pathology , Microscopy , Deep Learning , Humans , Image Processing, Computer-Assisted/methods
10.
Nat Commun ; 11(1): 150, 2020 01 09.
Article in English | MEDLINE | ID: mdl-31919345

ABSTRACT

Fluorescence microscopy is an essential tool for biological discovery, with constant demand for better spatial resolution across a larger field of view. Although strides have been made to improve the theoretical resolution and speed of optical instruments, in mesoscopic samples image quality is still largely limited by the optical properties of the sample. In selective plane illumination microscopy (SPIM), the achievable optical performance is hampered by optical degradation in both the illumination and detection paths. Multi-view imaging, either through sample rotation or additional optical paths, is a popular strategy to improve sample coverage. In this work, we introduce a smart rotation workflow that uses on-the-fly image analysis to identify the optimal light-sheet imaging orientations. The smart rotation workflow outperforms the conventional approach without additional hardware, achieving better sample coverage with the same number of angles or fewer, thereby reducing data volume and phototoxicity.


Subject(s)
Image Enhancement/methods , Imaging, Three-Dimensional/methods , Microscopy, Fluorescence/methods , Image Processing, Computer-Assisted/methods
11.
Nat Commun ; 11(1): 573, 2020 Jan 29.
Article in English | MEDLINE | ID: mdl-31996677

ABSTRACT

Hypoxia in solid tumors is thought to be an important factor in resistance to therapy, but the extreme microscopic heterogeneity of the partial pressure of oxygen (pO2) between capillaries makes it difficult to characterize the scope of this phenomenon without invasive sampling of oxygen distributions throughout the tissue. Here we develop a non-invasive method to track spatial oxygen distributions in tumors during fractionated radiotherapy, using oxygen-dependent quenching of phosphorescence, the oxygen probe Oxyphor PtG4, and radiotherapy-induced Cherenkov light to excite and image phosphorescence lifetimes within the tissue. Mice bearing MDA-MB-231 breast cancer and FaDu head and neck cancer xenografts show different pO2 responses during each of 5 fractions (5 Gy per fraction) delivered from a clinical linear accelerator. This study demonstrates subsurface in vivo mapping of tumor pO2 distributions with submillimeter spatial resolution, providing a methodology to track tumor response to fractionated radiotherapy.
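Phosphorescence lifetime maps are typically converted to pO2 via the Stern-Volmer relation 1/τ = 1/τ₀ + k_q · pO2. The inversion below is a minimal sketch; the τ₀ and k_q defaults are illustrative placeholders, not calibrated Oxyphor PtG4 constants:

```python
def po2_from_lifetime(tau, tau0=47e-6, kq=130.0):
    """Invert the Stern-Volmer relation 1/tau = 1/tau0 + kq * pO2.

    tau and tau0 in seconds, kq in 1/(s * mmHg); returns pO2 in mmHg.
    The default tau0 and kq here are illustrative placeholders: real
    probes require calibration constants from the manufacturer.
    """
    return (1.0 / tau - 1.0 / tau0) / kq
```

Applied pixel-by-pixel to a lifetime image, this yields the spatial pO2 map; shorter lifetimes (stronger quenching) map to higher oxygen.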


Subject(s)
Dose Fractionation, Radiation , Image Processing, Computer-Assisted/methods , Oxygen/chemistry , Radiotherapy/methods , Xenograft Model Antitumor Assays/methods , Animals , Biomedical Engineering/methods , Breast Neoplasms/diagnostic imaging , Breast Neoplasms/radiotherapy , Cell Line, Tumor , Female , Head and Neck Neoplasms/diagnostic imaging , Head and Neck Neoplasms/radiotherapy , Heterografts , Humans , Hypoxia , Metalloporphyrins , Mice , Partial Pressure , Particle Accelerators
12.
BMC Bioinformatics ; 21(1): 27, 2020 Jan 28.
Article in English | MEDLINE | ID: mdl-31992200

ABSTRACT

BACKGROUND: Phosphorylated histone H2AX, also known as γH2AX, forms µm-sized nuclear foci at the sites of DNA double-strand breaks (DSBs) induced by ionizing radiation and other agents. Due to their specificity and sensitivity, γH2AX immunoassays have become the gold standard for studying DSB induction and repair. One of these assays relies on immunofluorescent staining of γH2AX followed by microscopic imaging and foci counting. In recent years, semi- and fully automated image analysis, capable of fast detection and quantification of γH2AX foci in large fluorescence image datasets, has gradually been replacing manual foci counting. A major drawback of the non-commercial foci-counting software available so far is the restriction to 2D image data: in practice, these algorithms are useful for counting foci located close to the midsection plane of the nucleus, while out-of-plane foci are neglected. RESULTS: To overcome the limitations of 2D foci counting, we present a freely available ImageJ-based plugin (FocAn) for automated 3D analysis of γH2AX foci in z-image stacks acquired by confocal fluorescence microscopy. The image-stack processing algorithm implemented in FocAn is capable of automatic 3D recognition of individual cell nuclei and γH2AX foci, as well as evaluation of the total foci number per cell nucleus. The FocAn algorithm consists of two parts, nucleus identification and foci detection, each employing specific sequences of auto local thresholding combined with watershed segmentation. We validated the FocAn algorithm using fluorescence-labeled γH2AX in two glioblastoma cell lines irradiated with 2 Gy and given up to 24 h post-irradiation for repair. The data obtained with FocAn agreed well with those obtained with previously available software (FoCo) and with manual counting. Moreover, FocAn was capable of identifying overlapping foci in 3D space, ensuring accurate foci counting even at high DSB densities of up to ~200 DSB/nucleus. CONCLUSIONS: FocAn is a freely available, open-source 3D foci analyzer. The user-friendly FocAn algorithm requires little supervision and can automatically count DNA DSBs, i.e., fluorescence-labeled γH2AX foci, in 3D image stacks acquired by laser-scanning microscopes without additional nuclei staining.
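A greatly simplified version of 3D foci counting, thresholding followed by 26-connected component labeling via flood fill, can be sketched as below. Unlike FocAn it does not split touching foci with a watershed, so it is only a conceptual baseline:

```python
from collections import deque

def count_foci(stack, threshold):
    """Count connected bright regions (foci) in a 3D image stack.

    stack[z][y][x] holds intensities; voxels above `threshold` are
    foreground and are grouped with 26-neighbour connectivity by a
    breadth-first flood fill. Simplified stand-in for FocAn's auto
    local thresholding + watershed pipeline (no splitting of
    touching foci, single global threshold).
    """
    Z, Y, X = len(stack), len(stack[0]), len(stack[0][0])
    seen = set()
    count = 0
    for z in range(Z):
        for y in range(Y):
            for x in range(X):
                if stack[z][y][x] > threshold and (z, y, x) not in seen:
                    count += 1
                    queue = deque([(z, y, x)])
                    seen.add((z, y, x))
                    while queue:
                        cz, cy, cx = queue.popleft()
                        for dz in (-1, 0, 1):
                            for dy in (-1, 0, 1):
                                for dx in (-1, 0, 1):
                                    nz, ny, nx = cz + dz, cy + dy, cx + dx
                                    if (0 <= nz < Z and 0 <= ny < Y
                                            and 0 <= nx < X
                                            and (nz, ny, nx) not in seen
                                            and stack[nz][ny][nx] > threshold):
                                        seen.add((nz, ny, nx))
                                        queue.append((nz, ny, nx))
    return count
```

Because components are traced through z as well as x and y, a focus spanning several slices is counted once, which is exactly the failure mode of slice-by-slice 2D counting that the abstract describes.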


Subject(s)
Algorithms , DNA Repair , Image Processing, Computer-Assisted/methods , Microscopy, Confocal/methods , Microscopy, Fluorescence/methods , Cell Line, Tumor , Cell Nucleus/metabolism , DNA Breaks, Double-Stranded , Histones/analysis , Histones/metabolism , Humans
13.
IEEE Trans Cybern ; 50(1): 153-163, 2020 Jan.
Article in English | MEDLINE | ID: mdl-30188843

ABSTRACT

Gaze tracking is a promising technology for studying the visual perception of clinicians during image-based medical exams. It could be used in longitudinal studies to analyze their perceptive process, explore human-machine interactions, and develop innovative computer-aided imaging systems. However, using a remote eye tracker in an unconstrained environment over periods of weeks requires a certain guarantee of performance to ensure that the collected gaze data are fit for purpose. We report the results of evaluating eye-tracking calibration for longitudinal studies. First, we tested the performance of an eye tracker on a cohort of 13 users over a period of one month. For each participant, the eye tracker was calibrated during the first session. The participants were asked to sit in front of a monitor equipped with the eye tracker, but their position was not constrained. Second, we tested the performance of the eye tracker on sonographers positioned in front of a cart-based ultrasound scanner. Experimental results show a decrease in accuracy of 0.30° between calibration and later testing, and a further degradation over time at a rate of 0.13° per month. The overall median accuracy was 1.00° (50.9 pixels) and the overall median precision was 0.16° (8.3 pixels). The results from the ultrasonography setting show a decrease in accuracy of 0.16° between calibration and later testing. This slow degradation of gaze-tracking accuracy could impact data quality in long-term studies; the results presented here can therefore help in planning such studies.
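Under the back-of-envelope assumption that the 1.00° overall median accuracy corresponds to a post-calibration baseline plus the 0.30° initial drop (the baseline value below is inferred, not reported), the degradation rates give a simple linear projection of expected accuracy error when planning study duration:

```python
def projected_accuracy(months, baseline=0.70, initial_drop=0.30, drift=0.13):
    """Project gaze-tracking accuracy error (degrees) over time.

    baseline: assumed accuracy right after calibration (illustrative,
    chosen so months=0 reproduces the 1.00 deg overall median);
    initial_drop: calibration-to-testing decrease (0.30 deg reported);
    drift: further linear degradation per month (0.13 deg reported).
    """
    return baseline + initial_drop + drift * months
```

For a three-month study this projects roughly 1.4° of error, which a planner could compare against the accuracy their region-of-interest analysis tolerates.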


Subject(s)
Fixation, Ocular/physiology , Image Processing, Computer-Assisted/methods , Ultrasonography/methods , Calibration , Humans , Image Processing, Computer-Assisted/standards , Longitudinal Studies
14.
IEEE Trans Image Process ; 29: 1-14, 2020.
Article in English | MEDLINE | ID: mdl-31265394

ABSTRACT

The prevailing characteristics of micro-videos limit the descriptive power of each individual modality. Several pioneering efforts have proposed micro-video representations, but they only implicitly explore the consistency between different modalities and ignore their complementarity. In this paper, we focus on how to explicitly separate the consistent features and the complementary features from the mixed information and harness their combination to improve the expressiveness of each modality. Toward this end, we present a neural multimodal cooperative learning (NMCL) model that splits the consistent component and the complementary component via a novel relation-aware attention mechanism. Specifically, the computed attention score measures the correlation between features extracted from different modalities. A threshold is then learned for each modality to distinguish consistent from complementary features according to the score. Thereafter, we integrate the consistent parts to enhance the representations and supplement the complementary ones to reinforce the information in each modality. To address redundant information, which may cause overfitting and is hard to distinguish, we devise an attention network that dynamically captures the features closely related to the category and outputs a discriminative representation for prediction. Experimental results on a real-world micro-video dataset show that NMCL outperforms state-of-the-art methods. Further studies verify the effectiveness and the cooperative effects brought by the attentive mechanism.


Subject(s)
Data Mining/methods , Image Processing, Computer-Assisted/methods , Machine Learning , Algorithms , Animals , Dogs , Semantics , Video Recording
15.
IEEE Trans Image Process ; 29: 44-56, 2020.
Article in English | MEDLINE | ID: mdl-31329555

ABSTRACT

Hyperspectral images (HSIs) are often degraded by a mixture of various types of noise during the imaging process, including Gaussian noise, impulse noise, and stripes. Such complex noise can plague subsequent HSI processing. Generally, most HSI denoising methods formulate sparsity optimization problems with convex norm constraints, which over-penalize large entries of vectors and may result in a biased solution. In this paper, a nonconvex regularized low-rank and sparse matrix decomposition (NonRLRS) method is proposed for HSI denoising, which can simultaneously remove Gaussian noise, impulse noise, dead lines, and stripes. NonRLRS decomposes the degraded HSI, expressed in matrix form, into low-rank and sparse components with a robust formulation. To enhance sparsity in both the intrinsic low-rank structure and the sparse corruptions, a novel nonconvex regularizer termed the normalized ε-penalty is presented, which can adaptively shrink each entry. In addition, an effective algorithm based on majorization minimization (MM) is developed to solve the resulting nonconvex optimization problem. Specifically, the MM algorithm substitutes the nonconvex objective function with a surrogate upper bound in each iteration and then minimizes the constructed surrogate, which enables the nonconvex problem to be solved within a reweighted framework. Experimental results on both simulated and real data demonstrate the effectiveness of the proposed method.
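The reweighted scheme behind the MM solver can be shown on a scalar toy problem: a concave penalty such as log(1 + |x|/ε) is majorized at each iterate by a weighted l1 term with weight λ/(ε + |x_k|), so each MM step reduces to soft-thresholding. This is a sketch of the general technique, not the NonRLRS matrix algorithm or its normalized ε-penalty:

```python
def mm_denoise(y, lam=1.0, eps=0.1, iters=50):
    """Minimize 0.5*(x - y)**2 + lam*log(1 + |x|/eps) entrywise by
    majorization-minimization: the concave log penalty is majorized
    by a weighted l1 term with weight lam/(eps + |x_k|), whose
    closed-form minimizer is a soft-thresholding of y. Because the
    weight shrinks as |x_k| grows, large entries are penalized less
    than under a plain l1 norm, avoiding the bias the paper notes.
    """
    def soft(v, t):
        return max(abs(v) - t, 0.0) * (1 if v >= 0 else -1)

    x = y
    for _ in range(iters):
        w = lam / (eps + abs(x))  # majorizer slope at current iterate
        x = soft(y, w)            # minimize the surrogate
    return x
```

Large observations survive nearly untouched while small ones are driven exactly to zero, the behavior a nonconvex sparsity penalty is chosen for.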


Subject(s)
Algorithms , Image Processing, Computer-Assisted/methods , Signal-To-Noise Ratio
16.
Spectrochim Acta A Mol Biomol Spectrosc ; 224: 117386, 2020 Jan 05.
Article in English | MEDLINE | ID: mdl-31336320

ABSTRACT

Non-O157 Shiga toxin-producing Escherichia coli (STEC) serogroups such as O26, O45, O103, O111, O121, and O145 frequently cause illness in the United States, and conventional identification of this "Big Six" is complex. The label-free hyperspectral microscope imaging (HMI) method, which provides spectral "fingerprint" information about bacterial cells, was employed to classify serogroups at the cellular level. In the spectral analysis, principal component analysis (PCA) and a stacked auto-encoder (SAE) were used to extract principal spectral features for the classification task. Based on these features, multiple classifiers, including linear discriminant analysis (LDA), support vector machine (SVM), and soft-max regression (SR), were evaluated. Different dataset sizes were also tested in search of suitable classification models. SAE-based classification models performed better than PCA-based models, achieving classification accuracies of 93.5% (SAE-LDA), 94.9% (SAE-SVM), and 94.6% (SAE-SR). In contrast, the PCA-based methods PCA-LDA, PCA-SVM, and PCA-SR reached only 75.5%, 85.7%, and 77.1%, respectively. The results also suggested that increasing the number of training samples has a positive effect on the classification models. Taking advantage of the larger dataset, the SAE-SR model ultimately performed best, with an average accuracy of 94.9% in classifying STEC serogroups. Specifically, the O103 serogroup was classified with the highest accuracy of 97.4%, followed by O111 (96.5%), O26 (95.3%), O121 (95%), O145 (92.9%), and O45 (92.4%). Thus, HMI technology coupled with the SAE-SR classification model has potential for "Big Six" identification.


Subject(s)
Bacterial Typing Techniques/methods , Deep Learning , Image Processing, Computer-Assisted/methods , Microscopy/methods , Shiga-Toxigenic Escherichia coli , Algorithms , Food Microbiology , Foodborne Diseases/microbiology , Humans , Optical Imaging/methods , Principal Component Analysis , Shiga-Toxigenic Escherichia coli/chemistry , Shiga-Toxigenic Escherichia coli/classification
17.
J Comput Assist Tomogr ; 44(1): 1-6, 2020.
Article in English | MEDLINE | ID: mdl-31855880

ABSTRACT

OBJECTIVES: To investigate the coronary venous system (CVS) and its spatial relationship with the coronary arteries using 256-slice computed tomography (CT). METHODS: One hundred one patients underwent coronary CT angiography using a 256-slice CT scanner. In each patient, the CVS and its spatial relationship with the coronary arteries were analyzed. We measured the diameters and angulations of the coronary sinus (CS), great cardiac vein, anterior interventricular vein (AIV), left marginal vein, posterior vein of the left ventricle (PVLV), and posterior interventricular vein (PIV), as well as the distances from the CS ostium and from the crossing point to the ostium of the corresponding tributaries. RESULTS: The following 5 pairs of veins and arteries intersected more frequently than others: the CS/great cardiac vein and the left circumflex coronary artery (97.1%), the AIV and the diagonal or ramus branch (92.1%), the PIV and the posterior branch of the left ventricle artery (88.1%), the left marginal vein and the circumflex or circumflex marginal (73.9%), and the PVLV and the circumflex or circumflex marginal (31.6%). Two other pairs more frequently ran parallel to each other: the AIV and the left anterior descending artery (76.2%), and the PIV and the posterior descending artery (54.4%). Most tributaries were lateral to their corresponding arteries at the crossing point, except for the AIV. For the PVLV and PIV, the distances from the crossing point to the ostium of the corresponding veins were smaller when the veins were lateral to the arteries than when they were medial (P < 0.05). CONCLUSIONS: The CVS and its anatomical relationship with the coronary arterial system can be examined in detail using 256-slice CT, which has important clinical implications.


Subject(s)
Cardiovascular Diseases/diagnostic imaging , Computed Tomography Angiography/instrumentation , Coronary Vessels/diagnostic imaging , Adult , Aged , Aged, 80 and over , Computed Tomography Angiography/methods , Coronary Artery Disease , Coronary Vessels/anatomy & histology , Female , Humans , Image Processing, Computer-Assisted/methods , Male , Middle Aged , Retrospective Studies
18.
Ultrasonics ; 101: 106001, 2020 Feb.
Article in English | MEDLINE | ID: mdl-31505328

ABSTRACT

Ultrasound is the first-line tool for screening hepatic steatosis. Statistical distributions can be used to model the backscattered signals for liver characterization. The Nakagami distribution is the most frequently adopted model; however, the homodyned K (HK) distribution has received attention because of its link to physical meaning and its improved parameter estimation through X- and U-statistics (termed "XU"). To assess hepatic steatosis, we proposed HK parametric imaging based on the α parameter (a measure of the number of scatterers per resolution cell) calculated using the XU estimator. Using a commercial system equipped with a 7-MHz linear array transducer, phantom experiments were performed to suggest an appropriate window size for α imaging using the sliding-window technique, which was then applied to measuring the livers of rats (n = 66) with hepatic steatosis induced by a methionine- and choline-deficient diet. The relationships between the α parameter, the stage of hepatic steatosis, and histological features were assessed using the correlation coefficient r, one-way analysis of variance, and regression analysis. The phantom results showed that a window side length of five times the pulse length supported reliable α imaging. The α parameter showed promising performance for grading hepatic steatosis (p < 0.05; r² = 0.68). Compared with conventional Nakagami imaging, α parametric imaging provided significant information associated with fat droplet size (p < 0.05; r² = 0.53), enabling further analysis and evaluation of severe hepatic steatosis.


Subject(s)
Fatty Liver/diagnostic imaging , Ultrasonography/methods , Animals , Disease Models, Animal , Image Enhancement/methods , Image Processing, Computer-Assisted/methods , Male , Phantoms, Imaging , Rats , Rats, Wistar
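The sliding-window parametric imaging described in this abstract can be sketched in a few lines. The sketch below uses the simpler moment-based Nakagami m estimator rather than the paper's HK XU estimator (which is considerably more involved), and the window size and step are illustrative placeholders, not the five-pulse-length side suggested by the phantom study:

```python
import numpy as np

def nakagami_m(window):
    """Moment-based Nakagami m estimate from one window of envelope samples:
    m = E[R^2]^2 / Var(R^2)."""
    r2 = window.astype(float) ** 2
    var = r2.var()
    return r2.mean() ** 2 / var if var > 0 else np.inf

def parametric_map(envelope, win=15, step=5):
    """Slide a win x win window over an envelope image and return the map of
    per-window Nakagami m estimates."""
    h, w = envelope.shape
    rows = range(0, h - win + 1, step)
    cols = range(0, w - win + 1, step)
    out = np.empty((len(rows), len(cols)))
    for i, r in enumerate(rows):
        for j, c in enumerate(cols):
            out[i, j] = nakagami_m(envelope[r:r + win, c:c + win])
    return out

# demo: a Rayleigh envelope is Nakagami-distributed with m = 1,
# so the map values should cluster around 1
rng = np.random.default_rng(0)
demo_map = parametric_map(rng.rayleigh(size=(60, 60)), win=15, step=5)
```

The same windowing loop applies unchanged if `nakagami_m` is swapped for an HK α estimator.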
19.
Neural Netw ; 121: 74-87, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31536901

ABSTRACT

In recent years, deep learning has brought about a breakthrough in medical image segmentation, and U-Net has been the most popular architecture in the medical imaging community. Despite its outstanding overall performance in segmenting multimodal medical images, extensive experiments on several challenging datasets demonstrate that the classical U-Net architecture is lacking in certain respects. We therefore propose some modifications to improve upon the already state-of-the-art U-Net model and develop a novel architecture, MultiResUNet, as a potential successor to U-Net. We tested and compared MultiResUNet with the classical U-Net on a vast repertoire of multimodal medical images. Although only slight improvements were observed on ideal images, remarkable performance gains were attained on the challenging ones. Evaluated on five datasets, each with its own unique challenges, MultiResUNet obtained relative improvements in performance of 10.15%, 5.07%, 2.63%, 1.41%, and 0.62%, respectively. We also discuss and highlight some qualitatively superior aspects of MultiResUNet over the classical U-Net that are not reflected in the quantitative measures.


Subject(s)
Deep Learning , Imaging, Three-Dimensional/methods , Humans , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Microscopy, Fluorescence/methods
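One of the modifications in the full MultiResUNet paper (not spelled out in this abstract) is the MultiRes block, which approximates 5×5 and 7×7 convolutions with chains of 3×3 convolutions, Inception-style. The underlying receptive-field equivalence can be checked with a toy numpy convolution; this is an illustration of the principle, not the authors' code:

```python
import numpy as np

def conv2d_full(img, k):
    """Naive 'full' 2D convolution: each input pixel scatters a copy of the
    kernel into the output, so the output grows by (kernel - 1) per axis."""
    ih, iw = img.shape
    kh, kw = k.shape
    out = np.zeros((ih + kh - 1, iw + kw - 1))
    for i in range(ih):
        for j in range(iw):
            out[i:i + kh, j:j + kw] += img[i, j] * k
    return out

# convolve a unit impulse with chained 3x3 kernels: the support of the
# response is the effective receptive field of the chain
impulse = np.ones((1, 1))
k3 = np.ones((3, 3))
once = conv2d_full(impulse, k3)    # one 3x3 conv  -> 3x3 receptive field
twice = conv2d_full(once, k3)      # two 3x3 convs -> 5x5 receptive field
thrice = conv2d_full(twice, k3)    # three 3x3 convs -> 7x7 receptive field
```

Two chained 3×3 convolutions thus cover the same spatial extent as a single 5×5, and three cover a 7×7, with fewer parameters.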
20.
Nat Biotechnol ; 37(12): 1482-1492, 2019 12.
Article in English | MEDLINE | ID: mdl-31796933

ABSTRACT

The high-dimensional data created by high-throughput technologies require visualization tools that reveal data structure and patterns in an intuitive form. We present PHATE, a visualization method that captures both local and global nonlinear structure using an information-geometric distance between data points. We compare PHATE to other tools on a variety of artificial and biological datasets, and find that it consistently preserves a range of patterns in data, including continual progressions, branches and clusters, better than other tools. We define a manifold preservation metric, which we call denoised embedding manifold preservation (DEMaP), and show that PHATE produces lower-dimensional embeddings that are quantitatively better denoised as compared to existing visualization methods. An analysis of a newly generated single-cell RNA sequencing dataset on human germ-layer differentiation demonstrates how PHATE reveals unique biological insight into the main developmental branches, including identification of three previously undescribed subpopulations. We also show that PHATE is applicable to a wide variety of data types, including mass cytometry, single-cell RNA sequencing, Hi-C and gut microbiome data.


Subject(s)
Genomics/methods , High-Throughput Screening Assays/methods , Image Processing, Computer-Assisted/methods , Algorithms , Animals , Big Data , Cell Differentiation , Cells, Cultured , Computer Simulation , Databases, Genetic , Gastrointestinal Microbiome , Humans , Mice , Sequence Analysis, RNA , Single-Cell Analysis
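The "information-geometric distance" in the PHATE abstract is a distance between log-transformed diffusion probabilities. A toy numpy sketch of that diffusion-potential idea follows; the real PHATE implementation differs substantially (adaptive α-decay kernels, automatic selection of the diffusion time t via von Neumann entropy, metric rather than classical MDS), and the fixed `t` and `eps` here are illustrative assumptions:

```python
import numpy as np

def phate_like_embedding(X, t=8, eps=1.0, n_components=2):
    """Toy diffusion-potential embedding: Gaussian affinities -> Markov
    operator -> t-step diffusion -> -log potential -> classical MDS."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise sq. dists
    K = np.exp(-d2 / eps)                                 # Gaussian kernel
    P = K / K.sum(1, keepdims=True)                       # diffusion operator
    Pt = np.linalg.matrix_power(P, t)                     # t-step diffusion
    U = -np.log(Pt + 1e-12)                               # potential coords
    D = np.sqrt(((U[:, None, :] - U[None, :, :]) ** 2).sum(-1))
    # classical MDS on the potential distances
    n = len(D)
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J                           # double centering
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:n_components]
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

# demo: two well-separated clusters stay separated in the embedding
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (20, 3)), rng.normal(5, 0.3, (20, 3))])
emb = phate_like_embedding(X)
```

Taking the log before measuring distances is what preserves both local structure (where diffusion probabilities are large) and global structure (where they decay exponentially), which plain diffusion distances tend to lose.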