Results 1 - 20 of 40,788
1.
Eur J Med Res ; 25(1): 49, 2020 Oct 12.
Article in English | MEDLINE | ID: mdl-33046116

ABSTRACT

BACKGROUND: The coronavirus disease 2019 (COVID-19) pandemic has caused a global disaster. Quantifying lesions may provide radiological evidence of pneumonia severity and allow assessment of the effect of comorbidity on patients with COVID-19. METHODS: 294 patients with COVID-19 were enrolled from February 24, 2020 to June 1, 2020 at six centers. A multi-task U-Net was used to segment the whole lung and the lesions from chest CT images. This deep learning model was pre-trained on 650 CT images (550 in the primary dataset and 100 in the test dataset) with COVID-19 or community-acquired pneumonia, and Dice coefficients were calculated on the test dataset. 50 CT scans of 50 patients (15 with comorbidity and 35 without) were randomly selected for manual lesion annotation, and the results were compared with the automatic segmentation model. Eight quantitative parameters were calculated from the segmentation results to evaluate the effect of comorbidity on patients with COVID-19. RESULTS: The quantitative segmentation model proved effective and accurate, with all Dice coefficients above 0.85 and all accuracies above 0.95. Of the 294 patients, 52 (17.7%) had at least one comorbidity and 14 (4.8%) had more than one. Patients with any comorbidity were older (P < 0.001), had a longer incubation period (P < 0.001), were more likely to have abnormal laboratory findings (P < 0.05), and were more likely to be in severe status (P < 0.001). More lesions (including larger volumes of lesion, consolidation, and ground-glass opacity) were found in patients with any comorbidity than in patients without (all P < 0.001), and more lesions were seen on CT in patients with more comorbidities. Among the three most frequent single comorbidities, the diabetes mellitus group had the largest median volumes of lesion, consolidation, and ground-glass opacity. CONCLUSIONS: A multi-task U-Net enables quantitative CT analysis of lesions to assess the effect of comorbidity on patients with COVID-19 and to provide radiological evidence of pneumonia severity. More lesions (including GGO and consolidation) were found on CT images of cases with comorbidity, and the more comorbidities patients had, the more lesions their CT images showed.
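The study above evaluates segmentation with Dice coefficients and derives volume-based lesion parameters from the resulting masks. As a minimal illustration (not the authors' code), such metrics could be computed from binary masks roughly as below; the function names and the voxel spacing are illustrative assumptions.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice = 2*|A n B| / (|A| + |B|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

def lesion_volume_ml(mask: np.ndarray, voxel_spacing_mm=(1.0, 0.7, 0.7)) -> float:
    """Approximate lesion volume in millilitres from a binary CT mask."""
    voxel_mm3 = float(np.prod(voxel_spacing_mm))
    return mask.astype(bool).sum() * voxel_mm3 / 1000.0

# Random masks stand in for the network output and the manual annotation.
rng = np.random.default_rng(0)
pred = rng.random((64, 128, 128)) > 0.5
truth = rng.random((64, 128, 128)) > 0.5
print(dice_coefficient(pred, truth), lesion_volume_ml(pred))
```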


Subjects
Algorithms , Betacoronavirus , Coronavirus Infections/epidemiology , Image Processing, Computer-Assisted/methods , Lung/diagnostic imaging , Pneumonia, Viral/epidemiology , Pneumonia/diagnosis , Tomography, X-Ray Computed/methods , Adult , Aged , Comorbidity , Coronavirus Infections/diagnosis , Female , Humans , Male , Middle Aged , Pandemics , Pneumonia/epidemiology , Pneumonia, Viral/diagnosis , Reproducibility of Results , Retrospective Studies
2.
Medicine (Baltimore) ; 99(40): e22350, 2020 Oct 02.
Article in English | MEDLINE | ID: mdl-33019411

ABSTRACT

BACKGROUND: Ultrasonography is currently the first choice for the clinical diagnosis and differentiation of thyroid cancer. However, owing to the complexity and overlapping nature of thyroid nodule sonograms, it remains difficult to accurately identify nodules with atypical ultrasound characteristics. Previous studies showed that superb microvascular imaging (SMI) can detect tumor neovascularization to differentiate benign from malignant thyroid nodules; however, their results have been contradictory and based on small sample sizes. This meta-analysis tests the hypothesis that SMI is accurate in distinguishing benign from malignant thyroid nodules. METHODS: We will search PubMed, Web of Science, the Cochrane Library, and Chinese biomedical databases from their inception to August 20, 2020, without language restrictions. Two authors will independently search literature records, screen titles, abstracts, and full texts, collect data, and assess risk of bias. Review Manager 5.2 and Stata 14.0 (Stata Corp, College Station, TX) will be used for data analysis. RESULTS: This systematic review will determine the accuracy of SMI in distinguishing thyroid nodules. CONCLUSION: Its findings will provide helpful evidence on the accuracy of SMI in distinguishing thyroid nodules. Systematic review registration: INPLASY202080084.


Subjects
Image Processing, Computer-Assisted/methods , Thyroid Nodule/diagnosis , Thyroid Nodule/pathology , Ultrasonography/methods , Diagnosis, Differential , Neovascularization, Pathologic/pathology , Research Design , Thyroid Neoplasms/diagnostic imaging , Thyroid Neoplasms/pathology , Thyroid Nodule/diagnostic imaging
3.
Nat Commun ; 11(1): 4686, 2020 09 17.
Article in English | MEDLINE | ID: mdl-32943633

ABSTRACT

Electrophysiology provides a direct readout of neuronal activity at a temporal precision only limited by the sampling rate. However, interrogating deep brain structures, implanting multiple targets or aiming at unusual angles still poses significant challenges for operators, and errors are only discovered by post-hoc histological reconstruction. Here, we propose a method combining the high-resolution information about bone landmarks provided by micro-CT scanning with the soft tissue contrast of the MRI, which allowed us to precisely localize electrodes and optic fibers in mice in vivo. This enables arbitrating the success of implantation directly after surgery with a precision comparable to gold standard histology. Adjustment of the recording depth with micro-drives or early termination of unsuccessful experiments saves many working hours, and fast 3-dimensional feedback helps surgeons avoid systematic errors. Increased aiming precision enables more precise targeting of small or deep brain nuclei and multiple targeting of specific cortical or hippocampal layers.


Subjects
Brain/diagnostic imaging , Electrodes, Implanted , Image Processing, Computer-Assisted/methods , Optical Fibers , X-Ray Microtomography/methods , Animals , Behavior, Animal , Brain/pathology , Brain Mapping , Hippocampus/diagnostic imaging , Hippocampus/pathology , Hippocampus/surgery , Histological Techniques/methods , Magnetic Resonance Imaging/methods , Male , Mice , Mice, Inbred C57BL , Models, Animal , Silicon , Stereotaxic Techniques
4.
PLoS Comput Biol ; 16(9): e1008193, 2020 09.
Article in English | MEDLINE | ID: mdl-32925919

ABSTRACT

Segmenting cell nuclei within microscopy images is a ubiquitous task in biological research and clinical applications. Unfortunately, segmenting low-contrast overlapping objects that may be tightly packed is a major bottleneck in standard deep learning-based models. We report a Nuclear Segmentation Tool (NuSeT) based on deep learning that accurately segments nuclei across multiple types of fluorescence imaging data. Using a hybrid network consisting of U-Net and Region Proposal Networks (RPN), followed by a watershed step, we have achieved superior performance in detecting and delineating nuclear boundaries in 2D and 3D images of varying complexities. By using foreground normalization and additional training on synthetic images containing non-cellular artifacts, NuSeT improves nuclear detection and reduces false positives. NuSeT addresses common challenges in nuclear segmentation such as variability in nuclear signal and shape, limited training sample size, and sample preparation artifacts. Compared to other segmentation models, NuSeT consistently fares better in generating accurate segmentation masks and assigning boundaries for touching nuclei.
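NuSeT combines a U-Net/RPN hybrid with a watershed step to separate touching nuclei. The sketch below is not the NuSeT implementation; it is a generic example of that kind of watershed post-processing applied to a predicted probability map with scikit-image, and the threshold and min_distance values are arbitrary.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed
from skimage.feature import peak_local_max

def split_touching_nuclei(prob_map: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Watershed post-processing of a network probability map to separate touching nuclei."""
    foreground = prob_map > threshold
    # Peaks of the distance transform serve as watershed seeds.
    distance = ndi.distance_transform_edt(foreground)
    peak_coords = peak_local_max(distance, min_distance=5, labels=foreground)
    markers = np.zeros_like(distance, dtype=int)
    markers[tuple(peak_coords.T)] = np.arange(1, len(peak_coords) + 1)
    return watershed(-distance, markers, mask=foreground)

# A random map stands in for the network output.
labels = split_touching_nuclei(np.random.rand(256, 256))
print(labels.max(), "nucleus-like objects")
```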


Subjects
Cell Nucleus/physiology , Deep Learning , Image Processing, Computer-Assisted/methods , Microscopy/methods , Algorithms , Artifacts , Computational Biology , HeLa Cells , Humans , Software
5.
Nat Commun ; 11(1): 4829, 2020 09 24.
Article in English | MEDLINE | ID: mdl-32973154

ABSTRACT

The computed tomography angiography (CTA) post-processing currently performed manually by technologists is extremely labor intensive and error prone. We propose an artificial intelligence reconstruction system, supported by an optimized physiological anatomical-based 3D convolutional neural network, that can automatically achieve CTA reconstruction in healthcare services. This system is trained and tested with 18,766 head and neck CTA scans from 5 tertiary hospitals in China collected between June 2017 and November 2018. The overall reconstruction accuracy on the independent testing dataset is 0.931. The system is clinically applicable owing to its consistency with manually processed images, achieving a qualification rate of 92.1%. After five months of application, it reduced the time consumed from 14.22 ± 3.64 min to 4.94 ± 0.36 min, the number of clicks from 115.87 ± 25.9 to 4, and the required labor from 3 technologists to 1. Thus, the system facilitates clinical workflows and provides an opportunity for clinical technologists to improve humanistic patient care.
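The system above is built around a 3D convolutional network applied to CTA volumes. The paper's physiological anatomical-based architecture is not reproduced here; the toy PyTorch module below only illustrates the general shape of a 3D encoder-decoder that maps a volume to a per-voxel mask, with all layer sizes chosen arbitrarily.

```python
import torch
import torch.nn as nn

class Tiny3DSegNet(nn.Module):
    """A toy 3D CNN mapping a CT volume to a per-voxel mask (illustrative only)."""
    def __init__(self, in_ch=1, base=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(in_ch, base, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(base, base * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(base * 2, base, 2, stride=2), nn.ReLU(inplace=True),
            nn.Conv3d(base, 1, 1),
        )

    def forward(self, x):
        return torch.sigmoid(self.decoder(self.encoder(x)))

volume = torch.randn(1, 1, 32, 64, 64)   # (batch, channel, depth, height, width)
mask = Tiny3DSegNet()(volume)             # same spatial size as the input
print(mask.shape)
```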


Subjects
Angiography/methods , Blood Vessels/diagnostic imaging , Head/diagnostic imaging , Image Processing, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Neck/diagnostic imaging , Nerve Net/diagnostic imaging , Aged , Bone and Bones/diagnostic imaging , China , Female , Humans , Male , Middle Aged , Tomography, X-Ray Computed
6.
PLoS Comput Biol ; 16(9): e1007758, 2020 09.
Article in English | MEDLINE | ID: mdl-32881897

ABSTRACT

With the ever-increasing quality and quantity of imaging data in biomedical research comes the demand for computational methodologies that enable efficient and reliable automated extraction of the quantitative information contained within these images. One of the challenges in providing such methodology is the need for tailoring algorithms to the specifics of the data, limiting their areas of application. Here we present a broadly applicable approach to quantification and classification of complex shapes and patterns in biological or other multi-component formations. This approach integrates the mapping of all shape boundaries within an image onto a global information-rich graph and machine learning on the multidimensional measures of the graph. We demonstrated the power of this method by (1) extracting subtle structural differences from visually indistinguishable images in our phenotype rescue experiments using the endothelial tube formations assay, (2) training the algorithm to identify biophysical parameters underlying the formation of different multicellular networks in our simulation model of collective cell behavior, and (3) analyzing the response of U2OS cell cultures to a broad array of small molecule perturbations.


Subjects
Computational Biology/methods , Image Processing, Computer-Assisted/methods , Machine Learning , Pattern Recognition, Automated/methods , Algorithms , Cell Line, Tumor , Cytological Techniques , Decision Trees , Gene Knockdown Techniques , Human Umbilical Vein Endothelial Cells , Humans
7.
PLoS Comput Biol ; 16(9): e1008179, 2020 09.
Article in English | MEDLINE | ID: mdl-32898132

ABSTRACT

Detection and segmentation of macrophage cells in fluorescence microscopy images is a challenging problem, mainly due to crowded cells, variation in shapes, and morphological complexity. We present a new deep learning approach for cell detection and segmentation that incorporates previously learned nucleus features. A novel fusion of feature pyramids for nucleus detection and segmentation with feature pyramids for cell detection and segmentation is used to improve performance on a microscopic image dataset created by us and provided for public use, containing both nucleus and cell signals. Our experimental results indicate that cell detection and segmentation performance significantly benefit from the fusion of previously learned nucleus features. The proposed feature pyramid fusion architecture clearly outperforms a state-of-the-art Mask R-CNN approach for cell detection and segmentation with relative mean average precision improvements of up to 23.88% and 23.17%, respectively.


Subjects
Eukaryotic Cells/cytology , Image Processing, Computer-Assisted/methods , Microscopy, Fluorescence/methods , Neural Networks, Computer , Computational Biology , Deep Learning , Humans , Macrophages/cytology , THP-1 Cells
8.
Sci Rep ; 10(1): 15364, 2020 09 21.
Article in English | MEDLINE | ID: mdl-32958781

ABSTRACT

We are currently witnessing the severe spread of the COVID-19 pandemic, caused by the new coronavirus, which produces dangerous symptoms in humans and animals and whose complications may lead to death. Although convolutional neural networks (CNNs) are considered the current state of the art in image classification, they require a massive computational cost for training and deployment. In this paper, we propose an improved hybrid classification approach for COVID-19 images that combines the strengths of CNNs (using the powerful Inception architecture) to extract features with a swarm-based feature selection algorithm, the Marine Predators Algorithm, to select the most relevant features. The selection stage integrates the Marine Predators Algorithm with fractional-order calculus (FO), a robust mathematical tool, yielding the fractional-order Marine Predators Algorithm (FO-MPA). The proposed approach was evaluated on two public COVID-19 X-ray datasets and achieves both high performance and reduced computational complexity. The two datasets consist of COVID-19 X-ray images collected by international cardiothoracic radiologists, researchers, and others and published on Kaggle. The proposed approach successfully selected 130 and 86 out of the 51 K features extracted by Inception from dataset 1 and dataset 2, respectively, while improving classification accuracy at the same time. The results are the best achieved on these datasets compared with a set of recent feature selection algorithms. By achieving classification accuracies and F-scores of 98.7% and 98.2% on dataset 1 and 99.6% and 99% on dataset 2, respectively, the proposed approach outperforms several CNNs and all recent works on COVID-19 images.
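The pipeline above extracts deep features and then searches for a small relevant subset with a swarm optimizer (FO-MPA). The sketch below is not FO-MPA; it is a deliberately simple stand-in that shows the wrapper-style fitness (classification accuracy traded off against subset size) driving a stochastic search over binary feature masks, with random arrays standing in for Inception features.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def fitness(mask, X, y, alpha=0.99):
    """Wrapper fitness: cross-validated accuracy minus a penalty on subset size."""
    if mask.sum() == 0:
        return 0.0
    acc = cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=3).mean()
    return alpha * acc + (1 - alpha) * (1 - mask.sum() / mask.size)

def random_search_selection(X, y, iters=200, p=0.05, seed=0):
    """Toy stochastic search over binary feature masks (stand-in for FO-MPA)."""
    rng = np.random.default_rng(seed)
    best_mask, best_fit = None, -np.inf
    for _ in range(iters):
        mask = rng.random(X.shape[1]) < p
        f = fitness(mask, X, y)
        if f > best_fit:
            best_mask, best_fit = mask, f
    return best_mask, best_fit

X = np.random.rand(100, 512)          # stand-in for deep CNN features
y = np.random.randint(0, 2, 100)      # stand-in labels
mask, fit = random_search_selection(X, y)
print(mask.sum(), "features kept, fitness", round(fit, 3))
```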


Subjects
Coronavirus Infections/diagnostic imaging , Coronavirus Infections/diagnosis , Diagnostic Imaging/methods , Image Processing, Computer-Assisted/methods , Pneumonia, Viral/diagnostic imaging , Pneumonia, Viral/diagnosis , Algorithms , Betacoronavirus , Deep Learning , Humans , Neural Networks, Computer , Pandemics , X-Rays
9.
J Comput Assist Tomogr ; 44(5): 796-805, 2020.
Article in English | MEDLINE | ID: mdl-32932343

ABSTRACT

OBJECTIVE: In this article, a statistics-based iterative ring removal (IRR) algorithm that effectively removes ring artifacts generated by defective detector cells is proposed. METHODS: The physical state of computed tomography (CT) detector elements can change dynamically owing to their temperature dependence and the varying irradiation caused by focal spot movements. This variation in cell properties may cause false pixel values in sinograms, resulting in rings or segments of rings in reconstructed images. In this article, the proposed algorithm is studied on clinical CT. Two patients were scanned using a clinical CT scanner (AnyScan SPECT/CT, Mediso). Artificial rings and band rings were generated on the real sinogram data to examine the algorithm in different cases, and the method was also applied to real ring artifacts. RESULTS: The IRR can correct both single and band-like ring artifacts arising from one or more defective pixels. The proposed algorithm detects the periods during which pixels contain false signals, and only those periods are corrected. The IRR reduces ring artifacts even in cases where low-contrast rings occur in the reconstructed image. CONCLUSIONS: This statistical correction method efficiently detects and corrects false pixel values in the projection data without introducing new artifacts in the reconstructed image, and it is relatively insensitive to its parameter settings.
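Ring artifacts originate from individual detector cells that report biased values across many projections, so corrections are usually applied in the sinogram domain. The snippet below is a much-simplified stand-in for the statistical IRR method: it flags detector columns whose mean response deviates strongly from their neighbours and interpolates across them; the smoothing window and z-score threshold are arbitrary choices.

```python
import numpy as np

def correct_defective_columns(sinogram: np.ndarray, z_thresh: float = 4.0) -> np.ndarray:
    """Flag outlier detector columns and replace them by per-projection interpolation."""
    sino = sinogram.astype(float).copy()
    col_mean = sino.mean(axis=0)                        # one value per detector cell
    smooth = np.convolve(col_mean, np.ones(9) / 9, mode="same")
    residual = col_mean - smooth
    z = (residual - residual.mean()) / (residual.std() + 1e-12)
    bad = np.flatnonzero(np.abs(z) > z_thresh)
    good = np.setdiff1d(np.arange(sino.shape[1]), bad)
    for row in range(sino.shape[0]):                    # interpolate across bad cells
        sino[row, bad] = np.interp(bad, good, sino[row, good])
    return sino

sino = np.random.rand(360, 512)
sino[:, 100] += 0.8                                     # simulate one defective detector cell
corrected = correct_defective_columns(sino)
print("bad column adjusted:", not np.allclose(corrected[:, 100], sino[:, 100]))
```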


Subjects
Artifacts , Image Processing, Computer-Assisted/methods , Tomography, Spiral Computed/methods , Algorithms , Humans
10.
Sensors (Basel) ; 20(18)2020 Sep 14.
Article in English | MEDLINE | ID: mdl-32937867

ABSTRACT

The rapid worldwide spread of Coronavirus Disease 2019 (COVID-19) has resulted in a global pandemic. Correct facemask wearing is valuable for infectious disease control, but the effectiveness of facemasks has been diminished, mostly due to improper wearing. However, there have not been any published reports on the automatic identification of facemask-wearing conditions. In this study, we develop a new facemask-wearing condition identification method by combining image super-resolution and classification networks (SRCNet), which quantifies a three-category classification problem based on unconstrained 2D facial images. The proposed algorithm contains four main steps: Image pre-processing, facial detection and cropping, image super-resolution, and facemask-wearing condition identification. Our method was trained and evaluated on the public dataset Medical Masks Dataset containing 3835 images with 671 images of no facemask-wearing, 134 images of incorrect facemask-wearing, and 3030 images of correct facemask-wearing. Finally, the proposed SRCNet achieved 98.70% accuracy and outperformed traditional end-to-end image classification methods using deep learning without image super-resolution by over 1.5% in kappa. Our findings indicate that the proposed SRCNet can achieve high-accuracy identification of facemask-wearing conditions, thus having potential applications in epidemic prevention involving COVID-19.


Subjects
Betacoronavirus , Coronavirus Infections/prevention & control , Masks , Pandemics/prevention & control , Pneumonia, Viral/prevention & control , Algorithms , China/epidemiology , Coronavirus Infections/epidemiology , Databases, Factual , Deep Learning , Face , Humans , Image Processing, Computer-Assisted/methods , Image Processing, Computer-Assisted/statistics & numerical data , Masks/classification , Masks/statistics & numerical data , Neural Networks, Computer , Pneumonia, Viral/epidemiology
11.
Nat Commun ; 11(1): 4560, 2020 09 11.
Article in English | MEDLINE | ID: mdl-32917899

ABSTRACT

The rhesus macaque is an important model species in several branches of science, including neuroscience, psychology, ethology, and medicine. The utility of the macaque model would be greatly enhanced by the ability to precisely measure behavior in freely moving conditions. Existing approaches do not provide sufficient tracking. Here, we describe OpenMonkeyStudio, a deep learning-based markerless motion capture system for estimating 3D pose in freely moving macaques in large unconstrained environments. Our system makes use of 62 machine vision cameras that encircle an open 2.45 m × 2.45 m × 2.75 m enclosure. The resulting multiview image streams allow for data augmentation via 3D-reconstruction of annotated images to train a robust view-invariant deep neural network. This view invariance represents an important advance over previous markerless 2D tracking approaches, and allows fully automatic pose inference on unconstrained natural motion. We show that OpenMonkeyStudio can be used to accurately recognize actions and track social interactions.


Subjects
Image Processing, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Macaca mulatta/physiology , Motion , Algorithms , Animals , Biomechanical Phenomena , Deep Learning , Male , Models, Animal , Movement , Nerve Net/diagnostic imaging , Nerve Net/physiology , Neural Networks, Computer
12.
Medicine (Baltimore) ; 99(37): e22189, 2020 Sep 11.
Article in English | MEDLINE | ID: mdl-32925793

ABSTRACT

Herein, a Harris corner detection algorithm based on iterated threshold segmentation and an adaptive iterative threshold (AIT-Harris) is proposed, and a stepwise local stitching algorithm is used to obtain wide-field ultrasound (US) images. Cone-beam computed tomography (CBCT) and US images from 9 cervical cancer patients and 1 prostate cancer patient were examined. In the experiment, corner features were extracted with the AIT-Harris, Harris, and Moravec algorithms. Wide-field ultrasonic images were then obtained by local stitching based on the extracted features, and the corner matching rates of all tested algorithms were compared. The accuracy of the delineated contours of organs at risk (OARs) was compared between the stitched ultrasonic images and CBCT. The corner matching rate of the Moravec algorithm was compared with those obtained by the Harris and AIT-Harris algorithms using paired-sample t tests (t = 6.142, t = 31.859, P < .05), and the differences were statistically significant. The average Dice similarity coefficient between the automatically delineated bladder region on wide-field US images and the manually delineated bladder region on ground-truth CBCT images was 0.924, and the average Jaccard coefficient was 0.894. The proposed algorithm improved the accuracy of corner detection, and the stitched wide-field US image could refine the delineation range of OARs in the pelvic cavity.
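The AIT-Harris idea couples Harris corner detection with an iteratively adapted threshold. A hedged sketch of that coupling using OpenCV's cornerHarris is shown below; the target corner-count range and the update factors are illustrative and do not come from the paper.

```python
import cv2
import numpy as np

def adaptive_harris_corners(gray: np.ndarray, target=(50, 200), max_iter=20):
    """Threshold the Harris response by an iteratively adapted fraction of its maximum
    until the detected corner count falls inside a target range."""
    response = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
    frac = 0.01
    corners = np.argwhere(response > frac * response.max())
    for _ in range(max_iter):
        corners = np.argwhere(response > frac * response.max())
        if len(corners) < target[0]:
            frac *= 0.5          # too few corners: relax the threshold
        elif len(corners) > target[1]:
            frac *= 2.0          # too many corners: tighten the threshold
        else:
            break
    return corners

img = (np.random.rand(256, 256) * 255).astype(np.uint8)   # stand-in ultrasound frame
print(len(adaptive_harris_corners(img)), "corners detected")
```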


Subjects
Algorithms , Cone-Beam Computed Tomography/methods , Image Processing, Computer-Assisted/methods , Prostatic Neoplasms/diagnostic imaging , Uterine Cervical Neoplasms/diagnostic imaging , Female , Humans , Male , Sensitivity and Specificity , Ultrasonography/methods
13.
J Comput Assist Tomogr ; 44(5): 673-680, 2020.
Article in English | MEDLINE | ID: mdl-32936576

ABSTRACT

OBJECTIVES: This study aimed to evaluate the image quality of 7 iterative reconstruction (IR) algorithms in comparison to filtered back-projection (FBP) algorithm. METHODS: An anthropomorphic chest phantom was scanned on 4 computed tomography scanners and reconstructed with FBP and IR algorithms. Image quality of anatomical details-large/medium-sized pulmonary vessels, small pulmonary vessels, thoracic wall, and small and large lesions-was scored. Furthermore, general impression of noise, image contrast, and artifacts were evaluated. Visual grading regression was used to analyze the data. Standard deviations were measured, and the noise power spectrum was calculated. RESULTS: Iterative reconstruction algorithms showed significantly better results when compared with FBP for these criteria (regression coefficients/P values in parentheses): vessels (FIRST: -1.8/0.05, AIDR Enhanced: <-2.3/0.01, Veo: <-0.1/0.03, ADMIRE: <-2.1/0.04), lesions (FIRST: <-2.6/0.01, AIDR Enhanced: <-1.9/0.03, IMR1: <-2.7/0.01, Veo: <-2.4/0.02, ADMIRE: -2.3/0.02), image noise (FIRST: <-3.2/0.004, AIDR Enhanced: <-3.5/0.002, IMR1: <-6.1/0.001, iDose: <-2.3/0.02, Veo: <-3.4/0.002, ADMIRE: <-3.5/0.02), image contrast (FIRST: -2.3/0.01, AIDR Enhanced: -2.5/0.01, IMR1: -3.7/0.001, iDose: -2.1/0.02), and artifacts (FIRST: <-3.8/0.004, AIDR Enhanced: <-2.7/0.02, IMR1: <-2.6/0.02, iDose: -2.1/0.04, Veo: -2.6/0.02). The iDose algorithm was the only IR algorithm that maintained the noise frequencies. CONCLUSIONS: Iterative reconstruction algorithms performed differently on all evaluated criteria, showing the importance of careful implementation of algorithms for diagnostic purposes.


Subjects
Image Processing, Computer-Assisted/methods , Phantoms, Imaging , Radiography, Thoracic/methods , Tomography, X-Ray Computed/methods , Algorithms , Artifacts , Observer Variation , Reproducibility of Results , Signal-To-Noise Ratio
14.
PLoS One ; 15(9): e0237972, 2020.
Article in English | MEDLINE | ID: mdl-32915784

ABSTRACT

Automated profiling of cell morphology is a powerful tool for inferring cell function. However, this technique retains a high barrier to entry. In particular, configuring image processing parameters for optimal cell profiling is susceptible to cognitive biases and dependent on user experience. Here, we use interactive machine learning to identify the optimum cell profiling configuration that maximises quality of the cell profiling outcome. The process is guided by the user, from whom a rating of the quality of a cell profiling configuration is obtained. We use Bayesian optimisation, an established machine learning algorithm, to learn from this information and automatically recommend the next configuration to examine with the aim of maximising the quality of the processing or analysis. Compared to existing interactive machine learning tools that require domain expertise for per-class or per-pixel annotations, we rely on users' explicit assessment of output quality of the cell profiling task at hand. We validated our interactive approach against the standard human trial-and-error scheme to optimise an object segmentation task using the standard software CellProfiler. Our toolkit enabled rapid optimisation of an object segmentation pipeline, increasing the quality of object segmentation over a pipeline optimised through trial-and-error. Users also attested to the ease of use and reduced cognitive load enabled by our machine learning strategy over the standard approach. We envision that our interactive machine learning approach can enhance the quality and efficiency of pipeline optimisation to democratise image-based cell profiling.
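The tool described wraps Bayesian optimisation around a user-supplied quality rating of each cell-profiling configuration. Below is a minimal sketch of such a loop, assuming scikit-optimize is available; the pipeline call is a placeholder, the 0-10 rating is collected interactively, and none of the parameter names come from the paper or from CellProfiler.

```python
# pip install scikit-optimize   (assumed available)
from skopt import gp_minimize
from skopt.space import Integer, Real

def run_pipeline_and_ask_user(params):
    """Run the profiling pipeline with `params`, show the result, and return a user rating.
    The pipeline call is a placeholder; in practice it would configure and run the
    segmentation, display the output, and collect a 0-10 quality score."""
    smoothing_sigma, threshold = params
    print(f"showing segmentation with sigma={smoothing_sigma}, threshold={threshold:.2f}")
    rating = float(input("rate the segmentation quality (0-10): "))
    return -rating                      # gp_minimize minimises, so negate the rating

result = gp_minimize(
    run_pipeline_and_ask_user,
    dimensions=[Integer(1, 10, name="smoothing_sigma"),
                Real(0.1, 0.9, name="threshold")],
    n_calls=15,                         # 15 user interactions in total
    random_state=0,
)
print("best configuration found:", result.x)
```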


Subjects
Machine Learning , Algorithms , Bayes Theorem , Humans , Image Processing, Computer-Assisted/methods , Microscopy
15.
Nat Commun ; 11(1): 4391, 2020 09 01.
Article in English | MEDLINE | ID: mdl-32873806

ABSTRACT

Deep learning with Convolutional Neural Networks has shown great promise in image-based classification and enhancement but is often unsuitable for predictive modeling using features without spatial correlations. We present a feature representation approach termed REFINED (REpresentation of Features as Images with NEighborhood Dependencies) to arrange high-dimensional vectors in a compact image form conducive to CNN-based deep learning. We consider the similarities between features to generate a concise feature map in the form of a two-dimensional image by minimizing the pairwise distance values following a Bayesian metric multidimensional scaling approach. We hypothesize that this approach enables embedded feature extraction and, integrated with CNN-based deep learning, can boost predictive accuracy. We illustrate the superior predictive capabilities of the proposed framework, compared with state-of-the-art methodologies, in drug sensitivity prediction scenarios using synthetic datasets, drug chemical descriptors as predictors from NCI60, and both transcriptomic information and drug descriptors as predictors from GDSC.
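REFINED arranges features on a 2D grid so that similar features end up as neighbours, turning each sample into a small image for a CNN. The rough sketch below uses plain metric MDS on feature-feature correlation distances followed by a greedy grid assignment; it approximates the idea only and is not the authors' Bayesian metric MDS procedure.

```python
import numpy as np
from sklearn.manifold import MDS
from scipy.spatial.distance import pdist, squareform

def features_to_image_layout(X: np.ndarray, side: int) -> np.ndarray:
    """Place the side*side feature columns of X on a side x side grid so that
    correlated features land close together (simplified REFINED-style layout)."""
    dist = squareform(pdist(X.T, metric="correlation"))          # feature-feature distances
    coords = MDS(n_components=2, dissimilarity="precomputed",
                 random_state=0).fit_transform(dist)
    cells = [(r, c) for r in range(side) for c in range(side)]
    # Greedy: order features by their embedded position, then fill the grid row by row.
    order = np.argsort(coords[:, 0] + side * coords[:, 1])
    grid = np.full((side, side), -1, dtype=int)
    for feat, (r, c) in zip(order, cells):
        grid[r, c] = feat
    return grid                                                  # grid[r, c] = feature index

X = np.random.rand(200, 64)        # 200 samples, 64 features -> an 8x8 "image" layout
layout = features_to_image_layout(X, side=8)
sample_image = X[0, layout]        # one sample rendered as an 8x8 image for a CNN
print(sample_image.shape)
```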


Subjects
Antineoplastic Agents/pharmacology , Deep Learning , Image Processing, Computer-Assisted/methods , Neoplasms/drug therapy , Antineoplastic Agents/therapeutic use , Bayes Theorem , Biomarkers, Tumor/genetics , Cell Line, Tumor , Cell Proliferation/drug effects , Datasets as Topic , Drug Resistance, Neoplasm , Drug Screening Assays, Antitumor/methods , Gene Expression Profiling , High-Throughput Nucleotide Sequencing , Humans , Neoplasms/pathology , Oligonucleotide Array Sequence Analysis
16.
Biomed Eng Online ; 19(1): 63, 2020 Aug 12.
Article in English | MEDLINE | ID: mdl-32787937

ABSTRACT

BACKGROUND: Chest CT is used to assess the severity of patients infected with the novel coronavirus 2019 (COVID-19). We collected chest CT scans of 202 patients diagnosed with COVID-19 and aimed to develop a rapid, accurate, and automatic tool for severity screening to guide follow-up therapeutic treatment. METHODS: A total of 729 2D axial slices, comprising 246 severe cases and 483 non-severe cases, were employed in this study. Taking advantage of pre-trained deep neural networks, four off-the-shelf deep models (Inception-V3, ResNet-50, ResNet-101, DenseNet-201) were used to extract features from these CT scans. These features were then fed to multiple classifiers (linear discriminant, linear SVM, cubic SVM, KNN, and AdaBoost decision tree) to identify severe and non-severe COVID-19 cases. Three validation strategies (holdout validation, tenfold cross-validation, and leave-one-out) were employed to validate the feasibility of the proposed pipelines. RESULTS AND CONCLUSION: The experimental results demonstrate that classifying features from pre-trained deep models is a promising approach to COVID-19 severity screening, with the DenseNet-201 plus cubic SVM model achieving the best performance. Specifically, it achieved the highest severity classification accuracies of 95.20% and 95.34% for tenfold cross-validation and leave-one-out, respectively. The established pipeline achieved rapid and accurate identification of COVID-19 severity, which may assist physicians in making more efficient and reliable decisions.
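The pipeline here is: pre-trained CNN features in, classical classifier out, validated by cross-validation. Below is a compact sketch of the classification and validation step with scikit-learn, where the random arrays stand in for DenseNet-201 features (1920-dimensional after global pooling) and severe/non-severe labels; the sample count and SVM settings are illustrative.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score, LeaveOneOut

# Stand-in data: in the study, X would hold features extracted from CT slices by a
# pre-trained DenseNet-201 and y the severe (1) / non-severe (0) labels.
X = np.random.rand(120, 1920)
y = np.random.randint(0, 2, 120)

cubic_svm = make_pipeline(StandardScaler(), SVC(kernel="poly", degree=3, C=1.0))

acc_10fold = cross_val_score(cubic_svm, X, y, cv=10).mean()
acc_loo = cross_val_score(cubic_svm, X, y, cv=LeaveOneOut()).mean()
print(f"10-fold accuracy: {acc_10fold:.3f}, leave-one-out accuracy: {acc_loo:.3f}")
```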


Subjects
Coronavirus Infections/diagnostic imaging , Deep Learning , Image Processing, Computer-Assisted/methods , Pneumonia, Viral/diagnostic imaging , Tomography, X-Ray Computed , Adolescent , Adult , Aged , Aged, 80 and over , Child , Child, Preschool , Female , Humans , Male , Middle Aged , Pandemics , Sensitivity and Specificity , Time Factors , Young Adult
17.
PLoS Comput Biol ; 16(8): e1008049, 2020 08.
Article in English | MEDLINE | ID: mdl-32822341

ABSTRACT

Tissue morphogenesis relies on repeated use of dynamic behaviors at the levels of intracellular structures, individual cells, and cell groups. Rapidly accumulating live imaging datasets make it increasingly important to formalize and automate the task of mapping recurrent dynamic behaviors (motifs), as it is done in speech recognition and other data mining applications. Here, we present a "template-based search" approach for accurate mapping of sub- to multi-cellular morphogenetic motifs using a time series data mining framework. We formulated the task of motif mapping as a subsequence matching problem and solved it using dynamic time warping, while relying on high throughput graph-theoretic algorithms for efficient exploration of the search space. This formulation allows our algorithm to accurately identify the complete duration of each instance and automatically label different stages throughout its progress, such as cell cycle phases during cell division. To illustrate our approach, we mapped cell intercalations during germband extension in the early Drosophila embryo. Our framework enabled statistical analysis of intercalary cell behaviors in wild-type and mutant embryos, comparison of temporal dynamics in contracting and growing junctions in different genotypes, and the identification of a novel mode of iterative cell intercalation. Our formulation of tissue morphogenesis using time series opens new avenues for systematic decomposition of tissue morphogenesis.
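The motif-mapping task above is formulated as subsequence matching with dynamic time warping. The sketch below shows a plain DTW distance and a brute-force sliding-window subsequence search on a 1-D signal; the paper's graph-theoretic pruning of the search space is not reproduced, and the template and window lengths are arbitrary.

```python
import numpy as np

def dtw_distance(query: np.ndarray, segment: np.ndarray) -> float:
    """Classic O(n*m) dynamic time warping distance between two 1-D series."""
    n, m = len(query), len(segment)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(query[i - 1] - segment[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def best_matching_subsequence(query, series, min_len, max_len):
    """Brute force: slide windows of varying length over the series and keep the one
    with the smallest DTW distance to the template motif."""
    best = (np.inf, None)
    for start in range(len(series)):
        for length in range(min_len, max_len + 1):
            end = start + length
            if end > len(series):
                break
            d = dtw_distance(query, series[start:end])
            if d < best[0]:
                best = (d, (start, end))
    return best

template = np.sin(np.linspace(0, 2 * np.pi, 30))   # motif template, e.g. a junction-length profile
series = np.concatenate([np.random.rand(50),
                         template + 0.05 * np.random.randn(30),
                         np.random.rand(50)])
print(best_matching_subsequence(template, series, 20, 40))
```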


Subjects
Computational Biology/methods , Image Processing, Computer-Assisted/methods , Morphogenesis/physiology , Algorithms , Animals , Cell Division/physiology , Data Mining/methods , Drosophila/cytology , Drosophila/embryology , Embryo, Nonmammalian/cytology , Embryo, Nonmammalian/embryology , Female , Male , Microscopy, Confocal , Time Factors
18.
Ultrasonics ; 108: 106214, 2020 Dec.
Article in English | MEDLINE | ID: mdl-32736163

ABSTRACT

In this work, a compressed sensing method for reducing the hardware complexity of ultrasound imaging systems is proposed and experimentally verified. We provide a clinical evaluation of the method at high compression rates (up to 64 RF signals compressed into a single channel on receive), using elastic net estimation for the decoding stage. This allows a reduction in the size and power consumption of the front-end electronics with only a minor loss in image quality. We demonstrate an 8-fold receive channel count reduction with a mean absolute error of 3.16 dB and 3.64 dB for gallbladder and kidney images, respectively, as well as a 7.4% increase in the contrast-to-noise ratio for kidney images and a 0.1% loss in the contrast-to-noise ratio for gallbladder images, on average. The proposed method may enable the construction of a fully portable ultrasonic device with virtually no loss in image quality compared to a full-size clinical scanner.
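The decoding stage recovers per-channel RF contributions from a compressed receive line with elastic net estimation. Below is a toy sketch of that estimation step only; the hardware-side encoding is merely simulated by a random mixing matrix, and the sparsity level, alpha, and l1_ratio are arbitrary choices, not values from the paper.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
n_channels, n_measurements = 64, 16
x_true = np.zeros(n_channels)
x_true[rng.choice(n_channels, 6, replace=False)] = rng.standard_normal(6)  # sparse channel activity

A = rng.standard_normal((n_measurements, n_channels))         # known mixing/encoding matrix
y = A @ x_true + 0.01 * rng.standard_normal(n_measurements)   # compressed receive samples

decoder = ElasticNet(alpha=0.01, l1_ratio=0.9, fit_intercept=False, max_iter=10000)
decoder.fit(A, y)                                              # columns of A act as "features"
x_hat = decoder.coef_

print("relative reconstruction error:",
      np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```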


Subjects
Data Compression/methods , Ultrasonography/methods , Algorithms , Gallbladder/diagnostic imaging , Healthy Volunteers , Humans , Image Processing, Computer-Assisted/methods , Kidney/diagnostic imaging , Liver/diagnostic imaging , Signal Processing, Computer-Assisted , Signal-To-Noise Ratio , Ultrasonography/instrumentation
19.
PLoS One ; 15(8): e0236493, 2020.
Article in English | MEDLINE | ID: mdl-32745102

ABSTRACT

Accurate segmentation of brain magnetic resonance imaging (MRI) is an essential step in quantifying changes in brain structure. In recent years, deep learning has been used extensively for brain image segmentation with highly promising performance; in particular, the U-net architecture has been widely used for segmentation in various biomedical fields. In this paper, we propose a patch-wise U-net architecture for the automatic segmentation of brain structures in structural MRI. The non-overlapping patch-wise U-net overcomes the drawbacks of the conventional U-net by retaining more local information. In the proposed method, the slices from an MRI scan are divided into non-overlapping patches that are fed into the U-net model, along with their corresponding ground-truth patches, to train the network. The experimental results show that the proposed patch-wise U-net model achieves a Dice similarity coefficient (DSC) of 0.93 on average and outperforms the conventional U-net and SegNet-based methods by 3% and 10%, respectively, on the Open Access Series of Imaging Studies (OASIS) and Internet Brain Segmentation Repository (IBSR) datasets.
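The core preprocessing step is splitting each MRI slice into non-overlapping patches before feeding them to the U-net and reassembling the predictions afterwards. Below is a small NumPy sketch of that tiling and its inverse; the 256 x 256 slice size and 64 x 64 patch size are arbitrary examples.

```python
import numpy as np

def to_patches(slice2d: np.ndarray, patch: int) -> np.ndarray:
    """Split a 2-D slice into non-overlapping patch x patch tiles (dims must divide evenly)."""
    h, w = slice2d.shape
    return (slice2d.reshape(h // patch, patch, w // patch, patch)
                   .swapaxes(1, 2)
                   .reshape(-1, patch, patch))

def from_patches(patches: np.ndarray, h: int, w: int) -> np.ndarray:
    """Reassemble the tiles produced by to_patches back into the full slice."""
    patch = patches.shape[-1]
    return (patches.reshape(h // patch, w // patch, patch, patch)
                   .swapaxes(1, 2)
                   .reshape(h, w))

slice2d = np.random.rand(256, 256)            # stand-in for one MRI slice
tiles = to_patches(slice2d, 64)               # 16 non-overlapping 64 x 64 patches for the U-net
assert np.allclose(from_patches(tiles, 256, 256), slice2d)
print(tiles.shape)
```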


Subjects
Brain/diagnostic imaging , Deep Learning , Image Processing, Computer-Assisted/statistics & numerical data , Magnetic Resonance Imaging/statistics & numerical data , Algorithms , Humans , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Neural Networks, Computer
20.
Nat Commun ; 11(1): 4339, 2020 08 28.
Article in English | MEDLINE | ID: mdl-32859909

ABSTRACT

DNA points accumulation for imaging in nanoscale topography (DNA-PAINT) facilitates multiplexing in superresolution microscopy but is practically limited by slow imaging speed. To address this issue, we propose the additions of ethylene carbonate (EC) to the imaging buffer, sequence repeats to the docking strand, and a spacer between the docking strand and the affinity agent. Collectively termed DNA-PAINT-ERS (E = EC, R = Repeating sequence, and S = Spacer), these strategies can be easily integrated into current DNA-PAINT workflows for both accelerated imaging speed and improved image quality through optimized DNA hybridization kinetics and efficiency. We demonstrate the general applicability of DNA-PAINT-ERS for fast, multiplexed superresolution imaging using previously validated oligonucleotide constructs with slight modifications.


Subjects
Cytological Techniques/methods , DNA/chemistry , Microscopy, Fluorescence/methods , Molecular Docking Simulation/methods , Cell Line , Humans , Image Processing, Computer-Assisted/methods , Oligonucleotides , Staining and Labeling/methods