Results 1 - 20 of 59
1.
Analyst ; 147(17): 3838-3853, 2022 Aug 22.
Article in English | MEDLINE | ID: mdl-35726910

ABSTRACT

Rapid, simple, inexpensive, accurate, and sensitive point-of-care (POC) detection of viral pathogens in bodily fluids is a vital component of controlling the spread of infectious diseases. The predominant laboratory-based methods for sample processing and nucleic acid detection face limitations that prevent them from gaining wide adoption for POC applications in low-resource settings and self-testing scenarios. Here, we report the design and characterization of an integrated system for rapid sample-to-answer detection of a viral pathogen in a droplet of whole blood comprised of a 2-stage microfluidic cartridge for sample processing and nucleic acid amplification, and a clip-on detection instrument that interfaces with the image sensor of a smartphone. The cartridge is designed to release viral RNA from Zika virus in whole blood using chemical lysis, followed by mixing with the assay buffer for performing reverse-transcriptase loop-mediated isothermal amplification (RT-LAMP) reactions in six parallel microfluidic compartments. The battery-powered handheld detection instrument uniformly heats the compartments from below, and an array of LEDs illuminates from above, while the generation of fluorescent reporters in the compartments is kinetically monitored by collecting a series of smartphone images. We characterize the assay time and detection limits for detecting Zika RNA and gamma ray-deactivated Zika virus spiked into buffer and whole blood and compare the performance of the same assay when conducted in conventional PCR tubes. Our approach for kinetic monitoring of the fluorescence-generating process in the microfluidic compartments enables spatial analysis of early fluorescent "bloom" events for positive samples, in an approach called "Spatial LAMP" (S-LAMP). We show that S-LAMP image analysis reduces the time required to designate an assay as a positive test, compared to conventional analysis of the average fluorescent intensity of the entire compartment. 
S-LAMP enables the RT-LAMP process to be as short as 22 minutes, resulting in a total sample-to-answer time in the range of 17-32 minutes to distinguish positive from negative samples, while demonstrating viral RNA detection as low as 2.70 × 10² copies per µL and detection of gamma-irradiated virus at 10³ virus particles in a single 12.5 µL droplet blood sample.
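The time savings of the spatial criterion can be illustrated with a toy sketch. The Python snippet below is a hedged illustration, not the authors' pipeline: the synthetic image stack, the thresholds, and the `min_pixels` bloom criterion are all assumptions made for demonstration.

```python
import numpy as np

def time_to_positive_mean(frames, threshold):
    """First frame index where the whole-compartment mean crosses threshold."""
    for t, frame in enumerate(frames):
        if frame.mean() > threshold:
            return t
    return None

def time_to_positive_spatial(frames, threshold, min_pixels=5):
    """First frame index where a localized bright 'bloom' appears:
    at least min_pixels pixels exceed the threshold."""
    for t, frame in enumerate(frames):
        if (frame > threshold).sum() >= min_pixels:
            return t
    return None

# Synthetic compartment stack: a small local bloom brightens over 30 frames.
T, H, W = 30, 32, 32
frames = np.zeros((T, H, W))
for t in range(T):
    size = min(t + 1, 8)
    frames[t, 10:10 + size, 10:10 + size] = t / T  # local bloom intensity

t_spatial = time_to_positive_spatial(frames, threshold=0.5)
t_mean = time_to_positive_mean(frames, threshold=0.5)
# The spatial criterion fires as soon as the bloom is bright, while the
# compartment-wide mean stays diluted by the dark background and never fires.
```

In practice the thresholds would be calibrated against the assay's fluorescence baseline; the point is only that a localized criterion can fire long before the compartment-wide mean does.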


Subjects
Zika Virus Infection, Zika Virus, Humans, Microfluidics, Molecular Diagnostic Techniques, Nucleic Acid Amplification Techniques/methods, Viral RNA/genetics, Sensitivity and Specificity, Smartphone, Surgical Instruments, Zika Virus/genetics, Zika Virus Infection/diagnosis
2.
Bioinformatics ; 35(14): i530-i537, 2019 07 15.
Article in English | MEDLINE | ID: mdl-31510662

ABSTRACT

MOTIVATION: Neural networks have been widely used to analyze high-throughput microscopy images. However, the performance of neural networks can be significantly improved by encoding known invariance for particular tasks. Highly relevant to the goal of automated cell phenotyping from microscopy image data is rotation invariance. Here we consider the application of two schemes for encoding rotation equivariance and invariance in a convolutional neural network, namely, the group-equivariant CNN (G-CNN), and a new architecture with simple, efficient conic convolution, for classifying microscopy images. We additionally integrate the 2D-discrete-Fourier transform (2D-DFT) as an effective means for encoding global rotational invariance. We call our new method the Conic Convolution and DFT Network (CFNet). RESULTS: We evaluated the efficacy of CFNet and G-CNN as compared to a standard CNN for several different image classification tasks, including simulated and real microscopy images of subcellular protein localization, and demonstrated improved performance. We believe CFNet has the potential to improve many high-throughput microscopy image analysis applications. AVAILABILITY AND IMPLEMENTATION: Source code of CFNet is available at: https://github.com/bchidest/CFNet. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
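As a simplified stand-in for the rotation machinery described above (not CFNet's conic convolution or its 2D-DFT layer), pooling a feature over the rotation orbit of the input gives exact invariance to 90° rotations; `edge_response` is a toy feature invented for this sketch.

```python
import numpy as np

def edge_response(img):
    """Toy oriented feature: horizontal finite-difference gradient."""
    out = np.zeros_like(img)
    out[:, 1:-1] = img[:, 2:] - img[:, :-2]
    return out

def rotation_invariant_descriptor(img):
    """Pool a scalar feature over the 4-fold rotation orbit of the input.
    Because the orbit of a rotated image is the same set of images, the
    pooled value is exactly invariant to 90-degree rotations."""
    responses = [np.abs(edge_response(np.rot90(img, k))).sum() for k in range(4)]
    return max(responses)

rng = np.random.default_rng(0)
img = rng.random((16, 16))
d0 = rotation_invariant_descriptor(img)
d1 = rotation_invariant_descriptor(np.rot90(img))
# d0 == d1: the descriptor does not change when the input is rotated.
```

G-CNNs and CFNet build this equivariance into the convolution itself rather than replicating the input, which is far more efficient, but the orbit-pooling identity above is the underlying invariance principle.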


Subjects
Microscopy, Neural Networks, Computer, Rotation, Software
3.
Artif Intell Med ; 149: 102787, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38462287

ABSTRACT

Traditional approaches to predicting breast cancer patients' survival outcomes were based on clinical subgroups, the PAM50 genes, or evaluation of the histological tissue. With the growth of multi-modality datasets capturing diverse information (such as genomics, histology, radiology and clinical data) about the same cancer, integrating this information with advanced tools has improved survival prediction. These methods implicitly exploit the key observation that different modalities originate from the same cancer source and jointly provide a complete picture of the cancer. In this work, we investigate the benefits of explicitly modelling multi-modality data as originating from the same cancer under a probabilistic framework. Specifically, we consider histology and genomics as two modalities originating from the same breast cancer under a probabilistic graphical model (PGM). We construct maximum likelihood estimates of the PGM parameters based on canonical correlation analysis (CCA) and then infer the underlying properties of the cancer patient, such as survival. Equivalently, we construct CCA-based joint embeddings of the two modalities and input them to a learnable predictor. Real-world properties of sparsity and graph structure are captured in penalized variants of CCA (pCCA), which are better suited for cancer applications. To generate richer multi-dimensional embeddings with pCCA, we introduce two novel embedding schemes that encourage orthogonality and yield more informative embeddings. The efficacy of our proposed prediction pipeline is first demonstrated via low prediction errors of the hidden variable and the generation of informative embeddings on simulated data. When applied to breast cancer histology and RNA-sequencing expression data from The Cancer Genome Atlas (TCGA), our model provides survival predictions with average concordance-indices of up to 68.32% along with interpretability.
We also illustrate how the pCCA embeddings can be used for survival analysis through Kaplan-Meier curves.
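A minimal sketch of the CCA step, assuming classical (unpenalized) CCA rather than the paper's penalized variants; the synthetic views stand in for histology and genomics features of the same patients.

```python
import numpy as np

def cca_embeddings(X, Y, dim=2):
    """Classical (unpenalized) CCA: orthonormalize each centered view via
    SVD, then take the SVD of their cross-product. The singular values are
    the canonical correlations; the projections are the joint embeddings."""
    Xc = X - X.mean(0)
    Yc = Y - Y.mean(0)
    Ux, _, _ = np.linalg.svd(Xc, full_matrices=False)
    Uy, _, _ = np.linalg.svd(Yc, full_matrices=False)
    U, S, Vt = np.linalg.svd(Ux.T @ Uy)
    return Ux @ U[:, :dim], Uy @ Vt[:dim].T, S[:dim]

# Two synthetic views driven by the same 2-D latent "cancer" factor.
rng = np.random.default_rng(1)
latent = rng.standard_normal((100, 2))
X = latent @ rng.standard_normal((2, 5)) + 0.1 * rng.standard_normal((100, 5))
Y = latent @ rng.standard_normal((2, 4)) + 0.1 * rng.standard_normal((100, 4))
Zx, Zy, corrs = cca_embeddings(X, Y)
# The first two canonical correlations are near 1 because both views share a
# common source -- exactly the observation the paper formalizes in its PGM.
```

The embeddings `Zx`, `Zy` (or their concatenation) would then be fed to a survival predictor; the penalized variants add sparsity and graph structure to the canonical directions.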


Subjects
Breast Neoplasms, Humans, Female, Breast Neoplasms/genetics, Canonical Correlation Analysis, Genomics, Survival Analysis, Models, Statistical
4.
Methods Enzymol ; 700: 235-273, 2024.
Article in English | MEDLINE | ID: mdl-38971602

ABSTRACT

Hierarchical self-assembly is the main mechanism used to create diverse structures from soft materials. This is the case for both synthetic materials and biomolecular systems, as exemplified by the non-covalent organization of lipids into membranes. In nature, lipids often assemble into single bilayers, but other nanostructures are encountered, such as bilayer stacks and tubular and vesicular aggregates. Synthetic block copolymers can be engineered to recapitulate many of the structures, forms, and functions of lipid systems. When block copolymers are amphiphilic, they can be inserted or co-assembled into hybrid membranes that exhibit synergistic structural, permeability, and mechanical properties. One example is the emergence of lateral phase separation akin to raft formation in biomembranes. When higher-order structures, such as hybrid membranes, are formed, this lateral phase separation can be correlated across membranes in the stack. This chapter outlines a set of important methods, such as X-ray scattering, atomic force microscopy, and cryo-electron microscopy, that are relevant to characterizing and evaluating lateral and correlated phase separation in hybrid membranes at the nano- and mesoscales. Understanding the phase behavior of polymer-lipid hybrid materials could lead to innovative advancements in biomimetic membrane separation systems.


Subjects
Cryoelectron Microscopy, Lipid Bilayers, Microscopy, Atomic Force, Polymers, Cryoelectron Microscopy/methods, Polymers/chemistry, Lipid Bilayers/chemistry, Microscopy, Atomic Force/methods, X-Ray Diffraction/methods, Phase Separation
5.
IEEE Trans Image Process ; 32: 5245-5256, 2023.
Article in English | MEDLINE | ID: mdl-37651500

ABSTRACT

Adaptive sampling that exploits the spatiotemporal redundancy in videos is critical for always-on action recognition on wearable devices with limited computing and battery resources. The commonly used fixed sampling strategy is not context-aware and may under-sample the visual content, adversely impacting both computational efficiency and accuracy. Inspired by the concepts of foveal vision and pre-attentive processing from the human visual perception mechanism, we introduce a novel adaptive spatiotemporal sampling scheme for efficient action recognition. Our system pre-scans the global scene context at low resolution and decides to skip or to request high-resolution features at salient regions for further processing. We validate the system on the EPIC-KITCHENS and UCF-101 (split 1) datasets for action recognition and show that our proposed approach can greatly speed up inference with a tolerable loss of accuracy compared with state-of-the-art baselines. Source code is available at https://github.com/knmac/adaptive_spatiotemporal.
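The pre-scan-then-request loop can be sketched in a few lines; the saliency score and patch geometry here are toy assumptions, not the learned saliency used in the paper.

```python
import numpy as np

def adaptive_sample(frame, scale=4, patch=8):
    """Pre-scan a cheap low-resolution copy, score saliency, and fetch one
    high-resolution patch at the most salient location."""
    lowres = frame[::scale, ::scale]              # global low-res pre-scan
    saliency = np.abs(lowres - lowres.mean())     # toy saliency score
    y, x = np.unravel_index(saliency.argmax(), saliency.shape)
    cy, cx = y * scale, x * scale                 # map back to full resolution
    return frame[cy:cy + patch, cx:cx + patch]

frame = np.zeros((64, 64))
frame[40:44, 16:20] = 1.0        # a bright "action" region in the scene
hi_patch = adaptive_sample(frame)
# The single high-resolution request lands exactly on the salient region;
# the rest of the frame is only ever touched at 1/16 of the pixel budget.
```

A real system would score saliency with a small network over time as well as space, and would skip frames entirely when nothing salient appears.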

6.
IEEE Trans Image Process ; 31: 3553-3564, 2022.
Article in English | MEDLINE | ID: mdl-35544506

ABSTRACT

Background-foreground separation (BFS) is a popular computer vision problem in which dynamic foreground objects are separated from the static background of a scene. Typically, this is performed using consumer cameras because of their low cost, human interpretability, and high resolution. Yet cameras, and the BFS algorithms that process their data, have common failure modes due to lighting changes, highly reflective surfaces, and occlusion. One solution is to incorporate an additional sensor modality that provides robustness to such failure modes. In this paper, we explore the ability of a cost-effective radar system to augment the popular Robust PCA technique for BFS. We apply the emerging technique of algorithm unrolling to yield real-time computation, feedforward inference, and strong generalization in comparison with traditional deep learning methods. We benchmark on the RaDICaL dataset to demonstrate both quantitative improvements from incorporating radar data and qualitative improvements that confirm robustness to common failure modes of image-based methods.
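The Robust PCA decomposition that the paper unrolls can be illustrated with the classic iterative loop (not the unrolled network, and without the radar term); the rank and threshold here are illustrative choices.

```python
import numpy as np

def soft(x, t):
    """Elementwise soft-thresholding (proximal operator of the L1 norm)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def bfs_rpca(M, rank=1, lam=1.0, iters=50):
    """Split M into low-rank background L and sparse foreground S by
    alternating a rank projection with elementwise soft-thresholding.
    Algorithm unrolling turns a loop like this into a trainable network."""
    S = np.zeros_like(M)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # background estimate
        S = soft(M - L, lam)                       # foreground estimate
    return L, S

# A static rank-1 "scene" plus two sparse "moving objects".
bg = np.outer(np.ones(20), np.linspace(1.0, 2.0, 20))
fg = np.zeros((20, 20))
fg[5, 5] = 5.0
fg[10, 3] = -4.0
L, S = bfs_rpca(bg + fg)
# S recovers the two sparse entries; L recovers the smooth background.
```

In the unrolled version each iteration becomes a network layer with learned thresholds, and the radar measurement enters as an extra data-fidelity term.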

7.
IEEE J Biomed Health Inform ; 26(8): 4020-4031, 2022 08.
Article in English | MEDLINE | ID: mdl-35439148

ABSTRACT

The ability to use digitally recorded and quantified neurological exam information is important to help healthcare systems deliver better care, in person and via telehealth, as they compensate for a growing shortage of neurologists. Current neurological digital biomarker pipelines, however, are narrowed to a specific neurological exam component or are applied to assessing specific conditions. In this paper, we propose an accessible vision-based exam and documentation solution called the Digitized Neurological Examination (DNE) to expand exam biomarker recording options and clinical applications using a smartphone or tablet. With the DNE software, healthcare providers in clinical settings and people at home can video-record an examination while performing instructed neurological tests, including finger tapping, finger to finger, forearm roll, and stand-up and walk. The modular design of the DNE software supports the integration of additional tests. The DNE extracts from the recorded examinations the 2D/3D human-body pose and quantifies kinematic and spatiotemporal features. The features are clinically relevant and allow clinicians to document and observe the quantified movements and the changes in these metrics over time. A web server and a user interface for viewing recordings and visualizing features are available. The DNE was evaluated on a collected dataset of 21 subjects containing normal and simulated-impaired movements. The overall accuracy of the DNE is demonstrated by classifying the recorded movements using various machine learning models. Our tests show an accuracy beyond 90% for the upper-limb tests and beyond 80% for the stand-up and walk tests.
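One of the kinematic features described, the rate of finger tapping, can be sketched from a keypoint-derived distance series; the synthetic 3 Hz signal below stands in for pose-tracker output.

```python
import numpy as np

def tapping_rate(distances, fps):
    """Estimate finger-tapping frequency (Hz) from a thumb-to-index
    distance series via the dominant peak of its FFT (DC removed)."""
    d = np.asarray(distances, dtype=float)
    d = d - d.mean()                          # remove the DC offset
    spectrum = np.abs(np.fft.rfft(d))
    freqs = np.fft.rfftfreq(len(d), d=1.0 / fps)
    return freqs[spectrum.argmax()]

fps = 30.0
t = np.arange(0, 10, 1.0 / fps)                   # a 10 s clip at 30 fps
dist = 0.5 + 0.2 * np.sin(2 * np.pi * 3.0 * t)    # simulated 3 Hz tapping
rate = tapping_rate(dist, fps)
```

In the actual pipeline the distance series would come from 2D/3D hand keypoints per frame, and additional features (amplitude decrement, rhythm variability) would be computed alongside the rate.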


Subjects
Smartphone, Software, Fingers, Humans, Machine Learning, Neurologic Examination
8.
IEEE Trans Biomed Eng ; 68(5): 1450-1458, 2021 05.
Article in English | MEDLINE | ID: mdl-33471747

ABSTRACT

Quantitative identification of the transitions between anaesthetic states is essential for optimizing patient safety and quality of care during surgery, but poses a very challenging task. State-of-the-art monitors are still not capable of presenting these transitions as manifest variables, so practitioners must diagnose them based on their own experience. The present paper proposes a novel real-time method to identify these transitions. First, the Hurst method is used to pre-process the de-noised electroencephalograph (EEG) signals. The maximum of the Hurst ranges is then taken as the EEG real-time response, which induces a new real-time feature under a moving-average framework. The maximum power spectral density of this feature is found to differ markedly across the distinct transitions between anaesthetic states and can thus be used as a quantitative index for their identification.
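A minimal rescaled-range (R/S) computation, the statistic underlying the Hurst method, is sketched below; this is a textbook version, not the paper's moving-average pipeline, and the window sizes are illustrative.

```python
import numpy as np

def rescaled_range(x):
    """R/S statistic of one window: range of the mean-adjusted cumulative
    sum, divided by the window's standard deviation."""
    x = np.asarray(x, dtype=float)
    z = np.cumsum(x - x.mean())
    r = z.max() - z.min()
    s = x.std()
    return r / s if s > 0 else 0.0

def hurst_exponent(x, sizes=(16, 32, 64, 128)):
    """Hurst exponent estimate: slope of log(R/S) vs log(window size),
    with R/S averaged over non-overlapping windows of each size."""
    log_n, log_rs = [], []
    for n in sizes:
        windows = [x[i:i + n] for i in range(0, len(x) - n + 1, n)]
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean([rescaled_range(w) for w in windows])))
    return np.polyfit(log_n, log_rs, 1)[0]

rng = np.random.default_rng(2)
h_noise = hurst_exponent(rng.standard_normal(4096))   # white noise: H near 0.5
smooth = np.convolve(rng.standard_normal(4096 + 31), np.ones(32) / 32, mode="valid")
h_smooth = hurst_exponent(smooth)   # persistent signal: noticeably higher H
```

A persistent (smoothed) signal scores a higher exponent than white noise, which is the kind of regime change in EEG dynamics the proposed feature is designed to track.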


Subjects
Anesthetics, Electroencephalography, Humans
9.
Ultrasound Med Biol ; 47(3): 556-568, 2021 03.
Article in English | MEDLINE | ID: mdl-33358553

ABSTRACT

Quantitative ultrasound (QUS) was used to classify rabbits that were induced to have liver disease by placing them on a fatty diet for a defined duration and/or periodically injecting them with CCl4. The ground truth of the liver state was based on liver lipid percentages estimated via the Folch assay, with hydroxyproline concentration used to quantify fibrosis. Rabbits were scanned ultrasonically in vivo using a SonixOne scanner and an L9-4/38 linear array. Liver fat percentage was classified from the ultrasonic backscattered radiofrequency (RF) signals using either QUS or a 1-D convolutional neural network (CNN). Use of QUS parameters with linear regression and canonical correlation analysis demonstrated that the QUS parameters could differentiate between livers with lipid levels above or below 5%. However, the QUS parameters were not sensitive to fibrosis. The CNN was implemented by analyzing raw RF ultrasound signals without using separate reference data, and it classified each liver as above or below a 5% fat level. The CNN outperformed classification using the QUS parameters combined with a support vector machine in differentiating between low and high liver lipid levels (accuracies of 74% versus 59% on the testing data). Therefore, although the CNN did not provide a physical interpretation of the tissue properties (e.g., attenuation of the medium or scatterer properties), it had much higher accuracy in predicting fatty liver state and did not require an external reference scan.


Subjects
Fatty Liver/diagnostic imaging, Neural Networks, Computer, Ultrasonography/methods, Animals, Dietary Fats/administration & dosage, Fatty Liver/diagnosis, Liver/diagnostic imaging, Machine Learning, Male, Non-alcoholic Fatty Liver Disease/diagnosis, Non-alcoholic Fatty Liver Disease/diagnostic imaging, Rabbits
10.
Article in English | MEDLINE | ID: mdl-31567079

ABSTRACT

The objective of this article is to demonstrate the feasibility of estimating the backscatter coefficient (BSC) using an in situ calibration source. Traditional methods of estimating the BSC in vivo using a reference phantom technique do not account for transmission losses due to intervening layers between the ultrasonic source and the tissue region to be interrogated, leading to increases in bias and variance of BSC-based estimates. To account for transmission losses, an in situ calibration approach is proposed. The in situ calibration technique employs a titanium sphere that is well-characterized ultrasonically, biocompatible, and embedded inside the sample. A set of experiments was conducted to evaluate the embedded titanium spheres as in situ calibration targets for BSC estimation. The first experiment quantified the backscattered signal strength from titanium spheres of three sizes: 0.5, 1, and 2 mm in diameter. The second set of experiments assessed the repeatability of BSC estimates from the titanium spheres and compared these BSCs to theory. The third set of experiments quantified the ability of the titanium bead to provide an in situ reference spectrum in the presence of a lossy layer on top of the sample. The final set of experiments quantified the ability of the bead to provide a calibration spectrum over multiple depths in the sample. All experiments were conducted using an L9-4/38 linear array connected to a SonixOne system. The strongest signal was observed from the 2-mm titanium bead with the signal-to-noise ratio (SNR) of 11.6 dB with respect to the background speckle. Using an analysis bandwidth of 2.5-5.5 MHz, the mean differences between the experimentally derived BSCs and BSCs derived from the Faran theory were 0.54 and 0.76 dB using the array and a single-element transducer, respectively. 
BSCs estimated using the in situ calibration approach without the layer, using the in situ calibration approach with the layer, and using the reference phantom approach with the layer were compared against the reference phantom approach without the layer present; the mean differences in BSCs were 0.15, 0.73, and -9.69 dB, respectively. The mean differences of the BSCs calculated from data blocks located at depths either 30 pulse lengths above or below the actual bead depth, compared to the BSC calculated at bead depth, were -1.55 and -1.48 dB, respectively. The results indicate that an in situ calibration target can account for overlying tissue losses, thereby improving the robustness of BSC-based estimates.
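The core calibration idea, dividing the sample's power spectrum by a reference spectrum recorded through the same system, can be sketched as follows. This toy omits gating, Faran-theory bead modelling, and diffraction corrections, and the pulse parameters are invented; only the spectral-ratio principle is illustrated.

```python
import numpy as np

def relative_backscatter(sample_rf, reference_rf, fs):
    """Power spectrum of the gated sample RF divided by that of a
    calibration echo recorded through the same system. The unknown system
    transfer function appears in both spectra and cancels in the ratio."""
    f = np.fft.rfftfreq(len(sample_rf), d=1.0 / fs)
    ps = np.abs(np.fft.rfft(sample_rf)) ** 2
    pr = np.abs(np.fft.rfft(reference_rf)) ** 2
    ratio = np.divide(ps, pr, out=np.zeros_like(ps), where=pr > 0)
    return f, ratio

fs = 20e6                       # 20 MHz sampling rate (illustrative)
n = 256
t = np.arange(n) / fs
# A Gaussian-enveloped 4 MHz pulse plays the role of the system response.
system = np.exp(-((t - t.mean()) ** 2) / (2 * (1e-6) ** 2)) * np.sin(2 * np.pi * 4e6 * t)
sample_rf = 2.0 * system        # tissue echo: system response scaled by tissue
reference_rf = system           # calibration echo from the in situ target
f, ratio = relative_backscatter(sample_rf, reference_rf, fs)
band = (f > 2.5e6) & (f < 5.5e6)   # the paper's 2.5-5.5 MHz analysis band
# Within the band the ratio is flat at 4 (amplitude factor 2, power factor 4):
# the system response has cancelled, leaving only the tissue contribution.
```

The in situ bead plays the role of `reference_rf` here: because it sits under the same overlying layers as the tissue, transmission losses cancel in the ratio along with the system response.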


Subjects
Transducers, Ultrasonography, Calibration, Image Processing, Computer-Assisted, Phantoms, Imaging, Scattering, Radiation, Ultrasonography/instrumentation, Ultrasonography/methods
11.
IEEE Trans Med Imaging ; 39(5): 1380-1391, 2020 05.
Article in English | MEDLINE | ID: mdl-31647422

ABSTRACT

Generalized nucleus segmentation techniques can contribute greatly to reducing the time to develop and validate visual biomarkers for new digital pathology datasets. We summarize the results of the MoNuSeg 2018 Challenge, whose objective was to develop generalizable nuclei segmentation techniques for digital pathology. The challenge was an official satellite event of the MICCAI 2018 conference, in which 32 teams with more than 80 participants from geographically diverse institutes took part. Contestants were given a training set of 30 images from seven organs with annotations of 21,623 individual nuclei. A test dataset of 14 images taken from seven organs, including two organs that did not appear in the training set, was released without annotations. Entries were evaluated on the average aggregated Jaccard index (AJI) over the test set, to prioritize accurate instance segmentation over mere semantic segmentation. More than half the teams that completed the challenge outperformed a previous baseline. Among the trends that contributed to increased accuracy were the use of color normalization and heavy data augmentation. Additionally, fully convolutional networks inspired by variants of U-Net, FCN, and Mask R-CNN were popular, typically built on ResNet or VGG base architectures. Watershed segmentation on predicted semantic segmentation maps was a popular post-processing strategy. Several of the top techniques compared favorably to an individual human annotator and can be used with confidence for nuclear morphometrics.
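A simplified implementation of the AJI metric used for ranking. This sketch matches each ground-truth nucleus to its best-overlapping prediction but, unlike the official metric, does not forbid reusing a predicted instance across multiple ground-truth nuclei.

```python
import numpy as np

def aji(gt, pred):
    """Aggregated Jaccard Index for instance label maps (0 = background).
    Unmatched ground-truth pixels and unmatched predicted instances both
    inflate the union, so over- and under-segmentation are penalized."""
    gt_ids = [i for i in np.unique(gt) if i != 0]
    pred_ids = [i for i in np.unique(pred) if i != 0]
    used, inter_sum, union_sum = set(), 0, 0
    for g in gt_ids:
        gmask = gt == g
        best_iou, best_inter, best_union, best_p = 0.0, 0, int(gmask.sum()), None
        for p in pred_ids:
            pmask = pred == p
            inter = int(np.logical_and(gmask, pmask).sum())
            if inter == 0:
                continue
            union = int(np.logical_or(gmask, pmask).sum())
            if inter / union > best_iou:
                best_iou, best_inter, best_union, best_p = inter / union, inter, union, p
        inter_sum += best_inter
        union_sum += best_union
        if best_p is not None:
            used.add(best_p)
    for p in pred_ids:                 # unmatched predictions count against AJI
        if p not in used:
            union_sum += int((pred == p).sum())
    return inter_sum / union_sum if union_sum else 0.0

gt = np.zeros((8, 8), int)
gt[1:4, 1:4] = 1
gt[5:8, 5:8] = 2
perfect = aji(gt, gt)              # exact prediction
partial = aji(gt, gt * (gt == 1))  # second nucleus missed entirely
```

Missing one of two equal-sized nuclei halves the score, which is exactly the instance-level penalty that a plain pixel-wise (semantic) Jaccard would not impose.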


Subjects
Image Processing, Computer-Assisted, Neural Networks, Computer, Cell Nucleus, Humans
12.
IEEE Trans Image Process ; 18(4): 703-16, 2009 Apr.
Article in English | MEDLINE | ID: mdl-19278915

ABSTRACT

We propose a new approach to quantitatively analyze the rendering quality of image-based rendering (IBR) algorithms with depth information. The resulting error bounds for synthesized views depend on IBR configurations including the depth and intensity estimate errors, the scene geometry and texture, the number of actual cameras, and their positions and resolution. Specifically, the IBR error is bounded by the summation of three terms, highlighting the impact of using multiple actual cameras, the impact of the noise level at the actual cameras, and the impact of the depth accuracy. We also quantify the impact of occlusions and intensity discontinuities. The proposed methodology is applicable to a large class of common IBR algorithms and can be applied locally. Experiments with synthetic and real scenes show that the developed error bounds accurately characterize the rendering errors. In particular, the error bounds correctly characterize the decay rates of the synthesized views' mean absolute errors as O(λ⁻¹) and O(λ⁻²), where λ is the local density of actual samples, for 2-D and 3-D scenes, respectively. Finally, we discuss the implications of the proposed analysis on camera placement, budget allocation, and bit allocation.

13.
IEEE Trans Image Process ; 18(4): 840-53, 2009 Apr.
Article in English | MEDLINE | ID: mdl-19278922

ABSTRACT

We present a new noniterative approach to synthetic aperture radar (SAR) autofocus, termed the multichannel autofocus (MCA) algorithm. The key in the approach is to exploit the multichannel redundancy of the defocusing operation to create a linear subspace, where the unknown perfectly focused image resides, expressed in terms of a known basis formed from the given defocused image. A unique solution for the perfectly focused image is then directly determined through a linear algebraic formulation by invoking an additional image support condition. The MCA approach is found to be computationally efficient and robust and does not require prior assumptions about the SAR scene used in existing methods. In addition, the vector-space formulation of MCA allows sharpness metric optimization to be easily incorporated within the restoration framework as a regularization term. We present experimental results characterizing the performance of MCA in comparison with conventional autofocus methods and discuss the practical implementation of the technique.

14.
IEEE Trans Med Imaging ; 38(1): 124-133, 2019 01.
Article in English | MEDLINE | ID: mdl-30028696

ABSTRACT

In an increasing number of applications of focused ultrasound (FUS) therapy, such as opening of the blood-brain barrier or collapsing microbubbles in a tumor, elevation of tissue temperature is not involved. In these cases, real-time visualization of the field distribution of the FUS source would allow localization of the FUS beam within the targeted tissue and allow repositioning of the FUS beam during tissue motion. In this paper, in order to visualize the FUS beam in situ, a 6-MHz single-element transducer (f/2) was used as the FUS source and aligned perpendicular to a linear array which passively received scattered ultrasound from the sample. An image of the reconstructed intensity field pattern of the FUS source using bistatic beamforming was then superimposed on a registered B-mode image of the sample acquired using the same linear array. The superimposed image is used to provide anatomical context of the FUS beam in the sample being treated. The intensity field pattern reconstructed from a homogeneous scattering phantom was compared with the field characteristics of the FUS source characterized by the wire technique. The beamwidth estimates at the FUS focus using the in situ reconstruction technique and the wire technique were 1.5 and 1.2 mm, respectively. The depth-of-field estimates for the in situ reconstruction technique and the wire technique were 11.8 and 16.8 mm, respectively. The FUS beams were also visualized in a two-layer phantom and a chicken breast. The novel reconstruction technique was able to accurately visualize the field of an FUS source in the context of the interrogated medium.


Subjects
Image Processing, Computer-Assisted/methods, Signal Processing, Computer-Assisted, Ultrasonography/methods, Algorithms, Models, Biological, Phantoms, Imaging
15.
Ultrasound Med Biol ; 45(8): 2049-2062, 2019 08.
Article in English | MEDLINE | ID: mdl-31076231

ABSTRACT

Nonalcoholic fatty liver disease (NAFLD) is the most common cause of chronic liver disease and can often lead to fibrosis, cirrhosis, cancer and complete liver failure. Liver biopsy is the current standard of care to quantify hepatic steatosis, but it comes with increased patient risk and only samples a small portion of the liver. Imaging approaches to assess NAFLD include proton density fat fraction estimated via magnetic resonance imaging (MRI) and shear wave elastography. However, MRI is expensive and shear wave elastography is not proven to be sensitive to fat content of the liver (Kramer et al. 2016). On the other hand, ultrasonic attenuation and the backscatter coefficient (BSC) have been observed to be sensitive to levels of fat in the liver (Lin et al. 2015; Paige et al. 2017). In this study, we assessed the use of attenuation and the BSC to quantify hepatic steatosis in vivo in a rabbit model of fatty liver. Rabbits were maintained on a high-fat diet for 0, 1, 2, 3 or 6 wk, with 3 rabbits per diet group (total N = 15). An array transducer (L9-4) with a center frequency of 4.5 MHz connected to a SonixOne scanner was used to gather radio frequency (RF) backscattered data in vivo from rabbits. The RF signals were used to estimate an average attenuation and BSC for each rabbit. Two approaches were used to parameterize the BSC (i.e., the effective scatterer diameter and effective acoustic concentration using a spherical Gaussian model and a model-free approach using a principal component analysis [PCA]). The 2 major components of the PCA from the BSCs, which captured 96% of the variance of the transformed data, were used to generate input features to a support vector machine for classification. Rabbits were separated into two liver fat-level classes, such that approximately half of the rabbits were in the low-lipid class (≤9% lipid liver level) and half of the rabbits in the high-lipid class (>9% lipid liver level). 
The slope and the midband fit of the attenuation coefficient showed statistically significant differences (p = 0.00014 and p = 0.007, two-sample t-test) between the low- and high-lipid classes. The proposed model-free and model-based parameterizations of the BSC and the attenuation coefficient parameters yielded classification accuracies of 84.11%, 82.93%, and 78.91%, respectively, for differentiating low-lipid versus high-lipid classes. The results suggest that attenuation and BSC analysis can differentiate low-fat versus high-fat livers in a rabbit model of fatty liver disease.
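The model-free PCA parameterization feeding a classifier can be sketched as below. A nearest-class-mean rule stands in for the paper's support vector machine, and the synthetic "BSC curves" (two classes differing only in spectral slope) are invented for illustration.

```python
import numpy as np

def pca_features(X, n_components=2):
    """Project each spectrum onto the top principal components of the
    dataset -- a model-free parameterization of the BSC curves."""
    Xc = X - X.mean(0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

rng = np.random.default_rng(3)
freqs = np.linspace(2.0, 6.0, 40)   # MHz axis of each synthetic BSC curve
low = np.array([0.5 * freqs + rng.normal(0, 0.2, 40) for _ in range(20)])
high = np.array([0.9 * freqs + rng.normal(0, 0.2, 40) for _ in range(20)])
X = np.vstack([low, high])
y = np.array([0] * 20 + [1] * 20)

Z = pca_features(X, 2)
# Nearest-class-mean classifier as a lightweight stand-in for the SVM.
mu0, mu1 = Z[y == 0].mean(0), Z[y == 1].mean(0)
pred = (np.linalg.norm(Z - mu1, axis=1) < np.linalg.norm(Z - mu0, axis=1)).astype(int)
acc = (pred == y).mean()
```

Two components capture the slope difference between the classes, mirroring how the paper's first two principal components captured 96% of the variance of the transformed BSCs.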


Subjects
Non-alcoholic Fatty Liver Disease/diagnostic imaging, Ultrasonography/methods, Animals, Disease Models, Animal, Liver/diagnostic imaging, Rabbits
16.
IEEE Trans Image Process ; 17(6): 946-57, 2008 Jun.
Article in English | MEDLINE | ID: mdl-18482889

ABSTRACT

We propose a wavelet-based codec for the static depth-image-based representation, which allows viewers to freely choose the viewpoint. The proposed codec jointly estimates and encodes the unknown depth map from multiple views using a novel rate-distortion (RD) optimization scheme. The rate constraint reduces the ambiguity of depth estimation by favoring piecewise-smooth depth maps. The optimization is solved efficiently by a novel dynamic programming procedure along trees of integer wavelet coefficients. The codec encodes the image and the depth map jointly to decrease their redundancy and to provide an RD-optimized bitrate allocation between the two. The codec also offers scalability both in resolution and in quality. Experiments on real data show the effectiveness of the proposed codec.


Subjects
Algorithms, Image Enhancement/methods, Image Interpretation, Computer-Assisted/methods, Imaging, Three-Dimensional/methods, Signal Processing, Computer-Assisted
17.
Pac Symp Biocomput ; 23: 319-330, 2018.
Article in English | MEDLINE | ID: mdl-29218893

ABSTRACT

Connecting genotypes to image phenotypes is crucial for a comprehensive understanding of cancer. To learn such connections, new machine learning approaches must be developed for better integration of imaging and genomic data. Here we propose a novel approach called Discriminative Bag-of-Cells (DBC) for predicting genomic markers from imaging features, which addresses the challenge of summarizing histopathological images by representing cells with learned discriminative types, or codewords. We also developed a reliable and efficient patch-based nuclear segmentation scheme using convolutional neural networks, from which nuclear and cellular features are extracted. Applying DBC to TCGA breast cancer samples to predict basal subtype status yielded a class-balanced accuracy of 70% on a separate test partition of 213 patients. As datasets of imaging and genomic data become increasingly available, we believe DBC will be a useful approach for screening histopathological images for genomic markers. Source code for nuclear segmentation and DBC is available at: https://github.com/bchidest/DBC.
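The bag-of-cells idea, learning cell "codewords" and then summarizing an image as a histogram over them, can be sketched with a small k-means. The features, cluster count, and data below are illustrative, and note that DBC learns discriminative codewords rather than the unsupervised ones used here.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means: the learned centers act as the cell codebook."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return centers

def bag_of_cells(cell_features, centers):
    """Summarize one image as a normalized histogram of cell codewords."""
    labels = np.argmin(((cell_features[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    hist = np.bincount(labels, minlength=len(centers)).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(4)
# Two synthetic cell "types" in a 2-D feature space (e.g., size, intensity).
type_a = rng.normal([1.0, 1.0], 0.1, (50, 2))
type_b = rng.normal([3.0, 3.0], 0.1, (50, 2))
centers = kmeans(np.vstack([type_a, type_b]), k=2)
# An image containing 30 type-a cells and 10 type-b cells.
image_cells = np.vstack([rng.normal([1, 1], 0.1, (30, 2)),
                         rng.normal([3, 3], 0.1, (10, 2))])
hist = bag_of_cells(image_cells, centers)
```

The fixed-length histogram `hist` is what a downstream classifier would consume to predict a genomic marker, regardless of how many cells the image contains.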


Subjects
Genomics/statistics & numerical data, Image Interpretation, Computer-Assisted/methods, Biomarkers, Tumor/genetics, Breast Neoplasms/diagnostic imaging, Breast Neoplasms/genetics, Computational Biology/methods, Female, Genetic Association Studies, Humans, Machine Learning, Neoplasms/diagnostic imaging, Neoplasms/genetics, Neural Networks, Computer
18.
Article in English | MEDLINE | ID: mdl-30582541

ABSTRACT

Most variational formulations for structure-texture image decomposition force structure images to have small norm in some functional space and share a common notion of edges, i.e., large gradients or intensity differences. However, such a definition makes it difficult to distinguish structure edges from oscillations that have fine spatial scale but high contrast. In this paper, we introduce a new model that learns a deep variational prior for structure images without explicit training data. An alternating direction method of multipliers (ADMM) algorithm and its modular structure are adopted to plug deep variational priors into an iterative smoothing process. The central observations are that convolutional neural networks (CNNs) can replace the total variation prior, and are indeed powerful enough to capture the nature of structure and texture. We show that our learned priors using CNNs successfully differentiate high-amplitude details from structure edges, and avoid halo artifacts. Unlike previous data-driven smoothing schemes, our formulation provides another degree of freedom to produce continuous smoothing effects. Experimental results demonstrate the effectiveness of our approach on various computational photography and image processing applications, including texture removal, detail manipulation, HDR tone-mapping, and non-photorealistic abstraction.
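The plug-in structure can be illustrated with a 1-D plug-and-play loop in which a box blur stands in for the learned CNN prior. This is a sketch of the splitting idea only, not the paper's ADMM formulation, and all parameters are illustrative.

```python
import numpy as np

def box_blur(x, w=5):
    """Stand-in 'denoiser' step; the paper plugs a learned CNN prior here."""
    k = np.ones(w) / w
    return np.convolve(np.pad(x, w, mode="edge"), k, mode="same")[w:-w]

def pnp_smooth(y, rho=5.0, iters=30):
    """Plug-and-play splitting: alternate a prior (denoising) step with a
    data-fidelity proximal step. rho trades fidelity against smoothing,
    giving the continuous control over smoothing strength noted above."""
    x = y.copy()
    for _ in range(iters):
        z = box_blur(x)                    # prior / denoiser step
        x = (y + rho * z) / (1.0 + rho)    # data-fidelity proximal step
    return x

n = 200
t = np.arange(n)
structure = (t >= 100).astype(float)              # a step edge (structure)
texture = 0.3 * np.sin(2 * np.pi * t / 6.0)       # fine, high-contrast texture
y = structure + texture
x = pnp_smooth(y)
osc_in = np.abs(y[:80] - structure[:80]).mean()   # texture before smoothing
osc_out = np.abs(x[:80] - structure[:80]).mean()  # texture after smoothing
# The fine oscillation is strongly attenuated while the step survives.
```

Swapping `box_blur` for a CNN denoiser changes only the prior step, which is precisely the modularity that lets a learned variational prior slot into the iterative smoother.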

19.
IEEE Trans Pattern Anal Mach Intell ; 40(1): 34-47, 2018 01.
Article in English | MEDLINE | ID: mdl-28092524

ABSTRACT

A key challenge in feature correspondence is the difficulty in differentiating true and false matches at a local descriptor level. This forces adoption of strict similarity thresholds that discard many true matches. However, if analyzed at a global level, false matches are usually randomly scattered while true matches tend to be coherent (clustered around a few dominant motions), thus creating a coherence based separability constraint. This paper proposes a non-linear regression technique that can discover such a coherence based separability constraint from highly noisy matches and embed it into a correspondence likelihood model. Once computed, the model can filter the entire set of nearest neighbor matches (which typically contains over 90 percent false matches) for true matches. We integrate our technique into a full feature correspondence system which reliably generates large numbers of good quality correspondences over wide baselines where previous techniques provide few or no matches.
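The coherence observation can be sketched for the single-motion case: a robust (median) displacement estimate separates clustered true matches from scattered false ones. This is far simpler than the paper's non-linear regression model, and the tolerance and data are invented.

```python
import numpy as np

def coherence_filter(src, dst, tol=3.0):
    """Keep matches whose displacement lies near the dominant motion,
    estimated robustly as the per-axis median displacement. A single-motion
    stand-in for a full coherence-based correspondence model."""
    disp = dst - src
    dominant = np.median(disp, axis=0)
    return np.linalg.norm(disp - dominant, axis=1) < tol

rng = np.random.default_rng(5)
src = rng.uniform(0, 100, (60, 2))
true_shift = np.array([10.0, -4.0])
dst = src + true_shift + rng.normal(0, 0.5, (60, 2))  # 60 coherent true matches
# Append 40 false matches with random, incoherent displacements.
src = np.vstack([src, rng.uniform(0, 100, (40, 2))])
dst = np.vstack([dst, rng.uniform(0, 100, (40, 2))])
keep = coherence_filter(src, dst)
# All 60 coherent matches survive; almost all scattered false ones are cut.
```

The paper's contribution is doing this without assuming a single global shift: the regression discovers several dominant motions from matches that are over 90 percent false, then scores each match against the learned likelihood.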

20.
IEEE Trans Image Process ; 16(4): 918-31, 2007 Apr.
Article in English | MEDLINE | ID: mdl-17405426

ABSTRACT

In 1992, Bamberger and Smith proposed the directional filter bank (DFB) for an efficient directional decomposition of 2-D signals. Due to the nonseparable nature of the system, extending the DFB to higher dimensions while still retaining its attractive features is a challenging and previously unsolved problem. We propose a new family of filter banks, named NDFB, that can achieve the directional decomposition of arbitrary N-dimensional (N ≥ 2) signals with a simple and efficient tree-structured construction. In 3-D, the ideal passbands of the proposed NDFB are rectangular-based pyramids radiating out from the origin at different orientations and tiling the entire frequency space. The proposed NDFB achieves perfect reconstruction via an iterated filter bank with a redundancy factor of N in N-D. The angular resolution of the proposed NDFB can be iteratively refined by invoking more levels of decomposition through a simple expansion rule. By combining the NDFB with a new multiscale pyramid, we propose the surfacelet transform, which can be used to efficiently capture and represent surface-like singularities in multidimensional data.


Subjects
Algorithms, Image Enhancement/methods, Image Interpretation, Computer-Assisted/methods, Imaging, Three-Dimensional/methods, Information Storage and Retrieval/methods, Reproducibility of Results, Sensitivity and Specificity