1 - 17 of 17
1.
Nat Commun ; 14(1): 5009, 2023 08 17.
Article En | MEDLINE | ID: mdl-37591881

Continuous monitoring of arterial blood pressure (BP) outside of a clinical setting is crucial for preventing and diagnosing hypertension-related diseases. However, current continuous BP monitoring instruments suffer from either bulky system design or poor user-device interfacial performance, hampering their application to continuous BP monitoring. Here, we report a thin, soft, miniaturized system (TSMS) that combines a conformal piezoelectric sensor array, an active pressure adaptation unit, a signal processing module, and an advanced machine learning method to allow truly wearable, continuous wireless monitoring of ambulatory arterial BP. By optimizing the materials selection, control/sampling strategy, and system integration, the TSMS exhibits improved interfacial performance while maintaining Grade A level measurement accuracy. Initial trials on 87 volunteers and clinical tracking of two hypertensive individuals demonstrate the capability of the TSMS as a reliable BP measurement product, as well as its feasibility and practical usability for precise BP control and the development of personalized diagnosis schemes.


Hypertension , Wearable Electronic Devices , Humans , Arterial Pressure , Blood Pressure , Hypertension/diagnosis , Arteries
2.
IEEE Trans Pattern Anal Mach Intell ; 45(7): 8433-8452, 2023 Jul.
Article En | MEDLINE | ID: mdl-36441891

The minimal geodesic models established upon the eikonal equation framework are capable of finding suitable solutions in various image segmentation scenarios. Existing geodesic-based segmentation approaches usually exploit image features in conjunction with geometric regularization terms, such as Euclidean curve length or curvature-penalized length, for computing geodesic curves. In this paper, we consider a more complicated problem: finding curvature-penalized geodesic paths with a convexity shape prior. We establish new geodesic models relying on the strategy of orientation lifting, by which a planar curve can be mapped to a high-dimensional orientation-dependent space. The convexity shape prior serves as a constraint for the construction of local geodesic metrics encoding a particular curvature constraint. The geodesic distances and the corresponding closed geodesic paths in the orientation-lifted space can then be efficiently computed through the state-of-the-art Hamiltonian fast marching method. In addition, we apply the proposed geodesic models to active contours, leading to efficient interactive image segmentation algorithms that preserve the advantages of the convexity shape prior and curvature penalization.
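
For illustration only (not part of the abstract): the eikonal/geodesic machinery above computes, at its core, a distance map from a source point under an image-derived metric, from which geodesic paths are extracted by backtracking. The sketch below is a deliberately simplified, isotropic stand-in using Dijkstra propagation on a 4-connected pixel grid; the cost map and seed are placeholders, and it omits the orientation lifting, curvature penalty, and convexity constraint of the actual models.

    import heapq
    import numpy as np

    def geodesic_distance(cost, source):
        """Dijkstra propagation of a distance map on a 4-connected grid."""
        h, w = cost.shape
        dist = np.full((h, w), np.inf)
        dist[source] = 0.0
        heap = [(0.0, source)]
        while heap:
            d, (r, c) = heapq.heappop(heap)
            if d > dist[r, c]:
                continue  # stale heap entry
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < h and 0 <= nc < w:
                    nd = d + 0.5 * (cost[r, c] + cost[nr, nc])
                    if nd < dist[nr, nc]:
                        dist[nr, nc] = nd
                        heapq.heappush(heap, (nd, (nr, nc)))
        return dist

    # toy usage: low cost along strong image structures attracts the geodesics
    img = np.random.rand(64, 64)
    dist_map = geodesic_distance(1.0 / (1e-3 + img), (0, 0))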

3.
Front Med (Lausanne) ; 9: 931860, 2022.
Article En | MEDLINE | ID: mdl-36072953

Diseases originate at the molecular-genetic layer, manifest through altered biochemical homeostasis, and develop symptoms only later. Hence, symptomatic diagnosis is inadequate to explain the underlying molecular-genetic abnormalities and individual genomic disparities. Current approaches incorporate molecular-genetic information by relying on algorithms that recognize disease subtypes from gene expression. Although these methods capture disease-specific heterogeneity and cross-disease homogeneity, a gap remains in describing the extent of homogeneity within the heterogeneous subpopulations of different diseases. They fall short of a holistic, whole genome-based diagnosis, which results in inaccurate diagnosis and subsequent management. Addressing these ambiguities, our proposed framework, ReDisX, introduces a unique classification system for patients based on their genomic signatures. In this study, the framework is deployed as a scalable machine learning algorithm to re-categorize patients with rheumatoid arthritis and coronary artery disease. It reveals heterogeneous subpopulations within a disease and homogeneous subpopulations across different diseases. In addition, it identifies granzyme B (GZMB) as a subpopulation-differentiation marker that plausibly serves as a prominent indicator for GZMB-targeted drug repurposing. The ReDisX framework offers a novel strategy for redefining disease diagnosis by characterizing personalized genomic signatures, and it may rejuvenate the landscape of precision and personalized diagnosis while providing clues for drug repurposing.
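
For illustration only (this is not the ReDisX algorithm; cohort sizes, gene counts, and the cluster number are invented): the idea of re-categorizing patients by genomic signature can be mimicked by clustering a pooled expression matrix and inspecting how the clusters split across disease labels.

    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.cluster import KMeans

    # hypothetical expression matrices: rows = patients, columns = genes
    expr_ra = np.random.rand(80, 500)    # rheumatoid arthritis cohort
    expr_cad = np.random.rand(70, 500)   # coronary artery disease cohort

    X = StandardScaler().fit_transform(np.vstack([expr_ra, expr_cad]))
    labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)

    # clusters mixing both cohorts hint at cross-disease homogeneity;
    # clusters confined to one cohort hint at within-disease heterogeneity
    disease = np.array(["RA"] * 80 + ["CAD"] * 70)
    for k in range(5):
        print(k, np.unique(disease[labels == k], return_counts=True))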

4.
IUCrJ ; 6(Pt 2): 331-340, 2019 Mar 01.
Article En | MEDLINE | ID: mdl-30867930

Using X-ray free-electron lasers (XFELs), it is possible to determine three-dimensional structures of nanoscale particles with single-particle imaging methods. Classification algorithms are needed to sort out the single-particle diffraction patterns from the large amount of XFEL experimental data. However, different methods often yield inconsistent results. This study compared the performance of three classification algorithms: convolutional neural networks, graph cuts, and diffusion map manifold embedding. The identified single-particle diffraction data of the PR772 virus particles were assembled in three-dimensional Fourier space for real-space model reconstruction. The comparison showed that the three classification methods lead to different datasets and subsequently to different electron density maps of the reconstructed models. Interestingly, the common dataset selected by all three methods improved the quality of the merged diffraction volume as well as the resolutions of the reconstructed maps.
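
A minimal sketch of the consensus step mentioned above (the index sets are invented stand-ins for the classifier outputs): each method yields the indices of diffraction patterns it accepts as single-particle hits, and the common dataset is simply their intersection.

    # indices of patterns accepted by each (hypothetical) classifier
    cnn_hits = {3, 7, 11, 19, 42, 57}
    graph_cut_hits = {3, 7, 19, 23, 42, 61}
    diffusion_map_hits = {3, 7, 19, 42, 57, 88}

    common = cnn_hits & graph_cut_hits & diffusion_map_hits
    print(sorted(common))  # patterns kept when merging the 3D diffraction volume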

5.
IEEE Trans Med Imaging ; 35(4): 957-66, 2016 Apr.
Article En | MEDLINE | ID: mdl-26625408

Watershed segmentation is useful for a number of image segmentation problems with a wide range of practical applications. Traditionally, the tracking of the immersion front is done by applying a fast sorting algorithm. In this work, we explore a continuous approach based on a geometric description of the immersion front which gives rise to a partial differential equation. The main advantage of using a partial differential equation to track the immersion front is that the method becomes versatile and may easily be stabilized by introducing regularization terms. Coupling the geometric approach with a proper "merging strategy" creates a robust algorithm which minimizes over- and under-segmentation even without predefined markers. Since reliable markers defined prior to segmentation can be difficult to construct automatically for various reasons, being able to treat marker-free situations is a major advantage of the proposed method over earlier watershed formulations. The motivation for the methods developed in this paper is taken from high-throughput screening of cells. A fully automated segmentation of single cells enables the extraction of cell properties from large data sets, which can provide substantial insight into a biological model system. Applying smoothing to the boundaries can improve the accuracy in many image analysis tasks requiring a precise delineation of the plasma membrane of the cell. The proposed segmentation method is applied to real images containing fluorescently labeled cells, and the experimental results show that our implementation is robust and reliable for a variety of challenging segmentation tasks.
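
For orientation only (a generic level-set form, not necessarily the exact equation of the paper): a front-propagation PDE for the immersion front, with flooding speed F and a curvature regularization of strength epsilon, can be written as

    \frac{\partial \phi}{\partial t} = \big(F + \varepsilon\,\kappa\big)\,|\nabla\phi|,
    \qquad \kappa = \nabla\cdot\frac{\nabla\phi}{|\nabla\phi|},

where the zero level set of phi tracks the front and the curvature term provides the kind of boundary smoothing and stabilization discussed above.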


Algorithms , Image Processing, Computer-Assisted/methods , HeLa Cells , Humans , Microscopy, Fluorescence
6.
IEEE Trans Vis Comput Graph ; 19(2): 306-18, 2013 Feb.
Article En | MEDLINE | ID: mdl-22566468

A novel graph-cuts-based method is proposed for reconstructing open surfaces from unordered point sets. Through a Boolean operation on the crust around the data set, the open surface problem is translated into a watertight surface problem within a restricted region. By integrating a variational model, a Delaunay-based tetrahedral mesh, and a multiphase technique, the proposed method can reconstruct open surfaces robustly and effectively. Furthermore, a surface reconstruction method with domain decomposition is presented, built on the new open surface reconstruction method. This method can handle more general surfaces, such as nonorientable surfaces. The algorithm is designed in a parallel-friendly way, and necessary measures are taken to eliminate cracks and conflicts between the subdomains. Numerical examples demonstrate the robustness and effectiveness of the proposed method on watertight, open orientable, and open nonorientable surfaces, as well as combinations thereof.
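
A hedged sketch of the combinatorial core (the graph, node names, and capacities below are toy inventions, not the paper's construction): once tetrahedra of the Delaunay mesh carry inside/outside costs and pairwise facet costs, the labeling reduces to a minimum s-t cut, which standard solvers compute directly.

    import networkx as nx

    G = nx.DiGraph()
    # terminal edges encode the data term for two toy tetrahedra "t1", "t2"
    G.add_edge("s", "t1", capacity=5.0)
    G.add_edge("t1", "t", capacity=1.0)
    G.add_edge("s", "t2", capacity=0.5)
    G.add_edge("t2", "t", capacity=4.0)
    # symmetric neighbor edges encode the smoothness term across the shared facet
    G.add_edge("t1", "t2", capacity=2.0)
    G.add_edge("t2", "t1", capacity=2.0)

    cut_value, (inside, outside) = nx.minimum_cut(G, "s", "t")
    print(cut_value, inside - {"s"}, outside - {"t"})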

7.
IEEE Trans Image Process ; 22(3): 1108-20, 2013 Mar.
Article En | MEDLINE | ID: mdl-23193456

This paper proposes a general weighted ℓ2-ℓ0 norm energy minimization model to remove mixed noise, such as a Gaussian-Gaussian mixture, impulse noise, and Gaussian-impulse noise, from images. The approach is built upon a maximum likelihood estimation framework and sparse representations over a trained dictionary. Rather than optimizing the likelihood functional derived from a mixture distribution, we present a new weighted data fidelity function that has the same minimizer as the original likelihood functional but is much easier to optimize. The weighting function in the model can be determined by the algorithm itself, and it plays the role of noise detection in terms of the different estimated noise parameters. By incorporating sparse regularization of small image patches, the proposed method can efficiently remove a variety of mixed or single noise types while preserving image textures well. In addition, a modified K-SVD algorithm is designed to address the weighted rank-one approximation. The experimental results demonstrate better performance compared with several existing methods.
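
A minimal numpy sketch of a weighted rank-one approximation of the kind the modified K-SVD step calls for (the alternating scheme and stopping rule here are illustrative, not the paper's algorithm): given a residual matrix E and element-wise weights W, alternate closed-form least-squares updates of the dictionary atom d and the coefficient row g to reduce the weighted Frobenius error of E - d g^T.

    import numpy as np

    def weighted_rank_one(E, W, iters=50):
        """Approximate E by outer(d, g) under elementwise weights W."""
        W2 = W ** 2
        g = np.random.rand(E.shape[1])
        for _ in range(iters):
            d = (W2 * E) @ g / np.maximum(W2 @ (g ** 2), 1e-12)
            g = (W2 * E).T @ d / np.maximum(W2.T @ (d ** 2), 1e-12)
        d /= max(np.linalg.norm(d), 1e-12)   # unit-norm atom, as in K-SVD
        g = (W2 * E).T @ d / np.maximum(W2.T @ (d ** 2), 1e-12)
        return d, g

    E = np.random.rand(16, 40)   # toy residual of patches over one atom
    W = np.random.rand(16, 40)   # toy per-pixel noise-detection weights
    d, g = weighted_rank_one(E, W)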


Algorithms , Artifacts , Artificial Intelligence , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Pattern Recognition, Automated/methods , Data Interpretation, Statistical , Dictionaries as Topic , Reproducibility of Results , Sensitivity and Specificity , Signal-To-Noise Ratio
8.
IEEE Trans Image Process ; 21(5): 2399-411, 2012 May.
Article En | MEDLINE | ID: mdl-22231178

This paper extends the minimization algorithm developed by Bae, Yuan and Tai [IJCV, 2011] in several directions. First, we propose a new primal-dual approach for global minimization of the continuous Potts model, with applications to the piecewise constant Mumford-Shah model for multiphase image segmentation. Different from existing methods, we work directly with the binary setting without using convex relaxation, which is therefore termed a direct approach. Second, we provide necessary and sufficient conditions that guarantee a global optimum. Moreover, we provide efficient algorithms based on reducing the intermediate unknowns of the augmented Lagrangian formulation. As a result, the underlying algorithms involve significantly fewer parameters and unknowns than the naive use of augmented Lagrangian-based methods; hence, they are fast and easy to implement. Furthermore, they can produce global optima under mild conditions.
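
For reference (standard notation, not copied from the paper): the continuous Potts model partitions the image domain Omega into n disjoint regions by minimizing

    \min_{\{\Omega_i\}_{i=1}^{n}} \ \sum_{i=1}^{n}\int_{\Omega_i} f_i(x)\,dx \ +\ \lambda\sum_{i=1}^{n}|\partial\Omega_i|,
    \qquad \Omega=\bigcup_{i=1}^{n}\Omega_i,\quad \Omega_i\cap\Omega_j=\emptyset\ (i\neq j),

and choosing the data terms f_i(x) = |I(x) - c_i|^2 recovers the piecewise constant Mumford-Shah model for multiphase segmentation mentioned above.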


Algorithms , Artificial Intelligence , Image Interpretation, Computer-Assisted/methods , Pattern Recognition, Automated/methods , Image Enhancement/methods , Reproducibility of Results , Sensitivity and Specificity
9.
IEEE Trans Image Process ; 20(5): 1199-210, 2011 May.
Article En | MEDLINE | ID: mdl-21047714

Minimization of total variation (TV) is a well-known method for image denoising. Recently, the relationship between TV minimization problems and binary MRF models has been explored extensively. This has resulted in some very efficient combinatorial optimization algorithms for the TV minimization problem in the discrete setting via graph cuts. To overcome limitations of the relatively simple TV model, such as staircasing effects, variational models based upon higher-order derivatives have been proposed. Euler's elastica model is one such higher-order model of central importance; it minimizes the curvature of all level lines in the image. Traditional numerical methods for minimizing the energy in such higher-order models are complicated and computationally expensive. In this paper, we present an efficient graph-cut-based minimization algorithm for the energy in Euler's elastica model, obtained by simplifying the problem to solving a sequence of easy, graph-representable problems. This sequence has connections to the gradient flow of the energy function and converges to a minimum point. The numerical experiments show that our new approach is more effective at maintaining smooth visual results while preserving sharp features better than TV models.
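
For context, the standard form of the model named above: Euler's elastica regularization penalizes both the length and the curvature kappa of all level lines of the image u,

    E(u) \;=\; \int_{\Omega}\big(a + b\,\kappa^2\big)\,|\nabla u|\,dx,
    \qquad \kappa = \nabla\cdot\frac{\nabla u}{|\nabla u|},\quad a,b>0,

so that setting b = 0 reduces it to the total variation energy whose staircasing limitations motivate the higher-order model.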


Image Enhancement/methods , Algorithms , Imaging, Three-Dimensional/methods , Models, Statistical , Pattern Recognition, Automated
10.
IEEE Trans Vis Comput Graph ; 16(4): 647-62, 2010.
Article En | MEDLINE | ID: mdl-20467062

Curvature flow (planar geometric heat flow) has been extensively applied in image processing, computer vision, and materials science. Extending the numerical schemes and algorithms for this flow to surfaces is significant for the corresponding motions of curves and images defined on surfaces. In this work, we are interested in geodesic curvature flow over triangulated surfaces using a level set formulation. First, we present the geodesic curvature flow equation on general smooth manifolds based on an energy minimization of curves. The equation is then discretized by a semi-implicit finite volume method (FVM). For convenience of description, we refer to the discretized geodesic curvature flow as dGCF. The existence and uniqueness of dGCF are discussed, and the regularization behavior of dGCF is studied. Finally, we apply dGCF to three problems: closed-curve evolution on manifolds, discrete scale-space construction, and edge detection of images painted on triangulated surfaces. Our method works for compact triangular meshes of arbitrary geometry and topology, as long as there are no degenerate triangles. The implementation of the method is also simple.
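
For orientation only (the standard level-set form; the paper's discretized equation may differ in details): with intrinsic gradient and divergence operators on the surface M, geodesic curvature flow of the level curves of phi can be written as

    \frac{\partial\varphi}{\partial t} \;=\; |\nabla_{M}\varphi|\ \mathrm{div}_{M}\!\left(\frac{\nabla_{M}\varphi}{|\nabla_{M}\varphi|}\right),

and it is a PDE of this type that the semi-implicit finite volume scheme (dGCF) discretizes on the triangulated surface.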


Algorithms , Computer Graphics , Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Models, Theoretical , Rheology/methods , Computer Simulation , User-Computer Interface
11.
IEEE Trans Image Process ; 18(10): 2289-302, 2009 Oct.
Article En | MEDLINE | ID: mdl-19535321

In this paper, we propose an interactive segmentation method for natural color images. The method integrates a color feature with a multiscale nonlinear structure tensor texture (MSNST) feature and then uses the GrabCut method to obtain the segmentation. The MSNST feature describes the texture of an image and is integrated into the GrabCut framework to overcome the scale differences of textured images. In addition, we extend the Gaussian mixture model (GMM) to the MSNST feature and construct an MSNST-based GMM to describe the energy function, so that the texture feature can be suitably integrated into the GrabCut framework and fused with the color feature, achieving better segmentation performance than the original GrabCut method. For easier implementation and more efficient computation, the symmetric KL divergence is chosen to estimate the tensor statistics instead of using the Riemannian structure of the tensor space. The conjugate norm, obtained using the Locality Preserving Projections (LPP) technique, is employed as the distance measure in the color space for greater discriminative power. An adaptive fusion strategy is presented to adjust the mixing factor so that the color and MSNST texture features are efficiently integrated for more robust segmentation. Finally, an iteration convergence criterion is proposed that dramatically reduces the running time of the GrabCut iterations while maintaining satisfactory segmentation accuracy. Experiments on synthetic texture images and real natural scene images demonstrate the superior performance of the proposed method.
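
As a point of reference only: the snippet below runs the plain color-based GrabCut baseline that the MSNST extension above improves on, using OpenCV; the file name, rectangle, and iteration count are placeholders.

    import cv2
    import numpy as np

    img = cv2.imread("natural_scene.png")        # placeholder input image
    mask = np.zeros(img.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)    # GMM state for the background
    fgd_model = np.zeros((1, 65), np.float64)    # GMM state for the foreground
    rect = (20, 20, img.shape[1] - 40, img.shape[0] - 40)  # rough user box

    cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)
    foreground = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0)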


Algorithms , Artificial Intelligence , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Subtraction Technique , Nonlinear Dynamics , Reproducibility of Results , Sensitivity and Specificity , Systems Integration
12.
IEEE Trans Med Imaging ; 28(5): 720-38, 2009 May.
Article En | MEDLINE | ID: mdl-19131295

This work presents a unified framework for whole-cell segmentation of surface-stained living cells from 3-D data sets of fluorescent images. Every step of the process is described: image acquisition, prefiltering, ridge enhancement, cell segmentation, and segmentation evaluation. The results of two different automated segmentation approaches are compared to manual segmentation of the same data using a rigorous evaluation scheme. This revealed that combining each cell type with the most suitable microscopy method resulted in success rates of up to 97%. The described approach permits automated statistical analysis of various parameters of living cells.
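
A hedged, much-simplified 2D sketch of the kind of pipeline described above (ridge enhancement of the stained membrane channel followed by a watershed); the filter choice and thresholds are illustrative and not those of the paper.

    import numpy as np
    from scipy import ndimage as ndi
    from skimage import filters, measure, segmentation

    membrane = np.random.rand(256, 256)              # placeholder membrane channel

    ridges = filters.frangi(membrane)                # enhance thin bright membranes
    cells = ridges < filters.threshold_otsu(ridges)  # interiors = low ridge response
    distance = ndi.distance_transform_edt(cells)
    markers = measure.label(distance > 0.7 * distance.max())
    labels = segmentation.watershed(-distance, markers, mask=cells)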


Cell Membrane/ultrastructure , Image Processing, Computer-Assisted/methods , Microscopy, Fluorescence , Algorithms , Animals , Cells, Cultured , Data Interpretation, Statistical , Fluorescent Dyes , Normal Distribution , PC12 Cells , Rats , Reproducibility of Results , Subtraction Technique
13.
Int J Biomed Imaging ; 2007: 26950, 2007.
Article En | MEDLINE | ID: mdl-18354724

In positron emission tomography (PET), a radioactive compound is injected into the body to promote a tissue-dependent emission rate. Expectation maximization (EM) reconstruction algorithms are iterative techniques that estimate the concentration coefficients providing the best-fitted solution, for example, a maximum likelihood estimate. In this paper, we combine the EM algorithm with a level set approach. The level set method is used to capture the coarse-scale information and the discontinuities of the concentration coefficients. An intrinsic advantage of the level set formulation is that anatomical information can be incorporated and used in an easy and natural way. We utilize a multiple level set formulation to represent the geometry of the objects in the scene. The proposed algorithm can be applied to any PET configuration without major modifications.
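
For reference, the classical ML-EM update that such level-set variants build on (standard notation: a_{ij} is the system matrix, y_i the measured counts, and lambda_j the emission intensity in voxel j):

    \lambda_j^{(k+1)} \;=\; \frac{\lambda_j^{(k)}}{\sum_i a_{ij}}\;\sum_i a_{ij}\,\frac{y_i}{\sum_l a_{il}\,\lambda_l^{(k)}}.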

14.
Cytometry A ; 69(9): 961-72, 2006 Sep 01.
Article En | MEDLINE | ID: mdl-16969816

BACKGROUND: This paper presents an automated method for the identification of thin membrane tubes in 3D fluorescence images. These tubes, referred to as tunneling nanotubes (TNTs), are newly discovered intercellular structures that connect living cells through a membrane continuity. TNTs are 50-200 nm in diameter and cross from one cell to another at their nearest distance. In microscopic images, they are seen as straight lines. It now emerges that TNTs represent the underlying structure of a new type of cell-to-cell communication. METHODS: Our approach to the identification of TNTs is based on a combination of biological cell markers and known image processing techniques. Watershed segmentation and edge detectors are used to find cell borders, TNTs, and image artifacts. Mathematical morphology is employed at several stages of the processing chain. Two image channels are used in the calculations to improve classification of watershed regions into cells and background. One image channel displays cell borders and TNTs; the second is used for cell classification and displays the cytoplasmic compartments of the cells. The cell segmentation method is 3D, and the TNT detection incorporates 3D information using various 2D projections. RESULTS: The TNT and cell detection were applied to numerous 3D image stacks. A success rate of 67% was obtained compared with manual identification of the TNTs. The digitized results were used to obtain statistical information on selected properties of the TNTs. CONCLUSION: To further explore these structures, automated detection and quantification are desirable. Consequently, this automated recognition tool will be useful in biological studies of cell-to-cell communication where TNT quantification is essential.
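
A hedged toy sketch of one ingredient named in the Methods (TNTs appear as straight lines in 2D projections, so a line detector applied to a maximum-intensity projection can flag candidates); the stack and all thresholds below are placeholders.

    import numpy as np
    from skimage import feature, transform

    stack = np.random.rand(30, 512, 512)    # placeholder 3D fluorescence stack
    projection = stack.max(axis=0)          # maximum-intensity projection

    edges = feature.canny(projection, sigma=2.0)
    candidate_tnts = transform.probabilistic_hough_line(
        edges, threshold=10, line_length=40, line_gap=3)
    print(len(candidate_tnts), "straight-line candidates")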


Image Processing, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Nanotubes , Algorithms , Animals , Cell Communication , Models, Biological , Nanotubes/analysis , Nanotubes/ultrastructure , PC12 Cells/ultrastructure , Rats
15.
IEEE Trans Image Process ; 15(5): 1171-81, 2006 May.
Article En | MEDLINE | ID: mdl-16671298

In this paper, we propose a PDE-based level set method. Traditionally, interfaces are represented by the zero level set of continuous level set functions. Instead, we let the interfaces be represented by discontinuities of piecewise constant level set functions. Each level set function can at convergence only take two values, i.e., it can only be 1 or -1; thus, our method is related to phase-field methods. Some of the properties of standard level set methods are preserved in the proposed method, while others are not. Using this new method for interface problems, we need to minimize a smooth convex functional under a quadratic constraint. The level set functions are discontinuous at convergence, but the minimization functional is smooth. We show numerical results using the method for segmentation of digital images.
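
A representative two-phase instance of this idea, in standard notation (the paper's exact functional and regularization may differ): with the image I and region means c_1, c_2, minimize

    \min_{\phi,\,c_1,\,c_2}\ \int_{\Omega}\Big(\tfrac{1+\phi}{2}(I-c_1)^2+\tfrac{1-\phi}{2}(I-c_2)^2\Big)dx \;+\; \beta\int_{\Omega}|\nabla\phi|\,dx
    \qquad\text{subject to}\qquad \phi^2 = 1 \ \text{in } \Omega,

where the quadratic constraint forces phi to the two values 1 and -1 at convergence and its discontinuity set represents the interface.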


Algorithms , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Pattern Recognition, Automated/methods , Signal Processing, Computer-Assisted , Artificial Intelligence , Computer Graphics , Information Storage and Retrieval/methods , Logistic Models , Models, Statistical , Numerical Analysis, Computer-Assisted
16.
IEEE Trans Image Process ; 13(10): 1345-57, 2004 Oct.
Article En | MEDLINE | ID: mdl-15462144

In this work, we use partial differential equation techniques to remove noise from digital images. The removal is done in two steps. We first use a total-variation filter to smooth the normal vectors of the level curves of a noisy image. We then find a surface that fits the smoothed normal vectors. For each of these two stages, the problem is reduced to a nonlinear partial differential equation. Finite difference schemes are used to solve these equations. A broad range of numerical examples is given in the paper.
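
Schematically, and up to the paper's exact constraints and parameters, the two stages can be written as: first smooth the unit normals of the noisy image u_0,

    \min_{|n|=1}\ \int_{\Omega}|\nabla n|\,dx \;+\; \frac{\delta}{2}\int_{\Omega}\Big|\,n-\frac{\nabla u_0}{|\nabla u_0|}\Big|^2 dx,

and then fit a surface u to the smoothed normal field n,

    \min_{u}\ \int_{\Omega}\big(|\nabla u|-\nabla u\cdot n\big)\,dx \;+\; \frac{\lambda}{2}\int_{\Omega}(u-u_0)^2\,dx.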


Algorithms , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Information Storage and Retrieval/methods , Numerical Analysis, Computer-Assisted , Pattern Recognition, Automated , Subtraction Technique , Cluster Analysis , Computer Simulation , Imaging, Three-Dimensional/methods , Models, Statistical , Reproducibility of Results , Sensitivity and Specificity , Signal Processing, Computer-Assisted , Stochastic Processes
17.
IEEE Trans Image Process ; 12(12): 1579-90, 2003.
Article En | MEDLINE | ID: mdl-18244712

In this paper, we introduce a new method for image smoothing based on a fourth-order PDE model. The method is tested on a broad range of real medical magnetic resonance images, both in space and time, as well as on nonmedical synthesized test images. Our algorithm demonstrates good noise suppression without destruction of important anatomical or functional detail, even at poor signal-to-noise ratio. We have also compared our method with related PDE models.
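
A representative energy behind such fourth-order models, in one common variant (the paper's exact functional may differ), is

    \min_{u}\ \int_{\Omega}\big(|u_{xx}|+|u_{yy}|\big)\,dx\,dy \;+\; \frac{\lambda}{2}\int_{\Omega}(u-u_0)^2\,dx\,dy,

whose gradient descent is a fourth-order evolution equation in u and avoids the staircasing typical of second-order total variation flows.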

...