Results 1 - 10 of 10
1.
Article in English | MEDLINE | ID: mdl-37603474

ABSTRACT

With the rise of machine learning, hyperspectral image (HSI) unmixing problems have been tackled using learning-based methods. However, physically meaningful unmixing results are not guaranteed without proper guidance. In this work, we propose an unsupervised framework inspired by deep image prior (DIP) that can be used for both linear and nonlinear blind unmixing models. The framework consists of three modules: 1) an Endmember estimation module using DIP (EDIP); 2) an Abundance estimation module using DIP (ADIP); and 3) a mixing module (MM). EDIP and ADIP modules generate endmembers and abundances, respectively, while MM produces a reconstruction of the HSI observations based on the postulated unmixing model. We introduce a composite loss function that applies to both linear and nonlinear unmixing models to generate meaningful unmixing results. In addition, we propose an adaptive loss weight strategy for better unmixing results in nonlinear mixing scenarios. The proposed methods outperform state-of-the-art unmixing algorithms in extensive experiments conducted on both synthetic and real datasets.
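The modular structure described above can be sketched in a few lines. The mixing module and composite loss below are illustrative stand-ins on toy data (the weight `mu` and the sum-to-one penalty are assumptions for illustration, not the paper's actual DIP networks or loss terms):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear mixing model: Y ≈ E @ A, with endmembers E (bands × R)
# and abundances A (R × pixels) whose columns sum to one.
bands, R, pixels = 50, 3, 100
E_true = rng.uniform(0, 1, (bands, R))
A_true = rng.dirichlet(np.ones(R), size=pixels).T
Y = E_true @ A_true + 0.01 * rng.standard_normal((bands, pixels))

def mixing_module(E, A):
    """MM for the linear model; a nonlinear model would post-process E @ A."""
    return E @ A

def composite_loss(Y, E, A, mu=0.1):
    """Reconstruction error plus an abundance sum-to-one penalty (illustrative)."""
    recon = np.mean((Y - mixing_module(E, A)) ** 2)
    asc = np.mean((A.sum(axis=0) - 1.0) ** 2)
    return recon + mu * asc

# The true factors score near zero; a random guess scores far worse.
good = composite_loss(Y, E_true, A_true)
bad = composite_loss(Y, rng.uniform(0, 1, (bands, R)), rng.uniform(0, 1, (R, pixels)))
print(good < bad)
```

In the framework itself, E and A would be the outputs of the EDIP and ADIP networks and the loss would be backpropagated through them.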

2.
Entropy (Basel); 25(7), 2023 Jul 14.
Article in English | MEDLINE | ID: mdl-37510010

ABSTRACT

It is well known that a neural network's learning process, along with its connections to fitting, compression, and generalization, is not yet well understood. In this paper, we propose a novel approach to capturing such neural network dynamics using information-bottleneck-type techniques, which replace mutual information measures (notoriously difficult to estimate in high-dimensional spaces) with other, more tractable ones, including (1) the minimum mean-squared error associated with the reconstruction of the network input data from some intermediate network representation and (2) the cross-entropy associated with a certain class label given some network representation. We then conducted an empirical study to ascertain how different network models, learning algorithms, and datasets affect the learning dynamics. Our experiments show that the proposed approach is more reliable than classical information bottleneck approaches in capturing network dynamics during both the training and testing phases. They also reveal that the fitting and compression phases exist regardless of the choice of activation function. Additionally, our findings suggest that model architectures, training algorithms, and datasets that lead to better generalization tend to exhibit more pronounced fitting and compression phases.
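The two proxy quantities can be estimated with simple fitted models. The sketch below uses a random ReLU layer as the "intermediate network representation", least squares for the reconstruction error in (1), and a small logistic fit for the cross-entropy in (2); all of these choices are illustrative assumptions, not the paper's estimators:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, k = 500, 8, 4
X = rng.standard_normal((n, d))
y = (X[:, 0] > 0).astype(float)          # a simple binary label
W = rng.standard_normal((d, k))
T = np.maximum(X @ W, 0.0)               # stand-in intermediate ReLU representation

def mmse_proxy(T, X):
    """Proxy (1): least-squares error of reconstructing X from T (a tractable
    stand-in for the true MMSE, which is hard to compute in general)."""
    coef, *_ = np.linalg.lstsq(T, X, rcond=None)
    return np.mean((X - T @ coef) ** 2)

def cross_entropy_proxy(T, y, steps=200, lr=0.1):
    """Proxy (2): cross-entropy of a logistic model of the label given T,
    fitted here by plain gradient descent (illustrative)."""
    w = np.zeros(T.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(T @ w)))
        w -= lr * T.T @ (p - y) / len(y)
    p = np.clip(1.0 / (1.0 + np.exp(-(T @ w))), 1e-9, 1 - 1e-9)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

mmse_T = mmse_proxy(T, X)
ce_T = cross_entropy_proxy(T, y)
print(mmse_T, ce_T)
```

Tracking these two scalars across training epochs, rather than mutual information estimates, is the substitution the paper proposes.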

3.
IEEE Trans Image Process; 32: 2931-2946, 2023.
Article in English | MEDLINE | ID: mdl-37200124

ABSTRACT

X-radiography (X-ray imaging) is a widely used imaging technique in art investigation. It can provide information about the condition of a painting as well as insights into an artist's techniques and working methods, often revealing hidden information invisible to the naked eye. X-radiography of double-sided paintings results in a mixed X-ray image, and this paper deals with the problem of separating this mixed image. Using the visible color (RGB) images from each side of the painting, we propose a new neural network architecture, based on 'connected' auto-encoders, designed to separate the mixed X-ray image into two simulated X-ray images, one corresponding to each side. In this connected auto-encoder architecture, the encoders are based on convolutional learned iterative shrinkage thresholding algorithms (CLISTA) designed using algorithm unrolling techniques, whereas the decoders consist of simple linear convolutional layers; the encoders extract sparse codes from the visible images of the front and rear paintings and from the mixed X-ray image, whereas the decoders reproduce both the original RGB images and the mixed X-ray image. The learning algorithm operates in a totally self-supervised fashion, without requiring a sample set that contains both the mixed X-ray images and the separated ones. The methodology was tested on images from the double-sided wing panels of the Ghent Altarpiece, painted in 1432 by the brothers Hubert and Jan van Eyck. These tests show that the proposed approach outperforms other state-of-the-art X-ray image separation methods for art investigation applications.
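The unrolled-shrinkage idea behind the CLISTA encoders can be illustrated with a plain (non-convolutional, fixed-weight) ISTA unrolling; in the actual architecture the per-layer matrices and thresholds are learned, so this is only a sketch of the mechanism:

```python
import numpy as np

def soft_threshold(x, theta):
    """Proximal operator of the l1 norm: the nonlinearity in each unrolled layer."""
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def lista_encoder(y, D, n_layers=20, theta=0.05):
    """Unrolled ISTA with a fixed dictionary D: each 'layer' is one gradient step
    followed by shrinkage. In learned variants (LISTA/CLISTA), W1, W2 and the
    thresholds are trained per layer; here they are the classical ISTA choices."""
    L = np.linalg.norm(D, 2) ** 2                 # Lipschitz constant of the gradient
    W1 = D.T / L
    W2 = np.eye(D.shape[1]) - D.T @ D / L
    z = soft_threshold(W1 @ y, theta / L)
    for _ in range(n_layers - 1):
        z = soft_threshold(W1 @ y + W2 @ z, theta / L)
    return z

rng = np.random.default_rng(2)
D = rng.standard_normal((30, 60))
D /= np.linalg.norm(D, axis=0)                    # unit-norm atoms
z_true = np.zeros(60)
z_true[[3, 17, 42]] = [1.0, -0.8, 0.5]
y = D @ z_true
z_hat = lista_encoder(y, D)
residual = np.linalg.norm(D @ z_hat - y)
print(residual)
```

Each loop iteration corresponds to one network layer; truncating the iteration at a fixed depth and learning its weights is what "algorithm unrolling" refers to.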

4.
IEEE Trans Image Process; 31: 4458-4473, 2022.
Article in English | MEDLINE | ID: mdl-35763481

ABSTRACT

In this paper, we focus on X-ray images (X-radiographs) of paintings with concealed sub-surface designs (e.g., deriving from reuse of the painting support or revision of a composition by the artist), which therefore include contributions from both the surface painting and the concealed features. In particular, we propose a self-supervised deep learning-based image separation approach that can be applied to the X-ray images from such paintings to separate them into two hypothetical X-ray images. One of these reconstructed images is related to the X-ray image of the concealed painting, while the second one contains only information related to the X-ray image of the visible painting. The proposed separation network consists of two components: the analysis and the synthesis sub-networks. The analysis sub-network is based on learned coupled iterative shrinkage thresholding algorithms (LCISTA) designed using algorithm unrolling techniques, and the synthesis sub-network consists of several linear mappings. The learning algorithm operates in a totally self-supervised fashion without requiring a sample set that contains both the mixed X-ray images and the separated ones. The proposed method is demonstrated on a real painting with concealed content, Doña Isabel de Porcel by Francisco de Goya, to show its effectiveness.
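A toy sketch of the self-supervised principle: the separation can be trained without ground-truth pairs because the two outputs must add back to the observed mixture. The loss below (consistency plus a smoothness term, with a made-up weight `lam`) is only illustrative; the paper's actual loss also couples each output to side information:

```python
import numpy as np

rng = np.random.default_rng(3)

# x_mix stands in for the observed mixed X-ray; x1 and x2 are candidate
# separated X-rays (in the paper, outputs of the synthesis sub-network).
x_mix = rng.uniform(0, 1, (8, 8))

def self_supervised_loss(x1, x2, x_mix, lam=0.01):
    """Mixing consistency (the two outputs must add back to the observation)
    plus a total-variation-style smoothness term with a made-up weight."""
    consistency = np.mean((x_mix - (x1 + x2)) ** 2)
    tv = sum(np.abs(np.diff(x, axis=a)).mean() for x in (x1, x2) for a in (0, 1))
    return consistency + lam * tv

# Any exact split of the mixture has zero consistency error; an arbitrary
# pair of images does not, so the loss can rank candidates without labels.
half = x_mix / 2
g1, g2 = rng.uniform(0, 1, (8, 8)), rng.uniform(0, 1, (8, 8))
exact = self_supervised_loss(half, half, x_mix)
guess = self_supervised_loss(g1, g2, x_mix)
print(exact < guess)
```

The consistency term alone cannot decide how to split the mixture (any split summing to the observation is equally good), which is why additional structure, such as the sparse codes and priors in the paper, is needed to resolve the ambiguity.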

5.
Int J Pharm; 611: 121329, 2022 Jan 05.
Article in English | MEDLINE | ID: mdl-34852288

ABSTRACT

Food-mediated changes to drug absorption, termed the food effect, are hard to predict and can have significant implications for the safety and efficacy of oral drug products in patients. Mimicking the prandial states of the human gastrointestinal tract in preclinical studies is challenging, poorly predictive, and can produce difficult-to-interpret datasets. Machine learning (ML) has emerged from the computer science field and shows promise in interpreting the complex datasets present in the pharmaceutical field. An ML-based approach was developed to predict the food effect from an extensive dataset of 311 drugs with more than 20 drug physicochemical properties, referred to as features. Several machine learning techniques were tested, including logistic regression, support vector machine, k-nearest neighbours, and random forest. First, a standard ML pipeline using an 80:20 split for training and testing was tried to predict no food effect, negative food effect, and positive food effect; however, this led to specificities of less than 40%. To overcome this, a strategic ML pipeline was devised and three classification tasks were developed. Random forest achieved the strongest performance overall. Accuracies and sensitivities of 70%, 80%, and 70% and specificities of 71%, 76%, and 71% were achieved for classifying (i) no food effect vs food effect, (ii) negative food effect vs positive food effect, and (iii) no food effect vs negative food effect vs positive food effect, respectively. Feature importance using random forest ranked the features by their contribution to the predictive tasks; the calculated dose number was the most important feature. ML thus provides an effective screening tool for predicting the food effect, with the potential to select lead compounds with no food effect, reduce the number of animal studies, and accelerate oral drug development.
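The per-class sensitivities and specificities quoted above can be computed one-vs-rest from a model's predictions; a minimal sketch with hypothetical labels (the numbers below are made up for illustration, not the study's data):

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred, cls):
    """One-vs-rest sensitivity (recall) and specificity for class `cls`."""
    tp = np.sum((y_true == cls) & (y_pred == cls))
    fn = np.sum((y_true == cls) & (y_pred != cls))
    tn = np.sum((y_true != cls) & (y_pred != cls))
    fp = np.sum((y_true != cls) & (y_pred == cls))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical labels for task (i): 0 = no food effect, 1 = food effect.
y_true = np.array([0, 0, 0, 0, 1, 1, 1, 1, 1, 1])
y_pred = np.array([0, 0, 0, 1, 1, 1, 1, 1, 0, 1])
sens, spec = sensitivity_specificity(y_true, y_pred, cls=1)
print(sens, spec)   # 5/6 ≈ 0.83 and 3/4 = 0.75
```

Reporting specificity alongside sensitivity matters here because the naive pipeline's failure mode was precisely low specificity, i.e., drugs without a food effect being flagged as having one.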


Subject(s)
Machine Learning, Support Vector Machine, Food, Humans
6.
IEEE Trans Biomed Circuits Syst; 14(2): 221-231, 2020 Apr.
Article in English | MEDLINE | ID: mdl-32031948

ABSTRACT

This paper presents an adaptable dictionary-based feature extraction approach for spike sorting, offering high accuracy and low computational complexity for implantable applications. It extracts and learns identifiable features from evolving subspaces through matched unsupervised subspace filtering. To ensure compatibility with the strict constraints of implantable devices, such as chip area and power budget, the dictionary contains only arrays of {-1, 0, 1}, so the algorithm need only perform addition and subtraction operations. Three types of such dictionary were considered. To quantify and compare the performance of the resulting three feature extractors with existing systems, a neural signal simulator based on several different libraries was developed. For noise levels σN between 0.05 and 0.3 and groups of 3 to 6 clusters, all three feature extractors provide robust, high performance, with average classification errors of less than 8% over five iterations, each consisting of 100 generated data segments. To our knowledge, the proposed adaptive feature extractors are the first able to reliably classify 6 clusters for implantable applications. An ASIC implementation of the best-performing dictionary-based feature extractor was synthesized in a 65-nm CMOS process. It occupies an area of 0.09 mm² and dissipates up to about 10.48 µW from a 1 V supply voltage when operating with 8-bit resolution at a 30 kHz operating frequency.
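The reason a {-1, 0, 1} dictionary needs only additions and subtractions can be seen directly: projecting a waveform onto a ternary atom reduces to summing samples with the corresponding signs. A sketch with random (not learned) atoms:

```python
import numpy as np

rng = np.random.default_rng(4)

# A ternary dictionary: each atom holds only -1, 0, or 1, so projecting a
# spike waveform onto an atom needs additions and subtractions only.
n_samples, n_atoms = 32, 4
D = rng.choice([-1, 0, 1], size=(n_atoms, n_samples))

def extract_features(spike, D):
    """Multiplication-free projection: add samples where the atom is +1,
    subtract where it is -1 (hardware-friendly for implants)."""
    return np.array([spike[d == 1].sum() - spike[d == -1].sum() for d in D])

spike = rng.standard_normal(n_samples)
features = extract_features(spike, D)
# Equivalent to the matrix product D @ spike, but without multiplications.
print(np.allclose(features, D @ spike))
```

In silicon, dropping the multipliers is what keeps the area and power figures in the microwatt range quoted above.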


Subject(s)
Computer-Assisted Signal Processing, Unsupervised Machine Learning, Action Potentials/physiology, Algorithms, Biomedical Engineering/instrumentation, Implanted Electrodes, Neurological Models
7.
IEEE Trans Med Imaging; 39(3): 621-633, 2020 Mar.
Article in English | MEDLINE | ID: mdl-31395541

ABSTRACT

Magnetic resonance (MR) imaging tasks often involve multiple contrasts, such as T1-weighted, T2-weighted, and fluid-attenuated inversion recovery (FLAIR) data. These contrasts capture information associated with the same underlying anatomy and thus exhibit similarities at either the structure level or the gray level. In this paper, we propose a coupled dictionary learning based multi-contrast MRI reconstruction (CDLMRI) approach that leverages the correlations between different contrasts for guided or joint reconstruction from their under-sampled k-space data. Our approach iterates between three stages: coupled dictionary learning, coupled sparse denoising, and enforcing k-space consistency. The first stage learns a set of dictionaries that are not only adaptive to the contrasts but also capture correlations among multiple contrasts in a sparse transform domain. Capitalizing on the learned dictionaries, the second stage performs coupled sparse coding to remove the aliasing and noise in the corrupted contrasts. The third stage enforces consistency between the denoised contrasts and the measurements in the k-space domain. Numerical experiments, consisting of retrospective under-sampling of various MRI contrasts with a variety of sampling schemes, demonstrate that CDLMRI is capable of capturing structural dependencies between different contrasts. The learned priors indicate notable advantages in multi-contrast MR imaging and promising applications in quantitative MR imaging such as MR fingerprinting.
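The third stage admits a particularly compact sketch: at sampled k-space locations the denoised image's spectrum is overwritten with the actual measurements, and the rest is kept. The toy data and random mask below are assumptions for illustration; in CDLMRI this step sits inside a larger iteration:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy setup: a 'denoised' image (output of the coupled sparse-denoising stage)
# and under-sampled k-space measurements of the true image.
img_denoised = rng.standard_normal((16, 16))
img_true = img_denoised + 0.1 * rng.standard_normal((16, 16))
mask = rng.random((16, 16)) < 0.3                 # 30% of k-space sampled
k_measured = np.fft.fft2(img_true) * mask

def enforce_kspace_consistency(img, k_measured, mask):
    """Overwrite the image's spectrum with the actual measurements at the
    sampled locations and keep the rest (a sketch of stage three)."""
    k = np.fft.fft2(img)
    k[mask] = k_measured[mask]
    return np.fft.ifft2(k)        # the real part would be taken for display

out = enforce_kspace_consistency(img_denoised, k_measured, mask)
print(np.allclose(np.fft.fft2(out) * mask, k_measured))
```

This is the data-fidelity projection that keeps the reconstruction anchored to the measurements while the dictionary stages supply the prior.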


Subject(s)
Algorithms, Computer-Assisted Image Processing/methods, Magnetic Resonance Imaging/methods, Computer Simulation, Contrast Media, Humans, Machine Learning
8.
Med Phys; 46(11): 4951-4969, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31329307

ABSTRACT

PURPOSE: Magnetic resonance fingerprinting (MRF) methods typically rely on dictionary matching to map the temporal MRF signals to quantitative tissue parameters. Such approaches suffer from inherent discretization errors, as well as high computational complexity as the dictionary size grows. To alleviate these issues, we propose a HYbrid Deep magnetic ResonAnce fingerprinting approach, referred to as HYDRA. METHODS: HYDRA involves two stages: a model-based signal restoration phase and a learning-based parameter restoration phase. Signal restoration is implemented using low-rank based de-aliasing techniques, while parameter restoration is performed using a deep nonlocal residual convolutional neural network. The designed network is trained on synthesized MRF data simulated with the Bloch equations and fast imaging with steady-state precession (FISP) sequences. In test mode, it takes a temporal MRF signal as input and produces the corresponding tissue parameters. RESULTS: We validated our approach on both synthetic data and anatomical data generated from a healthy subject. The results demonstrate that, in contrast to conventional dictionary-matching-based MRF techniques, our approach significantly improves inference speed by eliminating the time-consuming dictionary matching operation, and alleviates discretization errors by outputting continuous-valued parameters. It further avoids the need to store a large dictionary, thus reducing memory requirements. CONCLUSIONS: Our approach demonstrates advantages in terms of inference speed, accuracy, and storage requirements over competing MRF methods.
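The discretization error that dictionary matching incurs, and that a continuous-valued regressor avoids, can be seen in a toy example. The exponential below is an assumed stand-in for the Bloch-simulated fingerprints, and the grid is deliberately coarse:

```python
import numpy as np

t = np.linspace(0.01, 0.3, 30)

def fingerprint(t2):
    """Toy exponential stand-in for a Bloch-simulated MRF signal."""
    return np.exp(-t / t2)

# Dictionary matching: simulate signals on a coarse parameter grid,
# then return the grid value of the nearest atom.
grid = np.linspace(0.02, 0.2, 10)
dictionary = np.stack([fingerprint(v) for v in grid])

def match(x):
    return grid[np.argmin(np.linalg.norm(dictionary - x, axis=1))]

t2_true = 0.057                     # falls between grid points
estimate = match(fingerprint(t2_true))
print(t2_true, estimate)            # the estimate snaps to the nearest grid value
```

Refining the grid shrinks this error but blows up the dictionary's size and matching cost, which is the trade-off a trained regression network sidesteps by outputting parameters directly.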


Subject(s)
Deep Learning, Computer-Assisted Image Processing/methods, Magnetic Resonance Imaging
9.
IEEE Trans Image Process; 26(2): 751-764, 2017 Feb.
Article in English | MEDLINE | ID: mdl-27831873

ABSTRACT

In support of art investigation, we propose a new source separation method that unmixes a single X-ray scan acquired from a double-sided painting. In this problem, the X-ray signals to be separated have similar morphological characteristics, which brings previous source separation methods to their limits. Our solution is to use photographs taken from the front and back sides of the panel to drive the separation process. The crux of our approach is the coupling of the two imaging modalities (photographs and X-rays) using a novel coupled dictionary learning framework able to capture both common and disparate features across the modalities using parsimonious representations; the common component captures features shared by the multi-modal images, whereas the innovation component captures modality-specific information. As such, our model enables the formulation of appropriately regularized convex optimization procedures that lead to the accurate separation of the X-rays. Our dictionary learning framework can be tailored to both a single-scale and a multi-scale setting, with the latter leading to a significant performance improvement. Moreover, to further improve the visual quality of the separated images, we propose to train coupled dictionaries that ignore certain parts of the painting corresponding to craquelure. Experimentation on synthetic and real data, taken from digital acquisitions of the Ghent Altarpiece (1432), confirms the superiority of our method over the state-of-the-art morphological component analysis technique that uses either fixed or trained dictionaries to perform image separation.
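The common/innovation decomposition can be illustrated in a much-simplified form on raw sample values; the paper imposes this structure on sparse codes over learned coupled dictionaries, not on pixels directly, so everything below is an illustrative reduction:

```python
import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def common_innovation(x1, x2, lam=0.2):
    """Split two co-registered signals into a shared component c and sparse
    per-signal innovations u1, u2: least-squares common part followed by one
    soft-thresholding step on the residuals (an illustrative simplification)."""
    c = 0.5 * (x1 + x2)
    u1 = soft(x1 - c, lam)
    u2 = soft(x2 - c, lam)
    return c, u1, u2

x1 = np.array([1.0, 2.0, 3.0, 0.0])     # e.g. front-side features
x2 = np.array([1.0, 2.0, 0.0, 0.0])     # back side shares all but one sample
c, u1, u2 = common_innovation(x1, x2)
print(c, u1, u2)    # shared structure lands in c; the difference in u1 and u2
```

Encouraging the innovation parts to stay sparse is what lets the shared anatomy of the two modalities guide the split while modality-specific detail remains free to differ.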

10.
Annu Int Conf IEEE Eng Med Biol Soc; 2015: 1516-9, 2015 Aug.
Article in English | MEDLINE | ID: mdl-26736559

ABSTRACT

We propose a feature design framework that considers performance and computational complexity simultaneously. In particular, we incorporate these two metrics, which are very important to many low-energy on-chip applications such as implantable neural interfaces, into a single optimization problem. This allows us to strike a balance between the performance of the signal processing task and the computational complexity of the feature extraction process. Simulation results for neural spike sorting demonstrate that, by leveraging the proposed design framework, we can construct features that outperform other state-of-the-art low-complexity feature designs, both in terms of classification error and complexity.
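The performance/complexity trade-off can be scalarized into a single objective; a toy sketch with hypothetical per-feature error and cost estimates (the feature names, numbers, and subset-error rule below are all made up for illustration, not the paper's formulation):

```python
import itertools

# Hypothetical candidate features with (classification_error, op_count) estimates.
candidates = {"peak": (0.20, 1), "energy": (0.15, 4),
              "slope": (0.18, 2), "wavelet": (0.08, 16)}

def best_feature_set(candidates, lam=0.01, k=2):
    """Pick the k-feature subset minimizing error + lam * complexity, a
    scalarized form of the two-metric optimization; the error of a subset is
    crudely taken as the best member's error (purely illustrative)."""
    def cost(subset):
        err = min(candidates[f][0] for f in subset)
        ops = sum(candidates[f][1] for f in subset)
        return err + lam * ops
    return min(itertools.combinations(candidates, k), key=cost)

best = best_feature_set(candidates)
print(best)     # the cheap pair wins once complexity is priced in
```

Sweeping the weight `lam` traces out the balance the abstract describes: small values favor raw accuracy, large values favor features cheap enough for an implant's power budget.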


Subject(s)
Neurons, Action Potentials, Algorithms, Computer-Assisted Signal Processing