1.
Proc Natl Acad Sci U S A; 121(11): e2314697121, 2024 Mar 12.
Article in English | MEDLINE | ID: mdl-38451944

ABSTRACT

We propose a method for imaging in scattering media when large and diverse datasets are available. It has two steps. In the first step, a dictionary learning algorithm estimates the true Green's function vectors as columns of an unordered sensing matrix. The array data come from many sparse sets of sources whose locations and strengths are unknown to us. In the second step, the columns of the estimated sensing matrix are ordered for imaging using the multidimensional scaling algorithm, with connectivity information derived from cross-correlations of its columns, as in time reversal. For these two steps to work together, we need data from large arrays of receivers so that the columns of the sensing matrix are incoherent for the first step, as well as from sub-arrays so that they are coherent enough to obtain the connectivity needed in the second step. Through simulation experiments, we show that the proposed method provides images in complex media whose resolution matches that of a homogeneous medium.
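
As a rough illustration of the two-step idea, the sketch below pairs scikit-learn's DictionaryLearning (a stand-in for the paper's dictionary learning step) with MDS ordering based on column cross-correlations; the data, array sizes, and sparsity level are synthetic assumptions, not the paper's setup.

```python
# Minimal two-step sketch: (1) learn unordered sensing-matrix columns,
# (2) order them with MDS using cross-correlation connectivity.
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
n_receivers, n_sources, n_snapshots = 64, 32, 500

# Synthetic array data: sparse random source activations mixed by an
# unknown sensing matrix whose columns are the Green's function vectors.
G_true = rng.standard_normal((n_receivers, n_sources))
codes = rng.standard_normal((n_snapshots, n_sources)) * (rng.random((n_snapshots, n_sources)) < 0.1)
data = codes @ G_true.T

# Step 1: estimate the sensing-matrix columns (in unknown order).
dl = DictionaryLearning(n_components=n_sources, transform_algorithm="omp", random_state=0)
dl.fit(data)
G_est = dl.components_.T  # columns = estimated Green's function vectors

# Step 2: order the columns with MDS, using cross-correlation as connectivity.
corr = np.abs(np.corrcoef(G_est.T))
dissimilarity = 1.0 - corr
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
source_positions = mds.fit_transform(dissimilarity)  # relative source geometry
```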

2.
Neuroimage; 267: 119809, 2023 Feb 15.
Article in English | MEDLINE | ID: mdl-36584759

ABSTRACT

Human neuromagnetic activity is characterised by a complex combination of transient bursts with varying spatial and temporal characteristics. The characteristics of these transient bursts change during task performance and normal ageing in ways that can inform us about the underlying cortical sources. Many methods have been proposed to detect transient bursts, with the most successful being those that employ multi-channel, data-driven approaches to minimize bias in the detection procedure. There has been little research, however, into applying these data-driven methods to large datasets for group-level analyses. In the current work, we apply a data-driven convolutional dictionary learning (CDL) approach to detect neuromagnetic transient bursts in a large group of healthy participants from the Cam-CAN dataset. CDL was used to extract repeating spatiotemporal motifs in 538 participants between the ages of 18 and 88 during a sensorimotor task. Motifs were then clustered across participants based on similarity, and relevant task-related clusters were analysed for age-related trends in their spatiotemporal characteristics. Seven task-related motifs resembling known transient burst types were identified through this analysis, including beta, mu, and alpha type bursts. All burst types showed positive trends in their activation levels with age that could be explained by increasing burst rate with age. This work validated the data-driven CDL approach for transient burst detection on a large dataset and identified robust information about the complex characteristics of human brain signals and how they change with age.
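
A minimal sketch of rank-1 convolutional dictionary learning on multichannel data is shown below, assuming the alphacsc package (commonly used for CSC on MEG); the exact constructor arguments and attribute names may differ across versions, and the data here are random stand-ins.

```python
# Hedged sketch: rank-1 convolutional dictionary learning on multichannel
# recordings, assuming the alphacsc package; API details may vary by version.
import numpy as np
from alphacsc import BatchCDL

rng = np.random.default_rng(0)
X = rng.standard_normal((10, 102, 2000))  # (trials, channels, times), MEG stand-in

cdl = BatchCDL(
    n_atoms=20,          # number of spatiotemporal motifs to extract
    n_times_atom=150,    # motif duration in samples
    rank1=True,          # each atom = spatial map (u) x temporal waveform (v)
    n_iter=30,
    random_state=0,
)
cdl.fit(X)
spatial_maps = cdl.u_hat_  # (n_atoms, n_channels): topographies to cluster across subjects
waveforms = cdl.v_hat_     # (n_atoms, n_times_atom): burst shapes (e.g., beta, mu, alpha)
```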


Subject(s)
Brain; Learning; Humans; Adolescent; Young Adult; Adult; Middle Aged; Aged; Aged, 80 and over; Brain/physiology; Aging
3.
Hum Brain Mapp; 44(8): 3410-3432, 2023 Jun 01.
Article in English | MEDLINE | ID: mdl-37070786

ABSTRACT

Most fMRI inferences are based on analyzing the scans of a cohort, so the individual variability of a subject is often overlooked in these studies. Recently, there has been growing interest in individual differences in brain connectivity, also known as the individual connectome. Various studies have demonstrated the individual-specific component of functional connectivity (FC), which has enormous potential to identify participants across consecutive testing sessions. Many machine learning and dictionary learning-based approaches have been used to extract these subject-specific components, either from the blood oxygen level dependent (BOLD) signal or from the FC. In addition, several studies have reported that some resting-state networks carry more individual-specific information than others. This study compares four different dictionary learning algorithms that compute the individual variability from the network-specific FC computed from resting-state functional Magnetic Resonance Imaging (rs-fMRI) data with 10 scans per subject. The study also compares the effect of two FC normalization techniques, namely Fisher Z normalization and degree normalization, on the extracted subject-specific components. To quantitatively evaluate the extracted subject-specific component, a metric named Overlap is proposed and used in combination with the existing differential identifiability (Idiff) metric. It is based on the hypothesis that subject-specific FC vectors should be similar within the same subject and different across different subjects. Results indicate that the Fisher Z transformed subject-specific fronto-parietal and default mode network components extracted using Common Orthogonal Basis Extraction (COBE) dictionary learning have the best features to identify a participant.
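
The sketch below shows a simplified version of the two ingredients named here: entrywise Fisher Z normalization of FC values, and a differential-identifiability score (within-subject minus between-subject similarity, in the spirit of Idiff); the FC data and dimensions are synthetic placeholders.

```python
# Sketch of the identifiability quantities, assuming FC vectors stacked as
# fc[subject, scan, :]; Fisher Z is applied entrywise to correlation values.
import numpy as np

def fisher_z(fc):
    return np.arctanh(np.clip(fc, -0.999999, 0.999999))

def differential_identifiability(fc):
    """Idiff-style score: mean within-subject minus mean between-subject similarity."""
    n_sub, n_scan, _ = fc.shape
    flat = fc.reshape(n_sub * n_scan, -1)
    sim = np.corrcoef(flat)                       # scan-by-scan similarity matrix
    subj = np.repeat(np.arange(n_sub), n_scan)
    same = subj[:, None] == subj[None, :]
    off_diag = ~np.eye(len(subj), dtype=bool)
    i_self = sim[same & off_diag].mean()          # within-subject (different scans)
    i_others = sim[~same].mean()                  # between-subject
    return i_self - i_others

rng = np.random.default_rng(0)
fc = np.tanh(rng.standard_normal((30, 10, 400)) * 0.3)  # 30 subjects x 10 scans
print(differential_identifiability(fisher_z(fc)))
```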


Subject(s)
Connectome; Magnetic Resonance Imaging; Humans; Magnetic Resonance Imaging/methods; Brain/diagnostic imaging; Connectome/methods; Algorithms; Individuality
4.
Magn Reson Med; 90(6): 2443-2453, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37466029

ABSTRACT

PURPOSE: Temporal resolution of time-lapse MRI to track individual iron-labeled cells is limited by the required data-acquisition time to fill k-space and to reach sufficient SNR. Although motion of slowly patrolling monocytes can be resolved, detection of fast-moving immune cells requires improved acquisition and reconstruction strategies. THEORY AND METHODS: For accelerated MRI cell tracking, a Cartesian sampling scheme was designed, in which the fully sampled and undersampled k-space data for different acceleration factors were acquired simultaneously, and multiple undersampling ratios could be chosen retrospectively. Compressed-sensing reconstruction was applied using dictionary learning and low-rank constraints. Detection of iron-labeled monocytes was evaluated with simulations, rotating phantom experiments and in vivo mouse brain measurements at 9.4 T. RESULTS: Fully sampled and 2.4-times and 4.8-times accelerated images were reconstructed and had sufficient contrast-to-noise ratio (CNR) for single cells to be resolved and followed dynamically. The phantom experiments showed an improvement in CNR of 6.1% per µm/s in the 4.8-times undersampled images. Geometric distortion of cells caused by motion was visibly reduced in the accelerated images, which enabled detection of moving cells with velocities of up to 7.0 µm/s. In vivo, additional cells were resolved in the accelerated images due to the improved temporal resolution. CONCLUSION: The easy-to-implement flexible Cartesian sampling scheme with compressed-sensing reconstruction permits simultaneous acquisition of both fully sampled and high temporal resolution images. The CNR of moving cells is effectively improved, enabling the recovery of high velocity cells with sufficient contrast at virtually no cost.
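
As a toy illustration only (not the paper's sequence or reconstruction), the sketch below shows retrospective Cartesian line undersampling of k-space and a zero-filled reconstruction; the image, acceleration factor, and mask design are assumptions.

```python
# Illustrative sketch: retrospectively subsampling Cartesian k-space lines
# and reconstructing a zero-filled image with FFTs.
import numpy as np

rng = np.random.default_rng(0)
image = rng.standard_normal((128, 128))           # stand-in for a fully sampled frame
kspace = np.fft.fftshift(np.fft.fft2(image))

accel = 4.8                                        # retrospective acceleration factor
n_lines = int(kspace.shape[0] / accel)
center = kspace.shape[0] // 2
keep = set(range(center - 8, center + 8))          # always keep central k-space lines
while len(keep) < n_lines:
    keep.add(int(rng.integers(0, kspace.shape[0])))

mask = np.zeros_like(kspace, dtype=bool)
mask[sorted(keep), :] = True
zero_filled = np.fft.ifft2(np.fft.ifftshift(kspace * mask)).real
# A compressed-sensing step (e.g., dictionary-learning plus low-rank
# regularization) would replace this zero-filled estimate in practice.
```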


Subject(s)
Cell Tracking; Magnetic Resonance Imaging; Animals; Mice; Retrospective Studies; Time-Lapse Imaging; Magnetic Resonance Imaging/methods; Motion; Image Processing, Computer-Assisted/methods
5.
Sensors (Basel); 23(22), 2023 Nov 14.
Article in English | MEDLINE | ID: mdl-38005564

ABSTRACT

(1) Background: The ability to recognize identities is an essential component of security. Electrocardiogram (ECG) signals have gained popularity for identity recognition because of their universal, unique, stable, and measurable characteristics. To ensure accurate identification of ECG signals, this paper proposes an approach involving mixed feature sampling, sparse representation, and recognition. (2) Methods: This paper introduces a new method of identifying individuals through their ECG signals. This technique combines the extraction of fixed ECG features and specific frequency features to improve accuracy in ECG identity recognition. The approach uses the wavelet transform to extract frequency bands containing personal information features from the ECG signals. These bands are reconstructed, and single R-peak localization determines the ECG window. The signals are segmented and standardized based on the located windows. A sparse dictionary is created using the standardized ECG signals, and the K-SVD (K-Singular Value Decomposition) algorithm is employed to project ECG target signals into a sparse vector-matrix representation. To extract the final representation of the target signals for identification, the sparse coefficient vectors in the signals are maximally pooled. For recognition, the co-dimensional bundle search method is used in this paper. (3) Results: This paper utilizes the publicly available European ST-T database. Specifically, ECG signals from 20, 50, and 70 subjects were selected, each with 30 testing segments. The method proposed in this paper achieved recognition rates of 99.14%, 99.09%, and 99.05%, respectively. (4) Conclusion: The experiments indicate that the method proposed in this paper can accurately capture, represent, and identify ECG signals.
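
A hedged sketch of the feature pipeline follows, with off-the-shelf stand-ins: PyWavelets for band selection, scipy for R-peak detection, and scikit-learn dictionary learning in place of K-SVD; the signal, sampling rate, band choices, and sparsity level are illustrative assumptions.

```python
# Sketch: wavelet band reconstruction -> R-peak windows -> sparse coding ->
# max pooling of sparse coefficient vectors as the identity feature.
import numpy as np
import pywt
from scipy.signal import find_peaks
from sklearn.decomposition import MiniBatchDictionaryLearning

def wavelet_band(ecg, wavelet="db4", level=5, keep=(2, 3)):
    """Reconstruct only the detail levels assumed to carry identity information."""
    coeffs = pywt.wavedec(ecg, wavelet, level=level)
    for i in range(len(coeffs)):
        if i not in keep:
            coeffs[i] = np.zeros_like(coeffs[i])
    return pywt.waverec(coeffs, wavelet)

def beat_windows(ecg, fs=250, half=90):
    """Segment fixed windows around detected R-peaks and z-score them."""
    peaks, _ = find_peaks(ecg, distance=fs // 2, height=np.std(ecg))
    wins = np.array([ecg[p - half:p + half] for p in peaks
                     if half <= p < len(ecg) - half])
    return (wins - wins.mean(axis=1, keepdims=True)) / wins.std(axis=1, keepdims=True)

rng = np.random.default_rng(0)
ecg = wavelet_band(rng.standard_normal(250 * 60))   # synthetic one-minute recording
windows = beat_windows(ecg)
dico = MiniBatchDictionaryLearning(n_components=64, transform_algorithm="omp",
                                   transform_n_nonzero_coefs=8, random_state=0)
codes = dico.fit(windows).transform(windows)
feature = np.abs(codes).max(axis=0)   # max pooling of sparse coefficient vectors
```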


Subject(s)
Biometric Identification; Humans; Biometric Identification/methods; Algorithms; Electrocardiography/methods; Wavelet Analysis; Databases, Factual
6.
Sensors (Basel); 23(3), 2023 Jan 28.
Article in English | MEDLINE | ID: mdl-36772480

ABSTRACT

Chronic obstructive pulmonary disease (COPD) involves a serious decline in human lung function and has emerged, after cancer, as one of the most concerning health conditions around the world over the last two decades. The early diagnosis of COPD, particularly of lung function degradation, together with monitoring the condition by physicians and predicting the likelihood of exacerbation events in individual patients, remains an important challenge to overcome. The requirements for achieving scalable deployments of data-driven methods using artificial intelligence to meet this challenge in modern COPD healthcare have become of paramount importance. In this study, we established the experimental foundations for acquiring, and indeed generating, biomedical observation data for high-performance signal analysis and machine learning that will lead us to the intelligent diagnosis and monitoring of COPD conditions for individual patients. Further, we investigated multi-resolution analysis and compression of lung audio signals, and performed machine classification under two distinct experiments, involving (1) "Healthy" or "COPD" and (2) "Healthy", "COPD", or "Pneumonia" classes. Signal reconstruction from the extracted features used for machine learning and testing was also performed to verify the integrity of the original audio recordings. The selected machine learning-based classifiers showed high levels of accuracy across diverse metrics. Our study shows promising levels of accuracy in classifying Healthy and COPD, as well as Healthy, COPD, and Pneumonia conditions. This work will be extended to new experiments using multi-modal sensing hardware and data fusion techniques for the development of the next generation of diagnosis systems for future COPD healthcare.
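
The sketch below illustrates one plausible reading of the multi-resolution pipeline: wavelet sub-band energies as lung-sound features feeding a standard classifier; the signals, labels, wavelet choice, and classifier are placeholders, not the study's exact setup.

```python
# Hedged sketch: multi-resolution wavelet energies as lung-sound features,
# classified with a standard off-the-shelf model on synthetic data.
import numpy as np
import pywt
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def wavelet_energy_features(signal, wavelet="db6", level=6):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Relative log-energy per sub-band approximates a multi-resolution spectrum.
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    return np.log(energies / energies.sum() + 1e-12)

rng = np.random.default_rng(0)
X = np.array([wavelet_energy_features(rng.standard_normal(8000)) for _ in range(120)])
y = rng.integers(0, 3, size=120)   # 0=Healthy, 1=COPD, 2=Pneumonia (synthetic labels)
print(cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5).mean())
```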


Subject(s)
Artificial Intelligence; Pulmonary Disease, Chronic Obstructive; Humans; Lung; Pulmonary Disease, Chronic Obstructive/diagnosis; Machine Learning; Probability
7.
Sensors (Basel); 23(7), 2023 Mar 29.
Article in English | MEDLINE | ID: mdl-37050627

ABSTRACT

In recent decades, falls have posed multiple critical health issues, especially for the growing older population. Recent research has shown that a wrist-based fall detection system offers an accessory-like, comfortable solution for Internet of Things (IoT)-based monitoring. Nevertheless, an autonomous anywhere-anytime device raises energy consumption concerns. Hence, this paper proposes a novel energy-aware IoT-based architecture for Message Queuing Telemetry Transport (MQTT)-based gateway-less monitoring for wearable fall detection. Accordingly, a hybrid double prediction technique based on Supervised Dictionary Learning was implemented to reinforce the detection efficiency of our previous works. A controlled dataset was collected for training (offline), while a real set of measurements of the proposed system was used for validation (online). It achieved noteworthy offline and online detection performance of 99.8% and 91%, respectively, surpassing most related works that use only an accelerometer. In the worst case, the system extended battery life by a minimum of 27.32 working hours, significantly longer than other research prototypes. The approach presented here proves promising for real applications that require a reliable, long-term, anywhere-anytime solution.
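
To make the gateway-less transport concrete, here is a minimal sketch of a wearable publishing a detection event straight to an MQTT broker, assuming the paho-mqtt client library; the broker address, topic, and payload schema are hypothetical.

```python
# Sketch of the gateway-less alert path using paho-mqtt; all endpoint names
# and the message format are illustrative assumptions.
import json
import paho.mqtt.client as mqtt

def publish_fall_alert(accel_features, prob,
                       broker="broker.example.org", topic="wearable/falls"):
    """Publish a detection event directly from the wearable to an MQTT broker."""
    client = mqtt.Client()  # paho-mqtt 1.x constructor; 2.x also takes a callback API version
    client.connect(broker, 1883, keepalive=60)
    payload = json.dumps({"event": "fall", "probability": prob,
                          "features": list(accel_features)})
    client.publish(topic, payload, qos=1)   # qos=1: at-least-once delivery
    client.disconnect()

# Example call with hypothetical accelerometer features:
# publish_fall_alert([0.2, 9.6, 1.1], prob=0.93)
```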

8.
J Xray Sci Technol; 31(3): 593-609, 2023.
Article in English | MEDLINE | ID: mdl-36970929

ABSTRACT

BACKGROUND: Low-dose computed tomography (LDCT) reduces radiation damage to patients; however, the reconstructed images contain severe noise, which affects doctors' diagnosis of disease. Convolutional dictionary learning has the advantage of the shift-invariant property. The deep convolutional dictionary learning algorithm (DCDicL) combines deep learning and convolutional dictionary learning and strongly suppresses Gaussian noise. However, applying DCDicL to LDCT images does not yield satisfactory results. OBJECTIVE: To address this challenge, this study proposes and tests an improved deep convolutional dictionary learning algorithm for LDCT image processing and denoising. METHODS: First, we modify the input network of the DCDicL algorithm so that it no longer requires a noise intensity parameter as input. Second, we use DenseNet121 to replace the shallow convolutional network for learning the prior on the convolutional dictionary, which yields a more accurate convolutional dictionary. Last, we add MSSIM to the loss function to enhance the detail-retention ability of the model. RESULTS: Experimental results on the Mayo dataset show that the proposed model obtained an average PSNR of 35.2975 dB, which is 0.2954-1.0573 dB higher than mainstream LDCT algorithms, indicating excellent denoising performance. CONCLUSION: The study demonstrates that the proposed algorithm can effectively improve the quality of LDCT images acquired in clinical practice.
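
A hedged sketch of such a compound objective is shown below, reading the MSSIM term as a structural-similarity loss and using the third-party pytorch_msssim package as a stand-in; the blend weight, tensors, and network output are placeholders.

```python
# Sketch of a compound MSE + structural-similarity loss for a denoiser,
# assuming the pytorch_msssim package; weights are illustrative.
import torch
import torch.nn as nn
from pytorch_msssim import SSIM

mse = nn.MSELoss()
ssim = SSIM(data_range=1.0, channel=1)

def compound_loss(denoised, target, alpha=0.5):
    # Blend pixel fidelity (MSE) with structural fidelity (1 - SSIM).
    return (1 - alpha) * mse(denoised, target) + alpha * (1 - ssim(denoised, target))

denoised = torch.rand(4, 1, 128, 128, requires_grad=True)  # stand-in network output
target = torch.rand(4, 1, 128, 128)                        # stand-in NDCT images
compound_loss(denoised, target).backward()
```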


Subject(s)
Image Processing, Computer-Assisted; Tomography, X-Ray Computed; Humans; Signal-To-Noise Ratio; Image Processing, Computer-Assisted/methods; Tomography, X-Ray Computed/methods; Algorithms; Neural Networks, Computer
9.
J Xray Sci Technol; 31(6): 1165-1187, 2023.
Article in English | MEDLINE | ID: mdl-37694333

ABSTRACT

BACKGROUND: Recently, one promising approach to suppress noise/artifacts in low-dose CT (LDCT) images is the CNN-based approach, which learns the mapping function from LDCT to normal-dose CT (NDCT). However, most CNN-based methods are purely data-driven, thus lacking sufficient interpretability and often losing details. OBJECTIVE: To solve this problem, we propose a deep convolutional dictionary learning method for LDCT denoising, in which a novel convolutional dictionary learning model with adaptive window (CDL-AW) is designed, and a corresponding enhancement-based convolutional dictionary learning network (ECDAW-Net) is constructed to unfold the CDL-AW model iteratively using the proximal gradient descent technique. METHODS: In detail, the adaptive window-constrained convolutional dictionary atom is proposed to alleviate spectrum leakage caused by data truncation during convolution. Furthermore, in the ECDAW-Net, a multi-scale edge extraction module consisting of LoG and Sobel convolution layers is proposed in the unfolding iteration to supplement lost textures and details. Additionally, to further improve detail retention, the ECDAW-Net is trained with a compound loss function combining a pixel-level MSE loss and a proposed patch-level loss, which helps retain richer structural information. RESULTS: Applying ECDAW-Net to the Mayo dataset, we obtained the highest peak signal-to-noise ratio (33.94) and sub-optimal structural similarity (0.92). CONCLUSIONS: Compared with some state-of-the-art methods, the interpretable ECDAW-Net performs well in suppressing noise/artifacts and preserving tissue textures.
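
The multi-scale edge-extraction idea can be previewed with fixed operators, as in the sketch below, where scipy's Sobel and Laplacian-of-Gaussian filters stand in for the network's LoG and Sobel convolution layers; the scales and input are assumptions.

```python
# Sketch of multi-scale edge extraction with scipy operators standing in for
# the network's fixed LoG and Sobel convolution layers.
import numpy as np
from scipy import ndimage

def multiscale_edges(img, sigmas=(1.0, 2.0, 4.0)):
    """Stack Sobel gradient magnitude and Laplacian-of-Gaussian responses."""
    gx, gy = ndimage.sobel(img, axis=0), ndimage.sobel(img, axis=1)
    channels = [np.hypot(gx, gy)]
    channels += [ndimage.gaussian_laplace(img, sigma=s) for s in sigmas]
    return np.stack(channels)   # edge/texture maps used to supplement lost details

edges = multiscale_edges(np.random.default_rng(0).standard_normal((512, 512)))
```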


Subject(s)
Tomography, X-Ray Computed; Signal-To-Noise Ratio
10.
Sheng Wu Yi Xue Gong Cheng Xue Za Zhi; 40(1): 110-117, 2023 Feb 25.
Article in Chinese | MEDLINE | ID: mdl-36854555

ABSTRACT

The extraction of neuroimaging features of migraine patients and the design of identification models are of great significance for the auxiliary diagnosis of related diseases. In contrast to commonly used image features, this study directly uses time-series signals to characterize the functional state of the brain in migraine patients and healthy controls, which effectively utilizes temporal information and reduces the computational effort of classification model training. First, Group Independent Component Analysis and Dictionary Learning were used to segment different brain areas for the small-sample groups, and the regional average time-series signals were then extracted. Next, the extracted time series were divided equally into multiple subseries to expand the model input sample. Finally, the time series were modeled using a bidirectional long short-term memory network to learn the forward and backward temporal information within each time series and characterize periodic brain-state changes, improving the diagnostic accuracy of migraine. The results showed that the classification accuracy for migraine patients versus healthy controls was 96.94%, the area under the curve was 0.98, and the computation time was relatively short. The experiments indicate that the method has strong applicability, and that the combination of time-series feature extraction and a bidirectional long short-term memory network model is well suited to the classification and diagnosis of migraine. This work provides a new idea for lightweight diagnostic models based on small-sample neuroimaging data and contributes to the exploration of the neural discrimination mechanism of related diseases.
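
A hedged sketch of the classifier stage follows: a bidirectional LSTM over regional average time-series subsequences; the window length, region count, and hyperparameters are illustrative, not the paper's values.

```python
# Sketch: bidirectional LSTM over regional time-series windows (synthetic data).
import numpy as np
import tensorflow as tf

n_windows, win_len, n_regions = 256, 50, 30   # subseries expand the sample count

model = tf.keras.Sequential([
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # migraine vs. healthy control
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC()])

rng = np.random.default_rng(0)
X = rng.standard_normal((n_windows, win_len, n_regions)).astype("float32")
y = rng.integers(0, 2, size=n_windows)
model.fit(X, y, epochs=2, batch_size=32, verbose=0)  # builds on first batch shape
```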


Subject(s)
Migraine Disorders; Humans; Time Factors; Migraine Disorders/diagnostic imaging; Magnetic Resonance Imaging; Brain/diagnostic imaging; Neuroimaging
11.
BMC Genomics; 23(1): 851, 2022 Dec 23.
Article in English | MEDLINE | ID: mdl-36564711

ABSTRACT

In the analysis of single-cell RNA-sequencing (scRNA-seq) data, effectively and accurately identifying cell clusters from a large number of cell mixtures remains a challenge. The low-rank representation (LRR) method has achieved excellent results in subspace clustering, but in previous studies most LRR-based methods chose the original data matrix as the dictionary. In addition, LRR-based methods usually rely on a spectral clustering algorithm to complete cell clustering, creating a matching problem between the spectral clustering method and the affinity matrix that makes it difficult to ensure optimal clustering. Considering these two points, we propose the DLNLRR method to better identify cell types. First, DLNLRR can update the dictionary during the optimization process instead of using a predefined fixed dictionary, so it realizes dictionary learning and LRR learning at the same time. Second, DLNLRR can realize subspace clustering without relying on a spectral clustering algorithm; that is, we can perform clustering directly on the low-rank matrix. Finally, we carry out extensive experiments on real single-cell datasets, and the results show that DLNLRR is superior to other scRNA-seq data analysis algorithms in cell type identification.
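
For orientation, the sketch below solves a simplified LRR subproblem (fixed dictionary, nuclear-norm penalty) by proximal gradient with singular-value thresholding; DLNLRR's dictionary update and direct clustering step are not reproduced, and all data are synthetic.

```python
# Simplified LRR sketch: proximal gradient with singular-value thresholding.
import numpy as np

def svt(M, tau):
    """Singular-value thresholding: the proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def lrr(X, D, lam=0.5, n_iter=200):
    """min_Z 0.5*||X - D Z||_F^2 + lam*||Z||_*  via proximal gradient."""
    Z = np.zeros((D.shape[1], X.shape[1]))
    step = 1.0 / np.linalg.norm(D, 2) ** 2        # 1/L for the smooth term
    for _ in range(n_iter):
        grad = D.T @ (D @ Z - X)
        Z = svt(Z - step * grad, step * lam)
    return Z                                       # cells could be clustered from Z

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 60))                 # genes x cells (stand-in)
Z = lrr(X, D=X.copy())                             # DLNLRR would also update D
```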


Subject(s)
Algorithms; Learning; Cluster Analysis; Data Analysis; RNA/genetics; Single-Cell Analysis; Sequence Analysis, RNA
12.
BMC Genomics; 23(1): 56, 2022 Jan 15.
Article in English | MEDLINE | ID: mdl-35033004

ABSTRACT

BACKGROUND: Pseudotime estimation from dynamic single-cell transcriptomic data enables characterisation and understanding of the underlying processes, for example developmental processes. Various pseudotime estimation methods have been proposed in recent years. Typically, these methods start with a dimension reduction step because the low-dimensional representation is usually easier to analyse. Approaches such as PCA, ICA or t-SNE are among the most widely used methods for dimension reduction in pseudotime estimation. However, these methods usually make assumptions on the derived dimensions, which can result in important dataset properties being missed. In this paper, we suggest a new dictionary learning based approach, dynDLT, for dimension reduction and pseudotime estimation of dynamic transcriptomic data. Dictionary learning is a matrix factorisation approach that does not restrict the dependence of the derived dimensions. To evaluate the performance, we conduct a large simulation study and analyse 8 real-world datasets. RESULTS: The simulation studies reveal that, firstly, dynDLT preserves the simulated patterns in low dimension, and the pseudotimes can be derived from the low-dimensional representation. Secondly, the results show that dynDLT is suitable for the detection of genes exhibiting the simulated dynamic patterns, thereby facilitating the interpretation of the compressed representation and thus of the dynamic processes. For the real-world data analysis, we select datasets with samples taken at different time points throughout an experiment. The pseudotimes found by dynDLT correlate highly with the experimental times. We compare the results to other approaches used in pseudotime estimation, or those method-wise closely connected to dictionary learning: ICA, NMF, PCA, t-SNE, and UMAP. DynDLT has the best overall performance for the simulated and real-world datasets. CONCLUSIONS: We introduce dynDLT, a method that is suitable for pseudotime estimation. Its main advantages are: (1) it is a model-free approach, meaning that it does not restrict the dependence of the derived dimensions; (2) genes that are relevant in the detected dynamic processes can be identified from the dictionary matrix; (3) by restricting the dictionary entries to positive values, the dictionary atoms are highly interpretable.
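
The sketch below conveys the general idea with scikit-learn in place of dynDLT: dictionary learning with positive dictionary entries as the dimension reduction, followed by ordering cells along a dominant atom; the simulated expression pattern and atom-selection rule are assumptions.

```python
# Sketch: dictionary-learning dimension reduction, then a pseudotime ordering
# read off the dominant dictionary code (sklearn stands in for dynDLT).
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
t = np.sort(rng.random(300))                              # hidden experimental time
expr = np.outer(t, rng.random(50)) + 0.05 * rng.standard_normal((300, 50))

dl = DictionaryLearning(n_components=5, positive_dict=True, random_state=0)
codes = dl.fit_transform(expr)                            # low-dimensional representation
dominant = np.argmax(np.abs(codes).sum(axis=0))           # most active dictionary atom
pseudotime = codes[:, dominant].argsort().argsort()       # rank cells along that atom
print(abs(np.corrcoef(pseudotime, t)[0, 1]))              # should approach 1 here
# Genes driving the dynamic pattern can be read off dl.components_[dominant].
```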


Subject(s)
Algorithms; Transcriptome; Computer Simulation
13.
Magn Reson Med; 88(3): 1068-1080, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35481596

ABSTRACT

PURPOSE: To develop a B1-corrected single flip-angle continuous acquisition strategy with free-breathing and cardiac self-gating for spiral T1 mapping, and to compare it to a previous dual flip-angle technique. METHODS: Data were continuously acquired using a spiral-out trajectory, rotated by the golden angle in time. During the first 2 s, off-resonance Fermi RF pulses were applied to generate a Bloch-Siegert shift B1 map, and the subsequent data were acquired with an inversion RF pulse applied every 4 s to create a T1* map. The final T1 map was generated from the B1 and T1* maps by using a look-up table that accounted for slice profile effects, yielding more accurate T1 values. T1 values were compared to those from inversion recovery (IR) spin echo (phantom only), MOLLI, SAturation-recovery single-SHot Acquisition (SASHA), and the previously proposed dual flip-angle technique. The strategy was evaluated in a phantom and in 25 human subjects. RESULTS: The proposed technique showed good agreement with IR spin-echo results in the phantom experiment. For in vivo studies, the proposed technique and the previously proposed dual flip-angle method were more similar to SASHA results than to MOLLI results. CONCLUSIONS: B1-corrected single flip-angle T1 mapping successfully acquired B1 and T1 maps in a free-breathing, continuous-IR spiral acquisition, providing a method with improved accuracy for measuring T1 using a continuous Look-Locker acquisition, as compared to the previously proposed dual excitation flip-angle technique.
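
As background to the T1*-to-T1 step (not the paper's pulse sequence or look-up table), the sketch below fits the three-parameter Look-Locker inversion-recovery model and applies the standard apparent-T1 correction; all signal values and timings are synthetic.

```python
# Illustrative sketch: fit apparent T1* from a Look-Locker inversion-recovery
# curve, then apply the standard correction T1 = T1* (B/A - 1). Units: seconds.
import numpy as np
from scipy.optimize import curve_fit

def ll_model(t, A, B, T1_star):
    return A - B * np.exp(-t / T1_star)

t = np.linspace(0.05, 4.0, 40)                     # inversion times within one 4 s cycle
rng = np.random.default_rng(0)
sig = ll_model(t, 1.0, 1.9, 0.9) + 0.01 * rng.standard_normal(t.size)

(A, B, T1_star), _ = curve_fit(ll_model, t, sig, p0=(1.0, 2.0, 1.0))
T1 = T1_star * (B / A - 1.0)                       # Look-Locker correction
# The paper additionally corrects with a measured B1 map and a slice-profile
# look-up table, which this sketch omits.
```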


Subject(s)
Magnetic Resonance Imaging; Respiration; Heart; Humans; Magnetic Resonance Imaging/methods; Phantoms, Imaging; Reproducibility of Results
14.
NMR Biomed; 35(2): e4628, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34642974

ABSTRACT

Neurite orientation dispersion and density imaging (NODDI) enables the assessment of intracellular, extracellular, and free water signals from multi-shell diffusion MRI data. It is an insightful approach to characterize brain tissue microstructure. Single-shell reconstruction of NODDI parameters has been discouraged in previous studies owing to fitting failures, especially for the neurite density index (NDI). Here, we investigated the possibility of creating robust NODDI parameter maps with single-shell data, using the isotropic volume fraction (fISO) as a prior. Prior estimation was made independent of the NODDI model constraint using a dictionary learning approach. First, we used a stochastic sparse dictionary-based network (DictNet), trained with data obtained from in vivo and simulated diffusion MRI data, to predict fISO. In single-shell cases, the mean diffusivity and the raw T2 signal with no diffusion weighting (S0) were incorporated in the dictionary for the fISO estimation. Then, the NODDI framework was used with the known fISO to estimate the NDI and orientation dispersion index (ODI). The fISO estimated using our model was compared with other fISO estimators in simulation. Further, using both synthetic data simulation and human data collected on a 3 T scanner (both the high-quality HCP and a clinical dataset), we compared the performance of our dictionary-based learning prior NODDI (DLpN) with the original NODDI for both single-shell and multi-shell data. Our results suggest that DLpN-derived NDI and ODI parameters for single-shell protocols are comparable with original multi-shell NODDI, and the protocol with b = 2000 s/mm2 performs best (error ~5% in white and gray matter). This may allow NODDI evaluation of studies on single-shell data by multi-shell scanning of two subjects for DictNet fISO training.


Subject(s)
Diffusion Magnetic Resonance Imaging/methods; Neurites; Cell Count; Computer Simulation; Humans
15.
J Nanobiotechnology; 20(1): 292, 2022 Jun 21.
Article in English | MEDLINE | ID: mdl-35729633

ABSTRACT

BACKGROUND: Increasing evidence suggests that platelets play a central role in cancer progression, with altered storage and selective release from platelets of specific tumor-promoting proteins as a major mechanism. Fluorescence-based super-resolution microscopy (SRM) can resolve the nanoscale spatial distribution patterns of such proteins and how they are altered in platelets upon different activations. Analysing such alterations by SRM thus represents a promising, minimally invasive strategy for platelet-based diagnosis and monitoring of cancer progression. However, broader applicability beyond specialized research labs will require objective, more automated imaging procedures. Moreover, statistically significant analyses require many SRM platelet images of several different platelet proteins, and proteins whose distributions are altered upon cancer progression additionally need to be identified. RESULTS: A fast, streamlined and objective procedure for SRM platelet image acquisition, analysis and classification was developed to overcome these limitations. By stimulated emission depletion SRM, we imaged the nanoscale patterns of six different platelet proteins: four SNAREs (soluble N-ethylmaleimide factor attachment protein receptors) mediating protein secretion by membrane fusion of storage granules, and two angiogenesis-regulating proteins, representing cargo proteins within these granules coupled to tumor progression. By a streamlined procedure, we recorded about 100 SRM images of platelets for each of these six proteins and for five categories of platelets: incubated with cancer cells (MCF-7, MDA-MB-231, EFO-21), non-cancer cells (MCF-10A), or no cells at all. From these images, structural similarity and protein cluster parameters were determined, and probability functions of these parameters were generated for the different platelet categories. By comparing these probability functions between the categories, we could identify nanoscale alterations in the protein distributions, allowing us to classify the platelets into their correct categories: co-incubated with cancer cells, non-cancer cells, or no cells at all. CONCLUSIONS: The fast, streamlined and objective acquisition and analysis procedure established in this work confirms the role of SNAREs and angiogenesis-regulating proteins in platelet-mediated cancer progression, provides additional fundamental knowledge on the interplay between tumor cells and platelets, and represents an important step towards using tumor-platelet interactions and the redistribution of nanoscale protein patterns in platelets as a basis for cancer diagnostics.


Subject(s)
Neoplasms; SNARE Proteins; Blood Platelets/metabolism; Membrane Fusion; Microscopy, Fluorescence/methods; Neoplasms/metabolism; SNARE Proteins/metabolism
16.
Sensors (Basel); 22(6), 2022 Mar 21.
Article in English | MEDLINE | ID: mdl-35336570

ABSTRACT

Brain shift is an important obstacle to the application of image guidance during neurosurgical interventions. There has been growing interest in intra-operative imaging to update image-guided surgery systems. However, due to the innate limitations of current imaging modalities, accurate brain shift compensation continues to be a challenging task. In this study, the application of intra-operative photoacoustic imaging and registration of the intra-operative photoacoustic images with pre-operative MR images are proposed to compensate for brain deformation. Finding a satisfactory registration method is challenging due to the unpredictable nature of brain deformation. In this study, the co-sparse analysis model is proposed for photoacoustic-MR image registration, which can capture the interdependency of the two modalities. The proposed algorithm works by minimizing the mapping transform via a pair of analysis operators learned by the alternating direction method of multipliers. The method was evaluated using an experimental phantom and ex vivo data obtained from a mouse brain. The results on the phantom data show about 63% improvement in target registration error in comparison with the commonly used normalized mutual information method. The results demonstrate that intra-operative photoacoustic images could become a promising tool when brain shift invalidates the pre-operative MRI.


Subject(s)
Brain; Magnetic Resonance Imaging; Algorithms; Animals; Brain/diagnostic imaging; Brain/surgery; Magnetic Resonance Imaging/methods; Mice; Neurosurgical Procedures/methods; Phantoms, Imaging
17.
Sensors (Basel); 22(9), 2022 Apr 21.
Article in English | MEDLINE | ID: mdl-35590885

ABSTRACT

The comprehensive production of detailed bathymetric maps is important for disaster prevention, resource exploration, safe navigation, marine salvage, and the monitoring of marine organisms. However, owing to observation difficulties, data on the world's seabed topography remain scarce. It is therefore essential to develop methods that make effective use of the limited data. In this study, based on dictionary learning and sparse coding, we modified the super-resolution technique and applied it to seafloor topographical maps. Improving on the conventional method, before dictionary learning we performed pre-processing to separate the teacher image into a low-frequency component that has a general structure and a high-frequency component that captures the detailed topographical features, and we learned the topographical features by training the dictionary. As a result, the root-mean-square error (RMSE) was reduced by 30% compared with bicubic interpolation, and accuracy was improved, especially in the rugged parts of the terrain. The proposed method, which learns a dictionary to capture topographical features and reconstructs them using that dictionary, produces super-resolution with high interpretability.
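
The pre-processing idea can be sketched as below: split a training grid into a smooth low-frequency part and a high-frequency residual, then learn the dictionary on high-frequency patches only; scikit-learn stands in for the paper's dictionary learner, and the grid, patch size, and filter scale are assumptions.

```python
# Sketch: low/high-frequency separation, then dictionary learning on the
# high-frequency patches that carry the detailed topographic features.
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

rng = np.random.default_rng(0)
depth = gaussian_filter(rng.standard_normal((256, 256)), 3)  # stand-in seafloor grid

low = gaussian_filter(depth, sigma=8)        # general structure
high = depth - low                           # detailed topographic features
patches = extract_patches_2d(high, (8, 8), max_patches=2000, random_state=0)
patches = patches.reshape(len(patches), -1)
patches -= patches.mean(axis=1, keepdims=True)

dico = MiniBatchDictionaryLearning(n_components=128, random_state=0)
dico.fit(patches)                            # atoms capture ruggedness/texture
# Super-resolution would then sparse-code low-resolution patches, reconstruct
# the high-frequency layer from the learned atoms, and re-add `low`.
```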


Subject(s)
Algorithms; Learning; Oceans and Seas
18.
Sensors (Basel); 23(1), 2022 Dec 25.
Article in English | MEDLINE | ID: mdl-36616804

ABSTRACT

A reconstruction algorithm based on multi-dictionary learning (MDL) is proposed to improve the reconstruction quality of acoustic tomography for complex temperature fields. Its aim is to mitigate the under-determination of the inverse problem through the sparse representation of the sound slowness signal (i.e., the reciprocal of sound velocity). In the MDL algorithm, the K-SVD dictionary learning algorithm is used to construct corresponding sparse dictionaries for the sound slowness signals of different types of temperature fields; the KNN peak-type classifier is employed for the joint use of multiple dictionaries; the orthogonal matching pursuit (OMP) algorithm is used to obtain the sparse representation of the sound slowness signal in the sparse domain; and the temperature distribution is then obtained from the relationship between sound slowness and temperature. Simulation and actual temperature distribution reconstruction experiments show that the MDL algorithm has smaller reconstruction errors and provides more accurate information about the temperature field, compared with the compressed sensing and improved orthogonal matching pursuit (CS-IMOMP) algorithm (which uses a DFT dictionary), the least squares algorithm (LSA), and the simultaneous iterative reconstruction technique (SIRT).
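
A much-simplified sketch of the multi-dictionary idea follows: one learned dictionary per field type, a classifier to select the dictionary (here a nearest-centroid stand-in for the KNN peak-type classifier), then OMP sparse coding of the slowness signal; the field types and sizes are synthetic.

```python
# Simplified multi-dictionary sketch: per-class dictionaries + OMP coding.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
fields = {  # synthetic slowness maps for two field types, flattened
    "single_peak": rng.standard_normal((300, 64)) + 2.0,
    "double_peak": rng.standard_normal((300, 64)) - 2.0,
}
dicts = {k: MiniBatchDictionaryLearning(n_components=32, random_state=0).fit(v)
         for k, v in fields.items()}
centroids = {k: v.mean(axis=0) for k, v in fields.items()}

def reconstruct_slowness(measured):
    # Pick the dictionary whose field type best matches the measurement.
    kind = min(centroids, key=lambda k: np.linalg.norm(measured - centroids[k]))
    D = dicts[kind].components_.T                     # (n_pixels, n_atoms)
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=8, fit_intercept=False)
    omp.fit(D, measured)
    return D @ omp.coef_  # temperature then follows from the slowness-temperature relation

slowness = reconstruct_slowness(fields["single_peak"][0])
```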


Subject(s)
Algorithms; Tomography, X-Ray Computed; Temperature; Computer Simulation; Acoustics
19.
Sensors (Basel); 22(13), 2022 Jun 30.
Article in English | MEDLINE | ID: mdl-35808451

ABSTRACT

Health monitoring and related technologies are a rapidly growing area of research. To date, the electrocardiogram (ECG) remains a popular measurement tool in the evaluation and diagnosis of heart disease. The number of solutions involving ECG signal monitoring systems is growing exponentially in the literature. In this article, underappreciated Orthogonal Matching Pursuit (OMP) algorithms are used, demonstrating the significant effect of concise representation parameters on improving the performance of the classification process. Cardiovascular disease classification models based on classical machine learning classifiers were defined and investigated. The study was undertaken on the recently published PTB-XL database, whose ECG signals were previously subjected to detailed analysis. Classification was performed for 2-class, 5-class, and 15-class cardiac disease problems. A new method of detecting R-waves and, based on them, determining the location of QRS complexes is presented. Novel methods for aggregating ECG signal fragments containing QRS segments, necessary for tests with classical classifiers, were developed. As a result, it was shown that an ECG signal subjected to R-wave detection, QRS complex extraction, and resampling performs very well in classification using decision trees. The reason can be found in the structuring of the signal produced by these steps. The classification achieved the highest accuracy, 90.4%, in recognition of 2 classes, compared to less than 78% for 5 classes and 71% for 15 classes.
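
The classification stage might look like the sketch below: OMP sparse codes of resampled QRS-centred segments as concise features for a decision tree; the segments, labels, dictionary size, and sparsity level are synthetic assumptions, not the study's configuration.

```python
# Sketch: OMP sparse codes of QRS segments as features for a decision tree.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
segments = rng.standard_normal((600, 120))     # resampled QRS-centred windows
labels = rng.integers(0, 2, size=600)          # 2-class stand-in (e.g., NORM vs. MI)

dico = MiniBatchDictionaryLearning(n_components=40, transform_algorithm="omp",
                                   transform_n_nonzero_coefs=10, random_state=0)
codes = dico.fit(segments).transform(segments)  # concise sparse representation
print(cross_val_score(DecisionTreeClassifier(random_state=0),
                      codes, labels, cv=5).mean())
```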


Subject(s)
Electrocardiography; Signal Processing, Computer-Assisted; Algorithms; Arrhythmias, Cardiac/diagnosis; Electrocardiography/methods; Humans; Machine Learning
20.
J Xray Sci Technol; 30(6): 1085-1097, 2022.
Article in English | MEDLINE | ID: mdl-35938282

ABSTRACT

OBJECTIVE: In order to solve the problem of image quality degradation in CT reconstruction under sparse-angle projection, we propose and test a new sparse-angle CT reconstruction method based on group-based sparse representation. METHODS: In this method, the group-based sparse representation is introduced into the statistical iterative reconstruction framework as a regularization term to construct the objective function. Group-based sparse representation no longer takes a single patch as the minimum unit of sparse representation; instead, it uses Euclidean distance as a similarity measure and divides similar patches into groups that serve as the basic units for sparse representation. This method fully considers the local sparsity and non-local self-similarity of the image. The proposed method is compared with several commonly used CT image reconstruction methods, including FBP, SART, SART-TV and GSR-SART, in experiments carried out on the Shepp-Logan phantom and abdominal and pelvic images. RESULTS: In all three experiments, the visual effect of the proposed method is the best. Under 64 projection angles, the lowest RMSE is 0.004776 and the highest VIF is 0.948724, with FSIM and SSIM both higher than 0.98. Under 50 projection angles, the proposed method still achieves the best image quality. CONCLUSION: Qualitative and quantitative results demonstrate that this new method not only removes strip artifacts, but also effectively preserves image details.
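
For context, the statistical-iterative skeleton that such a regularizer plugs into can be sketched as a plain SART loop, as below; the projection matrix and data are synthetic, and the group-sparse shrinkage step is only indicated in a comment.

```python
# Minimal SART sketch; the paper adds a group-based sparse regularization
# term on top of this iterative skeleton.
import numpy as np

def sart(A, b, n_iter=50, relax=1.0):
    """x_{k+1} = x_k + relax * V^-1 A^T W (b - A x_k), V = column sums, W = 1/row sums."""
    row_sums = A.sum(axis=1)
    col_sums = A.sum(axis=0)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        residual = (b - A @ x) / np.maximum(row_sums, 1e-12)
        x += relax * (A.T @ residual) / np.maximum(col_sums, 1e-12)
        # A group-sparse method would insert a step here: group similar patches
        # of the current image estimate and shrink them jointly before the next pass.
    return x

rng = np.random.default_rng(0)
A = rng.random((200, 100))        # stand-in projection matrix (sparse in practice)
x_true = rng.random(100)
x_rec = sart(A, A @ x_true)
```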


Subject(s)
Algorithms; Image Processing, Computer-Assisted; Sheep; Animals; Image Processing, Computer-Assisted/methods; Phantoms, Imaging; Artifacts; Tomography, X-Ray Computed/methods