Results 1 - 20 of 39
1.
Sensors (Basel) ; 24(5)2024 Feb 29.
Article in English | MEDLINE | ID: mdl-38475130

ABSTRACT

Optical microscopy techniques are among the most widely used methods for biomedical sample characterization. In their most advanced realizations, optical microscopes achieve resolution down to the nanometric scale. These methods rely on fluorescent sample labeling in order to break the diffraction limit. However, the phototoxicity or photobleaching of fluorescent molecules is not always compatible with the investigated samples. To overcome this limitation, quantitative phase imaging techniques have been proposed. Among these, holographic imaging has demonstrated its ability to image living microscopic samples without staining. However, a 3D assessment of samples requires tomographic acquisitions. Tomographic Diffraction Microscopy (TDM) combines holographic acquisitions with tomographic reconstructions. Relying on a 3D synthetic aperture process, TDM allows for 3D quantitative measurements of the complex refractive index of the investigated sample. Since its initial proposition by Emil Wolf in 1969, the concept of TDM has found many applications and has become one of the hot topics in biomedical imaging. This review focuses on recent achievements in TDM development. Current trends and perspectives of the technique are also discussed.

2.
Sensors (Basel) ; 24(14)2024 Jul 10.
Article in English | MEDLINE | ID: mdl-39065868

ABSTRACT

Interpolation estimates unknown values from constrained information using mathematical calculation. In this study, we addressed interpolation from an image-based perspective and extended the use of image inpainting to estimate values at unknown points. When a chemical gas is dispersed in a chemical attack or act of terrorism, the concentration of the gas at each location can be determined using the deployed sensors. By interpolating the concentrations, we can obtain contours of gas concentration. Accurately distinguishing the contours of a contaminated region on a map enables an optimal response that minimizes damage. However, areas with an insufficient number of sensors have less accurate contours than other areas. To achieve more accurate contour data, an image inpainting-based method is proposed that enhances reliability by erasing and reconstructing low-accuracy areas in the contour. Partial convolution is used as the machine learning approach for image inpainting, with a modified loss function for optimization. To train the model, we developed a gas diffusion simulation model and generated a gas concentration contour dataset comprising 100,000 contour images. The results of the model were compared to those of Kriging interpolation, one of the conventional spatial interpolation methods, demonstrating 13.21% higher accuracy. This suggests that interpolation from an image-based perspective can achieve higher accuracy than numerical interpolation on well-trained data. The proposed method was validated using gas concentration contour data from the verified gas dispersion modeling software Nuclear Biological Chemical Reporting And Modeling System (NBC_RAMS), developed by the Agency for Defense Development, South Korea.
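
A minimal sketch of the mask-aware (partial) convolution step behind the inpainting idea described above, assuming NumPy; `conc`, `mask`, and the averaging kernel are illustrative, and only a single pass is shown (a real network stacks such layers so the erased region fills progressively).

```python
import numpy as np

def partial_conv2d(x, mask, kernel):
    """One partial-convolution pass: convolve only valid (mask==1) pixels
    and re-normalize each window by the number of valid entries."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x * mask, ((ph, ph), (pw, pw)))
    mp = np.pad(mask, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    new_mask = np.zeros_like(mask, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            win_x = xp[i:i + kh, j:j + kw]
            win_m = mp[i:i + kh, j:j + kw]
            valid = win_m.sum()
            if valid > 0:
                # re-normalize by the fraction of valid pixels in the window
                out[i, j] = (kernel * win_x).sum() * (kh * kw) / valid
                new_mask[i, j] = 1.0
    return out, new_mask

# toy concentration contour with an erased (low-accuracy) square
conc = np.random.rand(32, 32)
mask = np.ones_like(conc)
mask[10:20, 10:20] = 0.0                 # region to reconstruct
kernel = np.ones((3, 3)) / 9.0           # simple averaging kernel
filled, mask = partial_conv2d(conc, mask, kernel)
```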

3.
Sensors (Basel) ; 23(5)2023 Mar 02.
Article in English | MEDLINE | ID: mdl-36904939

ABSTRACT

Sensor faults in sensor networks deployed on structures can degrade the structural health monitoring system and make structural condition assessment difficult. Techniques for reconstructing the data of missing sensor channels have been widely adopted to restore a dataset covering all sensor channels. In this study, a recurrent neural network (RNN) model combined with external feedback is proposed to enhance the accuracy and effectiveness of sensor data reconstruction for measuring the dynamic responses of structures. The model utilizes spatial correlation rather than spatiotemporal correlation by explicitly feeding the previously reconstructed time series of the defective sensor channels back into the input dataset. Because of the nature of spatial correlation, the proposed method generates robust and precise results regardless of the hyperparameters set in the RNN model. To verify the performance of the proposed method, simple RNN, long short-term memory, and gated recurrent unit models were trained using acceleration datasets obtained from laboratory-scale three- and six-story shear building frames.

4.
Sensors (Basel) ; 22(19)2022 Sep 21.
Article in English | MEDLINE | ID: mdl-36236251

ABSTRACT

Gradient exchange is a widely used mechanism in modern multi-node machine learning systems (e.g., distributed training, federated learning). The gradients and weights of a model have been presumed safe to share. However, studies have shown that gradient inversion techniques can reconstruct the input images at the pixel level. In this study, we review the research on data leakage through gradient inversion and categorize existing works into three groups: (i) bias attacks, (ii) optimization-based attacks, and (iii) linear equation solver attacks. Based on the characteristics of these algorithms, we propose a privacy attack system, the Single-Sample Reconstruction Attack System (SSRAS). This system can carry out image reconstruction regardless of whether the label can be determined. It extends gradient inversion attacks from a fully connected layer with bias terms to fully connected layers and convolutional neural networks with or without bias terms. We also propose an Improved R-GAP algorithm, which can utilize the DLG algorithm to derive the ground truth. Furthermore, we introduce a Rank Analysis Index (RA-I) to measure whether the user's raw image data can be reconstructed; this rank analysis derives virtual constraints Vi from the weights. Compared with the most representative attack algorithms, the proposed reconstruction attack system recovers a user's private training image with high fidelity and a high attack success rate. Experimental results also show the superiority of the attack system over some other state-of-the-art attack algorithms.
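
A minimal sketch of the optimization-based gradient-inversion family the survey covers (in the spirit of DLG), assuming PyTorch and a toy fully connected model; a dummy image and a soft dummy label are optimized so that their gradients match the observed ones. This illustrates the attack class, not the SSRAS system itself, and all names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy victim model

# gradients "observed" from a single private sample (simulated here)
x_true = torch.rand(1, 1, 28, 28)
y_true = torch.tensor([3])
loss_true = F.cross_entropy(model(x_true), y_true)
true_grads = torch.autograd.grad(loss_true, model.parameters())

# dummy image and soft label, optimized to reproduce the observed gradients
x_dummy = torch.rand_like(x_true, requires_grad=True)
y_dummy = torch.randn(1, 10, requires_grad=True)
opt = torch.optim.LBFGS([x_dummy, y_dummy])

def closure():
    opt.zero_grad()
    pred = model(x_dummy)
    loss_dummy = torch.mean(
        torch.sum(-F.softmax(y_dummy, dim=-1) * F.log_softmax(pred, dim=-1), dim=-1)
    )
    dummy_grads = torch.autograd.grad(loss_dummy, model.parameters(), create_graph=True)
    grad_diff = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
    grad_diff.backward()
    return grad_diff

for _ in range(50):
    opt.step(closure)
# x_dummy now approximates the private input x_true
```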


Subject(s)
Image Processing, Computer-Assisted; Neural Networks, Computer; Algorithms; Image Processing, Computer-Assisted/methods; Machine Learning; Privacy
5.
Sensors (Basel) ; 22(5)2022 Feb 22.
Article in English | MEDLINE | ID: mdl-35270851

ABSTRACT

The axle box in the bogie system of subway trains is a key component connecting the primary damper and the axle. In order to extract deep features and large-scale fault features for rapid diagnosis, a novel fault reconstruction characteristics classification method based on a deep residual network with a multi-scale stacked receptive field is proposed for the rolling bearings of a subway train axle box. Firstly, multi-layer stacked convolutional kernels and methods for inserting them into ultra-deep residual networks are developed. Then, the acquired vibration signals of four fault types are reconstructed with a Gramian angular summation field, yielding trainable large-scale 2D time-series images. Finally, the experimental results show that ResNet-152-MSRF has a network structure of low complexity, fewer trainable parameters than general convolutional neural networks, and no significant increase in network parameters or calculation time after embedding multi-layer stacked convolutional kernels. Moreover, there is a significant improvement in accuracy compared to networks of lower depth, and a slight improvement in accuracy compared to networks without embedded multi-layer stacked convolutional kernels.
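
A minimal sketch of the Gramian angular summation field encoding used to turn 1D vibration signals into 2D images, assuming NumPy; the toy signal and sizes are illustrative.

```python
import numpy as np

def gasf(signal):
    """Gramian angular summation field of a 1D signal."""
    x = np.asarray(signal, dtype=float)
    # rescale to [-1, 1] so the polar-angle encoding is defined
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1
    phi = np.arccos(np.clip(x, -1.0, 1.0))
    # GASF(i, j) = cos(phi_i + phi_j)
    return np.cos(phi[:, None] + phi[None, :])

vib = np.sin(np.linspace(0, 20 * np.pi, 256)) + 0.1 * np.random.randn(256)
image = gasf(vib)   # 256 x 256 image that could be fed to a residual network
```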


Subject(s)
Image Processing, Computer-Assisted; Neural Networks, Computer; Algorithms; Disease Progression; Humans; Image Processing, Computer-Assisted/methods
6.
Sensors (Basel) ; 22(23)2022 Nov 30.
Article in English | MEDLINE | ID: mdl-36502030

ABSTRACT

The block varying pulse repetition frequency (BV-PRF) scheme applied to spaceborne squint sliding-spotlight synthetic aperture radar (SAR) can resolve large range cell migration (RCM) and reduce azimuth signal non-uniformity. However, in the BV-PRF scheme, different raw data blocks have different PRFs, and the raw data in each block are insufficiently sampled. To resolve these two problems, a novel azimuth full-aperture pre-processing method is proposed to handle SAR raw data formed by the BV-PRF scheme. The key points of the approach are the resampling of block data with different PRFs and the continuous splicing of azimuth data. The method mainly consists of four parts: de-skewing, resampling, azimuth continuous combination, and Doppler history recovery. After de-skewing, the raw data with different PRFs can be resampled individually to obtain a uniform azimuth sampling interval, and an appropriate azimuth time shift is introduced to ensure the continuous combination of the azimuth signal. Consequently, the resulting raw data are sufficiently and uniformly sampled in azimuth and can be well handled by classical SAR-focusing algorithms. Simulation results on point targets validate the proposed azimuth pre-processing approach. Furthermore, compared with methods that process SAR data with continuous PRF, the proposed method is more effective.
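
A minimal sketch of the resample-and-splice idea for blocks recorded with different PRFs, assuming NumPy and simple linear interpolation as a stand-in for the band-limited interpolation a real SAR processor would use; de-skewing and Doppler-history recovery are omitted, and all names are illustrative.

```python
import numpy as np

def resample_blocks(blocks, prf_uniform):
    """Resample BV-PRF blocks onto one uniform azimuth grid and splice them.

    blocks : list of (t, s) pairs, where t holds the azimuth time stamps of a
             block (spacing 1/PRF_block) and s the corresponding complex samples.
    """
    t_start = blocks[0][0][0]
    t_end = blocks[-1][0][-1]
    t_uni = np.arange(t_start, t_end, 1.0 / prf_uniform)   # uniform azimuth grid
    t_all = np.concatenate([t for t, _ in blocks])
    s_all = np.concatenate([s for _, s in blocks])
    # interpolate real and imaginary parts separately
    s_uni = np.interp(t_uni, t_all, s_all.real) + 1j * np.interp(t_uni, t_all, s_all.imag)
    return t_uni, s_uni

# toy example: two blocks recorded with different PRFs
t1 = np.arange(0.0, 0.5, 1 / 1200.0)
t2 = np.arange(0.5, 1.0, 1 / 1500.0)
sig = lambda t: np.exp(1j * 2 * np.pi * 40 * t)
t_uni, s_uni = resample_blocks([(t1, sig(t1)), (t2, sig(t2))], prf_uniform=1600.0)
```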


Subject(s)
Algorithms; Radar; Cell Movement; Computer Simulation; Diffusion Magnetic Resonance Imaging
7.
Sensors (Basel) ; 22(3)2022 Jan 23.
Article in English | MEDLINE | ID: mdl-35161604

ABSTRACT

In the application of a bridge weigh-in-motion (WIM) system, the collected data may be temporarily or permanently lost due to sensor failure or system transmission failure. A high data loss rate weakens the distribution characteristics of the collected data and the ability of the monitoring system to assess bridge condition. A deep learning-based model, a generative adversarial network (GAN), is proposed to reconstruct the missing data in bridge WIM systems. The proposed GAN can model the collected dataset and predict the missing data. First, data from stable measurements taken before the data loss are provided, and the generator is trained to extract the retained features from the dataset and to recover the lost data using only the responses of the remaining functional sensors. The discriminator feeds the recognition results back to the generator in order to improve its reconstruction accuracy. In the model training, two loss functions, a generation loss and an adversarial loss, are used, and the general outline and underlying distribution characteristics of the signal are well captured by the model. Finally, by applying engineering data from the Hangzhou Jiangdong Bridge to the GAN model, this paper verifies the effectiveness of the proposed method. The results show that the final reconstructed dataset is in good agreement with the actual dataset in terms of total vehicle weight and axle weight, and that the approximate contour and underlying distribution characteristics of the original dataset are reproduced, suggesting that the proposed method can be used in real-life applications. This research provides a promising method for the data reconstruction of bridge monitoring systems.


Subject(s)
Image Processing, Computer-Assisted; Neural Networks, Computer; Motion (Physics)
8.
Sensors (Basel) ; 21(14)2021 Jul 12.
Article in English | MEDLINE | ID: mdl-34300496

ABSTRACT

UWB is a rapidly developing technology characterised by high positioning accuracy, additional data transferability, and communication security. Low cost and low energy demand make it a system that meets the requirements of smart cities (e.g., smart mobility). The analysis of the positioning accuracy of moving objects requires a ground truth, which for a UWB system should be accurate to the order of millimetres. Generated data can be used to minimize the cost and time needed to perform field tests. However, there are no UWB simulators that account for the variation of operating characteristics with distance in order to reflect the operation of real systems. This article presents a 2D UWB simulator for outdoor open-air areas with obstacles and a method of analysing data from a real UWB system under line-of-sight (LOS) and non-line-of-sight conditions. Data are recorded at predefined outdoor reference distances; by fitting normal distributions to these data and modelling the impact of position changes, the real UWB system can be simulated, making it possible to create virtual measurements for other locations. Furthermore, the presented method of describing the path using time-dependent equations and obstacles using a set of inequalities allows the real test scenario with moving tags to be reconstructed with high accuracy.
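
A minimal sketch of the distance-dependent error model described above, assuming NumPy: normal distributions are fitted to ranging errors recorded at reference distances, their parameters are interpolated to an arbitrary distance, and virtual measurements are drawn from the result. Distances and error magnitudes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# ranging errors (m) recorded at predefined reference distances (m)
ref_dist = np.array([5.0, 10.0, 20.0, 40.0])
ref_errors = {d: rng.normal(0.02 * d / 10, 0.03 + 0.002 * d, size=500) for d in ref_dist}

# fit a normal distribution (mean, std) at each reference distance
mu = np.array([ref_errors[d].mean() for d in ref_dist])
sigma = np.array([ref_errors[d].std(ddof=1) for d in ref_dist])

def virtual_range(true_dist, n=1):
    """Simulate UWB range measurements at an arbitrary distance by
    interpolating the fitted error-model parameters."""
    m = np.interp(true_dist, ref_dist, mu)
    s = np.interp(true_dist, ref_dist, sigma)
    return true_dist + rng.normal(m, s, size=n)

print(virtual_range(15.0, n=5))   # five virtual measurements at 15 m
```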

9.
Sensors (Basel) ; 21(21)2021 Nov 03.
Article in English | MEDLINE | ID: mdl-34770632

ABSTRACT

As civil structures suffer from structural deterioration and natural disasters, their resilience in the face of extreme loadings inevitably drops, which may lead to catastrophic structural failure and pose great threats to public safety. Earthquake-induced extreme loading is one of the major reasons behind the structural failure of buildings. However, many buildings in earthquake-prone areas of China lack safety monitoring, and prevalent structural health monitoring systems are generally very expensive and complicated for extensive applications. To facilitate cost-effective building-safety monitoring, this study investigates a method using cost-effective MEMS accelerometers for buildings' rapid after-earthquake assessment. First, a parameter analysis of a cost-effective MEMS sensor is conducted to confirm its suitability for building-safety monitoring. Second, different from the existing investigations that tend to use a simplified building model or small-scaled frame structure excited by strong motions in laboratories, this study selects an in-service public building located in a typical earthquake-prone area after an analysis of earthquake risk in China. The building is instrumented with the selected cost-effective MEMS accelerometers, characterized by a low noise level and the capability to capture low-frequency small-amplitude dynamic responses. Furthermore, a rapid after-earthquake assessment scheme is proposed, which systematically includes fast missing data reconstruction, displacement response estimation based on an acceleration response integral, and safety assessment based on the maximum displacement and maximum inter-story drift ratio. Finally, the proposed method is successfully applied to a building-safety assessment by using earthquake-induced building responses suffering from missing data. This study is conducive to the extensive engineering application of MEMS-based cost-effective building monitoring and rapid after-earthquake assessment.
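
A minimal sketch of the displacement and inter-story drift steps of such an assessment, assuming NumPy/SciPy, with detrending standing in for the baseline correction a real pipeline would apply; the signals, story height, and sampling rate are illustrative.

```python
import numpy as np
from scipy.signal import detrend
from scipy.integrate import cumulative_trapezoid

fs = 100.0                      # sampling rate of the MEMS accelerometer (Hz)
t = np.arange(0, 60, 1 / fs)
# toy floor accelerations (m/s^2) for two adjacent stories
acc = {1: 0.3 * np.sin(2 * np.pi * 1.2 * t), 2: 0.5 * np.sin(2 * np.pi * 1.2 * t + 0.3)}

def displacement(a, fs):
    """Double integration with detrending after each step (baseline-correction stand-in)."""
    v = detrend(cumulative_trapezoid(a, dx=1 / fs, initial=0.0))
    d = detrend(cumulative_trapezoid(v, dx=1 / fs, initial=0.0))
    return d

disp = {floor: displacement(a, fs) for floor, a in acc.items()}
story_height = 3.3                               # m, assumed
drift_ratio = np.abs(disp[2] - disp[1]) / story_height
print("max displacement (m):", max(np.abs(d).max() for d in disp.values()))
print("max inter-story drift ratio:", drift_ratio.max())
```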


Subject(s)
Earthquakes; Micro-Electrical-Mechanical Systems; Acceleration; Accelerometry; Cost-Benefit Analysis
10.
Sensors (Basel) ; 21(10)2021 May 18.
Article in English | MEDLINE | ID: mdl-34069927

ABSTRACT

The experiments conducted on the wind data provided by the European Centre for Medium-Range Weather Forecasts show that 1% of the data is sufficient to reconstruct the other 99% with an average amplitude error of less than 0.5 m/s and an average angular error of less than 5 degrees. In a nutshell, our method provides an approach in which a portion of the data is used as a proxy to estimate the measurements over the entire domain based on only a few measurements. In our study, we compare several machine learning techniques, namely linear regression, K-nearest neighbours, decision trees, and a neural network, and investigate the impact of sensor placement on the quality of the reconstruction. While the methods provide comparable results, sensor placement plays an important role. Thus, we propose that intelligent location selection for sensor placement can be done using k-means, and show that this indeed leads to an increase in accuracy compared to random sensor placement.
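
A minimal sketch of the reconstruction-from-few-sensors idea with k-means placement, assuming scikit-learn and a synthetic low-rank field; linear regression maps the few sensor readings to the full field. The data, grid, and sensor count are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# synthetic field snapshots: n_samples x n_grid_points, built from a few spatial modes
n_grid, n_samples = 400, 1000
modes = rng.normal(size=(5, n_grid))
field = rng.normal(size=(n_samples, 5)) @ modes + 0.1 * rng.normal(size=(n_samples, n_grid))

# choose ~1% of the grid points as sensors by clustering the grid coordinates
coords = np.stack(np.meshgrid(np.arange(20), np.arange(20)), -1).reshape(-1, 2)
n_sensors = 4
km = KMeans(n_clusters=n_sensors, n_init=10, random_state=0).fit(coords)
sensor_idx = [int(np.argmin(((coords - c) ** 2).sum(1))) for c in km.cluster_centers_]

# learn a map from sensor readings to the full field
train, test = slice(0, 800), slice(800, None)
reg = LinearRegression().fit(field[train][:, sensor_idx], field[train])
pred = reg.predict(field[test][:, sensor_idx])
print("mean abs reconstruction error:", np.abs(pred - field[test]).mean())
```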

11.
Sensors (Basel) ; 22(1)2021 Dec 31.
Article in English | MEDLINE | ID: mdl-35009842

ABSTRACT

Structural health monitoring (SHM) with a dense sensor network and repeated vibration measurements produces large amounts of data that have to be stored. If the sensor network is redundant, data compression is possible by storing the signals of selected Bayesian virtual sensors only, from which the omitted signals can be reconstructed with higher accuracy than the actual measurement. The selection of the virtual sensors for storage is done individually for each measurement based on the reconstruction accuracy. Data compression and reconstruction for SHM is the main novelty of this paper. The stored and reconstructed signals are used for damage detection and localization in the time domain using spatial or spatiotemporal correlation. A whitening transformation is applied to the training data to take environmental and operational influences into account. The first principal component of the residuals is used to localize damage and also to design the extreme value statistics control chart for damage detection. The proposed method was studied with a numerical model of a frame structure with a dense accelerometer or strain sensor network. Only five acceleration or three strain signals out of the total of 59 signals were stored. The stored and reconstructed data outperformed the raw measurement data in damage detection and localization.
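
A minimal sketch of reconstructing omitted channels from the stored ones via spatial correlation, assuming NumPy and a conditional-Gaussian (linear minimum-mean-square-error) estimator learned from training data as an illustrative stand-in for the Bayesian virtual-sensor formulation; whitening and the control chart are omitted, and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# training data: n_samples x n_channels responses from the full (redundant) network
n_channels, n_samples = 12, 5000
mixing = rng.normal(size=(3, n_channels))           # 3 latent modes -> redundancy
train = rng.normal(size=(n_samples, 3)) @ mixing + 0.05 * rng.normal(size=(n_samples, n_channels))

stored = [0, 3, 7]                                  # channels kept in storage
omitted = [c for c in range(n_channels) if c not in stored]

mu = train.mean(axis=0)
cov = np.cov(train, rowvar=False)
# conditional-mean estimator: x_o ≈ mu_o + C_os C_ss^{-1} (x_s - mu_s)
gain = cov[np.ix_(omitted, stored)] @ np.linalg.inv(cov[np.ix_(stored, stored)])

x_new = rng.normal(size=(1, 3)) @ mixing + 0.05 * rng.normal(size=(1, n_channels))
x_rec = x_new.copy()
x_rec[:, omitted] = mu[omitted] + (x_new[:, stored] - mu[stored]) @ gain.T
print("max reconstruction error:", np.abs(x_rec[:, omitted] - x_new[:, omitted]).max())
```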


Subject(s)
Data Compression; Acceleration; Bayes Theorem
12.
Sensors (Basel) ; 20(4)2020 Feb 12.
Article in English | MEDLINE | ID: mdl-32059454

ABSTRACT

Data gathering is an essential concern in Wireless Sensor Networks (WSNs). This paper proposes an efficient data gathering method for clustered WSNs based on sparse sampling to reduce energy consumption and prolong the network lifetime. For the data gathering scheme, we propose a method that collects sparsely sampled data in each time slot while a fixed percentage of nodes remain in sleep mode. For data reconstruction, a subspace approach is proposed that enforces an explicit low-rank constraint when reconstructing the data from the sparse samples. The subspace representing the spatial distribution of the WSN data can be estimated from previously reconstructed data. Incorporating a total variation constraint, the proposed method reconstructs the current time slot's data efficiently. The experimental results indicate that the proposed method can reduce energy consumption and prolong the network lifetime with satisfactory recovery accuracy.
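
A minimal sketch of the subspace reconstruction step, assuming NumPy: a low-rank basis estimated (here via SVD) from previously reconstructed slots recovers the current slot from the few awake nodes by least squares on the observed entries. The total variation constraint is omitted and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

n_nodes, n_slots, rank = 100, 200, 4
U_true = rng.normal(size=(n_nodes, rank))
history = U_true @ rng.normal(size=(rank, n_slots)) + 0.05 * rng.normal(size=(n_nodes, n_slots))

# estimate the spatial subspace from previously reconstructed slots
U, _, _ = np.linalg.svd(history, full_matrices=False)
basis = U[:, :rank]

# current slot: only 20% of the nodes are awake and report a sample
x_true = U_true @ rng.normal(size=rank)
observed = rng.choice(n_nodes, size=n_nodes // 5, replace=False)
coeff, *_ = np.linalg.lstsq(basis[observed], x_true[observed], rcond=None)
x_rec = basis @ coeff                    # reconstruction for all nodes
print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```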

13.
Adv Exp Med Biol ; 1156: 67-84, 2019.
Article in English | MEDLINE | ID: mdl-31338778

ABSTRACT

In our chapter we describe how to reconstruct three-dimensional anatomy from medical image data and how to build statistical 3D shape models out of many such reconstructions, yielding a new kind of anatomy that allows not only quantitative analysis of anatomical variation but also visual exploration and educational visualization. Future digital anatomy atlases will show not only a static (average) anatomy but also its normal or pathological variation in three or even four dimensions, hence illustrating growth and/or disease progression. Statistical Shape Models (SSMs) are geometric models that describe a collection of semantically similar objects in a very compact way. SSMs represent the average shape of many three-dimensional objects as well as their variation in shape. The creation of SSMs requires a correspondence mapping, which can be achieved, e.g., by parameterization with a respective sampling. If a corresponding parameterization over all shapes can be established, variation between individual shape characteristics can be investigated mathematically. We will explain what Statistical Shape Models are and how they are constructed. Extensions of Statistical Shape Models will be motivated for articulated, coupled structures. In addition to shape, the appearance of objects will also be integrated into the concept. Appearance is a visual feature independent of shape that depends on observers or imaging techniques. Typical appearances are, for instance, the color and intensity of the visual surface of an object under particular lighting conditions, or measurements of material properties with computed tomography (CT) or magnetic resonance imaging (MRI). A combination of (articulated) Statistical Shape Models with statistical models of appearance leads to articulated Statistical Shape and Appearance Models (a-SSAMs). After giving various examples of SSMs for human organs, skeletal structures, faces, and bodies, we briefly describe clinical applications where such models have been successfully employed. Statistical Shape Models are the foundation for the analysis of anatomical cohort data, where characteristic shapes are correlated to demographic or epidemiologic data. SSMs consisting of several thousands of objects offer, in combination with statistical methods or machine learning techniques, the possibility to identify characteristic clusters, thus forming the foundation for advanced diagnostic disease scoring.
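
A minimal sketch of building an SSM once correspondence and alignment are established, assuming NumPy and shapes given as flattened landmark vectors: PCA of the shape matrix yields a mean shape and variation modes, and new instances are the mean plus a weighted sum of modes. Sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# training shapes: n_shapes x (3 * n_landmarks), already aligned and in correspondence
n_shapes, n_landmarks = 40, 500
shapes = rng.normal(size=(n_shapes, 3 * n_landmarks))

mean_shape = shapes.mean(axis=0)
X = shapes - mean_shape
# PCA via SVD: rows of Vt are the shape-variation modes
U, s, Vt = np.linalg.svd(X, full_matrices=False)
modes = Vt
std = s / np.sqrt(n_shapes - 1)             # per-mode standard deviations

def synthesize(b):
    """New shape instance from mode weights b (in units of standard deviations)."""
    b = np.asarray(b)
    k = len(b)
    return mean_shape + (b * std[:k]) @ modes[:k]

new_shape = synthesize([2.0, -1.0, 0.5])    # +2 sd on mode 1, -1 sd on mode 2, ...
```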


Subject(s)
Anatomy; Imaging, Three-Dimensional; Models, Anatomic; Algorithms; Anatomy/education; Anatomy/methods; Diagnostic Imaging; Humans; Models, Statistical
14.
Sensors (Basel) ; 19(4)2019 Feb 22.
Article in English | MEDLINE | ID: mdl-30813310

ABSTRACT

Data-driven fault detection and identification methods are important in large-scale chemical processes. However, some traditional methods often fail to show superior performance owing to their inherent limitations and the characteristics of process data, such as nonlinearity, non-Gaussian distribution, and multiple operating modes. To cope with these issues, the k-NN (k-Nearest Neighbor) fault detection method and its extensions have been developed in recent years. Nevertheless, these methods are primarily used for fault detection, and few papers examine fault identification. In this paper, in order to extract effective fault information, the relationship between various faults and abnormal variables is studied, and an accurate "fault-symptom" table is presented. Then, a novel fault identification method based on k-NN variable contribution and CNN data reconstruction theories is proposed. When an abnormality occurs, a variable contribution plot method based on k-NN is used to calculate the contribution index of each variable, and the feasibility of this method is verified by contribution decomposition theory, covering both a single abnormal variable and multiple abnormal variables. Furthermore, to identify all the faulty variables, a CNN (Center-based Nearest Neighbor) data reconstruction method is proposed; the variables with the larger contribution indices are reconstructed in turn using the CNN reconstruction method. The proposed search strategy guarantees that all faulty variables are found in each sample. The reliability and validity of the proposed method are verified by a numerical example and the Continuous Stirred Tank Reactor system.
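
A minimal sketch of the k-NN variable-contribution idea, assuming NumPy: the average squared distance from a monitored sample to its k nearest normal-operation neighbours is decomposed variable by variable, and variables with large contributions are flagged. The CNN reconstruction step is not shown; the data and k are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

normal_data = rng.normal(size=(1000, 6))       # training data from normal operation
sample = rng.normal(size=6)
sample[2] += 4.0                                # inject a fault on variable 2

k = 5
d2 = ((normal_data - sample) ** 2).sum(axis=1)
nn_idx = np.argsort(d2)[:k]                     # k nearest neighbours

# decompose the average squared k-NN distance variable by variable
contrib = ((normal_data[nn_idx] - sample) ** 2).mean(axis=0)
contrib_index = contrib / contrib.sum()
print("per-variable contribution:", np.round(contrib_index, 3))
print("suspected faulty variable:", int(np.argmax(contrib_index)))
```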

15.
Proc Natl Acad Sci U S A ; 111(47): 16682-7, 2014 Nov 25.
Article in English | MEDLINE | ID: mdl-25385623

ABSTRACT

The variability of sea surface temperatures (SSTs) at multidecadal and longer timescales is poorly constrained, primarily because instrumental records are short and proxy records are noisy. Through applying a new noise filtering technique to a global network of late Holocene SST proxies, we estimate SST variability between annual and millennial timescales. Filtered estimates of SST variability obtained from coral, foraminifer, and alkenone records are shown to be consistent with one another and with instrumental records in the frequency bands at which they overlap. General circulation models, however, simulate SST variability that is systematically smaller than instrumental and proxy-based estimates. Discrepancies in variability are largest at low latitudes and increase with timescale, reaching two orders of magnitude for tropical variability at millennial timescales. This result implies major deficiencies in observational estimates or model simulations, or both, and has implications for the attribution of past variations and prediction of future change.

16.
Sci Prog ; 107(2): 368504231208497, 2024.
Article in English | MEDLINE | ID: mdl-38660769

ABSTRACT

The acquired seismic data are usually spatially undersampled due to the constraints of the field acquisition environment. However, the removal of multiple waves, offset processing, and inversion require high regularity and integrity of the seismic data. Therefore, reasonable data reconstruction methods are usually applied to the missing data in the indoor processing stage to recover regular seismic data. Traditional seismic data reconstruction methods are generally based on certain assumptions (e.g., that the data satisfy linearity or sparsity) and have some limitations in use. To overcome the applicability problems of traditional seismic data reconstruction methods, this article proposes a generative adversarial network (GAN) seismic data reconstruction method based on a moment reconstruction error constraint. The method can extract the deep features of the data nonlinearly without any assumptions. First, the error function in the GAN is improved: the commonly used joint error function of adversarial loss plus an L1/L2 amplitude reconstruction loss is replaced by a new error function consisting of a weighted combination of adversarial loss and moment reconstruction loss. Then, a GAN-based data reconstruction method using the moment reconstruction error constraint is given. Next, an experimental analysis of different types of missing data was carried out using theoretical model data, and the method was analyzed in terms of interpolation error. Finally, actual seismic data are used to further validate the effect of the research method. The experimental results show that the improved algorithm performs superiorly in dealing with the data reconstruction problem. Compared with the error function of conventional GAN optimization, the reconstruction results of the GAN based on the moment reconstruction error constraint have better amplitude preservation.

17.
Pharmaceuticals (Basel) ; 17(3)2024 Feb 21.
Article in English | MEDLINE | ID: mdl-38543058

ABSTRACT

(1) Background: We aimed to estimate the pooled effectiveness and safety of vaccination in follicular lymphoma (FL) and discuss implications for immunotherapy development. (2) Methods: We included randomized trials (RCTs) of therapeutic vaccines in patients with FL. Progression-free survival (PFS) was the primary outcome. We searched databases (PubMed, Embase, Scopus, Web of Science Core, medRxiv) and registries (PROSPERO, CENTRAL, ClinicalTrials.gov, EuCTR, WHO ICTRP) and conducted online, citation, and manual searches. We assessed risks of bias across outcomes using RoB 2.0 and across studies using ROB-ME and a contour-enhanced funnel plot. (3) Results: Three RCTs were included (813 patients, both previously treated and untreated). Patients with a complete or partial response after chemotherapy were randomized to either a patient-specific recombinant idiotype keyhole limpet hemocyanin (Id-KLH) vaccine plus granulocyte-macrophage colony-stimulating factor (GM-CSF) or placebo immunotherapy (KLH + GM-CSF). Meta-analyses showed that PFS was worse with the vaccine, but not significantly: hazard ratio, 1.09 (95% CI 0.91-1.30). The GRADE certainty of evidence was moderate. Adverse event data were mixed. (4) Conclusions: We are moderately certain that Id-KLH results in little to no difference in PFS in FL. (5) Funding: Russian Science Foundation grant #22-25-00516. (6) Registration: PROSPERO CRD42023457528.

18.
Stud Health Technol Inform ; 316: 1594-1595, 2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39176513

ABSTRACT

This study addresses the missing data problem in the large-scale medical dataset MIMIC-IV, especially in situations where intubation and extubation events must be paired. We employed a strategy based on patient-scenario analysis that checked the temporal order and logical links of the intubation/extubation data, together with seven reconstruction rules for handling missing values. Through this, we reduced the overall loss rate from 36.89% (3321 records) to 13.37% (1204 records) and achieved a 37.26% data increase (+2117 records) compared to before reconstruction (6582 records).


Subject(s)
Electronic Health Records; Humans; Intubation, Intratracheal
19.
Cureus ; 16(5): e60204, 2024 May.
Article in English | MEDLINE | ID: mdl-38746484

ABSTRACT

Although MitraClip has been studied in numerous trials, its evidence in the long term is based on a few original studies. We used an original technique of evidence synthesis to review long-term comparative trials evaluating MitraClip. We searched the PubMed database to select long-term comparative trials of MitraClip. The endpoint was all-cause mortality (minimum follow-up, one year). Included trials were analyzed using the IPDfromKM (reconstruct Individual Patient Data from published Kaplan-Meier survival curves) method to reconstruct individual patient data from Kaplan-Meier curves. Standard survival statistics were used to interpret these long-term efficacy data. The survival benefit per patient was estimated from the restricted mean survival time (RMST). Six comparative studies of MitraClip were included; 973 patients were treated with MitraClip (six arms), 717 with medical therapy (five arms), and 80 with surgical repair or replacement (one arm). In our main analysis, the outcomes observed in patients treated with MitraClip were significantly better than those of medical therapy (hazard ratio for all-cause mortality, 0.5276; 95% confidence interval, 0.4412 to 0.6309; p < 0.001); the number of patients treated with surgery was too small to make reliable comparisons. Median survival was 30.4 months for medical therapy versus not reached for the other two groups. RMST was 43.931 and 33.756 months for MitraClip and controls, respectively, yielding a gain per patient of 10.17 months (95% confidence interval, 7.47 to 12.88). In our simplified cost-effectiveness evaluation, a gain of approximately 10 months per patient compared favorably with the device cost. Our analysis provided an original interpretation of the long-term evidence available on MitraClip.
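
A minimal sketch of the restricted mean survival time computation that would be applied to reconstructed individual patient data, assuming NumPy and right-censored (time, event) pairs; the Kaplan-Meier curve is built by hand and RMST is the area under it up to the truncation time. The data are simulated, not from the included trials.

```python
import numpy as np

def km_curve(times, events):
    """Kaplan-Meier survival estimate from right-censored (time, event) data."""
    order = np.argsort(times)
    t, e = np.asarray(times)[order], np.asarray(events)[order]
    uniq = np.unique(t[e == 1])                 # distinct event times
    surv, s = [], 1.0
    for u in uniq:
        at_risk = np.sum(t >= u)
        deaths = np.sum((t == u) & (e == 1))
        s *= 1.0 - deaths / at_risk
        surv.append(s)
    return np.concatenate(([0.0], uniq)), np.concatenate(([1.0], surv))

def rmst(times, events, tau):
    """Restricted mean survival time: area under the KM step curve up to tau."""
    t, s = km_curve(times, events)
    keep = t < tau
    t_seg = np.append(t[keep], tau)
    return np.sum(np.diff(t_seg) * s[keep])

rng = np.random.default_rng(5)
t_event = rng.exponential(40.0, size=200)        # months to event
t_censor = rng.uniform(0, 60, size=200)          # months to censoring
times = np.minimum(t_event, t_censor)
events = (t_event <= t_censor).astype(int)
print("RMST at 48 months:", round(rmst(times, events, 48.0), 2))
```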

20.
Spectrochim Acta A Mol Biomol Spectrosc ; 311: 124015, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38359515

ABSTRACT

Rice grains are often infected by Sitophilus oryzae due to improper storage, resulting in losses of quality and quantity. In the current research, terahertz time-domain spectroscopy (THz-TDS) was employed to detect Sitophilus oryzae at different stages of infestation in stored rice. Terahertz (THz) spectra of rice grains infested by Sitophilus oryzae at different growth stages were acquired. Then, a convolutional denoising autoencoder (CDAE) was used to reconstruct the THz spectra to reduce the noise-to-signal ratio. Finally, a random forest classification (RFC) model was developed to identify the infestation levels. Results showed that the RFC model based on the reconstructed second-order derivative spectrum, with an accuracy of 89.13%, a specificity of 91.38%, a sensitivity of 88.18%, and an F1-score of 89.16%, performed better than the model based on the original first-order derivative THz spectrum, with an accuracy of 84.78%, a specificity of 86.75%, a sensitivity of 86.36%, and an F1-score of 85.87%. In addition, the convolutional layers inside the CDAE were visualized using feature maps to explain the improvement in results, illustrating that the CDAE can eliminate noise in the spectral data. Overall, THz spectra reconstructed with the CDAE provide a novel method for effective THz detection of infected grains.


Subject(s)
Oryza; Terahertz Spectroscopy; Weevils; Animals; Oryza/chemistry; Terahertz Spectroscopy/methods