Results 1 - 20 of 46
1.
Comput Methods Programs Biomed ; 257: 108373, 2024 Aug 23.
Article in English | MEDLINE | ID: mdl-39276667

ABSTRACT

Tumors are a major health concern, and breast cancer is one of the most prevalent causes of death among women, rapidly becoming their leading cause of mortality worldwide. Early detection allows patients to obtain appropriate therapy, increasing their probability of survival, and the adoption of 3-Dimensional (3D) mammography for identifying breast abnormalities has dramatically reduced mortality. Accurate detection and classification of breast lumps in 3D mammography nevertheless remain difficult due to factors such as inadequate contrast and normal fluctuations in tissue density, and several Computer-Aided Diagnosis (CAD) solutions are under development to help radiologists classify breast abnormalities accurately. In this paper, a breast cancer diagnosis model is implemented to detect breast cancer in 3D mammogram images gathered from the internet. The gathered images first undergo preprocessing with a median filter and an image scaling method: the median filter smooths out irregularities and removes noise and artifacts that could interfere with detection, while image scaling adjusts the size and resolution of the images for analysis. The preprocessed image is then segmented using the Adaptive Thresholding with Region Growing Fusion Model (AT-RGFM), which combines the advantages of thresholding and region-growing techniques to identify and delineate specific structures within the image. The Modified Garter Snake Optimization Algorithm (MGSOA) is used to optimize the segmentation parameters, allowing the segmentation phase to differentiate more accurately between different parts of the image. The segmented image is then fed into the detection phase, where tumor detection is performed by the Vision Transformer-based Multiscale Adaptive EfficientNetB7 (ViT-MAENB7) model. By incorporating a multiscale adaptive approach, the ViT-MAENB7 model analyzes the image at various levels of detail, improving the overall accuracy of tumor detection, and the MGSOA algorithm is again used to optimize the model's parameters. The suggested diagnosis pipeline was compared to conventional cancer diagnosis models: the developed MGSOA-ViT-MAENB7 achieved an accuracy of 96.6%, whereas RNN, LSTM, EffNet, and ViT-MAENet achieved 90.31%, 92.79%, 94.46%, and 94.75%, respectively. The model's ability to analyze images at multiple scales, combined with the optimization provided by the MGSOA algorithm, results in an accurate and efficient system for detecting tumors in medical images, helping healthcare professionals tailor treatment plans to individual patients and supporting better outcomes.
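
As a rough, hedged illustration of the kind of preprocessing-plus-segmentation pipeline described above (not the paper's AT-RGFM, MGSOA, or ViT-MAENB7), the following Python sketch chains a median filter, adaptive thresholding, and seeded region growing with OpenCV and NumPy; the function name, window sizes, and intensity tolerance are assumptions.

```python
# Minimal sketch (not the paper's AT-RGFM): median filtering followed by
# adaptive thresholding, with the thresholded mask used to seed region growing.
import cv2
import numpy as np
from collections import deque

def segment_mammogram(gray, block=51, c=-5, tol=12):
    """Hypothetical illustration: denoise, adaptively threshold, then grow
    a region from the brightest thresholded pixel."""
    den = cv2.medianBlur(gray, 5)                       # median filter preprocessing
    den = cv2.resize(den, (512, 512))                   # simple image scaling step
    mask = cv2.adaptiveThreshold(den, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY, block, c)
    seed = np.unravel_index(np.argmax(den * (mask > 0)), den.shape)
    region = np.zeros_like(den, dtype=bool)
    q = deque([seed]); region[seed] = True
    seed_val = int(den[seed])
    while q:                                            # 4-connected region growing
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < den.shape[0] and 0 <= nx < den.shape[1]
                    and not region[ny, nx]
                    and abs(int(den[ny, nx]) - seed_val) <= tol):
                region[ny, nx] = True
                q.append((ny, nx))
    return region

# usage (assumes a grayscale mammogram slice on disk):
# roi = segment_mammogram(cv2.imread("mammo.png", cv2.IMREAD_GRAYSCALE))
```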

2.
Cancer Invest ; 42(8): 710-725, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39189645

ABSTRACT

This work proposes a liver cancer classification scheme that includes preprocessing, feature extraction, and classification stages. The source images are pre-processed using Gaussian filtering. For segmentation, the work proposes a LUV-transformation-based adaptive thresholding segmentation process. After segmentation, features are extracted, including multi-texton-based features, Improved Local Ternary Pattern (LTP)-based features, and GLCM features. In the classification phase, an improved Deep Maxout model is proposed for liver cancer detection. The adopted scheme is evaluated against other schemes on various metrics. At a learning percentage of 60%, the improved Deep Maxout model achieved a higher F-measure (0.94) for classifying liver cancer, whereas previous methods such as Support Vector Machine (SVM), Random Forest (RF), Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), K-Nearest Neighbor (KNN), Deep Maxout, Convolutional Neural Network (CNN), and a DL model yielded lower F-measure values. The improved Deep Maxout model also achieved minimal False Positive Rate (FPR) and False Negative Rate (FNR) values, with the best outcomes compared to other existing models for liver cancer classification.
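
A minimal sketch of the segmentation stage described above, assuming standard OpenCV calls: Gaussian smoothing, conversion to the LUV color space, and adaptive thresholding of the lightness channel. The block size and offset are placeholder values, and the feature-extraction and Deep Maxout stages are not reproduced.

```python
# Illustrative sketch only: LUV conversion followed by adaptive thresholding
# on the L (lightness) channel; parameters are arbitrary assumptions.
import cv2

def luv_adaptive_segment(bgr_image, block=31, c=2):
    smoothed = cv2.GaussianBlur(bgr_image, (5, 5), 0)      # Gaussian pre-filtering
    luv = cv2.cvtColor(smoothed, cv2.COLOR_BGR2Luv)        # LUV transformation
    l_channel = luv[:, :, 0]
    mask = cv2.adaptiveThreshold(l_channel, 255,
                                 cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY, block, c)
    return mask

# mask = luv_adaptive_segment(cv2.imread("ct_slice.png"))
```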


Subjects
Liver Neoplasms , Neural Networks, Computer , Support Vector Machine , Humans , Liver Neoplasms/diagnosis , Algorithms , Image Interpretation, Computer-Assisted/methods
3.
Article in English | MEDLINE | ID: mdl-39036745

ABSTRACT

The goal of this study was to develop an image analysis algorithm for quantifying the effects of remodeling on cortical bone during early fracture healing. An adaptive thresholding technique with boundary curvature and tortuosity control was developed to automatically identify the endocortical and pericortical boundaries in the presence of high-gradient bone mineral density (BMD) near the healing zone. The algorithm successfully segmented more than 47,000 microCT images from 12 healing ovine osteotomies and intact contralateral tibiae. Resampling techniques were used to achieve data dimensionality reduction on the segmented images, allowing characterization of radial and axial distributions of cortical BMD. Local (transverse slice) and total (whole bone) remodeling scores were produced. These surrogate measures of cortical remodeling derived from BMD revealed that cortical changes were detectable throughout the region covered by callus and that the localized loss of cortical BMD was highest near the osteotomy. Total remodeling score was moderately and significantly correlated with callus volume and mineral composition (r > 0.64, p < 0.05), suggesting that the cortex may be a source of mineral needed to build callus.

4.
Sensors (Basel) ; 24(12)2024 Jun 14.
Article in English | MEDLINE | ID: mdl-38931631

ABSTRACT

To achieve high-precision geomagnetic matching navigation, a reliable geomagnetic anomaly basemap is essential. However, the accuracy of the geomagnetic anomaly basemap is often compromised by noise data that are inherent in the process of data acquisition and integration of multiple data sources. In order to address this challenge, a denoising approach utilizing an improved multiscale wavelet transform is proposed. The denoising process involves the iterative multiscale wavelet transform, which leverages the structural characteristics of the geomagnetic anomaly basemap to extract statistical information on model residuals. This information serves as the a priori knowledge for determining the Bayes estimation threshold necessary for obtaining an optimal wavelet threshold. Additionally, the entropy method is employed to integrate three commonly used evaluation indexes-the signal-to-noise ratio, root mean square (RMS), and smoothing degree. A fusion model of soft and hard threshold functions is devised to mitigate the inherent drawbacks of a single threshold function. During denoising, the Elastic Net regular term is introduced to enhance the accuracy and stability of the denoising results. To validate the proposed method, denoising experiments are conducted using simulation data from a sphere magnetic anomaly model and measured data from a Pacific Ocean sea area. The denoising performance of the proposed method is compared with Gaussian filter, mean filter, and soft and hard threshold wavelet transform algorithms. The experimental results, both for the simulated and measured data, demonstrate that the proposed method excels in denoising effectiveness; maintaining high accuracy; preserving image details while effectively removing noise; and optimizing the signal-to-noise ratio, structural similarity, root mean square error, and smoothing degree of the denoised image.
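
For readers unfamiliar with threshold-function fusion, the sketch below blends soft and hard wavelet thresholding of detail coefficients with a MAD-based universal threshold (PyWavelets). It is only an assumption-laden stand-in for the paper's Bayes-estimated, entropy-weighted scheme with the Elastic Net term.

```python
# Sketch under assumptions: a simple blend of soft and hard thresholding of
# wavelet detail coefficients, with the threshold estimated from the
# median absolute deviation (not the paper's Bayes/entropy scheme).
import numpy as np
import pywt

def blended_threshold(coeffs, thr, alpha=0.5):
    """alpha=1 -> pure soft threshold, alpha=0 -> pure hard threshold."""
    soft = pywt.threshold(coeffs, thr, mode='soft')
    hard = pywt.threshold(coeffs, thr, mode='hard')
    return alpha * soft + (1 - alpha) * hard

def wavelet_denoise(signal, wavelet='db4', level=4, alpha=0.5):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise scale from finest level
    thr = sigma * np.sqrt(2 * np.log(len(signal)))          # universal threshold
    denoised = [coeffs[0]] + [blended_threshold(c, thr, alpha) for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)

# clean = wavelet_denoise(noisy_profile_along_one_basemap_row)
```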

5.
Sensors (Basel) ; 24(3)2024 Jan 27.
Article in English | MEDLINE | ID: mdl-38339555

ABSTRACT

The zero-velocity update (ZUPT) algorithm is a pivotal advancement in pedestrian navigation accuracy, utilizing foot-mounted inertial sensors. Its accuracy hinges on correctly identifying periods of zero velocity during human movement. This paper introduces an adaptive sliding window technique that leverages the Fourier Transform to precisely isolate the pedestrian's gait frequency from spectral data. Building on this, the algorithm adaptively adjusts the zero-velocity detection threshold in accordance with the identified gait frequency, which significantly refines the accuracy of detecting zero-velocity intervals. Experimental evaluations reveal that this method outperforms traditional fixed-threshold approaches by enhancing precision and minimizing false positives. Experiments on single-step estimation show the adaptability of the algorithm to motion states such as slow walking, fast walking, and running. Additionally, the paper demonstrates pedestrian trajectory localization experiments under a variety of walking conditions. These tests confirm that the proposed method substantially improves the performance of the ZUPT algorithm, highlighting its potential for pedestrian navigation systems.
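
A minimal sketch of the idea, assuming a foot-mounted accelerometer sampled at a fixed rate: estimate the dominant gait frequency from the FFT of the acceleration magnitude in a window, then scale the zero-velocity energy threshold with that frequency. The band limits, base threshold, and scaling constant are illustrative, not the paper's tuned values.

```python
# Minimal sketch (assumptions, not the paper's exact detector): estimate the
# dominant gait frequency from the accelerometer-magnitude spectrum inside a
# sliding window, then scale the zero-velocity energy threshold with it.
import numpy as np

def gait_frequency(acc_norm, fs):
    """Dominant frequency (Hz) of the detrended acceleration magnitude."""
    x = acc_norm - np.mean(acc_norm)
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs > 0.5) & (freqs < 5.0)                 # plausible walking/running band
    return freqs[band][np.argmax(spectrum[band])]

def zero_velocity_mask(acc_norm, fs, base_thr=0.3, k=0.15):
    f_gait = gait_frequency(acc_norm, fs)
    thr = base_thr + k * f_gait                          # faster gait -> larger threshold
    win = max(int(0.1 * fs), 1)
    energy = np.convolve((acc_norm - 9.81) ** 2, np.ones(win) / win, mode='same')
    return energy < thr                                  # True where the foot is still

# zupt = zero_velocity_mask(np.linalg.norm(acc_xyz, axis=1), fs=200.0)
```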

6.
Int Ophthalmol ; 44(1): 91, 2024 Feb 17.
Article in English | MEDLINE | ID: mdl-38367192

ABSTRACT

BACKGROUND: The timely diagnosis of medical conditions, particularly diabetic retinopathy, relies on the identification of retinal microaneurysms. However, the commonly used retinography method poses a challenge due to the diminutive dimensions and limited differentiation of microaneurysms in images. PROBLEM STATEMENT: Automated identification of microaneurysms becomes crucial, necessitating the use of comprehensive ad-hoc processing techniques. Although fluorescein angiography enhances detectability, its invasiveness limits its suitability for routine preventative screening. OBJECTIVE: This study proposes a novel approach for detecting retinal microaneurysms using a fundus scan, leveraging circular reference-based shape features (CR-SF) and radial gradient-based texture features (RG-TF). METHODOLOGY: The proposed technique involves extracting CR-SF and RG-TF for each candidate microaneurysm, employing a robust back-propagation machine learning method for training. During testing, extracted features from test images are compared with training features to categorize microaneurysm presence. RESULTS: The experimental assessment utilized four datasets (MESSIDOR, Diaretdb1, e-ophtha-MA, and ROC), employing various measures. The proposed approach demonstrated high accuracy (98.01%), sensitivity (98.74%), specificity (97.12%), and area under the curve (91.72%). CONCLUSION: The presented approach showcases a successful method for detecting retinal microaneurysms using a fundus scan, providing promising accuracy and sensitivity. This non-invasive technique holds potential for effective screening in diabetic retinopathy and other related medical conditions.
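
The paper's exact CR-SF and RG-TF definitions are not spelled out above, so the following is only a hypothetical radial-gradient-style descriptor for a candidate patch, meant to convey the general idea of sampling gradient texture in concentric rings around a candidate microaneurysm.

```python
# Hypothetical illustration of a radial-gradient-style texture descriptor;
# it only conveys the general idea of sampling gradients radially around
# a candidate centre, not the paper's RG-TF feature.
import numpy as np

def radial_gradient_profile(patch, n_rings=6):
    """Mean gradient magnitude in concentric rings around the patch centre."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    h, w = patch.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2.0, xx - w / 2.0)
    edges = np.linspace(0, r.max(), n_rings + 1)
    return np.array([mag[(r >= lo) & (r < hi)].mean()
                     for lo, hi in zip(edges[:-1], edges[1:])])

# feature_vector = radial_gradient_profile(fundus_gray[y-12:y+12, x-12:x+12])
```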


Subjects
Diabetic Retinopathy , Microaneurysm , Humans , Diabetic Retinopathy/diagnosis , Microaneurysm/diagnosis , Algorithms , Image Interpretation, Computer-Assisted/methods , Machine Learning , Fundus Oculi
7.
Entropy (Basel) ; 25(11)2023 Nov 16.
Article in English | MEDLINE | ID: mdl-37998238

ABSTRACT

Over the past few years, we have seen an increased need to analyze the dynamically changing behaviors of economic and financial time series. These needs have led to significant demand for methods that denoise non-stationary time series across time and for specific investment horizons (scales) and localized windows (blocks) of time. Wavelets have long been known to decompose non-stationary time series into their different components or scale pieces. Recent methods satisfying this demand first decompose the non-stationary time series using wavelet techniques and then apply a thresholding method to separate and capture the signal and noise components of the series. Traditionally, wavelet thresholding methods rely on the discrete wavelet transform (DWT), which is a static thresholding technique that may not capture the time series of the estimated variance in the additive noise process. We introduce a novel continuous wavelet transform (CWT) dynamically optimized multivariate thresholding method (WaveL2E). Applying this method, we are simultaneously able to separate and capture the signal and noise components while estimating the dynamic noise variance. Our method shows improved results when compared to well-known methods, especially for high-frequency signal-rich time series, typically observed in finance.
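
A simplified sketch of continuous-wavelet-domain thresholding (not the WaveL2E estimator itself): compute a CWT with PyWavelets and apply a per-scale soft threshold derived from the median absolute deviation of each scale's coefficients. Scales, wavelet choice, and the threshold rule are assumptions.

```python
# Sketch, not the WaveL2E method: a continuous wavelet transform with a
# per-scale soft threshold estimated from the median absolute deviation,
# illustrating scale-wise separation of signal and noise.
import numpy as np
import pywt

def cwt_soft_threshold(series, scales=None, wavelet='morl'):
    if scales is None:
        scales = np.arange(1, 64)
    coefs, freqs = pywt.cwt(series, scales, wavelet)
    denoised = np.empty_like(coefs)
    for i, row in enumerate(coefs):                     # one threshold per scale
        sigma = np.median(np.abs(row - np.median(row))) / 0.6745
        thr = sigma * np.sqrt(2 * np.log(len(row)))
        denoised[i] = pywt.threshold(row, thr, mode='soft')
    return denoised, freqs

# coeffs_dn, freqs = cwt_soft_threshold(log_returns)
```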

8.
Sensors (Basel) ; 23(20)2023 Oct 11.
Article in English | MEDLINE | ID: mdl-37896490

ABSTRACT

During short baseline measurements in the Real-Time Kinematic Global Navigation Satellite System (GNSS-RTK), multipath error has a significant impact on the quality of observed data. Aiming at the characteristics of multipath error in GNSS-RTK measurements, a novel method that combines improved complete ensemble empirical mode decomposition with adaptive noise (ICEEMDAN) and adaptive wavelet packet threshold denoising (AWPTD) is proposed to reduce the effects of multipath error in GNSS-RTK measurements through modal function decomposition, effective coefficient sieving, and adaptive thresholding denoising. It first utilizes the ICEEMDAN algorithm to decompose the observed data into a series of intrinsic mode functions (IMFs). Then, a novel IMF selection method is designed based on information entropy to accurately locate the IMFs containing multipath error information. Finally, an optimized adaptive denoising method is applied to the selected IMFs to preserve the original signal characteristics to the maximum possible extent and improve the accuracy of the multipath error correction model. This study shows that the ICEEMDAN-AWPTD algorithm provides a multipath error correction model with higher accuracy compared to singular filtering algorithms based on the results of simulation data and GNSS-RTK data. After the multipath correction, the accuracy of the E, N, and U coordinates increased by 49.2%, 65.1%, and 56.6%, respectively.
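
A simplified sketch of the selection-and-denoising idea, assuming the IMFs have already been produced by some EMD/ICEEMDAN implementation: rank IMFs by Shannon entropy and soft-threshold only the noisy-looking ones before recombining. The entropy cut-off and wavelet settings are assumptions, and the wavelet-packet machinery is replaced by a plain discrete decomposition.

```python
# Simplified sketch (assumes IMFs already produced by an EMD/ICEEMDAN
# implementation): select IMFs by Shannon entropy, then soft-threshold
# only the selected ones before summing back to a corrected signal.
import numpy as np
import pywt

def shannon_entropy(x, bins=64):
    hist, _ = np.histogram(x, bins=bins, density=True)
    p = hist[hist > 0]
    p = p / p.sum()
    return -np.sum(p * np.log(p))

def denoise_selected_imfs(imfs, entropy_thr=3.0, wavelet='db4'):
    out = np.zeros_like(imfs[0])
    for imf in imfs:
        if shannon_entropy(imf) > entropy_thr:          # noisy-looking IMF -> denoise
            c = pywt.wavedec(imf, wavelet, level=3)
            sigma = np.median(np.abs(c[-1])) / 0.6745
            thr = sigma * np.sqrt(2 * np.log(len(imf)))
            c = [c[0]] + [pywt.threshold(d, thr, mode='soft') for d in c[1:]]
            imf = pywt.waverec(c, wavelet)[:len(out)]
        out += imf
    return out

# corrected = denoise_selected_imfs(np.asarray(imfs))  # imfs: (n_imfs, n_samples)
```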

9.
Pathol Res Pract ; 248: 154694, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37494804

ABSTRACT

Histological analysis with microscopy is the gold standard to diagnose and stage cancer, where slides or whole slide images are analyzed for cell morphological and spatial features by pathologists. The nuclei of cancerous cells are characterized by nonuniform chromatin distribution, irregular shapes, and varying size. As nucleus area and shape alone carry prognostic value, detection and segmentation of nuclei are among the most important steps in disease grading. However, evaluation of nuclei is a laborious, time-consuming, and subjective process with large variation among pathologists. Recent advances in digital pathology have allowed significant applications in nuclei detection, segmentation, and classification, but automated image analysis is greatly affected by staining factors, scanner variability, and imaging artifacts, requiring robust image preprocessing, normalization, and segmentation methods for clinically satisfactory results. In this paper, we aimed to evaluate and compare the digital image analysis techniques used in clinical pathology and research in the setting of gastric cancer. A literature review was conducted to evaluate potential methods of improving nuclei detection. Digitized images of 35 patients from a retrospective cohort of gastric adenocarcinoma at Oulu University Hospital in 1987-2016 were annotated for nuclei (n = 9085) by expert pathologists and 14 images of different cancer types from public TCGA dataset with annotated nuclei (n = 7000) were used as a comparison to evaluate applicability in other cancer types. The detection and segmentation accuracy with the selected color normalization and stain separation techniques were compared between the methods. The extracted information can be supplemented by patient's medical data and fed to the existing statistical clinical tools or subjected to subsequent AI-assisted classification and prediction models. The performance of each method is evaluated by several metrics against the annotations done by expert pathologists. The F1-measure of 0.854 ± 0.068 is achieved with color normalization for the gastric cancer dataset, and 0.907 ± 0.044 with color deconvolution for the public dataset, showing comparable results to the earlier state-of-the-art works. The developed techniques serve as a basis for further research on application and interpretability of AI-assisted tools for gastric cancer diagnosis.
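
As a small, hedged example of one step mentioned above (stain separation followed by thresholding), the sketch below uses scikit-image's H&E color deconvolution and Otsu thresholding to produce a crude nuclei mask; it is nowhere near the full pipeline evaluated in the study, and the minimum-area filter is an assumption.

```python
# Rough sketch of one step discussed above: colour deconvolution of an H&E
# image into stain channels, then Otsu thresholding and labelling of the
# haematoxylin channel as a crude nuclei mask (not the paper's full pipeline).
import numpy as np
from skimage.color import rgb2hed
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def crude_nuclei_mask(rgb_image, min_area=30):
    hed = rgb2hed(rgb_image)                    # stain separation (H, E, DAB)
    hematoxylin = hed[:, :, 0]
    mask = hematoxylin > threshold_otsu(hematoxylin)
    labelled = label(mask)
    keep = np.zeros_like(mask)
    for region in regionprops(labelled):        # drop tiny specks
        if region.area >= min_area:
            keep[labelled == region.label] = True
    return keep

# nuclei = crude_nuclei_mask(skimage.io.imread("tile.png")[:, :, :3])
```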


Subjects
Coloring Agents , Stomach Neoplasms , Humans , Stomach Neoplasms/pathology , Artifacts , Retrospective Studies , Algorithms , Image Processing, Computer-Assisted/methods , Cell Nucleus/metabolism
10.
Sensors (Basel) ; 23(13)2023 Jun 21.
Article in English | MEDLINE | ID: mdl-37447640

ABSTRACT

Modern home automation systems include features that enhance security, such as cameras and radars. This paper proposes an innovative home security system that can detect burglars by analyzing acoustic signals and instantly notifying the authorized person(s). The system architecture incorporates the concept of the Internet of Things (IoT), resulting in a network and a user-friendly system. The proposed system uses an adaptive detection algorithm, namely the "short-time-average through long-time-average" algorithm. The proposed algorithm is implemented by an IoT device (Arduino Duo) to detect people's acoustical activities for the purpose of home/office security. The performance of the proposed system is evaluated using 10 acoustic signals representing actual events and background noise. The acoustic signals were generated by the sounds of keys shaking, the falling of a small object, the shrinking of a plastic bag, speaking, footsteps, etc. The effects of different algorithms' parameters on the performance of the proposed system have been thoroughly investigated.
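
A minimal NumPy sketch of the short-time-average through long-time-average (STA/LTA) detector named above; the window lengths and trigger ratio are assumed values rather than the system's tuned parameters.

```python
# Minimal NumPy sketch of the STA/LTA detector; parameters are assumptions.
import numpy as np

def sta_lta_trigger(samples, fs, sta_s=0.1, lta_s=2.0, ratio_on=3.0):
    energy = samples.astype(float) ** 2
    sta_n, lta_n = int(sta_s * fs), int(lta_s * fs)
    sta = np.convolve(energy, np.ones(sta_n) / sta_n, mode='same')
    lta = np.convolve(energy, np.ones(lta_n) / lta_n, mode='same')
    ratio = sta / (lta + 1e-12)
    return ratio > ratio_on          # True where an acoustic event is declared

# alarm = sta_lta_trigger(microphone_block, fs=8000)
```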


Subjects
Acoustics , Sound , Humans , Algorithms , Automation , Internet
11.
Biomed Phys Eng Express ; 9(4)2023 06 14.
Article in English | MEDLINE | ID: mdl-37279702

ABSTRACT

BACKGROUND: In telecardiology, the acquisition, processing, and communication of bio-signals for clinical purposes occupy large storage and significant bandwidth over a communication channel. Electrocardiograph (ECG) compression with effective reproducibility is therefore highly desired. In the present work, a compression technique for ECG signals with less distortion, using a non-decimated stationary wavelet with a run-length encoding scheme, is proposed. METHOD: A non-decimated stationary wavelet transform (NSWT) method has been developed to compress the ECG signals. The signal is subdivided into N levels with different thresholding values; wavelet coefficients with values larger than the threshold are retained and the remaining are suppressed. The biorthogonal (bior) wavelet is employed, as it improves the compression ratio as well as the percentage root-mean-square difference (PRD) compared to the existing method and exhibits improved results. After pre-processing, the coefficients are subjected to a Savitzky-Golay filter to remove corrupted signals. The wavelet coefficients are then quantized using dead-zone quantization, which eliminates values that are close to zero. A run-length encoding (RLE) scheme is applied to these values, resulting in compressed ECG signals. RESULTS: The presented methodology has been evaluated on the MITDB arrhythmia database, which contains 4800 ECG fragments from forty-eight clinical records. The proposed technique has achieved an average compression ratio of 33.12, PRD of 1.99, NPRD of 2.53, and QS of 16.57, making it a promising approach for various applications. CONCLUSION: The proposed technique exhibits a high compression ratio and reduces distortion compared to the existing method.
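
A hedged sketch of the coefficient-coding chain, using PyWavelets' stationary wavelet transform, SciPy's Savitzky-Golay filter, a threshold-plus-rounding stand-in for dead-zone quantization, and a simple run-length encoder; the threshold and quantizer step are placeholders, and decoding is omitted.

```python
# Sketch of the coding stages under stated assumptions (quantizer step and
# threshold are arbitrary; decoding/reconstruction is not shown).
import numpy as np
import pywt
from scipy.signal import savgol_filter

def compress_ecg(ecg, wavelet='bior4.4', level=3, thr=0.02, step=0.01):
    n = len(ecg) - len(ecg) % (2 ** level)            # swt needs a multiple of 2**level
    smooth = savgol_filter(ecg[:n], 15, 3)            # Savitzky-Golay smoothing
    coeffs = pywt.swt(smooth, wavelet, level=level)   # non-decimated (stationary) transform
    flat = np.concatenate([band for pair in coeffs for band in pair])
    flat[np.abs(flat) < thr] = 0.0                    # suppress sub-threshold coefficients
    q = np.round(flat / step).astype(int)             # dead-zone style quantization
    symbols, runs = [], []
    for v in q:                                       # run-length encoding
        if symbols and symbols[-1] == v:
            runs[-1] += 1
        else:
            symbols.append(v); runs.append(1)
    return symbols, runs

# syms, lens = compress_ecg(np.loadtxt("record_100.txt"))
```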


Subjects
Data Compression , Wavelet Analysis , Algorithms , Data Compression/methods , Signal Processing, Computer-Assisted , Electrocardiography/methods
12.
Sensors (Basel) ; 23(11)2023 May 24.
Article in English | MEDLINE | ID: mdl-37299742

ABSTRACT

This paper demonstrates an intruder detection system using a strain-based optical fiber Bragg grating (FBG), machine learning (ML), and adaptive thresholding to classify the intruder as no intruder, intruder, or wind at low levels of signal-to-noise ratio. We demonstrate the intruder detection system using a portion of a real fence manufactured and installed around one of the engineering college's gardens at King Saud University. The experimental results show that adaptive thresholding can help improve the performance of machine learning classifiers, such as linear discriminant analysis (LDA) or logistic regression algorithms in identifying an intruder's existence at low optical signal-to-noise ratio (OSNR) scenarios. The proposed method can achieve an average accuracy of 99.17% when the OSNR level is <0.5 dB.
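
An illustrative sketch only: a per-window adaptive noise-floor threshold feeding a scikit-learn LDA classifier. The features, window handling, and labels are assumptions, not the paper's FBG processing chain.

```python
# Illustrative only: adaptive noise-floor thresholding of strain windows
# followed by LDA classification; features and labels are assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def window_features(window, k=3.0):
    floor = np.median(np.abs(window))                # adaptive threshold per window
    active = np.abs(window) > k * floor
    return [window.std(), active.mean(), np.abs(window).max()]

def train_intruder_classifier(windows, labels):
    """windows: list of 1-D strain arrays; labels: 0=none, 1=intruder, 2=wind."""
    X = np.array([window_features(w) for w in windows])
    return LinearDiscriminantAnalysis().fit(X, labels)

# clf = train_intruder_classifier(training_windows, training_labels)
# clf.predict([window_features(new_window)])
```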


Subjects
Machine Learning , Optical Fibers , Humans , Algorithms , Discriminant Analysis
13.
Comput Struct Biotechnol J ; 21: 1102-1114, 2023.
Article in English | MEDLINE | ID: mdl-36789266

ABSTRACT

In the treatment of Non-Hodgkin lymphoma (NHL), multiple therapeutic options are available, and improving outcome prediction is essential to optimize treatment. The metabolic active tumor volume (MATV) has been shown to be a prognostic factor in NHL. It is usually retrieved using semi-automated thresholding methods based on standardized uptake values (SUV) calculated from 18F-Fluorodeoxyglucose Positron Emission Tomography (18F-FDG PET) images; however, there is currently no consensus method for NHL. The aim of this study was to review the literature on the different segmentation methods used, and to evaluate selected methods using an in-house software tool. A software tool, the MUltiple SUV Threshold (MUST)-segmenter, was developed in which tumor locations are identified by placing seed points on the PET images, followed by region growing. Based on a literature review, 9 SUV thresholding methods were selected and MATVs were extracted. The MUST-segmenter was applied in a cohort of 68 patients with NHL. Differences in MATVs were assessed with paired t-tests, correlations, and distribution figures. High variability and significant differences between the MATVs based on different segmentation methods (p < 0.05) were observed in the NHL patients, with median MATVs ranging from 35 to 211 cc. No consensus for determining MATV is available in the literature. Using the MUST-segmenter with 9 selected SUV thresholding methods, we demonstrated a large and significant variation in MATVs. Identifying the most optimal segmentation method for patients with NHL is essential to further improve predictions of toxicity, response, and treatment outcomes, which can be facilitated by the MUST-segmenter.
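
As an example of what a single SUV thresholding rule looks like in code (one of many compared in such studies, not the MUST-segmenter itself), the sketch below grows a connected region around a seed and applies a fixed fraction-of-SUVmax threshold; the seed is assumed to sit at the lesion's hottest voxel, and the 41% fraction is just a common convention.

```python
# Minimal sketch of one fixed-fraction SUV thresholding rule for a single
# seeded lesion; seed placement and voxel volume are supplied by the caller.
import numpy as np
from scipy import ndimage

def matv_fixed_fraction(suv_volume, seed_voxel, voxel_ml, fraction=0.41):
    """Return the metabolic active tumor volume (in ml) for one lesion.
    The seed is assumed to be the lesion's hottest voxel."""
    suv_max = suv_volume[seed_voxel]
    mask = suv_volume >= fraction * suv_max
    labels, _ = ndimage.label(mask)                 # keep only the component with the seed
    lesion = labels == labels[seed_voxel]
    return lesion.sum() * voxel_ml

# matv = matv_fixed_fraction(pet_suv, seed_voxel=(42, 118, 96), voxel_ml=0.064)
```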

14.
Article in English | MEDLINE | ID: mdl-35756693

ABSTRACT

Cyclic AMP (cAMP) is a second messenger that regulates a wide variety of cellular functions. There is increasing evidence suggesting that signaling specificity is due in part to cAMP compartmentalization. In the last 15 years, development of cAMP-specific Förster resonance energy transfer (FRET) probes have allowed us to visualize spatial distributions of intracellular cAMP signals. The use of FRET-based sensors is not without its limitations, as FRET probes display low signal to noise ratio (SNR). Hyperspectral imaging and analysis approaches have, in part, allowed us to overcome these limitations by improving the SNR of FRET measurements. Here we demonstrate that the combination of hyperspectral imaging approaches, linear unmixing, and adaptive thresholding allow us to visualize regions of elevated cAMP (regions of interest - ROIs) in an unbiased manner. We transfected cDNA encoding the H188 FRET-based cAMP probe into pulmonary microvascular endothelial cells. Application of isoproterenol and prostaglandin E1 (PGE1) triggered complex cAMP responses. Spatial and temporal aspects of cAMP responses were quantified using an adaptive thresholding approach and compared between agonist treatment groups. Our data indicate that both the origination sites and spatial/temporal distributions of cAMP signals are agonist dependent in PMVECs. We are currently analyzing the data in order to better quantify the distribution of cAMP signals triggered by different agonists.
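
A minimal sketch, assuming known endmember spectra: per-pixel least-squares linear unmixing of a hyperspectral stack followed by an adaptive mean-plus-k-standard-deviations threshold on the acceptor/donor ratio image to flag candidate ROIs. The channel indices and the constant k are placeholders, not the study's settings.

```python
# Sketch under assumptions: least-squares linear unmixing against known
# endmember spectra, then an image-derived adaptive threshold on the ratio.
import numpy as np

def unmix(stack, endmembers):
    """stack: (bands, h, w); endmembers: (bands, n_fluorophores)."""
    bands, h, w = stack.shape
    pixels = stack.reshape(bands, -1)
    abundances, *_ = np.linalg.lstsq(endmembers, pixels, rcond=None)
    return abundances.reshape(endmembers.shape[1], h, w)

def elevated_rois(stack, endmembers, donor=0, acceptor=1, k=2.0):
    ab = unmix(stack, endmembers)
    ratio = ab[acceptor] / (ab[donor] + 1e-9)
    thr = ratio.mean() + k * ratio.std()            # adaptive, image-derived threshold
    return ratio > thr

# roi_mask = elevated_rois(hyperspectral_frame, known_spectra)
```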

15.
Biomed Tech (Berl) ; 67(2): 105-117, 2022 Apr 26.
Article in English | MEDLINE | ID: mdl-35363448

ABSTRACT

In recent years, machine learning models based on surface electromyography (sEMG) signals have been developing rapidly, and efficient classifiers aid the development of prosthetic arms for transhumeral amputees. This paper proposes a stacking-classifier-based system for classifying sEMG shoulder movements, demonstrating the possibility of classifying various shoulder motions of transhumeral amputees. To improve system performance, an adaptive threshold method and wavelet transformation have been applied for feature extraction. Six different classifiers, Support Vector Machines (SVM), Tree, Random Forest (RF), K-Nearest Neighbour (KNN), AdaBoost, and Naïve Bayes (NB), are designed to extract the sEMG data classification accuracy. With cross-validation, the accuracies of RF, Tree, and AdaBoost are 97%, 92%, and 92%, respectively. The stacking classifier provides an accuracy of 99.4% after combining the best-performing base classifiers.
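
A hedged scikit-learn sketch of the stacking arrangement described above; the base learners mirror the ones named, but the meta-learner, features, data, and hyperparameters are assumptions.

```python
# Hedged sketch of a stacking classifier over the base learners named above.
from sklearn.ensemble import StackingClassifier, RandomForestClassifier, AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression

def build_stacker():
    base = [
        ('svm', SVC(probability=True)),
        ('tree', DecisionTreeClassifier()),
        ('rf', RandomForestClassifier(n_estimators=200)),
        ('knn', KNeighborsClassifier()),
        ('ada', AdaBoostClassifier()),
        ('nb', GaussianNB()),
    ]
    return StackingClassifier(estimators=base,
                              final_estimator=LogisticRegression(max_iter=1000),
                              cv=5)

# model = build_stacker().fit(X_train_features, y_train_labels)
# accuracy = model.score(X_test_features, y_test_labels)
```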


Subjects
Amputees , Artificial Limbs , Algorithms , Arm , Bayes Theorem , Electromyography/methods , Humans , Shoulder , Support Vector Machine
16.
Biosensors (Basel) ; 12(2)2022 Jan 29.
Article in English | MEDLINE | ID: mdl-35200342

ABSTRACT

OBJECTIVE: We have developed a peak detection algorithm for accurate determination of heart rate, using photoplethysmographic (PPG) signals from a smartwatch, even in the presence of various cardiac rhythms, including normal sinus rhythm (NSR), premature atrial contraction (PAC), premature ventricle contraction (PVC), and atrial fibrillation (AF). Given the clinical need for accurate heart rate estimation in patients with AF, we developed a novel approach that reduces heart rate estimation errors when compared to peak detection algorithms designed for NSR. METHODS: Our peak detection method is composed of a sequential series of algorithms that are combined to discriminate the various arrhythmias described above. Moreover, a novel Poincaré plot scheme is used to discriminate between basal heart rate AF and rapid ventricular response (RVR) AF, and to differentiate PAC/PVC from NSR and AF. Training of the algorithm was performed only with Samsung Simband smartwatch data, whereas independent testing data, which had more samples than the training data, were obtained from Samsung's Gear S3 and Galaxy Watch 3. RESULTS: The new PPG peak detection algorithm provides significantly lower average heart rate and interbeat interval beat-to-beat estimation errors (30% and 66% lower) and mean heart rate and mean interbeat interval estimation errors (60% and 77% lower) when compared to the best of the seven other traditional peak detection algorithms that are known to be accurate for NSR. Our new PPG peak detection algorithm was also the overall best performer for the other arrhythmias. CONCLUSION: The proposed method for PPG peak detection automatically detects and discriminates between various arrhythmias among different waveforms of PPG data, delivers significantly lower heart rate estimation errors for participants with AF, and reduces the number of false negative peaks. SIGNIFICANCE: By enabling accurate determination of heart rate despite the presence of AF with rapid ventricular response or PAC/PVCs, we enable clinicians to make more accurate recommendations for heart rate control from PPG data.
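
For reference, the sketch below computes the standard Poincaré-plot descriptors (SD1, SD2) from interbeat intervals; the actual decision thresholds used above to separate AF, RVR AF, and PAC/PVC regimes are not reproduced.

```python
# Minimal sketch of Poincaré-plot descriptors computed from interbeat intervals.
import numpy as np

def poincare_sd1_sd2(ibi_ms):
    """ibi_ms: 1-D array of interbeat intervals in milliseconds."""
    x, y = ibi_ms[:-1], ibi_ms[1:]                  # successive IBI pairs (the Poincaré plot)
    diff, summ = (y - x) / np.sqrt(2), (y + x) / np.sqrt(2)
    sd1, sd2 = diff.std(ddof=1), summ.std(ddof=1)   # short- and long-term variability
    return sd1, sd2, sd1 / sd2

# sd1, sd2, ratio = poincare_sd1_sd2(np.diff(peak_times_ms))
```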


Subjects
Atrial Fibrillation , Ventricular Premature Complexes , Algorithms , Atrial Fibrillation/diagnosis , Electrocardiography , Heart Rate/physiology , Humans , Photoplethysmography/methods , Ventricular Premature Complexes/diagnosis
17.
J Imaging ; 7(9)2021 Aug 27.
Article in English | MEDLINE | ID: mdl-34460799

ABSTRACT

We provide a comprehensive and in-depth overview of the various approaches applicable to the recognition of Data Matrix codes in arbitrary images. All presented methods use the typical "L" shaped Finder Pattern to locate the Data Matrix code in the image. Well-known image processing techniques such as edge detection, adaptive thresholding, or connected component labeling are used to identify the Finder Pattern. The recognition rate of the compared methods was tested on a set of images with Data Matrix codes, which is published together with the article. The experimental results show that methods based on adaptive thresholding achieved a better recognition rate than methods based on edge detection.
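
A small OpenCV sketch of the adaptive-thresholding route: binarize, extract connected components, and keep roughly square components as Data Matrix candidates. The "L"-shaped Finder Pattern verification step is deliberately omitted, and the parameters are assumptions.

```python
# Sketch: adaptive thresholding plus connected-component filtering to produce
# Data Matrix candidate boxes; the Finder Pattern check itself is omitted.
import cv2

def datamatrix_candidates(gray, block=35, c=10, min_area=400):
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY_INV, block, c)
    n, _, stats, _ = cv2.connectedComponentsWithStats(binary)
    boxes = []
    for i in range(1, n):                            # label 0 is the background
        x, y, w, h, area = stats[i]
        if area >= min_area and 0.7 <= w / float(h) <= 1.3:
            boxes.append((x, y, w, h))
    return boxes

# candidates = datamatrix_candidates(cv2.imread("label.png", cv2.IMREAD_GRAYSCALE))
```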

18.
Biomed Tech (Berl) ; 66(3): 293-304, 2021 Jun 25.
Article in English | MEDLINE | ID: mdl-34062633

ABSTRACT

Damage to the spinal cord due to vertebral fractures may result in loss of sensation and muscle function, either permanently or temporarily, and the neurological condition of the patient can be improved only with early detection and treatment of the injury. This paper proposes a spinal cord segmentation and injury detection system based on the proposed Crow Search-Rider Optimization-based DCNN (CS-ROA DCNN) method, which can detect injury in the spinal cord effectively. Initially, segmentation of the CT image of the spinal cord is performed using an adaptive thresholding method, after which localization of the disc is performed using the Sparse FCM clustering algorithm (Sparse-FCM). The localized discs are subjected to a feature extraction process, where the features necessary for classification are extracted. Classification is performed using a DCNN trained with the proposed CS-ROA, which is the integration of the Crow Search Algorithm (CSA) and the Rider Optimization Algorithm (ROA). The experimentation is performed using evaluation metrics such as accuracy, sensitivity, and specificity. The proposed method achieved accuracy, sensitivity, and specificity of 0.874, 0.8961, and 0.8828, respectively, demonstrating the effectiveness of the proposed CS-ROA DCNN method in spinal cord injury detection.
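
A plain fuzzy c-means sketch in NumPy, standing in for the Sparse-FCM localization step mentioned above; the sparsity term and the CS-ROA-trained DCNN classifier are not reproduced, and the cluster count and iteration budget are assumptions.

```python
# Standard (non-sparse) fuzzy c-means clustering, provided only to illustrate
# the kind of clustering used for disc localization.
import numpy as np

def fuzzy_c_means(X, n_clusters=3, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.random((len(X), n_clusters))
    u /= u.sum(axis=1, keepdims=True)                 # fuzzy memberships sum to 1
    for _ in range(iters):
        um = u ** m
        centers = (um.T @ X) / um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        u = 1.0 / (d ** (2 / (m - 1)))
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

# centers, memberships = fuzzy_c_means(segmented_pixel_features)
```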


Subjects
Spinal Cord Injuries/physiopathology , Spinal Cord/physiology , Algorithms , Animals , Crows , Humans , Image Processing, Computer-Assisted/methods , Neural Networks, Computer
19.
Entropy (Basel) ; 23(4)2021 Apr 14.
Article in English | MEDLINE | ID: mdl-33919807

ABSTRACT

When applying a diagnostic technique to complex systems whose dynamics, constraints, and environment evolve over time, being able to re-evaluate the residuals that detect faults and to propose the most appropriate ones can quickly prove valuable. For this purpose, the concept of adaptive diagnosis is introduced. In this work, the contributions of information theory are investigated in order to propose a fault-tolerant multi-sensor data fusion framework. This work is part of studies proposing an architecture that combines a stochastic filter for state estimation with a diagnostic layer, with the aim of providing a safe and accurate state estimate from potentially inconsistent or erroneous sensor measurements. From the design of the residuals, using the α-Rényi Divergence (α-RD), to the optimization of the decision threshold, through the establishment of a function dedicated to the choice of α at each moment, we detail each step of the proposed automated decision-support framework. We also discuss (1) the consequences of the degree of freedom provided by the α parameter and (2) the application-dictated policy for designing the α tuning function, which affects the overall performance of the system (detection rate, false alarm rate, and missed detection rate). Finally, we present a real application case on which this framework has been tested: the problem of multi-sensor localization, integrating sensors whose operating range varies according to the environment crossed, is used as a case study to illustrate the contributions of this approach and to show its performance.
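
As a small illustration of the residual itself, the sketch below computes the α-Rényi divergence between two discrete distributions; the adaptive choice of α and the optimized decision threshold described above are not shown.

```python
# Small sketch: alpha-Rényi divergence between two discrete distributions,
# usable as a residual between a predicted and an observed histogram.
import numpy as np

def renyi_divergence(p, q, alpha):
    """D_alpha(p || q) for discrete distributions (alpha > 0, alpha != 1)."""
    p = np.asarray(p, dtype=float); q = np.asarray(q, dtype=float)
    p, q = p / p.sum(), q / q.sum()
    return np.log(np.sum(p ** alpha * q ** (1.0 - alpha))) / (alpha - 1.0)

# residual = renyi_divergence(predicted_hist, observed_hist, alpha=0.5)
```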

20.
Stat Med ; 40(15): 3499-3515, 2021 07 10.
Article in English | MEDLINE | ID: mdl-33840134

ABSTRACT

Microbial community analysis is drawing growing attention due to the rapid development of high-throughput sequencing techniques. The observed data have the following typical characteristics: they are high-dimensional, compositional (lying in a simplex), and can even be leptokurtic and highly skewed due to the existence of overly abundant taxa, which makes conventional correlation analysis infeasible for studying the co-occurrence and co-exclusion relationships between microbial taxa. In this article, we address the challenges of covariance estimation for this kind of data. Assuming the basis covariance matrix lies in a well-recognized class of sparse covariance matrices, we adopt a proxy matrix known in the literature as the centered log-ratio covariance matrix. We construct a Median-of-Means (MOM) estimator for the centered log-ratio covariance matrix and propose a thresholding procedure that is adaptive to the variability of individual entries. By imposing a much weaker finite fourth moment condition, compared with the sub-Gaussianity condition in the literature, we derive the optimal rate of convergence under the spectral norm. In addition, we provide a theoretical guarantee on support recovery. The adaptive thresholding procedure applied to the MOM estimator is easy to implement and gains robustness when outliers or heavy-tailedness exist. Thorough simulation studies are conducted to show the advantages of the proposed procedure over state-of-the-art methods. Finally, we apply the proposed method to analyze a microbiome dataset from the human gut.
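
A hedged sketch of the two ingredients, under simplifying assumptions: a Median-of-Means estimate of the centered log-ratio covariance and an entry-wise adaptive threshold scaled by each entry's estimated variability. The block count, threshold constant, and pseudo-count are placeholders, not the paper's prescribed choices.

```python
# Sketch under assumptions: MOM estimate of the clr covariance followed by
# entry-wise adaptive thresholding proportional to each entry's variability.
import numpy as np

def clr(counts, pseudo=0.5):
    logs = np.log(counts + pseudo)
    return logs - logs.mean(axis=1, keepdims=True)      # centered log-ratio per sample

def mom_covariance(Z, n_blocks=5):
    blocks = np.array_split(np.random.permutation(len(Z)), n_blocks)
    covs = np.stack([np.cov(Z[idx], rowvar=False) for idx in blocks])
    return np.median(covs, axis=0)                      # entry-wise median of block covariances

def adaptive_threshold(cov, Z, lam=2.0):
    n = len(Z)
    Zc = Z - Z.mean(axis=0)
    # entry-wise standard error of the sample covariance, used to scale the threshold
    se = np.sqrt(((Zc[:, :, None] * Zc[:, None, :] - cov) ** 2).mean(axis=0) / n)
    out = np.where(np.abs(cov) >= lam * se, cov, 0.0)
    np.fill_diagonal(out, np.diag(cov))                 # never threshold the diagonal
    return out

# Z = clr(otu_counts)                                   # otu_counts: (samples, taxa)
# sigma_hat = adaptive_threshold(mom_covariance(Z), Z)
```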


Subjects
Microbiota , Computer Simulation , High-Throughput Nucleotide Sequencing , Humans