Results 1 - 20 of 45
1.
Cancer Invest ; : 1-16, 2024 Aug 27.
Article in English | MEDLINE | ID: mdl-39189645

ABSTRACT

This work proposes a liver cancer classification scheme comprising preprocessing, feature extraction, and classification stages. The source images are preprocessed using Gaussian filtering. For segmentation, the work proposes an LUV-transformation-based adaptive thresholding process. After segmentation, several features are extracted, including multi-texton features, Improved Local Ternary Pattern (LTP) features, and GLCM features. In the classification phase, an improved Deep Maxout model is proposed for liver cancer detection. The adopted scheme is evaluated against other schemes on various metrics. At a learning percentage of 60%, the improved Deep Maxout model achieved a higher F-measure (0.94) for classifying liver cancer, whereas previous methods, including Support Vector Machine (SVM), Random Forest (RF), Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), K-Nearest Neighbor (KNN), Deep Maxout, Convolutional Neural Network (CNN), and a DL model, yield lower F-measure values. The improved Deep Maxout model also achieved minimal False Positive Rate (FPR) and False Negative Rate (FNR) values, with the best outcomes among the compared models for liver cancer classification.
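The Local Ternary Pattern step can be illustrated with a minimal sketch of the classic 3×3 LTP coding (the abstract does not specify the "improved" variant, so this shows only the standard formulation; the window layout and threshold `t` are illustrative):

```python
def ltp_codes(window, t=5):
    """Classic Local Ternary Pattern for a 3x3 window.
    Each of the 8 neighbors is coded +1, 0, or -1 relative to the
    center pixel (with tolerance t), then the ternary pattern is split
    into an upper and a lower binary code, as in the standard LTP."""
    center = window[1][1]
    # Neighbors in clockwise order starting at the top-left pixel.
    neighbors = [window[0][0], window[0][1], window[0][2], window[1][2],
                 window[2][2], window[2][1], window[2][0], window[1][0]]
    ternary = [1 if p >= center + t else (-1 if p <= center - t else 0)
               for p in neighbors]
    upper = sum((1 if s == 1 else 0) << i for i, s in enumerate(ternary))
    lower = sum((1 if s == -1 else 0) << i for i, s in enumerate(ternary))
    return upper, lower
```

Histograms of these codes over image patches would then serve as texture features alongside the multi-texton and GLCM descriptors.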

2.
Sensors (Basel) ; 24(12)2024 Jun 14.
Article in English | MEDLINE | ID: mdl-38931631

ABSTRACT

To achieve high-precision geomagnetic matching navigation, a reliable geomagnetic anomaly basemap is essential. However, the accuracy of the geomagnetic anomaly basemap is often compromised by noise inherent in data acquisition and in the integration of multiple data sources. To address this challenge, a denoising approach using an improved multiscale wavelet transform is proposed. The denoising process involves an iterative multiscale wavelet transform, which leverages the structural characteristics of the geomagnetic anomaly basemap to extract statistical information on model residuals. This information serves as the a priori knowledge for determining the Bayes estimation threshold needed to obtain an optimal wavelet threshold. Additionally, the entropy method is employed to integrate three commonly used evaluation indexes (the signal-to-noise ratio, root mean square (RMS), and smoothing degree). A fusion model of soft and hard threshold functions is devised to mitigate the inherent drawbacks of a single threshold function. During denoising, an Elastic Net regularization term is introduced to enhance the accuracy and stability of the results. To validate the proposed method, denoising experiments are conducted using simulation data from a sphere magnetic anomaly model and measured data from a Pacific Ocean sea area. The denoising performance is compared with Gaussian filtering, mean filtering, and soft- and hard-threshold wavelet transform algorithms. The experimental results, for both simulated and measured data, demonstrate that the proposed method excels in denoising effectiveness: it maintains high accuracy, preserves image details while effectively removing noise, and optimizes the signal-to-noise ratio, structural similarity, root mean square error, and smoothing degree of the denoised image.
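The soft/hard threshold fusion described above can be sketched as a convex combination of the two classical rules (a minimal sketch; the paper's actual fusion model and Bayes-estimated threshold are not detailed in the abstract, and `alpha` is an assumed mixing weight):

```python
def soft(w, t):
    # Soft thresholding: zero out small coefficients, shrink the rest toward zero.
    return 0.0 if abs(w) <= t else (abs(w) - t) * (1 if w > 0 else -1)

def hard(w, t):
    # Hard thresholding: zero out small coefficients, keep the rest unchanged.
    return 0.0 if abs(w) <= t else w

def fused(w, t, alpha=0.5):
    """Convex combination of the soft and hard rules. alpha trades the
    bias of soft thresholding against the discontinuity of hard
    thresholding at the threshold t."""
    return alpha * soft(w, t) + (1 - alpha) * hard(w, t)
```

Applied coefficient-wise to the wavelet decomposition, such a fused rule avoids the worst drawbacks of either single rule.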

3.
Sensors (Basel) ; 24(3)2024 Jan 27.
Article in English | MEDLINE | ID: mdl-38339555

ABSTRACT

The zero-velocity update (ZUPT) algorithm, which uses foot-mounted inertial sensors, is a pivotal advancement for pedestrian navigation accuracy. Its performance hinges on accurately identifying periods of zero velocity during human movement. This paper introduces an adaptive sliding-window technique that leverages the Fourier transform to precisely isolate the pedestrian's gait frequency from spectral data. Building on this, the algorithm adaptively adjusts the zero-velocity detection threshold according to the identified gait frequency, significantly refining the accuracy of zero-velocity interval detection. Experimental evaluations reveal that this method outperforms traditional fixed-threshold approaches, enhancing precision and minimizing false positives. Single-step estimation experiments show the algorithm adapts to motion states such as slow walking, fast walking, and running. Additionally, the paper demonstrates pedestrian trajectory localization experiments under a variety of walking conditions. These tests confirm that the proposed method substantially improves the performance of the ZUPT algorithm, highlighting its potential for pedestrian navigation systems.

4.
Int Ophthalmol ; 44(1): 91, 2024 Feb 17.
Article in English | MEDLINE | ID: mdl-38367192

ABSTRACT

BACKGROUND: The timely diagnosis of medical conditions, particularly diabetic retinopathy, relies on the identification of retinal microaneurysms. However, the commonly used retinography method poses a challenge due to the diminutive dimensions and limited differentiation of microaneurysms in images. PROBLEM STATEMENT: Automated identification of microaneurysms becomes crucial, necessitating the use of comprehensive ad-hoc processing techniques. Although fluorescein angiography enhances detectability, its invasiveness limits its suitability for routine preventative screening. OBJECTIVE: This study proposes a novel approach for detecting retinal microaneurysms using a fundus scan, leveraging circular reference-based shape features (CR-SF) and radial gradient-based texture features (RG-TF). METHODOLOGY: The proposed technique involves extracting CR-SF and RG-TF for each candidate microaneurysm, employing a robust back-propagation machine learning method for training. During testing, extracted features from test images are compared with training features to categorize microaneurysm presence. RESULTS: The experimental assessment utilized four datasets (MESSIDOR, Diaretdb1, e-ophtha-MA, and ROC), employing various measures. The proposed approach demonstrated high accuracy (98.01%), sensitivity (98.74%), specificity (97.12%), and area under the curve (91.72%). CONCLUSION: The presented approach showcases a successful method for detecting retinal microaneurysms using a fundus scan, providing promising accuracy and sensitivity. This non-invasive technique holds potential for effective screening in diabetic retinopathy and other related medical conditions.


Subjects
Diabetic Retinopathy , Microaneurysm , Humans , Diabetic Retinopathy/diagnosis , Microaneurysm/diagnosis , Algorithms , Image Interpretation, Computer-Assisted/methods , Machine Learning , Fundus Oculi
5.
Sensors (Basel) ; 23(13)2023 Jun 21.
Article in English | MEDLINE | ID: mdl-37447640

ABSTRACT

Modern home automation systems include features that enhance security, such as cameras and radars. This paper proposes a home security system that detects burglars by analyzing acoustic signals and instantly notifying the authorized person(s). The system architecture incorporates the Internet of Things (IoT), resulting in a networked, user-friendly system. The proposed system uses an adaptive detection algorithm, namely the "short-time-average through long-time-average" (STA/LTA) algorithm, implemented on an IoT device (Arduino Due) to detect people's acoustic activity for home/office security. The performance of the proposed system is evaluated using 10 acoustic signals representing actual events and background noise, generated by sounds such as keys shaking, a small object falling, a plastic bag crinkling, speech, and footsteps. The effects of the algorithm's parameters on system performance are thoroughly investigated.
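The "short-time-average through long-time-average" (STA/LTA) detector named above can be sketched as follows (window lengths and trigger ratio are illustrative defaults, not the paper's tuned values):

```python
from collections import deque

def sta_lta_trigger(samples, n_sta=5, n_lta=50, ratio=3.0):
    """STA/LTA detector: flag sample indices where the short-time
    average of the signal energy exceeds `ratio` times the long-time
    average, i.e. where a transient acoustic event stands out against
    the background noise floor."""
    sta_win, lta_win = deque(maxlen=n_sta), deque(maxlen=n_lta)
    triggers = []
    for i, x in enumerate(samples):
        e = x * x  # instantaneous energy
        sta_win.append(e)
        lta_win.append(e)
        if len(lta_win) == n_lta:  # wait until the long window is full
            sta = sum(sta_win) / len(sta_win)
            lta = sum(lta_win) / n_lta
            if lta > 0 and sta / lta > ratio:
                triggers.append(i)
    return triggers
```

On a microcontroller the two averages would be updated incrementally rather than re-summed, but the decision rule is the same.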


Subjects
Acoustics , Sound , Humans , Algorithms , Automation , Internet
6.
Sensors (Basel) ; 23(20)2023 Oct 11.
Article in English | MEDLINE | ID: mdl-37896490

ABSTRACT

During short baseline measurements in the Real-Time Kinematic Global Navigation Satellite System (GNSS-RTK), multipath error has a significant impact on the quality of observed data. Aiming at the characteristics of multipath error in GNSS-RTK measurements, a novel method that combines improved complete ensemble empirical mode decomposition with adaptive noise (ICEEMDAN) and adaptive wavelet packet threshold denoising (AWPTD) is proposed to reduce the effects of multipath error in GNSS-RTK measurements through modal function decomposition, effective coefficient sieving, and adaptive thresholding denoising. It first utilizes the ICEEMDAN algorithm to decompose the observed data into a series of intrinsic mode functions (IMFs). Then, a novel IMF selection method is designed based on information entropy to accurately locate the IMFs containing multipath error information. Finally, an optimized adaptive denoising method is applied to the selected IMFs to preserve the original signal characteristics to the maximum possible extent and improve the accuracy of the multipath error correction model. This study shows that the ICEEMDAN-AWPTD algorithm provides a multipath error correction model with higher accuracy compared to singular filtering algorithms based on the results of simulation data and GNSS-RTK data. After the multipath correction, the accuracy of the E, N, and U coordinates increased by 49.2%, 65.1%, and 56.6%, respectively.

7.
Sensors (Basel) ; 23(11)2023 May 24.
Article in English | MEDLINE | ID: mdl-37299742

ABSTRACT

This paper demonstrates an intruder detection system using a strain-based optical fiber Bragg grating (FBG), machine learning (ML), and adaptive thresholding to classify the intruder as no intruder, intruder, or wind at low levels of signal-to-noise ratio. We demonstrate the intruder detection system using a portion of a real fence manufactured and installed around one of the engineering college's gardens at King Saud University. The experimental results show that adaptive thresholding can help improve the performance of machine learning classifiers, such as linear discriminant analysis (LDA) or logistic regression algorithms in identifying an intruder's existence at low optical signal-to-noise ratio (OSNR) scenarios. The proposed method can achieve an average accuracy of 99.17% when the OSNR level is <0.5 dB.


Subjects
Machine Learning , Optical Fibers , Humans , Algorithms , Discriminant Analysis
8.
Entropy (Basel) ; 25(11)2023 Nov 16.
Article in English | MEDLINE | ID: mdl-37998238

ABSTRACT

Over the past few years, we have seen an increased need to analyze the dynamically changing behaviors of economic and financial time series. These needs have led to significant demand for methods that denoise non-stationary time series across time and for specific investment horizons (scales) and localized windows (blocks) of time. Wavelets have long been known to decompose non-stationary time series into their different components or scale pieces. Recent methods satisfying this demand first decompose the non-stationary time series using wavelet techniques and then apply a thresholding method to separate and capture the signal and noise components of the series. Traditionally, wavelet thresholding methods rely on the discrete wavelet transform (DWT), which is a static thresholding technique that may not capture the time series of the estimated variance in the additive noise process. We introduce a novel continuous wavelet transform (CWT) dynamically optimized multivariate thresholding method (WaveL2E). Applying this method, we are simultaneously able to separate and capture the signal and noise components while estimating the dynamic noise variance. Our method shows improved results when compared to well-known methods, especially for high-frequency signal-rich time series, typically observed in finance.

9.
Stat Med ; 40(15): 3499-3515, 2021 07 10.
Article in English | MEDLINE | ID: mdl-33840134

ABSTRACT

Microbial community analysis is drawing growing attention due to the rapid development of high-throughput sequencing techniques. The observed data have the following typical characteristics: they are high-dimensional, compositional (lying in a simplex), and may even be leptokurtic and highly skewed due to overly abundant taxa, which makes conventional correlation analysis infeasible for studying co-occurrence and co-exclusion relationships between microbial taxa. In this article, we address the challenges of covariance estimation for this kind of data. Assuming the basis covariance matrix lies in a well-recognized class of sparse covariance matrices, we adopt a proxy matrix known in the literature as the centered log-ratio covariance matrix. We construct a Median-of-Means (MOM) estimator for the centered log-ratio covariance matrix and propose a thresholding procedure that is adaptive to the variability of individual entries. By imposing a much weaker finite fourth moment condition, compared with the sub-Gaussianity condition in the literature, we derive the optimal rate of convergence under the spectral norm. In addition, we provide a theoretical guarantee on support recovery. The adaptive thresholding procedure of the MOM estimator is easy to implement and gains robustness when outliers or heavy-tailedness exist. Thorough simulation studies show the advantages of the proposed procedure over state-of-the-art methods. Finally, we apply the proposed method to analyze a human gut microbiome dataset.
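The Median-of-Means building block can be sketched for the scalar case (the paper applies it entry-wise to the centered log-ratio covariance matrix; the block count here is illustrative):

```python
import statistics

def median_of_means(xs, n_blocks=5):
    """Median-of-Means: split the sample into blocks, average each
    block, and take the median of the block means. A single heavy-tailed
    outlier corrupts at most one block mean, so the median stays close
    to the true mean, matching the finite-fourth-moment setting above."""
    k = len(xs) // n_blocks
    means = [sum(xs[i * k:(i + 1) * k]) / k for i in range(n_blocks)]
    return statistics.median(means)
```

With one extreme outlier the plain sample mean is dragged far off, while the MOM estimate is essentially unaffected, which is the robustness the adaptive thresholding procedure inherits.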


Subjects
Microbiota , Computer Simulation , High-Throughput Nucleotide Sequencing , Humans
10.
Entropy (Basel) ; 23(4)2021 Apr 14.
Article in English | MEDLINE | ID: mdl-33919807

ABSTRACT

When applying a diagnostic technique to complex systems whose dynamics, constraints, and environment evolve over time, it quickly becomes worthwhile to re-evaluate the residuals capable of detecting faults and to propose the most appropriate ones. For this purpose, the concept of adaptive diagnosis is introduced. In this work, the contributions of information theory are investigated in order to propose a fault-tolerant multi-sensor data fusion framework. This work is part of studies proposing an architecture that combines a stochastic filter for state estimation with a diagnostic layer, with the aim of producing a safe and accurate state estimate from potentially inconsistent or erroneous sensor measurements. From the design of the residuals, using the α-Rényi divergence (α-RD), to the optimization of the decision threshold, through the establishment of a function dedicated to the choice of α at each moment, we detail each step of the proposed automated decision-support framework. We also dwell on (1) the consequences of the degree of freedom provided by the α parameter and (2) the application-dictated policy for designing the α tuning function, which affects the overall performance of the system (detection rate, false alarm rate, and missed detection rate). Finally, we present a real application case on which this framework has been tested: multi-sensor localization integrating sensors whose operating range varies according to the environment crossed, a case study that illustrates the contributions of such an approach and shows its performance.

11.
Biomed Eng Online ; 19(1): 73, 2020 Sep 15.
Article in English | MEDLINE | ID: mdl-32933534

ABSTRACT

BACKGROUND: Intracranial aneurysm is a common type of cerebrovascular disease, with a risk of devastating subarachnoid hemorrhage if ruptured. Accurate computer-aided detection of aneurysms can help doctors improve diagnostic accuracy and is very helpful in reducing the risk of subarachnoid hemorrhage. Aneurysms are detected in 2D or 3D images from different modalities. 3D images can provide more vascular information than 2D images, but detection in 3D is more difficult. Detection performance on 2D images depends on the viewing angle; several angles may be needed to identify an aneurysm. As the gold standard for the diagnosis of vascular diseases, detection on digital subtraction angiography (DSA) has more clinical value than other modalities. In this study, we propose an adaptive multiscale filter to detect intracranial aneurysms on 3D-DSA. METHODS: Adaptive aneurysm detection consists of three parts. The first is a filter based on Hessian matrix eigenvalues, whose parameters are obtained automatically by Bayesian optimization. The second is aneurysm extraction based on region growing and adaptive thresholding. The third is an iterative detection strategy for multiple aneurysms. RESULTS: The proposed method was quantitatively evaluated on datasets of 145 patients. The results showed a detection precision of 94.6% and a sensitivity of 96.4%, with a false-positive rate of 6.2%. Among aneurysms smaller than 5 mm, 93.9% were found. Compared with aneurysm detection on 2D-DSA, automatic detection on 3D-DSA can effectively reduce the misdiagnosis rate and obtain more accurate results. Compared with detection in other modalities, we also obtain similar or better performance. CONCLUSIONS: The experimental results show that the proposed method is stable and reliable for aneurysm detection, providing an option for doctors to accurately diagnose aneurysms.


Subjects
Angiography, Digital , Imaging, Three-Dimensional/methods , Intracranial Aneurysm/diagnostic imaging , Automation , Bayes Theorem , Humans
12.
Sensors (Basel) ; 19(9)2019 May 08.
Article in English | MEDLINE | ID: mdl-31071989

ABSTRACT

This paper presents an automatic parameter tuning procedure specially developed for a dynamic adaptive thresholding algorithm for fruit detection. One of the algorithm's major strengths is its high detection performance using a small set of training images. The algorithm enables robust detection in highly variable lighting conditions. The image is dynamically split into variably sized regions, where each region has approximately homogeneous lighting conditions. Nine thresholds were selected to accommodate three different illumination levels for three different dimensions in four color spaces: RGB, HSI, LAB, and NDI. Each color space uses a different method to represent a pixel in an image: RGB (Red, Green, Blue), HSI (Hue, Saturation, Intensity), LAB (Lightness, Green to Red and Blue to Yellow), and NDI (Normalized Difference Index, which represents the normalized difference between the RGB color dimensions). The thresholds were selected by quantifying the required relation between the true positive rate and false positive rate. A tuning process was developed to determine the best-fit values of the algorithm parameters to enable easy adaptation to different kinds of fruits (shapes, colors) and environments (illumination conditions). Extensive analyses were conducted on three different databases acquired in natural growing conditions: red apples (nine images with 113 apples), green grape clusters (129 images with 1078 grape clusters), and yellow peppers (30 images with 73 peppers). These databases are provided as part of this paper for future developments. The algorithm was evaluated using cross-validation with 70% of images for training and 30% for testing. The algorithm successfully detected apples and peppers in variable lighting conditions, resulting in F-scores of 93.17% and 99.31%, respectively. Results show the importance of the tuning process for generalizing the algorithm to different kinds of fruits and environments.
In addition, this research revealed the importance of evaluating different color spaces, since for each kind of fruit a different color space might be superior to the others. The LAB color space is most robust to noise. The algorithm is robust to changes in the threshold learned by the training process and to noise effects in images.
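The NDI dimension mentioned above can be sketched per pixel (the sign convention (G - R)/(G + R) is an assumption; the abstract only states that NDI normalizes the difference between RGB dimensions):

```python
def ndi(r, g):
    """Normalized Difference Index between the green and red channels
    of a pixel, one common convention for the NDI color dimension used
    in vegetation/fruit segmentation. Returns a value in [-1, 1];
    positive values indicate green-dominant pixels."""
    return 0.0 if r + g == 0 else (g - r) / (g + r)
```

A per-region threshold on this index (one of the nine thresholds, tuned per illumination level) would then separate fruit candidates from foliage or background.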


Subjects
Algorithms , Fruit/anatomy & histology , Automation , Capsicum/anatomy & histology , Color , Databases as Topic , Image Processing, Computer-Assisted , Malus/anatomy & histology , ROC Curve , Vitis/anatomy & histology
13.
Biomed Eng Online ; 17(1): 89, 2018 Jun 20.
Article in English | MEDLINE | ID: mdl-29925379

ABSTRACT

BACKGROUND: Accurate nuclei detection and segmentation in histological images is essential for many clinical purposes. While manual annotations are time-consuming and operator-dependent, fully automated segmentation remains a challenging task due to the high variability of cell intensity, size, and morphology. Most proposed algorithms for automated nuclei segmentation were designed for a specific organ or tissue. RESULTS: The aim of this study was to develop and validate a fully automated multiscale method, named MANA (Multiscale Adaptive Nuclei Analysis), for nuclei segmentation in different tissues and magnifications. MANA was tested on a dataset of H&E-stained tissue images with more than 59,000 annotated nuclei, taken from six organs (colon, liver, bone, prostate, adrenal gland, and thyroid) and three magnifications (10×, 20×, 40×). Automatic results were compared with manual segmentations and with three open-source software packages designed for nuclei detection. For each organ, MANA always obtained an F1-score higher than 0.91, with an average F1 of 0.9305 ± 0.0161. The average computational time was about 20 s, independent of the number of nuclei to be detected (in any case, higher than 1000), indicating the efficiency of the proposed technique. CONCLUSION: To the best of our knowledge, MANA is the first fully automated multiscale and multi-tissue algorithm for nuclei detection. Overall, the robustness and versatility of MANA achieved, on different organs and magnifications, performances in line with or better than those of state-of-the-art algorithms optimized for single tissues.


Subjects
Cell Nucleus/metabolism , Eosine Yellowish-(YS)/metabolism , Hematoxylin/metabolism , Image Processing, Computer-Assisted , Algorithms , Humans , Staining and Labeling
14.
J Med Syst ; 40(4): 82, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26811073

ABSTRACT

Detection of masses in mammograms for early diagnosis of breast cancer is a significant task in reducing the mortality rate. However, in some cases, screening for masses is difficult for the radiologist due to variation in contrast, fuzzy edges, and noisy mammograms. Masses and micro-calcifications are the distinctive signs for diagnosis of breast cancer. This paper presents a method for mass enhancement using a piecewise linear operator in combination with wavelet processing of mammographic images. The method includes artifact suppression and pectoral muscle removal based on morphological operations. Finally, mass segmentation using an adaptive threshold technique is carried out to separate the mass from the background. The proposed method has been tested on 130 (45 + 85) images, with 90.9% and 91% True Positive Fraction (TPF) at 2.35 and 2.1 average False Positives per Image (FP/I), on two different databases: the Mammographic Image Analysis Society (MIAS) and the Digital Database for Screening Mammography (DDSM). The results show that the proposed technique improves diagnosis in early breast cancer detection.


Subjects
Breast Neoplasms/diagnosis , Image Processing, Computer-Assisted/methods , Mammography/methods , Wavelet Analysis , Algorithms , Breast Neoplasms/pathology , Databases, Factual , Female , Humans , Reproducibility of Results
15.
Article in English | MEDLINE | ID: mdl-39036745

ABSTRACT

The goal of this study was to develop an image analysis algorithm for quantifying the effects of remodeling on cortical bone during early fracture healing. An adaptive thresholding technique with boundary curvature and tortuosity control was developed to automatically identify the endocortical and pericortical boundaries in the presence of high-gradient bone mineral density (BMD) near the healing zone. The algorithm successfully segmented more than 47,000 microCT images from 12 healing ovine osteotomies and intact contralateral tibiae. Resampling techniques were used to achieve data dimensionality reduction on the segmented images, allowing characterization of radial and axial distributions of cortical BMD. Local (transverse slice) and total (whole bone) remodeling scores were produced. These surrogate measures of cortical remodeling derived from BMD revealed that cortical changes were detectable throughout the region covered by callus and that the localized loss of cortical BMD was highest near the osteotomy. Total remodeling score was moderately and significantly correlated with callus volume and mineral composition (r > 0.64, p < 0.05), suggesting that the cortex may be a source of mineral needed to build callus.

16.
Cytometry A ; 83(11): 1001-16, 2013 Nov.
Article in English | MEDLINE | ID: mdl-24105983

ABSTRACT

In this article, we explore adaptive global and local segmentation techniques for a lab-on-chip nutrition monitoring system (NutriChip). The experimental setup consists of Caco-2 intestinal cells that can be artificially stimulated to trigger an immune response. The eventual response is optically monitored using immunofluorescence techniques targeting toll-like receptor 2 (TLR2). Two problems of interest need to be addressed by means of image processing. First, a new cell sample must be properly classified as stimulated or not. Second, the location of the stained TLR2 must be recovered in case the sample has been stimulated. The algorithmic approach to solving these problems is based on the ability of a segmentation technique to properly segment fluorescent spots. The sample classification is based on the amount and intensity of the segmented pixels, while the segmented blobs provide an approximate localization of TLR2. A novel local thresholding algorithm and three well-known spot segmentation techniques are compared in this study. Quantitative assessment of these techniques based on real and synthesized data demonstrates the improved segmentation capabilities of the proposed algorithm.


Subjects
Biomarkers , Fluorescent Antibody Technique , Image Processing, Computer-Assisted , Toll-Like Receptor 2/isolation & purification , Caco-2 Cells , Humans , Toll-Like Receptor 2/genetics
17.
Biomed Phys Eng Express ; 9(4)2023 06 14.
Article in English | MEDLINE | ID: mdl-37279702

ABSTRACT

Background. In telecardiology, bio-signal acquisition, processing, and communication for clinical purposes occupy large storage and significant bandwidth over a communication channel. Electrocardiogram (ECG) compression with effective reproducibility is highly desired. In the present work, a compression technique for ECG signals with less distortion, using a non-decimated stationary wavelet transform with a run-length encoding scheme, is proposed. Method. A non-decimated stationary wavelet transform (NSWT) method has been developed to compress the ECG signals. The signal is subdivided into N levels with different thresholding values. Wavelet coefficients with values larger than the threshold are retained and the remaining are suppressed. The biorthogonal (bior) wavelet is employed, as it improves the compression ratio as well as the percentage root-mean-square difference (PRD) compared to the existing method, and exhibits improved results. After preprocessing, the coefficients are subjected to a Savitzky-Golay filter to remove corrupted signals. The wavelet coefficients are then quantized using dead-zone quantization, which eliminates values close to zero. A run-length encoding (RLE) scheme is applied to these values, resulting in compressed ECG signals. Results. The presented methodology has been evaluated on the MITDB arrhythmia database, which contains 4800 ECG fragments from forty-eight clinical records. The proposed technique achieved an average compression ratio of 33.12, PRD of 1.99, NPRD of 2.53, and QS of 16.57, making it a promising approach for various applications. Conclusion. The proposed technique exhibits a high compression ratio and reduced distortion compared to the existing method.
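The run-length encoding stage can be sketched as follows (a generic RLE over quantized coefficients; the paper's exact symbol format is not specified in the abstract):

```python
def rle_encode(coeffs):
    """Run-length encode a sequence of quantized coefficients as
    (value, count) pairs. Dead-zone quantization maps near-zero wavelet
    coefficients to exactly zero, producing long zero runs, which is
    what makes RLE effective in this pipeline."""
    runs = []
    for c in coeffs:
        if runs and runs[-1][0] == c:
            runs[-1][1] += 1  # extend the current run
        else:
            runs.append([c, 1])  # start a new run
    return [(v, n) for v, n in runs]

def rle_decode(runs):
    # Exact inverse: expand each (value, count) pair back to a sequence.
    return [v for v, n in runs for _ in range(n)]
```

Because decoding is exact, all reconstruction loss in the scheme comes from thresholding and quantization, not from the entropy-coding stage.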


Subjects
Data Compression , Wavelet Analysis , Algorithms , Data Compression/methods , Signal Processing, Computer-Assisted , Electrocardiography/methods
18.
Pathol Res Pract ; 248: 154694, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37494804

ABSTRACT

Histological analysis with microscopy is the gold standard to diagnose and stage cancer, where slides or whole slide images are analyzed for cell morphological and spatial features by pathologists. The nuclei of cancerous cells are characterized by nonuniform chromatin distribution, irregular shapes, and varying size. As nucleus area and shape alone carry prognostic value, detection and segmentation of nuclei are among the most important steps in disease grading. However, evaluation of nuclei is a laborious, time-consuming, and subjective process with large variation among pathologists. Recent advances in digital pathology have allowed significant applications in nuclei detection, segmentation, and classification, but automated image analysis is greatly affected by staining factors, scanner variability, and imaging artifacts, requiring robust image preprocessing, normalization, and segmentation methods for clinically satisfactory results. In this paper, we aimed to evaluate and compare the digital image analysis techniques used in clinical pathology and research in the setting of gastric cancer. A literature review was conducted to evaluate potential methods of improving nuclei detection. Digitized images of 35 patients from a retrospective cohort of gastric adenocarcinoma at Oulu University Hospital in 1987-2016 were annotated for nuclei (n = 9085) by expert pathologists and 14 images of different cancer types from public TCGA dataset with annotated nuclei (n = 7000) were used as a comparison to evaluate applicability in other cancer types. The detection and segmentation accuracy with the selected color normalization and stain separation techniques were compared between the methods. The extracted information can be supplemented by patient's medical data and fed to the existing statistical clinical tools or subjected to subsequent AI-assisted classification and prediction models. 
The performance of each method is evaluated by several metrics against the annotations done by expert pathologists. The F1-measure of 0.854 ± 0.068 is achieved with color normalization for the gastric cancer dataset, and 0.907 ± 0.044 with color deconvolution for the public dataset, showing comparable results to the earlier state-of-the-art works. The developed techniques serve as a basis for further research on application and interpretability of AI-assisted tools for gastric cancer diagnosis.


Subjects
Coloring Agents , Stomach Neoplasms , Humans , Stomach Neoplasms/pathology , Artifacts , Retrospective Studies , Algorithms , Image Processing, Computer-Assisted/methods , Cell Nucleus/metabolism
19.
Comput Struct Biotechnol J ; 21: 1102-1114, 2023.
Article in English | MEDLINE | ID: mdl-36789266

ABSTRACT

In the treatment of Non-Hodgkin lymphoma (NHL), multiple therapeutic options are available. Improving outcome predictions is essential to optimize treatment. The metabolic active tumor volume (MATV) has been shown to be a prognostic factor in NHL. It is usually retrieved using semi-automated thresholding methods based on standardized uptake values (SUV), calculated from 18F-Fluorodeoxyglucose Positron Emission Tomography (18F-FDG PET) images. However, there is currently no consensus method for NHL. The aim of this study was to review the literature on the different segmentation methods used and to evaluate selected methods using an in-house software tool. This tool, the MUltiple SUV Threshold (MUST)-segmenter, identifies tumor locations by placing seed points on the PET images, followed by region growing. Based on a literature review, 9 SUV thresholding methods were selected and MATVs were extracted. The MUST-segmenter was applied to a cohort of 68 patients with NHL. Differences in MATVs were assessed with paired t-tests, correlations, and distribution figures. High variability and significant differences between the MATVs based on different segmentation methods (p < 0.05) were observed in the NHL patients. Median MATVs ranged from 35 to 211 cc. The literature provides no consensus for determining MATV. Using the MUST-segmenter with the 9 selected SUV thresholding methods, we demonstrated a large and significant variation in MATVs. Identifying the most optimal segmentation method for patients with NHL is essential to further improve predictions of toxicity, response, and treatment outcomes, which can be facilitated by the MUST-segmenter.
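The seed-point region growing used by the MUST-segmenter can be sketched in 2D (a simplification; the actual tool operates on 3D PET volumes and applies 9 different SUV thresholding rules rather than one fixed threshold):

```python
from collections import deque

def grow_region(suv, seed, threshold):
    """Threshold-constrained region growing from a seed voxel on a 2D
    SUV map: starting at the seed, include every 4-connected neighbor
    whose SUV meets the threshold. The MATV is then the region size
    times the voxel volume."""
    rows, cols = len(suv), len(suv[0])
    region, queue = set(), deque([seed])
    while queue:
        r, c = queue.popleft()
        if (r, c) in region or not (0 <= r < rows and 0 <= c < cols):
            continue
        if suv[r][c] < threshold:
            continue  # below the SUV threshold: stop growing here
        region.add((r, c))
        queue.extend([(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)])
    return region
```

Running this with each of the candidate thresholding rules (fixed SUV, percentage of SUVmax, background-adaptive, and so on) on the same seed is what produces the divergent MATVs reported above.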

20.
Biomed Tech (Berl) ; 67(2): 105-117, 2022 Apr 26.
Article in English | MEDLINE | ID: mdl-35363448

ABSTRACT

In recent years, machine learning models based on surface electromyography (sEMG) signals have been rapidly established. Efficient classifiers aid the development of prosthetic arms for transhumeral amputees. This paper proposes a stacking-classifier-based system for classifying sEMG shoulder movements, demonstrating the possibility of classifying various shoulder motions of transhumeral amputees. To improve system performance, an adaptive threshold method and wavelet transformation have been applied for feature extraction. Six different classifiers, Support Vector Machine (SVM), Tree, Random Forest (RF), K-Nearest Neighbour (KNN), AdaBoost, and Naïve Bayes (NB), are designed to assess the sEMG data classification accuracy. With cross-validation, the accuracies of RF, Tree, and AdaBoost are 97%, 92%, and 92%, respectively. The stacking classifier provides an accuracy of 99.4% after combining the best-predicting classifiers.


Subjects
Amputees , Artificial Limbs , Algorithms , Arm , Bayes Theorem , Electromyography/methods , Humans , Shoulder , Support Vector Machine