Results 1 - 20 of 70
1.
Neuroradiology ; 64(10): 2085-2089, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35809100

ABSTRACT

A 23-year-old previously healthy man (Patient 1) and a 33-year-old woman with a past history of depression (Patient 2) developed neurological symptoms approximately 1 week after receipt of the first COVID-19 mRNA vaccination and deteriorated over the next week. Patient 1 reported nausea, headache, a high fever, and retrograde amnesia. Patient 2 reported visual disturbance, headache, dysarthria, a left forearm tremor, dysesthesia of the mouth and distal limbs, and visual agnosia. PCR test results for SARS-CoV-2 were negative. Complete blood cell count, biochemistry, antibody tests, and cerebrospinal fluid findings were unremarkable. Diffusion-weighted and fluid-attenuated inversion recovery MRI of the brain showed a high signal intensity lesion at the midline of the splenium of the corpus callosum, compatible with cytotoxic lesions of the corpus callosum (CLOCCs). High-dose intravenous methylprednisolone improved their symptoms and imaging findings. CLOCCs should be considered in patients with neurological manifestations after COVID-19 vaccination.


Subjects
Antineoplastic Agents , COVID-19 Vaccines , COVID-19 , Encephalitis , Adult , COVID-19/prevention & control , COVID-19 Vaccines/adverse effects , Corpus Callosum/diagnostic imaging , Corpus Callosum/pathology , Female , Headache , Humans , Magnetic Resonance Imaging , Male , SARS-CoV-2 , Vaccination , Young Adult
2.
Magn Reson Med ; 85(4): 2188-2200, 2021 04.
Article in English | MEDLINE | ID: mdl-33107119

ABSTRACT

PURPOSE: To assess the correlation and differences between common amide proton transfer (APT) quantification methods in the diagnosis of ischemic stroke. METHODS: Five APT quantification methods, including asymmetry analysis and its variants as well as two Lorentzian model-based methods, were applied to data acquired at 9.4T from six rats that underwent middle cerebral artery occlusion. Diffusion- and perfusion-weighted images and water relaxation time maps were also acquired to study the relationship of these conventional imaging modalities with the different APT quantification methods. RESULTS: The APT ischemic area estimates varied in size (Jaccard index: 0.544 ≤ J ≤ 0.971) and in the correlation of their distributions (Pearson correlation coefficient: 0.104 ≤ r ≤ 0.995), revealing discrepancies between the quantified ischemic areas. The Lorentzian methods produced the highest contrast-to-noise ratios (CNRs; 1.427 ≤ CNR ≤ 2.002) but generated APT ischemic areas comparable in size to the cerebral blood flow (CBF) deficit areas; asymmetry analysis and its variants produced APT ischemic areas smaller than the CBF deficit areas but larger than the apparent diffusion coefficient deficit areas, though with lower CNRs (0.561 ≤ CNR ≤ 1.083). CONCLUSION: The accuracy of each quantification method and its correlation with the underlying pathophysiology need further investigation in a larger-scale, multi-modality, multi-time-point clinical study. Future studies should report magnetization transfer ratio asymmetry results alongside their main findings to facilitate comparison of results between centers and with the published literature.
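For reference, the conventional asymmetry analysis referred to above computes the magnetization transfer ratio asymmetry, MTRasym(Δω) = (S(−Δω) − S(+Δω))/S0. A minimal sketch with toy values (not data from this study):

```python
import numpy as np

def mtr_asymmetry(s_neg, s_pos, s0):
    """MTRasym(dw) = (S(-dw) - S(+dw)) / S0 at each frequency offset dw."""
    return (np.asarray(s_neg) - np.asarray(s_pos)) / s0

# Toy z-spectrum values at +/-3.5 ppm (the amide resonance used for APT):
s0 = 1.0
s_pos = 0.62          # saturated signal at +3.5 ppm (label side)
s_neg = 0.70          # saturated signal at -3.5 ppm (reference side)
apt_weighted = mtr_asymmetry(s_neg, s_pos, s0)   # ~0.08
```

Passing arrays of signals over a range of offsets yields the full MTRasym spectrum rather than a single value.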


Subjects
Brain Ischemia , Brain Neoplasms , Ischemic Stroke , Stroke , Amides , Animals , Brain Ischemia/diagnostic imaging , Magnetic Resonance Imaging , Protons , Rats , Stroke/diagnostic imaging
4.
PeerJ Comput Sci ; 10: e2082, 2024.
Article in English | MEDLINE | ID: mdl-38855257

ABSTRACT

Background: Breast cancer remains a pressing global health concern, necessitating accurate diagnostics for effective interventions. Deep learning models (AlexNet, ResNet-50, VGG16, GoogLeNet) show remarkable microcalcification identification accuracy (>90%). However, their distinct architectures and methodologies pose challenges for clinical adoption. We propose an ensemble model that merges their complementary perspectives to enhance precision and clarify the critical factors for breast cancer intervention. Evaluation favored GoogLeNet and ResNet-50, driving their selection for the combined model to ensure improved precision and dependability in microcalcification detection in clinical settings. Methods: This study presents a comprehensive mammogram preprocessing framework using an optimized deep learning ensemble approach. The proposed framework begins with artifact removal using Otsu segmentation and morphological operations. Subsequent steps include image resizing, adaptive median filtering, and deep convolutional neural network (D-CNN) development via transfer learning with the ResNet-50 model. Hyperparameters are optimized, and an ensemble of the optimized models (AlexNet, GoogLeNet, VGG16, ResNet-50) is constructed to identify the localized areas of microcalcification. A rigorous evaluation protocol validates the efficacy of the individual models, culminating in the ensemble model demonstrating superior predictive accuracy. Results: Based on our analysis, the proposed ensemble model exhibited exceptional performance in the classification of microcalcifications. This was evidenced by the model's average confidence score, which indicated a high degree of dependability and certainty in differentiating these critical characteristics. The proposed model demonstrated a noteworthy average confidence level of 0.9305 in the classification of microcalcification, outperforming alternative models and providing substantial insights into its dependability. The average confidence of the ensemble model in classifying normal cases was 0.8859, reinforcing the model's consistent and dependable predictions. In addition, the ensemble model attained remarkably high performance in terms of accuracy, precision, recall, F1-score, and area under the curve (AUC). Conclusion: The proposed model's thorough dataset integration and focus on average confidence ratings within classes improve the accuracy and effectiveness of clinical breast cancer diagnosis. This study introduces a novel methodology that leverages an ensemble model and rigorous evaluation standards to substantially improve the accuracy and dependability of breast cancer diagnostics, specifically in the detection of microcalcifications.
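The confidence-averaging step at the heart of such an ensemble can be sketched as follows; the model outputs and two-class layout here are hypothetical, not the study's actual networks:

```python
import numpy as np

def ensemble_confidence(prob_maps):
    """Average the per-class softmax confidences of several base models.
    prob_maps: list of (n_samples, n_classes) arrays, rows summing to 1."""
    stacked = np.stack(prob_maps)        # (n_models, n_samples, n_classes)
    avg = stacked.mean(axis=0)           # mean confidence per class
    return avg, avg.argmax(axis=1)       # confidences and predicted class

# Hypothetical outputs of two base CNNs for three mammogram patches
# (class 0 = normal, class 1 = microcalcification):
p_resnet = np.array([[0.20, 0.80], [0.90, 0.10], [0.45, 0.55]])
p_googlenet = np.array([[0.10, 0.90], [0.80, 0.20], [0.55, 0.45]])
avg, labels = ensemble_confidence([p_resnet, p_googlenet])
```

The reported per-class "average confidence" then corresponds to averaging `avg` over the samples predicted into each class.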

5.
PeerJ Comput Sci ; 10: e1874, 2024.
Article in English | MEDLINE | ID: mdl-38481705

ABSTRACT

Epilepsy is a chronic, non-communicable disease caused by paroxysmal, abnormally synchronized electrical activity of brain neurons, and is one of the most common neurological diseases worldwide. Electroencephalography (EEG) is currently a crucial tool for epilepsy diagnosis. With the development of artificial intelligence, multi-view learning-based EEG analysis has become an important method for automatic epilepsy recognition, because EEG contains several distinct types of features, such as time-frequency, frequency-domain, and time-domain features. However, current multi-view learning still faces challenges; for example, the difference between samples of the same class from different views can be greater than the difference between samples of different classes from the same view. In view of this, we propose a shared hidden space-driven multi-view learning algorithm. The algorithm uses kernel density estimation to construct a shared hidden space and combines it with the original space to obtain an expanded space for multi-view learning. By constructing the expanded space and learning from the information of both the shared hidden space and the original space, the relevant information of samples within and across views can be fully utilized. Experimental results on an epilepsy dataset provided by the University of Bonn show that the proposed algorithm has promising performance, with an average classification accuracy of 0.9787, at least a 4% improvement over single-view methods.
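One plausible reading of the shared-hidden-space construction, sketched under the assumption that per-class kernel density scores serve as the hidden-space coordinates appended to the original features (the paper's exact formulation may differ):

```python
import numpy as np

def kde_log_density(X_ref, X, bandwidth=0.5):
    """Gaussian kernel density estimate of each row of X under the
    reference samples X_ref (up to a constant normalization factor)."""
    d2 = ((X[:, None, :] - X_ref[None, :, :]) ** 2).sum(axis=2)
    return np.log(np.exp(-d2 / (2 * bandwidth**2)).mean(axis=1) + 1e-300)

def expand_with_kde_space(X_train, y_train, X, bandwidth=0.5):
    """Append per-class KDE scores (the 'shared hidden space') to the
    original features, giving the expanded space used for learning."""
    hidden = np.column_stack(
        [kde_log_density(X_train[y_train == c], X, bandwidth)
         for c in np.unique(y_train)])
    return np.hstack([X, hidden])

rng = np.random.default_rng(0)
X_train = rng.normal(size=(40, 4))        # stand-in for EEG feature vectors
y_train = rng.integers(0, 2, size=40)     # two hypothetical classes
X_expanded = expand_with_kde_space(X_train, y_train, X_train)
```

Any downstream classifier can then be trained on `X_expanded`, so information from both spaces is used jointly.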

6.
Interdiscip Sci ; 16(1): 39-57, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37486420

ABSTRACT

Breast cancer is commonly diagnosed with mammography. Using image segmentation algorithms to separate lesion areas in mammograms can facilitate diagnosis by doctors and reduce their workload, which has important clinical significance. Because large, accurately labeled medical image datasets are difficult to obtain, traditional clustering algorithms are widely used in medical image segmentation as unsupervised models. However, traditional unsupervised clustering algorithms have limited learning ability, and some semi-supervised fuzzy clustering algorithms cannot fully mine the information of labeled samples, resulting in insufficient supervision. When faced with complex mammography images, these algorithms cannot accurately segment lesion areas. To address this, a semi-supervised fuzzy clustering algorithm based on knowledge weighting and cluster center learning (WSFCM_V) is presented. Based on prior knowledge, three learning modes are proposed: a knowledge weighting method for cluster centers, Euclidean distance weights for unlabeled samples, and learning from the cluster centers of labeled sample sets. These strategies improve clustering performance. On real breast molybdenum target (mammography) images, WSFCM_V is compared with currently popular semi-supervised and unsupervised clustering algorithms and achieves the best evaluation index values. Experimental results demonstrate that WSFCM_V attains higher segmentation accuracy than the existing clustering algorithms, both for larger lesion regions such as tumor areas and for smaller lesion areas such as calcification points.
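WSFCM_V builds on the standard fuzzy c-means membership update; the unweighted baseline update (without the paper's knowledge-weighting extensions) can be sketched as:

```python
import numpy as np

def fcm_memberships(X, centers, m=2.0):
    """Standard fuzzy c-means membership update:
    u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1)), so each row sums to 1."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)  # (n, c)
    d = np.fmax(d, 1e-12)                    # avoid division by zero
    ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=2)

# Toy 2-D "pixels" and two cluster centers:
X = np.array([[0.0, 0.0], [1.0, 1.0], [0.1, 0.0]])
centers = np.array([[0.0, 0.0], [1.0, 1.0]])
U = fcm_memberships(X, centers)              # fuzzy memberships, rows sum to 1
```

Iterating this update together with a weighted center update is the usual FCM loop; WSFCM_V additionally biases the centers with labeled-sample knowledge.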


Subjects
Fuzzy Logic , Molybdenum , Humans , Mammography , Algorithms , Cluster Analysis , Computer-Assisted Image Processing/methods
7.
PeerJ Comput Sci ; 10: e2180, 2024.
Article in English | MEDLINE | ID: mdl-39145215

ABSTRACT

Background: Bacterial image analysis plays a vital role in various fields, providing valuable information and insights for studying bacterial structural biology, diagnosing and treating infectious diseases caused by pathogenic bacteria, discovering and developing drugs that can combat bacterial infections, etc. As a result, it has prompted efforts to automate bacterial image analysis tasks. By automating analysis tasks and leveraging more advanced computational techniques, such as deep learning (DL) algorithms, bacterial image analysis can contribute to rapid, more accurate, efficient, reliable, and standardised analysis, leading to enhanced understanding, diagnosis, and control of bacterial-related phenomena. Methods: Three object detection networks of DL algorithms, namely SSD-MobileNetV2, EfficientDet, and YOLOv4, were developed to automatically detect Escherichia coli (E. coli) bacteria from microscopic images. The multi-task DL framework is developed to classify the bacteria according to their respective growth stages, which include rod-shaped cells, dividing cells, and microcolonies. Data preprocessing steps were carried out before training the object detection models, including image augmentation, image annotation, and data splitting. The performance of the DL techniques is evaluated using the quantitative assessment method based on mean average precision (mAP), precision, recall, and F1-score. The performance metrics of the models were compared and analysed. The best DL model was then selected to perform multi-task object detections in identifying rod-shaped cells, dividing cells, and microcolonies. Results: The output of the test images generated from the three proposed DL models displayed high detection accuracy, with YOLOv4 achieving the highest confidence score range of detection and being able to create different coloured bounding boxes for different growth stages of E. coli bacteria. 
In terms of statistical analysis, among the three proposed models, YOLOv4 demonstrated superior performance, achieving the highest mAP of 98% with the highest precision, recall, and F1-score of 86%, 97%, and 91%, respectively. Conclusions: This study has demonstrated the effectiveness, potential, and applicability of DL approaches in multi-task bacterial image analysis, focusing on automating the detection and classification of bacteria from microscopic images. The proposed models can output images with bounding boxes surrounding each detected E. coli bacterium, labelled with its growth stage and the confidence level of detection. All proposed object detection models achieved promising results, with YOLOv4 outperforming the other models.
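The precision, recall, and F1 figures above follow the usual detection-count definitions; a small sketch with hypothetical counts chosen to roughly reproduce the reported values:

```python
def detection_scores(tp, fp, fn):
    """Precision, recall, and F1 from detection counts
    (true positives, false positives, false negatives)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts (not from the study) giving roughly 86%/97%/91%:
p, r, f1 = detection_scores(tp=90, fp=15, fn=3)
```

mAP additionally averages precision over recall thresholds and classes, which requires the full ranked detection list rather than summary counts.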

8.
Magn Reson Med Sci ; 2024 Aug 14.
Article in English | MEDLINE | ID: mdl-39143021

ABSTRACT

The quantitative analysis of pulsed chemical exchange saturation transfer (CEST) using a full model-based method is computationally challenging, as it involves dealing with varying RF values during pulsed saturation. A power-equivalent continuous approximation of the B1 power is usually applied to accelerate the analysis. In line with recent consensus recommendations from the CEST community for pulsed CEST at 3T, which in particular recommend a high RF saturation power (B1 = 2.0 µT) for clinical application in brain tumors, this technical note investigated the feasibility of using average power (AP) as the continuous approximation. The simulated results revealed excellent performance of the AP continuous approximation in low-saturation-power scenarios, but discrepancies were observed in the z-spectra for the high-saturation-power cases. Caution should be taken, as the approximation may otherwise yield inaccurate fitted parameters, with differences exceeding 10% in the high-saturation-power cases.
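The average-power (AP) approximation replaces the pulse train with a continuous-wave B1 of equal mean square power; a sketch under the common definition B1_AP = sqrt(mean(B1(t)^2) x duty cycle), using a hypothetical Gaussian pulse shape (not the study's actual sequence):

```python
import numpy as np

def b1_average_power(b1_t, duty_cycle):
    """Continuous-wave B1 with the same average power as a pulse train:
    B1_AP = sqrt(mean(B1(t)^2) * duty_cycle), with the mean taken over
    the pulse itself and duty_cycle = t_pulse / (t_pulse + t_delay)."""
    return np.sqrt(np.mean(np.asarray(b1_t) ** 2) * duty_cycle)

t = np.linspace(-3, 3, 1001)              # normalized time axis
pulse = 3.0 * np.exp(-t**2 / 2.0)         # Gaussian pulse, peak B1 = 3 uT
b1_ap = b1_average_power(pulse, duty_cycle=0.5)
```

As the note observes, substituting `b1_ap` into a continuous-wave model is accurate at low power but can distort the fitted z-spectra at high saturation power.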

9.
Diagnostics (Basel) ; 14(14)2024 Jul 12.
Article in English | MEDLINE | ID: mdl-39061647

ABSTRACT

This project employs artificial intelligence, including machine learning and deep learning, to assess COVID-19 readmission risk in Malaysia. It offers tools to mitigate healthcare resource strain and enhance patient outcomes. This study outlines a methodology for classifying COVID-19 readmissions. It starts with dataset description and pre-processing, with data balancing performed through Random Oversampling (ROS), Borderline SMOTE (BSMOTE), and Adaptive Synthetic Sampling (ADASYN). Nine machine learning and ten deep learning techniques are applied, with stratified five-fold cross-validation for evaluation. Optuna is used for hyperparameter selection, while consistency in training hyperparameters is maintained. Evaluation metrics encompass accuracy, AUC, and training/inference times. Results were based on stratified five-fold cross-validation and the different data-balancing methods. Notably, CatBoost consistently excelled in accuracy and AUC across all experiments. Using ROS, CatBoost achieved the highest accuracy (0.9882 ± 0.0020) with an AUC of 1.0000 ± 0.0000. CatBoost maintained its superiority under BSMOTE and ADASYN as well. Deep learning approaches also performed well, with SAINT leading under ROS and TabNet leading under BSMOTE and ADASYN. Decision tree ensembles such as Random Forest and XGBoost consistently showed strong performance.
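Of the balancing methods above, plain random oversampling is the simplest; a self-contained sketch (not the study's implementation):

```python
import numpy as np

def random_oversample(X, y, seed=0):
    """Duplicate minority-class samples at random until every class
    matches the majority-class count (plain random oversampling)."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    idx = []
    for c, n in zip(classes, counts):
        members = np.flatnonzero(y == c)
        idx.extend(members)                                   # keep originals
        idx.extend(rng.choice(members, size=target - n, replace=True))
    idx = np.array(idx)
    return X[idx], y[idx]

X = np.arange(10).reshape(-1, 1).astype(float)
y = np.array([0] * 8 + [1] * 2)           # imbalanced: 8 vs 2
Xb, yb = random_oversample(X, y)          # balanced: 8 vs 8
```

Borderline SMOTE and ADASYN instead synthesize new minority samples by interpolating between neighbours, concentrating on hard or sparse regions.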

10.
Complex Intell Systems ; 9(3): 2713-2745, 2023.
Article in English | MEDLINE | ID: mdl-34777967

ABSTRACT

Computed Tomography (CT) is a widely used medical imaging modality in clinical medicine, because it produces excellent visualizations of fine structural details of the human body. In clinical procedures, it is desirable to acquire CT scans while minimizing the X-ray flux to prevent patients from being exposed to high radiation. However, these Low-Dose CT (LDCT) scanning protocols compromise the signal-to-noise ratio of the CT images because of noise and artifacts over the image space. Thus, various restoration methods have been published over the past three decades to produce high-quality CT images from LDCT images. More recently, as opposed to conventional LDCT restoration methods, Deep Learning (DL)-based LDCT restoration approaches have become common due to their data-driven nature, high performance, and fast execution. This study therefore aims to elaborate on the role of DL techniques in LDCT restoration and critically review the applications of DL-based approaches for LDCT restoration. To achieve this aim, different aspects of DL-based LDCT restoration applications were analyzed, including DL architectures, performance gains, functional requirements, and the diversity of objective functions. The outcome of the study highlights the existing limitations and future directions for DL-based LDCT restoration. To the best of our knowledge, there have been no previous reviews that specifically address this topic.

11.
Comput Methods Programs Biomed ; 242: 107807, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37778138

ABSTRACT

BACKGROUND AND OBJECTIVE: Knee osteoarthritis (OA) is a debilitating musculoskeletal disorder that causes functional disability. Automatic knee OA diagnosis has great potential to enable timely and early intervention, which can potentially reverse the degenerative process of knee OA. Yet it is a tedious task, given the heterogeneity of the disorder. Most of the proposed techniques demonstrate a single OA diagnostic task, largely based on the Kellgren-Lawrence (KL) standard, a composite score of only a few imaging features (i.e., osteophytes, joint space narrowing, and subchondral bone changes), so only one key disease pattern is tackled. The KL standard fails to represent the disease pattern of individual OA features, particularly osteophytes, joint-space narrowing, and pain intensity, which play a fundamental role in OA manifestation. In this study, we aim to develop a multitask model using convolutional neural network (CNN) feature extractors and machine learning classifiers to detect nine important OA features from plain radiography: KL grade, knee osteophytes (both knees; medial femoral: OSFM, medial tibial: OSTM, lateral femoral: OSFL, and lateral tibial: OSTL), joint-space narrowing (medial: JSM, and lateral: JSL), and patient-reported pain intensity. METHODS: We proposed a new feature extraction method that replaces the fully-connected layer with a global average pooling (GAP) layer. A comparative analysis was conducted to compare the efficacy of 16 different CNN feature extractors and three machine learning classifiers. RESULTS: Experimental results revealed the potential of CNN feature extractors for multitask diagnosis. The optimal model consisted of the VGG16-GAP feature extractor and a KNN classifier. This model not only outperformed the other tested models, it also outperformed state-of-the-art methods with higher balanced accuracy, higher Cohen's kappa, higher F1, and lower mean squared error (MSE) in the prediction of seven OA features. CONCLUSIONS: The proposed model demonstrates pain prediction on plain radiographs, as well as eight OA-related bony features. Future work should focus on exploring additional potential radiological manifestations of OA and their relation to therapeutic interventions.
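The GAP-based feature extraction described in the Methods reduces each channel's activation map to its spatial mean; a minimal sketch:

```python
import numpy as np

def global_average_pooling(feature_maps):
    """Collapse each channel's spatial map to its mean, turning a
    (channels, H, W) activation volume into a flat feature vector."""
    return feature_maps.mean(axis=(1, 2))

# Toy activation volume: 2 channels of 3x3 maps.
fmaps = np.arange(2 * 3 * 3, dtype=float).reshape(2, 3, 3)
vec = global_average_pooling(fmaps)       # one value per channel
```

Unlike a fully-connected layer, GAP has no trainable parameters and keeps the feature length equal to the channel count, which makes the extracted vector convenient input for classical classifiers such as KNN.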


Subjects
Knee Osteoarthritis , Osteophyte , Humans , Knee Osteoarthritis/diagnostic imaging , Osteophyte/diagnostic imaging , Knee Joint , Radiography , Tibia
12.
Front Bioeng Biotechnol ; 11: 1164655, 2023.
Article in English | MEDLINE | ID: mdl-37122858

ABSTRACT

Knee osteoarthritis is one of the most common musculoskeletal diseases and is usually diagnosed with medical imaging techniques. Conventionally, case identification using plain radiography is practiced. However, knee osteoarthritis is a three-dimensional pathology; hence, magnetic resonance imaging is the ideal modality to reveal hidden osteoarthritis features from a three-dimensional view. In this work, the feasibility of well-known convolutional neural network (CNN) structures (ResNet, DenseNet, VGG, and AlexNet) to distinguish knees with and without osteoarthritis (OA) is investigated. Using 3D convolutional layers, we demonstrated the potential of 3D convolutional neural networks of 13 different architectures in knee osteoarthritis diagnosis. We used transfer learning by transforming 2D pre-trained weights into 3D as initial weights for training the 3D models. The performance of the models was compared and evaluated based on performance metrics [balanced accuracy, precision, F1 score, and area under the receiver operating characteristic (ROC) curve (AUC)]. This study suggests that transfer learning indeed enhanced the performance of the models, especially for the ResNet and DenseNet models. Transfer learning-based models presented promising results, with ResNet34 achieving the best overall accuracy of 0.875 and an F1 score of 0.871. The results also showed that shallower networks yielded better performance than deeper neural networks, as demonstrated by ResNet18, DenseNet121, and VGG11 with AUC values of 0.945, 0.914, and 0.928, respectively. This encourages the application of 3D CNNs as a clinical diagnostic aid for knee osteoarthritis even under limited hardware conditions.
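One standard way to transform 2D pre-trained weights into 3D is kernel inflation: replicate the 2D kernel along the depth axis and rescale so a depth-constant input produces the same activation. Whether the authors used exactly this scheme is an assumption; the sketch shows the common variant:

```python
import numpy as np

def inflate_2d_kernel(w2d, depth):
    """Inflate a 2D conv kernel (out, in, kH, kW) into a 3D kernel
    (out, in, depth, kH, kW) by replicating along depth and dividing by
    depth, preserving the response to inputs constant along depth."""
    w3d = np.repeat(w2d[:, :, None, :, :], depth, axis=2)
    return w3d / depth

# Toy 2D kernel bank: 4 output channels, 3 input channels, 3x3 kernels.
w2d = np.ones((4, 3, 3, 3))
w3d = inflate_2d_kernel(w2d, depth=3)
```

The inflated tensors are then loaded as the initial weights of the 3D model before fine-tuning on volumetric MRI.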

13.
Life (Basel) ; 13(1)2023 Jan 01.
Article in English | MEDLINE | ID: mdl-36676073

ABSTRACT

The segmentation of the left ventricle (LV) is one of the fundamental procedures that must be performed to obtain quantitative measures of the heart, such as its volume, area, and ejection fraction. In clinical practice, the delineation of the LV is still often conducted semi-automatically, leaving it open to operator subjectivity. Automatic LV segmentation from echocardiography images is a challenging task due to poorly defined boundaries and operator dependency. Recent research has demonstrated that deep learning can perform the segmentation process automatically. However, the well-known state-of-the-art segmentation models still fall short in terms of accuracy and speed. This study aims to develop a single-stage lightweight segmentation model that precisely and rapidly segments the LV from 2D echocardiography images. In this research, a backbone network is used to acquire both low-level and high-level features. Two parallel blocks, known as the spatial feature unit and the channel feature unit, are employed to enhance and refine these features. The refined features are merged by an integrated unit to segment the LV. The performance of the model and the time taken to segment the LV are compared to other established segmentation models: DeepLab, FCN, and Mask R-CNN. The model achieved the highest values of the Dice similarity index (0.9446), intersection over union (0.8445), and accuracy (0.9742). The evaluation metrics and processing time demonstrate that the proposed model not only provides superior quantitative results but also trains and segments the LV in less time, indicating improved performance over competing segmentation models.
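The Dice similarity index and intersection over union reported above have standard definitions for binary masks:

```python
import numpy as np

def dice_and_iou(pred, truth):
    """Dice similarity index and intersection over union for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    dice = 2 * inter / (pred.sum() + truth.sum())
    return dice, inter / union

# Toy 2x3 masks: 2 overlapping pixels out of 3 predicted / 3 true.
pred = np.array([[1, 1, 0], [1, 0, 0]])
truth = np.array([[1, 1, 0], [0, 1, 0]])
dice, iou = dice_and_iou(pred, truth)
```

Dice weights the overlap against the mean mask size and IoU against the union, so Dice is always at least as large as IoU for the same masks.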

14.
Front Comput Neurosci ; 17: 1038636, 2023.
Article in English | MEDLINE | ID: mdl-36814932

ABSTRACT

Alzheimer's disease (AD) is a neurodegenerative disorder that causes memory degradation and cognitive function impairment in elderly people. The irreversible and devastating cognitive decline places large burdens on patients and society. So far, there is no effective treatment that can cure AD, but its progression in the early stage can be slowed down. Early and accurate detection is therefore critical for treatment. In recent years, deep-learning-based approaches have achieved great success in Alzheimer's disease diagnosis. The main objective of this paper is to review popular machine learning and deep learning methods used for the classification and prediction of AD using Magnetic Resonance Imaging (MRI). The methods reviewed in this paper include support vector machine (SVM), random forest (RF), convolutional neural network (CNN), autoencoder, deep learning, and transformer. This paper also reviews widely used feature extractors and the different types of input forms of convolutional neural networks. Finally, this review discusses challenges such as class imbalance and data leakage, along with trade-offs and suggestions regarding pre-processing techniques, deep learning, conventional machine learning methods, new techniques, and input type selection.

15.
Comput Intell Neurosci ; 2023: 4208231, 2023.
Article in English | MEDLINE | ID: mdl-36756163

ABSTRACT

Cardiac diseases are one of the key causes of death around the globe, and the number of heart patients increased considerably during the pandemic. Therefore, it is crucial to assess and analyze medical and cardiac images. Deep learning architectures, specifically convolutional neural networks, have become the primary choice for the assessment of cardiac medical images. The left ventricle is a vital part of the cardiovascular system, and its boundary and size play a significant role in the evaluation of cardiac function. Due to its automation and promising results, left ventricle segmentation using deep learning has attracted a lot of attention. This article presents a critical review of deep learning methods used for left ventricle segmentation from frequently used imaging modalities, including magnetic resonance imaging, ultrasound, and computed tomography. This study also details the network architectures, software, and hardware used for training, along with the publicly available cardiac image datasets and self-prepared datasets incorporated. A summary of the evaluation metrics and results reported by different researchers is also presented. Finally, all this information is summarized and synthesized to help readers understand the motivation and methodology of various deep learning models, as well as to explore potential solutions to future challenges in LV segmentation.


Subjects
Deep Learning , Heart Diseases , Humans , Heart Ventricles/diagnostic imaging , Heart , Neural Networks (Computer) , Magnetic Resonance Imaging , Computer-Assisted Image Processing/methods
16.
PeerJ Comput Sci ; 9: e1306, 2023.
Article in English | MEDLINE | ID: mdl-37346549

ABSTRACT

Background: The environment has been significantly impacted by rapid urbanization, driving adverse changes in climate and pollution indicators. The Fourth Industrial Revolution (4IR) offers a potential solution to efficiently manage these impacts. Smart city ecosystems can provide well-designed, sustainable, and safe cities that enable holistic climate change and global warming solutions through various community-centred initiatives, including smart planning techniques, smart environment monitoring, and smart governance. An air quality intelligence platform, which operates as a complete measurement site for monitoring and governing air quality, has shown promising results in providing actionable insights. This article aims to highlight the potential of machine learning models in predicting air quality, providing data-driven strategic and sustainable solutions for smart cities. Methods: This study proposed an end-to-end air quality predictive model for smart city applications, utilizing four machine learning techniques and two deep learning techniques: AdaBoost, SVR, RF, KNN, MLP regressor, and LSTM. The study was conducted in four urban cities in Selangor, Malaysia: Petaling Jaya, Banting, Klang, and Shah Alam. The model considered air quality data for various pollution markers, such as PM2.5, PM10, O3, and CO. Additionally, meteorological data including wind speed and wind direction were considered, and their interactions with the pollutant markers were quantified. The study aimed to determine the correlation variance of the dependent variable in predicting air pollution and proposed a feature optimization process to reduce dimensionality and remove irrelevant features, enhancing the prediction of PM2.5 and improving the existing LSTM model. The study estimates the concentration of pollutants in the air based on training and highlights the contribution of feature optimization to air quality predictions through feature dimension reduction.
Results: The results of predicting the concentration of pollutants (PM2.5, PM10, O3, and CO) in the air are presented in terms of R2 and RMSE. In predicting PM10 and PM2.5 concentrations, LSTM performed the best overall, with high R2 values in the four study areas: 0.998, 0.995, 0.918, and 0.993 at the Banting, Petaling, Klang, and Shah Alam stations, respectively. The study indicated that among the studied pollution markers, PM2.5, PM10, NO2, wind speed, and humidity are the most important elements to monitor. By reducing the number of features used in the model, the proposed feature optimization process can make the model more interpretable and provide insights into the most critical factors affecting air quality. Findings from this study can aid policymakers in understanding the underlying causes of air pollution and in developing more effective smart strategies for reducing pollution levels.
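A simple form of the feature optimization described above is to drop features weakly correlated with the target; this sketch uses Pearson correlation on synthetic data, and the feature names are hypothetical:

```python
import numpy as np

def drop_weak_features(X, y, names, threshold=0.2):
    """Keep only features whose absolute Pearson correlation with the
    target (e.g. next-hour PM2.5) meets the threshold."""
    keep = [i for i in range(X.shape[1])
            if abs(np.corrcoef(X[:, i], y)[0, 1]) >= threshold]
    return X[:, keep], [names[i] for i in keep]

rng = np.random.default_rng(1)
y = rng.normal(size=200)                              # stand-in target series
X = np.column_stack([y + 0.1 * rng.normal(size=200),  # informative feature
                     rng.normal(size=200)])           # irrelevant feature
Xs, kept = drop_weak_features(X, y, ["pm10", "noise"])
```

The reduced feature matrix `Xs` is then fed to the LSTM (or any other model), shrinking the input dimension while retaining the predictive signals.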

17.
PeerJ Comput Sci ; 9: e1279, 2023.
Article in English | MEDLINE | ID: mdl-37346641

ABSTRACT

Background: The advancement of biomedical research generates myriad healthcare-relevant data, including medical records and medical device maintenance information. The COVID-19 pandemic significantly affected the global mortality rate, creating an enormous demand for medical devices. As information technology has advanced, the concept of intelligent healthcare has steadily gained prominence. Smart healthcare utilises a new generation of information technologies, such as the Internet of Things (IoT), big data, cloud computing, and artificial intelligence, to completely transform the traditional medical system. To realise this concept of smart healthcare, a predictive model is proposed to predict medical device failure for the intelligent management of healthcare services. Methods: Present healthcare device management can be improved through a predictive machine learning model that prognosticates the tendency of medical devices to fail. The predictive model is developed based on 8,294 critical medical devices of 44 different equipment types extracted from 15 healthcare facilities in Malaysia. The model classifies each device into three classes: (i) class 1, where the device is unlikely to fail within the first 3 years of purchase; (ii) class 2, where the device is likely to fail within 3 years of the purchase date; and (iii) class 3, where the device is likely to fail more than 3 years after purchase. The goal is to establish a precise maintenance schedule and reduce maintenance and resource costs based on the time to the first failure event. Machine learning and deep learning techniques were compared, and the most robust model for smart healthcare is proposed. Results: This study compares five machine learning algorithms and three deep learning optimizers. The best optimized predictive models are based on an ensemble classifier and the SGDM optimizer, respectively.
The ensemble classifier model produces 77.90%, 87.60%, and 75.39% for accuracy, specificity, and precision, compared to 70.30%, 83.71%, and 67.15% for the deep learning model. The ensemble classifier model improves to 79.50%, 88.36%, and 77.43% for accuracy, specificity, and precision after significant features are identified. The results show that although machine learning achieves better accuracy than deep learning, it requires more training time: 11.49 min versus 1 min 5 s for deep learning. The model accuracy could be further improved by incorporating unstructured data from maintenance notes, which is left as future work because dealing with text data is time-consuming. The proposed model has been shown to improve the devices' maintenance strategy, with a cost reduction of approximately MYR 326,330.88 (Malaysian Ringgit) per year. Therefore, the maintenance cost would decrease drastically if this smart predictive model were included in the healthcare management system.
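The three-class scheme described in the Methods can be read as a simple mapping from time to first failure; this sketch encodes one plausible interpretation (treating devices with no recorded failure as class 1, which is an assumption, not the paper's stated rule):

```python
def failure_class(years_to_first_failure):
    """Map a device's time to first failure onto the three risk classes:
    1 = unlikely to fail within the first 3 years of purchase,
    2 = likely to fail within 3 years of the purchase date,
    3 = likely to fail more than 3 years after purchase."""
    if years_to_first_failure is None:    # no failure recorded yet
        return 1
    return 2 if years_to_first_failure <= 3 else 3

labels = [failure_class(t) for t in (None, 1.5, 7.0)]
```

Labels derived this way from maintenance histories would form the training targets for the classifiers compared in the study.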

18.
Quant Imaging Med Surg ; 13(9): 5902-5920, 2023 Sep 01.
Article in English | MEDLINE | ID: mdl-37711826

ABSTRACT

Background: Renal cancer is one of the leading causes of cancer-related deaths worldwide, and early detection can significantly improve patients' survival rate. However, manual analysis of renal tissue in current clinical practice is labor-intensive, prone to inter-pathologist variation, and liable to miss important cancer markers, especially at an early stage. Methods: In this work, we developed deep convolutional neural network (CNN)-based heterogeneous ensemble models for automated analysis of renal histopathological images without detailed annotations. The proposed method first segments the histopathological tissue into patches at different magnification factors, then classifies the generated patches into normal and tumor tissue using pre-trained CNNs, and finally performs deep ensemble learning to determine the final classification. The heterogeneous ensemble models consisted of CNN models from five deep learning architectures, namely VGG, ResNet, DenseNet, MobileNet, and EfficientNet. These CNN models were fine-tuned and used as base learners; they exhibited different performances and great diversity in histopathological image analysis. The CNN models with superior classification accuracy (Acc) were then selected for ensemble learning to produce the final classification. The performance of the investigated ensemble approaches was evaluated against the state-of-the-art literature. Results: The evaluation demonstrated the superiority of the best-performing ensemble model, a five-CNN weighted-averaging model, with an Acc of 99%, specificity (Sp) of 98%, F1-score (F1) of 99%, and area under the receiver operating characteristic (ROC) curve of 98%, and a slightly inferior recall (Re) of 99% compared with the literature. 
Conclusions: The robustness of the developed ensemble model, with its high scores on the evaluated metrics, suggests its reliability as a diagnostic system for assisting pathologists in analyzing renal histopathological tissue. The proposed ensemble deep CNN models are expected to greatly improve early detection of renal cancer by making the diagnostic process more efficient and reducing misdetection and misdiagnosis, thereby leading to a higher patient survival rate.
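The weighted-averaging fusion step at the heart of the ensemble can be sketched as below. The probabilities are made up, and the abstract does not state how the model weights are chosen, so equal weighting here is an assumption (weighting by validation accuracy would be a common alternative).

```python
import numpy as np

def weighted_average_ensemble(probs_list, weights):
    """Fuse per-model softmax outputs into one hard prediction.

    probs_list: list of (n_samples, n_classes) arrays, one per base CNN.
    weights: one non-negative weight per model (normalised internally).
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    stacked = np.stack(probs_list)            # (n_models, n_samples, n_classes)
    fused = np.tensordot(w, stacked, axes=1)  # weighted mean over models
    return fused.argmax(axis=1)

# Three hypothetical base models scoring two patches (normal = 0, tumor = 1)
p1 = np.array([[0.9, 0.1], [0.4, 0.6]])
p2 = np.array([[0.2, 0.8], [0.7, 0.3]])
p3 = np.array([[0.6, 0.4], [0.2, 0.8]])
labels = weighted_average_ensemble([p1, p2, p3], weights=[1.0, 1.0, 1.0])
# With equal weights, patch 1 is fused to class 0 and patch 2 to class 1.
```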

19.
Sci Rep ; 13(1): 20518, 2023 Nov 22.
Article in English | MEDLINE | ID: mdl-37993544

ABSTRACT

Debates persist regarding the impact of Stain Normalization (SN) on recent breast cancer histopathological studies. While some studies propose no influence on classification outcomes, others argue for improvement. This study aims to assess the efficacy of SN in breast cancer histopathological classification, specifically focusing on Invasive Ductal Carcinoma (IDC) grading using Convolutional Neural Networks (CNNs). The null hypothesis asserts that SN has no effect on the accuracy of CNN-based IDC grading, while the alternative hypothesis suggests the contrary. We evaluated six SN techniques, with five templates selected as target images for the conventional SN techniques. We also utilized seven ImageNet pre-trained CNNs for IDC grading. The performance of models trained with and without SN was compared to discern the influence of SN on classification outcomes. The analysis unveiled a p-value of 0.11, indicating no statistically significant difference in Balanced Accuracy Scores between models trained with StainGAN-normalized images, achieving a score of 0.9196 (the best-performing SN technique), and models trained with non-normalized images, which scored 0.9308. As a result, we did not reject the null hypothesis, indicating that we found no evidence to support a significant discrepancy in effectiveness between stain-normalized and non-normalized datasets for IDC grading tasks. This study demonstrates that SN has a limited impact on IDC grading, challenging the assumption of performance enhancement through SN.
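The comparison behind the reported p-value can be sketched as below. The per-split scores are invented, and the use of a paired t-test is an assumption, since the abstract does not name the statistical test applied to the Balanced Accuracy Scores.

```python
import numpy as np

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recall, i.e. the Balanced Accuracy Score."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    recalls = [np.mean(y_pred[y_true == c] == c) for c in np.unique(y_true)]
    return float(np.mean(recalls))

def paired_t_statistic(scores_a, scores_b):
    """t statistic for paired samples, e.g. per-split scores with vs without SN."""
    d = np.asarray(scores_a, dtype=float) - np.asarray(scores_b, dtype=float)
    return float(d.mean() / (d.std(ddof=1) / np.sqrt(len(d))))

# Toy grading example: two classes, one minority-class error
ba = balanced_accuracy([0, 0, 1, 1], [0, 0, 1, 0])  # recalls 1.0 and 0.5 -> 0.75

# Invented per-split balanced accuracies: StainGAN-normalized vs non-normalized
t = paired_t_statistic([0.92, 0.90, 0.94, 0.91, 0.93],
                       [0.90, 0.91, 0.91, 0.91, 0.92])
```

Balanced accuracy is the appropriate metric here because IDC grades are typically imbalanced; a plain accuracy score would let the majority grade dominate.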


Subjects
Breast Neoplasms , Carcinoma, Ductal, Breast , Carcinoma, Ductal , Humans , Female , Breast Neoplasms/diagnostic imaging , Breast Neoplasms/pathology , Breast/pathology , Neural Networks, Computer , Staining and Labeling , Carcinoma, Ductal, Breast/pathology
20.
IEEE/ACM Trans Comput Biol Bioinform ; 20(4): 2387-2397, 2023.
Article in English | MEDLINE | ID: mdl-35025748

ABSTRACT

With the development of sensors, more and more multimodal data are accumulated, especially in the biomedical and bioinformatics fields. Multimodal data analysis has therefore become important and urgent. In this study, we combine multi-kernel learning and transfer learning and propose a feature-level multi-modality fusion model for settings with insufficient training samples. Specifically, we first extend kernel ridge regression to its multi-kernel version under an lp-norm constraint to explore the complementary patterns contained in multimodal data. We then use marginal probability distribution adaptation to minimize the distribution differences between the source domain and the target domain, addressing the problem of insufficient training samples. Based on the epilepsy EEG data provided by the University of Bonn, we construct 12 multi-modality and transfer scenarios to evaluate our model. Experimental results show that, compared with the baselines, our model performs better in most scenarios.
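The multi-kernel ridge regression at the core of such a fusion model can be sketched as below. This is a simplification: the kernel weights are fixed and uniform here, whereas the paper learns them under an lp-norm constraint, and the marginal distribution adaptation step is omitted entirely.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian RBF Gram matrix between row-vector sets X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def multi_kernel_ridge_fit(kernels, y, lam=1.0, betas=None):
    """Combine base kernels and solve the ridge system (K + lam*I) alpha = y.

    kernels: list of (n, n) Gram matrices, e.g. one per modality.
    betas: kernel weights; uniform here if None, whereas the paper
           learns them under an lp-norm constraint.
    """
    if betas is None:
        betas = np.full(len(kernels), 1.0 / len(kernels))
    K = sum(b * Km for b, Km in zip(betas, kernels))
    alpha = np.linalg.solve(K + lam * np.eye(K.shape[0]), y)
    return alpha, K

# Two views of the same 5 samples, modelled with two RBF kernels of different widths
rng = np.random.default_rng(1)
X = rng.normal(size=(5, 3))
y = rng.normal(size=5)
Ks = [rbf_kernel(X, X, gamma=0.5), rbf_kernel(X, X, gamma=2.0)]
alpha, K = multi_kernel_ridge_fit(Ks, y, lam=0.1)
```

Prediction on new points would use the same weighted combination of cross-Gram matrices between test and training samples, multiplied by `alpha`.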
