Results 1 - 20 of 66
1.
PeerJ Comput Sci ; 10: e1874, 2024.
Article in English | MEDLINE | ID: mdl-38481705

ABSTRACT

Epilepsy is a chronic, non-communicable disease caused by paroxysmal abnormal synchronized electrical activity of brain neurons, and is one of the most common neurological diseases worldwide. Electroencephalography (EEG) is currently a crucial tool for epilepsy diagnosis. With the development of artificial intelligence, multi-view learning-based EEG analysis has become an important method for automatic epilepsy recognition, because EEG contains different types of features such as time-frequency, frequency-domain and time-domain features. However, current multi-view learning still faces some challenges; for example, the difference between same-class samples from different views can be greater than the difference between different-class samples within the same view. To address this, we propose a shared hidden space-driven multi-view learning algorithm. The algorithm uses kernel density estimation to construct a shared hidden space and combines it with the original space to obtain an expanded space for multi-view learning. By constructing the expanded space and using the information of both the shared hidden space and the original space for learning, the relevant information of samples within and across views can be fully utilized. Experimental results on an epilepsy dataset provided by the University of Bonn show that the proposed algorithm has promising performance, with an average classification accuracy of 0.9787, at least a 4% improvement over single-view methods.
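The abstract gives only the high-level idea, so the following is a minimal sketch, not the authors' algorithm: per-view kernel density estimates form a shared hidden space whose scores are concatenated with the original features before a standard classifier is trained. The bandwidth, the SVM classifier, and the toy two-view EEG features are assumptions.

```python
import numpy as np
from sklearn.neighbors import KernelDensity
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def kde_shared_space(views, bandwidth=1.0):
    """Score every sample under a KDE fitted on each view, then stack the
    log-density scores to form a low-dimensional shared hidden space."""
    shared = []
    for X in views:  # each X has shape (n_samples, n_features_v)
        kde = KernelDensity(bandwidth=bandwidth).fit(X)
        shared.append(kde.score_samples(X))        # (n_samples,)
    return np.column_stack(shared)                 # (n_samples, n_views)

def expanded_space(views):
    """Concatenate the original features of all views with the shared space."""
    return np.hstack(views + [kde_shared_space(views)])

# toy usage: two EEG feature views (e.g., time-domain and frequency-domain)
rng = np.random.default_rng(0)
view_time = rng.normal(size=(200, 8))
view_freq = rng.normal(size=(200, 16))
y = rng.integers(0, 2, size=200)                   # epileptic vs. non-epileptic

X_expanded = expanded_space([view_time, view_freq])
clf = make_pipeline(StandardScaler(), SVC()).fit(X_expanded, y)
print(clf.score(X_expanded, y))
```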

2.
Interdiscip Sci ; 16(1): 39-57, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37486420

ABSTRACT

Breast cancer is commonly diagnosed with mammography. Using image segmentation algorithms to separate lesion areas in mammograms can facilitate diagnosis and reduce doctors' workload, which has important clinical significance. Because large, accurately labeled medical image datasets are difficult to obtain, traditional clustering algorithms are widely used in medical image segmentation as unsupervised models. However, traditional unsupervised clustering algorithms can incorporate only limited knowledge during learning, and some semi-supervised fuzzy clustering algorithms cannot fully mine the information in labeled samples, which results in insufficient supervision. When faced with complex mammography images, these algorithms cannot accurately segment lesion areas. To address this, a semi-supervised fuzzy clustering algorithm based on knowledge weighting and cluster center learning (WSFCM_V) is presented. Based on prior knowledge, three learning modes are proposed: a knowledge weighting method for cluster centers, Euclidean distance weights for unlabeled samples, and learning from the cluster centers of labeled sample sets. These strategies improve the clustering performance. On real breast molybdenum-target (mammography) images, WSFCM_V is compared with currently popular semi-supervised and unsupervised clustering algorithms. Experimental results demonstrate that WSFCM_V achieves the best evaluation-index values and a higher segmentation accuracy than the existing clustering algorithms, both for larger lesion regions such as tumor areas and for smaller lesion areas such as calcification points.
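WSFCM_V is not specified here in reproducible detail; the sketch below is a generic semi-supervised fuzzy c-means in which labeled samples pull cluster centers toward per-class means via a weight `alpha`, intended only to convey the flavor of knowledge-weighted center learning. All parameter choices are assumptions.

```python
import numpy as np

def semi_supervised_fcm(X, n_clusters, labeled_idx=None, labels=None,
                        m=2.0, alpha=0.5, n_iter=100, seed=0):
    """Fuzzy c-means where, after each update, cluster centers are blended
    with the mean of the labeled samples of that cluster (weight alpha)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, n_clusters))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        if labeled_idx is not None:
            for j in range(n_clusters):
                sel = labeled_idx[labels == j]
                if len(sel):
                    centers[j] = (1 - alpha) * centers[j] + alpha * X[sel].mean(axis=0)
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (dist ** (2 / (m - 1)))
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

# toy usage on 1D intensities standing in for background / tissue / lesion pixels
X = np.concatenate([np.random.normal(0.2, 0.05, (300, 1)),
                    np.random.normal(0.5, 0.05, (300, 1)),
                    np.random.normal(0.9, 0.05, (300, 1))])
labeled_idx = np.array([0, 300, 600])   # one labeled pixel per class
labels = np.array([0, 1, 2])
U, centers = semi_supervised_fcm(X, 3, labeled_idx, labels)
print(centers.ravel())
```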


Subject(s)
Fuzzy Logic; Molybdenum; Humans; Mammography; Algorithms; Cluster Analysis; Image Processing, Computer-Assisted/methods
3.
Quant Imaging Med Surg ; 13(12): 7879-7892, 2023 Dec 01.
Article in English | MEDLINE | ID: mdl-38106293

ABSTRACT

Background: When an ischemic stroke occurs, it triggers a complex signalling cascade that may eventually lead to neuronal cell death if reperfusion does not occur. Recently, the relayed nuclear Overhauser enhancement effect at -1.6 ppm [NOE(-1.6 ppm)] has been postulated to allow a more in-depth analysis of the ischemic injury. This study assessed the potential utility of NOE(-1.6 ppm) in an ischemic stroke model. Methods: Diffusion-weighted imaging, perfusion-weighted imaging, and chemical exchange saturation transfer (CEST) magnetic resonance imaging (MRI) data were acquired from five rats that underwent scans at 9.4 T after middle cerebral artery occlusion. Results: The apparent diffusion coefficient (ADC), cerebral blood flow (CBF), and apparent exchange-dependent relaxations (AREX) at 3.5 ppm and NOE(-1.6 ppm) were quantified. AREX(3.5 ppm) and NOE(-1.6 ppm) were found to be hypointense and exhibited different signal patterns within the ischemic tissue. The NOE(-1.6 ppm) deficit areas were equal to or larger than the ADC deficit areas, but smaller than the AREX(3.5 ppm) deficit areas. This suggests that NOE(-1.6 ppm) may further delineate the acidotic tissue estimated using AREX(3.5 ppm). Since NOE(-1.6 ppm) is closely related to membrane phospholipids, it potentially highlights at-risk tissue affected by lipid peroxidation and membrane damage. Altogether, the ADC/NOE(-1.6 ppm)/AREX(3.5 ppm)/CBF mismatches revealed four zones of increasing size within the ischemic tissue, potentially reflecting different pathophysiological information. Conclusions: Using CEST coupled with ADC and CBF, the ischemic tissue may thus be separated into four zones to better understand the pathophysiology after stroke and improve ischemic tissue fate definition. Further verification of the potential utility of NOE(-1.6 ppm) may therefore lead to a more precise diagnosis.
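For readers unfamiliar with the CEST metric mentioned, AREX is commonly computed from inverse Z-spectrum values scaled by the observed longitudinal relaxation rate. The sketch below shows that arithmetic only; how the label and reference Z-values are obtained (typically from a multi-pool Lorentzian fit) is outside its scope, and the numbers are placeholders, not data from this study.

```python
def arex(z_label, z_reference, r1_obs):
    """Apparent exchange-dependent relaxation (per second):
    AREX = R1_obs * (1/Z_label - 1/Z_reference)."""
    return r1_obs * (1.0 / z_label - 1.0 / z_reference)

# placeholder values: r1_obs assumes a T1 of roughly 1.9 s at 9.4 T
r1_obs = 1.0 / 1.9
print(arex(z_label=0.62, z_reference=0.70, r1_obs=r1_obs))   # e.g., AREX(3.5 ppm)
print(arex(z_label=0.55, z_reference=0.60, r1_obs=r1_obs))   # e.g., NOE(-1.6 ppm) contrast
```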

4.
Sci Rep ; 13(1): 20518, 2023 11 22.
Article in English | MEDLINE | ID: mdl-37993544

ABSTRACT

Debates persist regarding the impact of Stain Normalization (SN) on recent breast cancer histopathological studies. While some studies propose no influence on classification outcomes, others argue for improvement. This study aims to assess the efficacy of SN in breast cancer histopathological classification, specifically focusing on Invasive Ductal Carcinoma (IDC) grading using Convolutional Neural Networks (CNNs). The null hypothesis asserts that SN has no effect on the accuracy of CNN-based IDC grading, while the alternative hypothesis suggests the contrary. We evaluated six SN techniques, with five templates selected as target images for the conventional SN techniques. We also utilized seven ImageNet pre-trained CNNs for IDC grading. The performance of models trained with and without SN was compared to discern the influence of SN on classification outcomes. The analysis unveiled a p-value of 0.11, indicating no statistically significant difference in Balanced Accuracy Scores between models trained with StainGAN-normalized images, achieving a score of 0.9196 (the best-performing SN technique), and models trained with non-normalized images, which scored 0.9308. As a result, we did not reject the null hypothesis, indicating that we found no evidence to support a significant discrepancy in effectiveness between stain-normalized and non-normalized datasets for IDC grading tasks. This study demonstrates that SN has a limited impact on IDC grading, challenging the assumption of performance enhancement through SN.
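The abstract reports a p-value without naming the test, so the snippet below merely illustrates how such a comparison of balanced-accuracy scores between two training regimes might be run; the Wilcoxon signed-rank test on per-fold scores and the scores themselves are assumptions, not the authors' protocol or data.

```python
import numpy as np
from scipy.stats import wilcoxon

# hypothetical per-fold balanced-accuracy scores for models trained with
# StainGAN-normalized images vs. non-normalized images (same CV folds)
scores_stain_gan = np.array([0.910, 0.930, 0.920, 0.900, 0.935, 0.925, 0.915])
scores_unnormalized = np.array([0.930, 0.940, 0.920, 0.935, 0.925, 0.930, 0.920])

stat, p_value = wilcoxon(scores_stain_gan, scores_unnormalized)
alpha = 0.05
print(f"p = {p_value:.3f}")
if p_value >= alpha:
    print("Fail to reject H0: no evidence that stain normalization changes accuracy")
else:
    print("Reject H0: accuracy differs between the two training regimes")
```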


Subject(s)
Breast Neoplasms; Carcinoma, Ductal, Breast; Carcinoma, Ductal; Humans; Female; Breast Neoplasms/diagnostic imaging; Breast Neoplasms/pathology; Breast/pathology; Neural Networks, Computer; Staining and Labeling; Carcinoma, Ductal, Breast/pathology
5.
Comput Methods Programs Biomed ; 242: 107807, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37778138

ABSTRACT

BACKGROUND AND OBJECTIVE: Knee osteoarthritis (OA) is a debilitating musculoskeletal disorder that causes functional disability. Automatic knee OA diagnosis has great potential to enable timely and early intervention, which can potentially reverse the degenerative process of knee OA. Yet it is a tedious task, given the heterogeneity of the disorder. Most of the proposed techniques address a single OA diagnostic task, typically based on the Kellgren-Lawrence (KL) standard, a composite score of only a few imaging features (i.e., osteophytes, joint-space narrowing and subchondral bone changes); consequently, only one key disease pattern is tackled. The KL standard fails to represent the disease patterns of individual OA features, particularly osteophytes, joint-space narrowing, and pain intensity, which play a fundamental role in OA manifestation. In this study, we aim to develop a multitask model using convolutional neural network (CNN) feature extractors and machine learning classifiers to detect nine important OA features from plain radiography: KL grade, knee osteophytes (medial femoral: OSFM, medial tibial: OSTM, lateral femoral: OSFL, and lateral tibial: OSTL), joint-space narrowing (medial: JSM, and lateral: JSL), and patient-reported pain intensity. METHODS: We proposed a new feature extraction method that replaces the fully connected layer with a global average pooling (GAP) layer. A comparative analysis was conducted to evaluate the efficacy of 16 different CNN feature extractors and three machine learning classifiers. RESULTS: Experimental results revealed the potential of CNN feature extractors for multitask diagnosis. The optimal model consisted of a VGG16-GAP feature extractor and a KNN classifier. This model not only outperformed the other tested models but also the state-of-the-art methods, with higher balanced accuracy, higher Cohen's kappa, higher F1, and lower mean squared error (MSE) in the prediction of seven OA features. CONCLUSIONS: The proposed model demonstrates pain prediction on plain radiographs, as well as eight OA-related bony features. Future work should focus on exploring additional potential radiological manifestations of OA and their relation to therapeutic interventions.
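The feature-extraction idea, i.e., cutting a pre-trained CNN before its fully connected head, applying global average pooling, and feeding the pooled vector to a classical classifier, can be sketched as follows. VGG16 with ImageNet weights and a KNN classifier mirror the reported optimal model, but the input size, k, and the random stand-in data are assumptions.

```python
import numpy as np
import tensorflow as tf
from sklearn.neighbors import KNeighborsClassifier

# VGG16 without its fully connected head; pooling='avg' applies GAP,
# so each image becomes a 512-dimensional feature vector.
backbone = tf.keras.applications.VGG16(include_top=False,
                                       weights="imagenet",
                                       pooling="avg",
                                       input_shape=(224, 224, 3))
backbone.trainable = False

def extract_features(images):
    """images: float array (n, 224, 224, 3) in the 0-255 range."""
    x = tf.keras.applications.vgg16.preprocess_input(images.copy())
    return backbone.predict(x, verbose=0)           # (n, 512)

# toy usage with random data standing in for knee radiograph patches
X_train = np.random.rand(16, 224, 224, 3) * 255.0
y_train = np.random.randint(0, 5, size=16)          # e.g., KL grades 0-4
knn = KNeighborsClassifier(n_neighbors=3).fit(extract_features(X_train), y_train)
print(knn.predict(extract_features(X_train[:2])))
```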


Subject(s)
Osteoarthritis, Knee; Osteophyte; Humans; Osteoarthritis, Knee/diagnostic imaging; Osteophyte/diagnostic imaging; Knee Joint; Radiography; Tibia
6.
Quant Imaging Med Surg ; 13(9): 5902-5920, 2023 Sep 01.
Article in English | MEDLINE | ID: mdl-37711826

ABSTRACT

Background: Renal cancer is one of the leading causes of cancer-related deaths worldwide, and early detection can significantly improve patients' survival rate. However, manual analysis of renal tissue in current clinical practice is labor-intensive, prone to inter-pathologist variation, and liable to miss important cancer markers, especially at an early stage. Methods: In this work, we developed deep convolutional neural network (CNN)-based heterogeneous ensemble models for automated analysis of renal histopathological images without detailed annotations. The proposed method first segments the histopathological tissue into patches at different magnification factors, then classifies the generated patches into normal and tumor tissue using pre-trained CNNs, and finally performs deep ensemble learning to determine the final classification. The heterogeneous ensemble models consisted of CNN models from five deep learning architectures, namely VGG, ResNet, DenseNet, MobileNet, and EfficientNet. These CNN models were fine-tuned and used as base learners; they exhibited different performances and great diversity in histopathological image analysis. The CNN models with superior classification accuracy (Acc) were then selected to undergo ensemble learning for the final classification. The performance of the investigated ensemble approaches was evaluated against the state-of-the-art literature. Results: The evaluation demonstrated the superiority of the best-performing ensemble, a five-CNN weighted-averaging model, with an Acc of 99%, specificity (Sp) of 98%, F1-score (F1) of 99%, and area under the receiver operating characteristic (ROC) curve of 98%, but a slightly inferior recall (Re) of 99% compared to the literature. Conclusions: The outstanding robustness of the developed ensemble model and its high scores on the evaluated metrics suggest its reliability as a diagnostic system for assisting pathologists in analyzing renal histopathological tissue. The proposed ensemble deep CNN models are expected to greatly improve the early detection of renal cancer by making the diagnosis process more efficient and reducing misdetection and misdiagnosis, subsequently leading to a higher patient survival rate.
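As a rough illustration of the weighted-averaging step (not the authors' exact pipeline), the class probabilities of the base CNNs can be fused with weights proportional to each model's validation accuracy; that weighting scheme is an assumption.

```python
import numpy as np

def weighted_average_ensemble(prob_list, val_accuracies):
    """prob_list: list of (n_samples, n_classes) softmax outputs, one per CNN.
    val_accuracies: validation accuracy of each base CNN, used as its weight."""
    w = np.asarray(val_accuracies, dtype=float)
    w /= w.sum()
    stacked = np.stack(prob_list, axis=0)            # (n_models, n, n_classes)
    fused = np.tensordot(w, stacked, axes=1)         # (n, n_classes)
    return fused.argmax(axis=1), fused

# toy example: three base models, four patches, normal (0) vs. tumor (1)
p1 = np.array([[0.9, 0.1], [0.4, 0.6], [0.2, 0.8], [0.7, 0.3]])
p2 = np.array([[0.8, 0.2], [0.6, 0.4], [0.3, 0.7], [0.6, 0.4]])
p3 = np.array([[0.7, 0.3], [0.3, 0.7], [0.1, 0.9], [0.8, 0.2]])
labels, fused = weighted_average_ensemble([p1, p2, p3], [0.97, 0.95, 0.99])
print(labels)
```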

8.
PeerJ Comput Sci ; 9: e1325, 2023.
Article in English | MEDLINE | ID: mdl-37346512

ABSTRACT

Oil palm is a key agricultural resource in Malaysia. However, palm diseases, most prominently basal stem rot (BSR), cause at least RM 255 million in annual economic losses. Basal stem rot is caused by the fungus Ganoderma boninense. An infected tree shows few symptoms during the early stage of infection, yet may suffer an 80% lifetime yield loss and die within two years. Early detection of basal stem rot is therefore crucial, so that disease control efforts can be undertaken. Laboratory BSR detection methods are effective, but they raise accuracy, biosafety, and cost concerns. This review covers scientific articles related to oil palm tree disease, basal stem rot, Ganoderma boninense, remote sensors, and deep learning that have been listed in the Web of Science since 2012. About 110 scientific articles related to these index terms were found, of which 60 were related to the objective of this research and were thus included in this review. From the review, it was found that the potential use of deep learning methods has rarely been explored. Some research showed unsatisfactory results due to dataset limitations. However, based on studies of other plant diseases, deep learning combined with data augmentation techniques shows great potential, achieving remarkable detection accuracy. Therefore, the feasibility of analyzing oil palm remote sensor data using deep learning models together with data augmentation techniques should be studied. On a commercial scale, deep learning used together with remote sensors and unmanned aerial vehicle technologies shows great potential for the detection of basal stem rot disease.
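Since the review highlights data augmentation as the key enabler for deep learning on small plant-disease datasets, a minimal Keras augmentation pipeline is sketched here for context; the transforms and parameters are illustrative assumptions, not drawn from any reviewed study.

```python
import tensorflow as tf

# simple on-the-fly augmentation for aerial/remote-sensing image patches
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal_and_vertical"),
    tf.keras.layers.RandomRotation(0.2),
    tf.keras.layers.RandomZoom(0.1),
    tf.keras.layers.RandomContrast(0.2),
])

images = tf.random.uniform((8, 128, 128, 3))   # placeholder drone image patches
augmented = augment(images, training=True)
print(augmented.shape)
```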

9.
PeerJ Comput Sci ; 9: e1306, 2023.
Article in English | MEDLINE | ID: mdl-37346549

ABSTRACT

Background: The environment has been significantly impacted by rapid urbanization, creating a need to address changes in climate and pollution indicators. The Fourth Industrial Revolution (4IR) offers a potential solution to manage these impacts efficiently. Smart city ecosystems can provide well-designed, sustainable, and safe cities that enable holistic climate change and global warming solutions through various community-centred initiatives, including smart planning techniques, smart environment monitoring, and smart governance. An air quality intelligence platform, which operates as a complete measurement site for monitoring and governing air quality, has shown promising results in providing actionable insights. This article aims to highlight the potential of machine learning models in predicting air quality, providing data-driven strategic and sustainable solutions for smart cities. Methods: This study proposed an end-to-end air quality predictive model for smart city applications, utilizing four machine learning techniques and two deep learning techniques: AdaBoost, support vector regression (SVR), random forest (RF), k-nearest neighbours (KNN), an MLP regressor, and LSTM. The study was conducted in four urban areas in Selangor, Malaysia: Petaling Jaya, Banting, Klang, and Shah Alam. The model considered air quality data for various pollution markers such as PM2.5, PM10, O3, and CO. Meteorological data, including wind speed and wind direction, were also considered, and their interactions with the pollutant markers were quantified. The study aimed to determine the correlation variance of the dependent variable in predicting air pollution and proposed a feature optimization process to reduce dimensionality and remove irrelevant features, enhancing the prediction of PM2.5 and improving the existing LSTM model. The study estimates the concentration of pollutants in the air based on training and highlights the contribution of feature optimization to air quality predictions through feature dimension reduction. Results: The results of predicting the concentration of pollutants (PM2.5, PM10, O3, and CO) are reported as R2 and RMSE. In predicting PM10 and PM2.5 concentrations, LSTM performed best overall, with R2 values of 0.998, 0.995, 0.918, and 0.993 at the Banting, Petaling, Klang, and Shah Alam stations, respectively. The study indicated that among the studied pollution markers, PM2.5, PM10, NO2, wind speed, and humidity are the most important elements to monitor. By reducing the number of features used in the model, the proposed feature optimization process can make the model more interpretable and provide insights into the most critical factors affecting air quality. Findings from this study can aid policymakers in understanding the underlying causes of air pollution and in developing more effective smart strategies for reducing pollution levels.
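A bare-bones version of the LSTM regression setup described above might look like the following; the window length, network size, and feature layout are assumptions rather than the study's configuration.

```python
import numpy as np
import tensorflow as tf

def make_windows(series, window=24):
    """Turn an (n_hours, n_features) array into supervised windows that
    predict the next hour's PM2.5 (assumed to be feature column 0)."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window, 0])
    return np.array(X), np.array(y)

# placeholder hourly data: PM2.5, PM10, O3, CO, wind speed, humidity
data = np.random.rand(1000, 6).astype("float32")
X, y = make_windows(data)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=X.shape[1:]),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print(model.predict(X[:3], verbose=0).ravel())
```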

10.
PeerJ Comput Sci ; 9: e1279, 2023.
Article in English | MEDLINE | ID: mdl-37346641

ABSTRACT

Background: The advancement of biomedical research generates myriad healthcare-relevant data, including medical records and medical device maintenance information. The COVID-19 pandemic significantly affected the global mortality rate and created enormous demand for medical devices. As information technology has advanced, the concept of intelligent healthcare has steadily gained prominence. Smart healthcare utilises a new generation of information technologies, such as the Internet of Things (IoT), big data, cloud computing, and artificial intelligence, to completely transform the traditional medical system. With the intention of realising this concept, a predictive model is proposed to predict medical device failure for intelligent management of healthcare services. Methods: Present healthcare device management can be improved with a predictive machine learning model that prognosticates the tendency of medical devices to fail, moving toward smart healthcare. The predictive model was developed based on 8,294 critical medical devices of 44 different equipment types extracted from 15 healthcare facilities in Malaysia. The model classifies each device into three classes: (i) class 1, where the device is unlikely to fail within the first 3 years of purchase; (ii) class 2, where the device is likely to fail within 3 years of the purchase date; and (iii) class 3, where the device is likely to fail more than 3 years after purchase. The goal is to establish a precise maintenance schedule and reduce maintenance and resource costs based on the time to the first failure event. Machine learning and deep learning techniques were compared, and the most robust model for smart healthcare was proposed. Results: This study compares five machine learning algorithms and three optimizers for the deep learning techniques. The best optimized predictive models are based on an ensemble classifier and the SGDM optimizer, respectively. The ensemble classifier model produces 77.90%, 87.60%, and 75.39% for accuracy, specificity, and precision, compared with 70.30%, 83.71%, and 67.15% for the deep learning models. The ensemble classifier model improves to 79.50%, 88.36%, and 77.43% for accuracy, specificity, and precision after significant features are identified. The results indicate that although machine learning achieves better accuracy than deep learning, it requires more training time: 11.49 min versus 1 min 5 s for deep learning. Model accuracy could be further improved by introducing unstructured data from maintenance notes; this is left as future work because dealing with text data is time-consuming. The proposed model has been shown to improve the devices' maintenance strategy, with a cost reduction of approximately MYR 326,330.88 (Malaysian Ringgit) per year. Therefore, the maintenance cost would decrease drastically if this smart predictive model were included in the healthcare management system.
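The three-class failure-time formulation maps naturally onto a standard ensemble classifier; the sketch below uses scikit-learn's random forest purely as a stand-in, since the abstract does not name the ensemble algorithm, and the feature columns and synthetic records are hypothetical.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# hypothetical device records: equipment type, facility, usage and maintenance counts
rng = np.random.default_rng(42)
n = 1000
X = pd.DataFrame({
    "equipment_type": rng.integers(0, 44, n),        # 44 device categories
    "facility_id": rng.integers(0, 15, n),           # 15 healthcare facilities
    "usage_hours_per_week": rng.uniform(0, 120, n),
    "preventive_maintenance_count": rng.integers(0, 20, n),
})
# class 1: unlikely to fail within 3 years, class 2: likely to fail within 3 years,
# class 3: likely to fail more than 3 years after purchase
y = rng.integers(1, 4, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```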

11.
Diagnostics (Basel) ; 13(10)2023 May 18.
Article in English | MEDLINE | ID: mdl-37238277

ABSTRACT

Gastric cancer is a leading cause of cancer-related deaths worldwide, underscoring the need for early detection to improve patient survival rates. The current clinical gold standard for detection is histopathological image analysis, but this process is manual, laborious, and time-consuming. As a result, there has been growing interest in developing computer-aided diagnosis to assist pathologists. Deep learning has shown promise in this regard, but each individual model can extract only a limited number of image features for classification. To overcome this limitation and improve classification performance, this study proposes ensemble models that combine the decisions of several deep learning models. To evaluate their effectiveness, we tested the proposed models on the publicly available gastric cancer dataset, the Gastric Histopathology Sub-size Image Database. Our experimental results showed that the top-5 ensemble model achieved state-of-the-art detection accuracy in all sub-databases, with the highest detection accuracy of 99.20% in the 160 × 160 pixels sub-database. These results demonstrate that ensemble models can extract important features from smaller patch sizes and achieve promising performance. Overall, the proposed work could assist pathologists in detecting gastric cancer through histopathological image analysis and contribute to early gastric cancer detection to improve patient survival rates.
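The abstract does not state how the top-5 models' decisions are combined, so the snippet below shows one common option, hard majority voting over the individual predictions; treat it as an assumption rather than the paper's method.

```python
import numpy as np
from scipy import stats

def majority_vote(predictions):
    """predictions: (n_models, n_samples) integer class labels.
    Returns the most frequent label per sample (ties resolved by lowest label)."""
    mode, _ = stats.mode(predictions, axis=0, keepdims=False)
    return mode

# toy example: five base CNNs classifying six patches as normal (0) / cancer (1)
preds = np.array([
    [0, 1, 1, 0, 1, 0],
    [0, 1, 0, 0, 1, 0],
    [1, 1, 1, 0, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 1, 1, 1, 1, 0],
])
print(majority_vote(preds))   # expected: [0 1 1 0 1 0]
```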

12.
Front Bioeng Biotechnol ; 11: 1164655, 2023.
Article in English | MEDLINE | ID: mdl-37122858

ABSTRACT

Knee osteoarthritis is one of the most common musculoskeletal diseases and is usually diagnosed with medical imaging techniques. Conventionally, case identification using plain radiography is practiced. However, knee osteoarthritis is a complex three-dimensional condition; hence, magnetic resonance imaging is the ideal modality to reveal hidden osteoarthritis features from a three-dimensional view. In this work, the feasibility of well-known convolutional neural network (CNN) structures (ResNet, DenseNet, VGG, and AlexNet) to distinguish knees with and without osteoarthritis (OA) is investigated. Using 3D convolutional layers, we demonstrated the potential of 13 different 3D convolutional neural network architectures in knee osteoarthritis diagnosis. We applied transfer learning by transforming 2D pre-trained weights into 3D as initial weights for training the 3D models. The performance of the models was compared and evaluated based on balanced accuracy, precision, F1 score, and area under the receiver operating characteristic curve (AUC). This study suggests that transfer learning indeed enhanced the performance of the models, especially the ResNet and DenseNet models. Transfer learning-based models presented promising results, with ResNet34 achieving the best overall accuracy of 0.875 and an F1 score of 0.871. The results also showed that shallower networks yielded better performance than deeper neural networks, as demonstrated by ResNet18, DenseNet121, and VGG11 with AUC values of 0.945, 0.914, and 0.928, respectively. This encourages the application of 3D CNNs as a clinical diagnostic aid for knee osteoarthritis, even under limited hardware conditions.
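The transfer-learning step of turning 2D pre-trained weights into 3D initial weights is commonly done by repeating each 2D kernel along the new depth axis and rescaling so activations keep a similar magnitude; the sketch below shows that general inflation trick, not necessarily the authors' exact procedure.

```python
import numpy as np

def inflate_conv_weights(w2d, depth):
    """Inflate a 2D convolution kernel (kh, kw, c_in, c_out) into a 3D kernel
    (depth, kh, kw, c_in, c_out) by replication, divided by `depth` so that the
    summed response over the depth axis matches the original 2D response."""
    w3d = np.repeat(w2d[np.newaxis, ...], depth, axis=0)
    return w3d / float(depth)

# toy example: a 3x3 kernel with 64 input and 128 output channels, inflated to 3x3x3
w2d = np.random.randn(3, 3, 64, 128).astype("float32")
w3d = inflate_conv_weights(w2d, depth=3)
print(w2d.shape, "->", w3d.shape)       # (3, 3, 64, 128) -> (3, 3, 3, 64, 128)
```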

13.
Comput Intell Neurosci ; 2023: 4208231, 2023.
Article in English | MEDLINE | ID: mdl-36756163

ABSTRACT

Cardiac diseases are one of the key causes of death around the globe, and the number of heart patients increased considerably during the pandemic. Therefore, it is crucial to assess and analyze medical and cardiac images. Deep learning architectures, specifically convolutional neural networks, have become the primary choice for the assessment of cardiac medical images. The left ventricle is a vital part of the cardiovascular system, whose boundary and size play a significant role in the evaluation of cardiac function. Owing to its automation and promising results, left ventricle segmentation using deep learning has attracted considerable attention. This article presents a critical review of deep learning methods used for left ventricle segmentation from the frequently used imaging modalities: magnetic resonance imaging, ultrasound, and computed tomography. It also details the network architectures, software, and hardware used for training, along with the publicly available cardiac image datasets and self-prepared datasets incorporated. A summary of the evaluation metrics and results reported by different researchers is also presented. Finally, all this information is summarized and synthesized to help readers understand the motivation and methodology of various deep learning models and to explore potential solutions to future challenges in left ventricle (LV) segmentation.


Subject(s)
Deep Learning; Heart Diseases; Humans; Heart Ventricles/diagnostic imaging; Heart; Neural Networks, Computer; Magnetic Resonance Imaging; Image Processing, Computer-Assisted/methods
14.
Front Comput Neurosci ; 17: 1038636, 2023.
Article in English | MEDLINE | ID: mdl-36814932

ABSTRACT

Alzheimer's disease (AD) is a neurodegenerative disorder that causes memory degradation and cognitive function impairment in elderly people. The irreversible and devastating cognitive decline places a large burden on patients and society. So far, there is no effective treatment that can cure AD, but the progression of early-stage AD can be slowed down. Early and accurate detection is therefore critical for treatment. In recent years, deep-learning-based approaches have achieved great success in Alzheimer's disease diagnosis. The main objective of this paper is to review popular machine learning methods used for the classification and prediction of AD using magnetic resonance imaging (MRI). The methods reviewed include support vector machine (SVM), random forest (RF), convolutional neural network (CNN), autoencoder, deep learning, and transformer approaches. This paper also reviews pervasively used feature extractors and the different input forms of convolutional neural networks. Finally, this review discusses challenges such as class imbalance and data leakage, as well as trade-offs and suggestions concerning pre-processing techniques, deep learning, conventional machine learning methods, new techniques, and input type selection.

15.
Life (Basel) ; 13(1)2023 Jan 01.
Article in English | MEDLINE | ID: mdl-36676073

ABSTRACT

The segmentation of the left ventricle (LV) is one of the fundamental procedures that must be performed to obtain quantitative measures of the heart, such as its volume, area, and ejection fraction. In clinical practice, the delineation of the LV is still often conducted semi-automatically, leaving it open to operator subjectivity. Automatic LV segmentation from echocardiography images is a challenging task due to poorly defined boundaries and operator dependency. Recent research has demonstrated that deep learning can perform the segmentation process automatically. However, well-known state-of-the-art segmentation models still fall short in terms of accuracy and speed. This study aims to develop a single-stage lightweight segmentation model that precisely and rapidly segments the LV from 2D echocardiography images. In this research, a backbone network is used to acquire both low-level and high-level features. Two parallel blocks, known as the spatial feature unit and the channel feature unit, are employed to enhance and refine these features. The refined features are merged by an integrated unit to segment the LV. The performance of the model and the time taken to segment the LV are compared to those of other established segmentation models: DeepLab, FCN, and Mask R-CNN. The model achieved the highest values of the Dice similarity index (0.9446), intersection over union (0.8445), and accuracy (0.9742). The evaluation metrics and processing time demonstrate that the proposed model not only provides superior quantitative results but also trains and segments the LV in less time, indicating its improved performance over competing segmentation models.
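For reference, the two headline metrics quoted above, the Dice similarity index and intersection over union, are computed from binary masks as follows; this is standard practice rather than anything specific to the proposed network.

```python
import numpy as np

def dice_and_iou(pred, target, eps=1e-7):
    """pred, target: binary masks of identical shape (e.g., LV = 1, background = 0)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
    iou = (intersection + eps) / (union + eps)
    return dice, iou

# toy 2D example
pred = np.zeros((8, 8), dtype=int); pred[2:6, 2:6] = 1
gt   = np.zeros((8, 8), dtype=int); gt[3:7, 2:6] = 1
print(dice_and_iou(pred, gt))   # Dice 0.75, IoU 0.6
```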

16.
Complex Intell Systems ; 9(3): 2713-2745, 2023.
Article in English | MEDLINE | ID: mdl-34777967

ABSTRACT

Computed tomography (CT) is a widely used medical imaging modality in clinical medicine because it produces excellent visualizations of the fine structural details of the human body. In clinical procedures, it is desirable to acquire CT scans by minimizing the X-ray flux to prevent patients from being exposed to high radiation. However, these low-dose CT (LDCT) scanning protocols compromise the signal-to-noise ratio of the CT images because of noise and artifacts in the image space. Thus, various restoration methods have been published over the past three decades to produce high-quality CT images from LDCT images. More recently, as opposed to conventional LDCT restoration methods, deep learning (DL)-based LDCT restoration approaches have become common due to their data-driven nature, high performance, and fast execution. This study therefore aims to elaborate on the role of DL techniques in LDCT restoration and to critically review the applications of DL-based approaches for LDCT restoration. To achieve this aim, different aspects of DL-based LDCT restoration applications were analyzed, including DL architectures, performance gains, functional requirements, and the diversity of objective functions. The outcome of the study highlights the existing limitations and future directions for DL-based LDCT restoration. To the best of our knowledge, there have been no previous reviews that specifically address this topic.

17.
IEEE/ACM Trans Comput Biol Bioinform ; 20(4): 2387-2397, 2023.
Article in English | MEDLINE | ID: mdl-35025748

ABSTRACT

With the development of sensors, more and more multimodal data are being accumulated, especially in the biomedical and bioinformatics fields; multimodal data analysis has therefore become important and urgent. In this study, we combine multi-kernel learning and transfer learning, and propose a feature-level multi-modality fusion model for settings with insufficient training samples. Specifically, we first extend kernel ridge regression to its multi-kernel version under the lp-norm constraint to explore the complementary patterns contained in multimodal data. We then use marginal probability distribution adaptation to minimize the distribution differences between the source domain and the target domain, addressing the problem of insufficient training samples. Based on epilepsy EEG data provided by the University of Bonn, we construct 12 multi-modality and transfer scenarios to evaluate our model. Experimental results show that, compared with the baselines, our model performs better in most scenarios.
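To make the two ingredients concrete, the sketch below combines two kernels under fixed weights for kernel ridge regression and measures the gap between source and target marginals with a simple MMD estimate; it is a generic illustration of multi-kernel learning plus distribution adaptation, not the lp-norm-constrained model proposed in the paper.

```python
import numpy as np
from sklearn.metrics.pairwise import polynomial_kernel, rbf_kernel

def combined_kernel(Xa, Xb, weights=(0.5, 0.5), gamma=0.1, degree=2):
    """Convex combination of an RBF kernel and a polynomial kernel."""
    return (weights[0] * rbf_kernel(Xa, Xb, gamma=gamma)
            + weights[1] * polynomial_kernel(Xa, Xb, degree=degree))

def kernel_ridge_fit(K, y, lam=1.0):
    """Closed-form kernel ridge regression: alpha = (K + lam*I)^-1 y."""
    return np.linalg.solve(K + lam * np.eye(K.shape[0]), y)

def mmd(Xs, Xt, gamma=0.1):
    """Empirical maximum mean discrepancy between source and target marginals."""
    return (rbf_kernel(Xs, Xs, gamma=gamma).mean()
            + rbf_kernel(Xt, Xt, gamma=gamma).mean()
            - 2 * rbf_kernel(Xs, Xt, gamma=gamma).mean())

# toy EEG-like features: source domain (plentiful) and target domain (scarce)
rng = np.random.default_rng(1)
Xs, ys = rng.normal(0, 1, (100, 10)), rng.integers(0, 2, 100).astype(float)
Xt = rng.normal(0.3, 1, (20, 10))

alpha = kernel_ridge_fit(combined_kernel(Xs, Xs), ys)
pred_t = combined_kernel(Xt, Xs) @ alpha          # predictions on target samples
print("MMD(source, target) =", mmd(Xs, Xt))
```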

18.
Front Psychiatry ; 13: 1042641, 2022.
Article in English | MEDLINE | ID: mdl-36532166

ABSTRACT

Background: The strategies and services provided by caregivers and family members substantially impact the psychological and emotional wellbeing of autistic children, and rapid research developments in clinical and non-clinical methods benefit these children. Among various internal and external factors, the built environment also influences their characteristics. This study primarily investigates the psychological effect of light and colors on the mood and behavior of autistic children, to identify the most favorable and preferred indoor lights and color shades. Methods: A questionnaire survey was conducted at an autism center among autistic children and their parents. The study included autistic children aged between 6 and 16 (45 males, 42 females, mean age 8.7 years, standard deviation 2.3). Eighty-seven participants were involved in the survey to determine the sensory perceptions, intolerance, preferences, and sensitivities of children with autism spectrum disorder toward colors and lighting. The margin of error at the 95% confidence level is ±0.481. Results: The children have various color preferences and respond differently to different shades. Different hues have varying effects on autistic children, with many neutral tones and mellow shades proving to be autism-friendly with their calming and soothing effect, while bright, bold, and intense colors are refreshing and stimulating. Bright lighting causes behavioral changes in autistic children prone to light sensitivity. Conclusion: The insights gained from this interaction with parents and caretakers of autistic children could help designers incorporate specific autism-friendly design elements that make interior spaces more productive. A complete understanding of the effect of factors such as color and lighting on the learning ability and engagement of autistic children in an indoor environment is essential for designers and clinicians. The main findings of this study could help designers and clinicians create an autism-friendly built environment with a color palette and lighting scheme conducive to the children's wellbeing and maximal cognitive functioning.

19.
Article in English | MEDLINE | ID: mdl-36429697

ABSTRACT

PURPOSE: Mental health assessments that combine patients' facial expressions and behaviors have been proven effective, but screening large-scale student populations for mental health problems is time-consuming and labor-intensive. This study aims to provide an efficient and accurate intelligent method, combining artificial intelligence technologies, to assist in evaluating the mental health problems of college students and to support further psychological diagnosis and treatment. MATERIALS AND METHODS: We propose a mixed-method mental health assessment that combines psychological questionnaires with facial emotion analysis to comprehensively evaluate the mental health of students on a large scale. The Depression Anxiety and Stress Scale-21 (DASS-21) is used as the psychological questionnaire. The facial emotion recognition model is implemented through transfer learning based on neural networks and is pre-trained on the FER2013 and CFEE datasets. The FER2013 dataset consists of 35,887 48 × 48-pixel grayscale face images, and the CFEE dataset contains 950,000 facial images with annotated action units (AUs). Using a random sampling strategy, we sent online questionnaires to 400 college students and received 374 responses (a response rate of 93.5%). After pre-processing, 350 results were available, including 187 male and 153 female students. First, the facial emotion data of students were collected during an online questionnaire test. Then, a pre-trained model was used for emotion recognition. Finally, the online psychological questionnaire scores and the facial emotion recognition model scores were collated to give a comprehensive psychological evaluation score. RESULTS: The experimental results show that the classification results of the proposed facial emotion recognition model are broadly consistent with the mental health survey results, so the model can be used to improve efficiency. In particular, the accuracy of the proposed facial emotion recognition model is higher than that of the general mental health model, which uses only a traditional single questionnaire. Furthermore, the absolute errors of this study for depression, anxiety, and stress are lower than other mental health survey results, at only 0.8%, 8.1%, 3.5%, and 1.8%, respectively. CONCLUSION: The mixed method combining intelligent methods and scales for mental health assessment has high recognition accuracy and can therefore support efficient large-scale screening of students' psychological problems.
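A stripped-down version of the described transfer-learning setup for facial emotion recognition could look like the sketch below: a frozen ImageNet backbone with a new softmax head for the seven FER2013 emotion classes. The backbone choice, input size, and head are assumptions, and the fusion with DASS-21 scores is not reproduced here.

```python
import tensorflow as tf

NUM_EMOTIONS = 7   # FER2013 classes: angry, disgust, fear, happy, sad, surprise, neutral

# frozen ImageNet backbone; FER2013's 48x48 grayscale images are assumed to be
# resized to 96x96 and replicated to 3 channels before reaching this model
base = tf.keras.applications.MobileNetV2(include_top=False, weights="imagenet",
                                         input_shape=(96, 96, 3), pooling="avg")
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_EMOTIONS, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```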


Subject(s)
Artificial Intelligence; Mental Health; Humans; Male; Female; Students/psychology; Facial Expression; Anxiety/diagnosis
20.
Sci Rep ; 12(1): 19200, 2022 11 10.
Article in English | MEDLINE | ID: mdl-36357456

ABSTRACT

Computer-aided Invasive Ductal Carcinoma (IDC) grading classification systems based on deep learning have shown that deep learning may achieve reliable accuracy in IDC grade classification using histopathology images. However, there is a dearth of comprehensive performance comparisons of Convolutional Neural Network (CNN) designs on IDC in the literature. We therefore conducted a comparative analysis of the performance of seven selected CNN models: EfficientNetB0, EfficientNetV2B0, EfficientNetV2B0-21k, ResNetV1-50, ResNetV2-50, MobileNetV1, and MobileNetV2, with transfer learning. To implement each pre-trained CNN architecture, we deployed the corresponding feature vector available from TensorFlow Hub, integrating it with dropout and dense layers to form a complete CNN model. Our findings indicated that EfficientNetV2B0-21k (0.72B floating-point operations and 7.1M parameters) outperformed the other CNN models in the IDC grading task. Nevertheless, we found that practically all selected CNN models perform well on this task, with an average balanced accuracy of 0.936 ± 0.0189 on the cross-validation set and 0.9308 ± 0.0211 on the test set.
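The described model assembly, a pre-trained feature vector followed by dropout and dense layers, can be sketched as below. The paper pulls feature vectors from TensorFlow Hub; to keep the sketch runnable without guessing a Hub handle, a Keras Applications EfficientNetV2-B0 backbone with global average pooling stands in, and the three-grade output layer is an assumption.

```python
import tensorflow as tf

NUM_GRADES = 3   # IDC grades 1-3 (assumed label set)

# stand-in for a TensorFlow Hub feature vector: a frozen EfficientNetV2-B0
# backbone whose global-average-pooled output serves as the feature vector
backbone = tf.keras.applications.EfficientNetV2B0(include_top=False,
                                                  weights="imagenet",
                                                  pooling="avg",
                                                  input_shape=(224, 224, 3))
backbone.trainable = False

model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_GRADES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```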


Subject(s)
Carcinoma, Ductal; Neural Networks, Computer; Humans; Publications; Machine Learning