Results 1 - 20 of 51
1.
Mol Divers ; 27(1): 71-80, 2023 Feb.
Article in English | MEDLINE | ID: mdl-35254585

ABSTRACT

In computational chemistry, high-dimensional molecular descriptors contribute to the curse of dimensionality. The binary whale optimization algorithm (BWOA) is a recently proposed metaheuristic that has been applied efficiently to feature selection. The main contribution of this paper is a new version of the nonlinear time-varying sigmoid transfer function that improves the exploitation and exploration activities of the standard whale optimization algorithm (WOA). A new BWOA variant, BWOA-3, is introduced to solve the descriptor selection problem, which is the second contribution. To validate BWOA-3, a high-dimensional drug dataset is employed. The proficiency of the proposed BWOA-3 and of the comparative optimization algorithms is measured by convergence speed, the length of the selected feature subset, and classification performance (accuracy, specificity, sensitivity, and F-measure). In addition, statistical significance tests are conducted using the Friedman test and the Wilcoxon signed-rank test. The comparative optimization algorithms include two BWOA variants, the binary bat algorithm (BBA), the binary gray wolf algorithm (BGWOA), and the binary manta-ray foraging algorithm (BMRFO). As the final contribution, the experiments reveal the superiority of BWOA-3 in solving the descriptor selection problem and improving amphetamine-type stimulants (ATS) drug classification performance.


Subjects
Algorithms, Whales, Animals
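A minimal sketch of how a time-varying sigmoid transfer function can binarize a continuous whale position into a descriptor-selection mask, in the spirit of BWOA-style algorithms. The abstract does not give BWOA-3's exact transfer function or schedule, so the `tau` decay, the thresholding, and the toy fitness below are illustrative assumptions, and the WOA position update itself is only a placeholder.

```python
import numpy as np

rng = np.random.default_rng(0)

def time_varying_sigmoid(v, t, t_max, tau_start=4.0, tau_end=0.5):
    """Sigmoid whose steepness changes over iterations: flatter early
    (exploration), steeper late (exploitation). Illustrative schedule only."""
    tau = tau_start - (tau_start - tau_end) * (t / t_max)
    return 1.0 / (1.0 + np.exp(-v / tau))

def binarize(position, t, t_max):
    """Map a continuous whale position to a 0/1 descriptor-selection mask."""
    probs = time_varying_sigmoid(position, t, t_max)
    return (rng.random(position.shape) < probs).astype(int)

def fitness(mask, X, y, alpha=0.99):
    """Toy fitness: reward an informative subset while penalizing its size."""
    if mask.sum() == 0:
        return 0.0
    score = abs(np.corrcoef(X[:, mask == 1].mean(axis=1), y)[0, 1])
    return alpha * score + (1 - alpha) * (1 - mask.mean())

n_samples, n_desc, t_max = 50, 20, 30
X = rng.normal(size=(n_samples, n_desc))
y = X[:, 0] + 0.1 * rng.normal(size=n_samples)   # descriptor 0 is informative
pos = rng.normal(size=n_desc)                    # one whale's continuous position

for t in range(t_max):
    mask = binarize(pos, t, t_max)
    pos += 0.1 * rng.normal(size=n_desc)         # placeholder for the WOA position update
print("selected descriptors:", np.flatnonzero(mask), "fitness:", round(fitness(mask, X, y), 3))
```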
2.
Sensors (Basel) ; 23(12)2023 Jun 17.
Article in English | MEDLINE | ID: mdl-37420825

ABSTRACT

The milling machine serves an important role in manufacturing because of its versatility in machining. The cutting tool is a critical component because it is responsible for machining accuracy and surface finish, which affect industrial productivity. Monitoring the cutting tool's life is essential to avoid machining downtime caused by tool wear. To prevent unplanned machine downtime and to utilize the maximum life of the cutting tool, accurate prediction of the cutting tool's remaining useful life (RUL) is essential. Different artificial intelligence (AI) techniques estimate the RUL of cutting tools in milling operations with improved prediction accuracy. The IEEE NUAA Ideahouse dataset is used in this paper for RUL estimation of the milling cutter. The accuracy of the prediction depends on the quality of the feature engineering performed on the unprocessed data, and feature extraction is a crucial phase in RUL prediction. In this work, the authors consider time-frequency domain (TFD) features such as the short-time Fourier transform (STFT) and different wavelet transforms (WT), along with deep learning (DL) models such as long short-term memory (LSTM), different LSTM variants, convolutional neural networks (CNN), and hybrid models that combine CNNs with LSTM variants, for RUL estimation. The TFD feature extraction with LSTM variants and hybrid models performs well for milling cutting tool RUL estimation.


Subjects
Deep Learning, Tool Use Behavior, Artificial Intelligence, Commerce, Engineering
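A minimal sketch of the TFD-feature-plus-LSTM pipeline described in the abstract: STFT magnitudes from a sensor signal form a sequence that an LSTM regressor maps to an RUL estimate. The synthetic signal, window sizes, and network shape are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import stft

# Synthetic sensor signal standing in for milling vibration/force data.
fs, n = 1000, 4096
t = np.arange(n) / fs
signal = np.sin(2 * np.pi * 120 * t) + 0.3 * np.random.randn(n)

# Time-frequency features: |STFT| is a (freq_bins, time_frames) matrix;
# each time frame becomes one step of the LSTM input sequence.
_, _, Z = stft(signal, fs=fs, nperseg=256, noverlap=128)
features = torch.tensor(np.abs(Z).T, dtype=torch.float32).unsqueeze(0)  # (1, frames, bins)

class RULRegressor(nn.Module):
    def __init__(self, n_bins, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_bins, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)          # scalar RUL estimate

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])           # use the last time step

model = RULRegressor(features.shape[-1])
print("predicted RUL (untrained, arbitrary units):", model(features).item())
```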
3.
Appl Intell (Dordr) ; 53(7): 8354-8369, 2023.
Article in English | MEDLINE | ID: mdl-35937201

ABSTRACT

Fake news detection mainly relies on extracting article content features with neural networks. However, reducing noisy data and redundant features and learning long-distance dependencies remain challenging. To solve these problems, Dual-channel Convolutional Neural Networks with Attention-pooling for Fake News Detection (abbreviated as DC-CNN) is proposed. This model benefits from Skip-Gram and FastText, which effectively reduce noisy data and improve the model's learning ability for non-derived words. A parallel dual-channel pooling layer is proposed to replace the traditional CNN pooling layer in DC-CNN. The max-pooling layer, as one channel, retains the advantage of learning local information between adjacent words. The attention-pooling layer with a multi-head attention mechanism serves as the other pooling channel to enhance the learning of context semantics and global dependencies. The model benefits from the learning advantages of both channels and mitigates the tendency of a single pooling layer to lose the correlation between local and global features. The model is tested on two different COVID-19 fake news datasets, and the experimental results show that it achieves the best performance in dealing with noisy data and balancing the correlation between local and global features.
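A minimal sketch of a parallel dual-channel pooling idea like the one DC-CNN describes: one channel max-pools convolutional features over the sequence, the other pools with multi-head self-attention, and the two are concatenated before classification. Dimensions and layer choices are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class DualChannelPooling(nn.Module):
    """Concatenates a max-pooling channel (local cues) with an
    attention-pooling channel (global/contextual cues)."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, h):                     # h: (batch, seq_len, dim)
        max_channel = h.max(dim=1).values     # (batch, dim)
        attn_out, _ = self.attn(h, h, h)      # self-attention over the sequence
        attn_channel = attn_out.mean(dim=1)   # (batch, dim)
        return torch.cat([max_channel, attn_channel], dim=-1)

class TinyFakeNewsClassifier(nn.Module):
    def __init__(self, vocab=5000, dim=64, n_classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.conv = nn.Conv1d(dim, dim, kernel_size=3, padding=1)
        self.pool = DualChannelPooling(dim)
        self.fc = nn.Linear(2 * dim, n_classes)

    def forward(self, tokens):                        # tokens: (batch, seq_len)
        h = self.emb(tokens).transpose(1, 2)          # (batch, dim, seq_len) for Conv1d
        h = torch.relu(self.conv(h)).transpose(1, 2)  # back to (batch, seq_len, dim)
        return self.fc(self.pool(h))

logits = TinyFakeNewsClassifier()(torch.randint(0, 5000, (2, 40)))
print(logits.shape)   # torch.Size([2, 2])
```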

4.
Appl Intell (Dordr) ; : 1-18, 2023 Feb 15.
Article in English | MEDLINE | ID: mdl-36820069

ABSTRACT

Although the Internet and social media provide people with a range of opportunities and benefits in a variety of ways, the proliferation of fake news has negatively affected society and individuals. Many efforts have been made to detect fake news. However, learning fake news representations from contextual information remains challenging due to feature sparsity and the difficulty of capturing non-consecutive and long-range context. In this paper, we propose an Intra-graph and Inter-graph Joint Information Propagation Network (abbreviated as IIJIPN) with a Third-order Text Graph Tensor for fake news detection. Specifically, data augmentation is first utilized to address data imbalance and strengthen the small corpus. In the feature extraction stage, a Third-order Text Graph Tensor with sequential, syntactic, and semantic features is proposed to describe contextual information across different language properties. After constructing the text graphs for each text feature, Intra-graph and Inter-graph Joint Information Propagation is used to encode the text: intra-graph information propagation is performed in each graph to realize homogeneous information interaction, and high-order homogeneous information interaction in each graph can be achieved by stacking propagation layers; inter-graph information propagation is performed among the text graphs to realize heterogeneous information interaction by connecting the nodes across the graphs. Finally, news representations are generated by an attention mechanism consisting of graph-level and node-level attention, and the news representations are then fed into a fake news classifier. The experimental results on four public datasets indicate that our model outperforms state-of-the-art methods. Our source code is available at https://github.com/cuibenkuan/IIJIPN.
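A minimal sketch of the intra-graph vs. inter-graph propagation idea: within each text graph, nodes aggregate neighbor information (homogeneous interaction); across graphs, node states aligned by position are fused (heterogeneous interaction). The actual IIJIPN layers, normalization, and gating are not specified in the abstract, so the update rules below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class JointPropagation(nn.Module):
    """One round of intra-graph then inter-graph propagation over three
    text graphs (e.g., sequential, syntactic, and semantic views)."""
    def __init__(self, dim, n_graphs=3):
        super().__init__()
        self.intra = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_graphs))
        self.inter = nn.Linear(n_graphs * dim, dim)

    def forward(self, adjs, feats):
        # adjs: list of (n_nodes, n_nodes) adjacency matrices, one per graph
        # feats: list of (n_nodes, dim) node features, aligned across graphs
        intra_out = [torch.relu(lin(a @ h)) for lin, a, h in zip(self.intra, adjs, feats)]
        fused = self.inter(torch.cat(intra_out, dim=-1))    # cross-graph exchange per node
        return [h + fused for h in intra_out]                # residual heterogeneous update

n_nodes, dim = 6, 16
adjs = [torch.eye(n_nodes) + torch.rand(n_nodes, n_nodes).round() for _ in range(3)]
feats = [torch.randn(n_nodes, dim) for _ in range(3)]
out = JointPropagation(dim)(adjs, feats)
print([o.shape for o in out])
```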

5.
Sensors (Basel) ; 22(9)2022 May 06.
Article in English | MEDLINE | ID: mdl-35591237

ABSTRACT

This paper proposes a dual-channel network of a sustainable Closed-Loop Supply Chain (CLSC) for rice considering energy sources and consumption tax. A Mixed Integer Linear Programming (MILP) model is formulated to optimize the total cost, the amount of pollutants, and the number of job opportunities created in the proposed supply chain network under uncertainty of cost, supply, and demand. Fuzzy logic is used to deal with this uncertainty. Moreover, four multi-objective metaheuristic algorithms are employed to solve the model: a novel multi-objective version of the recently proposed Reptile Search Algorithm, called the Multi-Objective Reptile Search Optimizer (MORSO), Multi-Objective Simulated Annealing (MOSA), Multi-Objective Particle Swarm Optimization (MOPSO), and the Multi-Objective Grey Wolf Optimizer (MOGWO). All the algorithms are evaluated against the LP-metric method on small-sized instances, and their results and performance are compared using criteria such as Max Spread (MS), Spread of Non-Dominance Solutions (SNS), the number of Pareto solutions (NPS), Mean Ideal Distance (MID), and CPU time. To achieve better results, the parameters of all algorithms are tuned by the Taguchi method. The proposed model is implemented on a real case study in Iran to confirm its accuracy and efficiency. To further evaluate the model, some key parameters are subjected to sensitivity analysis. Empirical results indicate that MORSO performs very well and that, by constructing solar panel sites and producing energy from rice waste, up to 19% of electricity can be saved.


Subjects
Algorithms, Fuzzy Logic, Iran (Geographic), Renewable Energy, Uncertainty
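A minimal sketch, assuming minimization objectives, of two of the Pareto-front quality metrics named in the abstract: Mean Ideal Distance (MID, average Euclidean distance of non-dominated solutions from the ideal point) and Maximum Spread (MS, diagonal of the front's bounding box). The toy front is illustrative.

```python
import numpy as np

def mean_ideal_distance(front, ideal=None):
    """MID: average Euclidean distance of Pareto solutions from the ideal point."""
    front = np.asarray(front, dtype=float)
    if ideal is None:                       # default ideal: per-objective minimum
        ideal = front.min(axis=0)
    return float(np.linalg.norm(front - ideal, axis=1).mean())

def max_spread(front):
    """MS: length of the diagonal spanned by the extreme objective values."""
    front = np.asarray(front, dtype=float)
    return float(np.sqrt(((front.max(axis=0) - front.min(axis=0)) ** 2).sum()))

# Toy 3-objective front: (total cost, pollutants, negated job opportunities)
front = [(120.0, 30.0, -55.0), (150.0, 22.0, -60.0), (180.0, 18.0, -70.0)]
print("MID:", round(mean_ideal_distance(front), 2), "MS:", round(max_spread(front), 2))
```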
6.
Sensors (Basel) ; 22(12)2022 Jun 09.
Article in English | MEDLINE | ID: mdl-35746156

ABSTRACT

The emerging areas of IoT and sensor networks bring a steady stream of new software applications. To keep up with the ever-changing expectations of clients and the competitive market, the software must be updated. The changes may cause unintended consequences, necessitating retesting, i.e., regression testing, before release. The efficiency and efficacy of regression testing techniques can be improved with the use of optimization approaches. This paper proposes an improved quantum-behaved particle swarm optimization approach for regression testing. The algorithm is improved by employing a fix-up mechanism to perform perturbation for the combinatorial test case prioritization (TCP) problem. Second, a dynamic contraction-expansion coefficient is used to accelerate convergence. This is followed by an adaptive test case selection strategy to choose the modification-revealing test cases. Finally, the superfluous test cases are removed. Furthermore, the algorithm's robustness is analyzed for fault as well as statement coverage. The empirical results reveal that the proposed algorithm performs better than the Genetic Algorithm, Bat Algorithm, Grey Wolf Optimization, Particle Swarm Optimization and its variants for prioritizing test cases. The findings show that inclusivity, test selection percentage and cost reduction percentages are higher for fault coverage than for statement coverage, but at the cost of a high fault detection loss (approx. 7%) at the test case reduction stage.


Subjects
Algorithms, Humans, Regression Analysis
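A minimal sketch of the quantum-behaved PSO position update, including a linearly decreasing contraction-expansion coefficient of the kind the abstract mentions. The fix-up/repair step for the combinatorial prioritization problem, the adaptive selection strategy, and the fitness evaluation (e.g., against coverage data) are omitted; values and the decay schedule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def qpso_step(x, pbest, gbest, t, t_max, beta_hi=1.0, beta_lo=0.5):
    """One QPSO update: the attractor p mixes personal/global bests, mbest is the
    mean of personal bests, and beta contracts over iterations to speed convergence."""
    beta = beta_hi - (beta_hi - beta_lo) * t / t_max   # dynamic contraction-expansion
    phi = rng.random(x.shape)
    p = phi * pbest + (1 - phi) * gbest                # per-dimension local attractor
    mbest = pbest.mean(axis=0)                         # mean best position of the swarm
    u = np.maximum(rng.random(x.shape), 1e-12)
    sign = np.where(rng.random(x.shape) < 0.5, 1.0, -1.0)
    return p + sign * beta * np.abs(mbest - x) * np.log(1.0 / u)

swarm, dims, t_max = 8, 5, 20
x = rng.random((swarm, dims))
pbest, gbest = x.copy(), x[0].copy()       # in the full algorithm these are updated from fitness
for t in range(t_max):
    x = qpso_step(x, pbest, gbest, t, t_max)
print("positions after", t_max, "steps:\n", np.round(x[:2], 3))
```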
7.
Sensors (Basel) ; 23(1)2022 Dec 30.
Article in English | MEDLINE | ID: mdl-36617019

ABSTRACT

Visual analysis of an electroencephalogram (EEG) by medical professionals is highly time-consuming and the information is difficult to process. To overcome these limitations, several automated seizure detection strategies have been introduced by combining signal processing and machine learning. This paper proposes a hybrid optimization-controlled ensemble classifier comprising an AdaBoost classifier, a random forest (RF) classifier, and a decision tree (DT) classifier for the automatic analysis of an EEG signal dataset to predict epileptic seizures. The EEG signal is first pre-processed to make it suitable for feature selection. The feature selection process receives the alpha, beta, delta, theta, and gamma wave data from the EEG, from which the significant features, such as statistical features, wavelet features, and entropy-based features, are extracted by the proposed hybrid seek optimization algorithm. These extracted features are fed to the proposed ensemble classifier, which produces the predicted output. The proposed hybrid seek optimization technique, developed by combining corvid and gregarious search agent characteristics, is used to evaluate the fusion parameters of the ensemble classifier. The suggested technique's accuracy, sensitivity, and specificity are 96.6120%, 94.6736%, and 91.3684%, respectively, for the CHB-MIT database, demonstrating its effectiveness for early seizure prediction. For the Siena Scalp database, the accuracy, sensitivity, and specificity are 95.3090%, 93.1766%, and 90.0654%, respectively, again demonstrating its efficacy in the early seizure prediction process.


Subjects
Epilepsy, Seizures, Humans, Seizures/diagnosis, Epilepsy/diagnosis, Electroencephalography/methods, Computer-Assisted Signal Processing, Algorithms, Support Vector Machine
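A minimal scikit-learn sketch of an AdaBoost/random-forest/decision-tree ensemble combined by weighted soft voting, on synthetic data standing in for extracted EEG features. The voting weights play the role of the fusion parameters that a hybrid seek-style optimizer would tune in the paper; all values here are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for extracted EEG features (statistical/wavelet/entropy).
X, y = make_classification(n_samples=600, n_features=24, n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Fusion weights would come from the optimizer in the paper; fixed here for illustration.
ensemble = VotingClassifier(
    estimators=[
        ("ada", AdaBoostClassifier(random_state=0)),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("dt", DecisionTreeClassifier(max_depth=8, random_state=0)),
    ],
    voting="soft",
    weights=[1.0, 1.5, 0.5],
)
ensemble.fit(X_tr, y_tr)
print("held-out accuracy:", round(ensemble.score(X_te, y_te), 3))
```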
8.
Sensors (Basel) ; 22(21)2022 Oct 26.
Article in English | MEDLINE | ID: mdl-36365909

ABSTRACT

The induction motor plays a vital role in industrial drive systems due to its robustness and easy maintenance, but at the same time it suffers from electrical faults, mainly rotor faults such as broken rotor bars. Early fault identification is needed to reduce maintenance expenses and avoid high costs by using failure-detection frameworks that provide feature extraction and pattern grouping to identify failures in an induction motor with classification models. In this paper, the open-source dataset of a rotor with broken bars in a three-phase induction motor, available on the IEEE DataPort, is used for fault classification. The study aims at fault identification under various loading conditions on the rotor of an induction motor by performing time-, frequency-, and time-frequency-domain feature extraction. The extracted features are provided to the models to classify between healthy and faulty rotors. The features extracted from the time and frequency domains give accuracies of up to 87.52% and 88.58%, respectively, using the Random Forest (RF) model. In the time-frequency domain, short-time Fourier transform (STFT)-based spectrograms provide reasonably high accuracy, around 97.67%, using a convolutional neural network (CNN)-based fine-tuned transfer learning framework for diagnosing induction motor rotor bar severity under various loading conditions.


Subjects
Algorithms, Vibration, Equipment Failure Analysis, Computer Simulation, Machine Learning
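A minimal sketch of the kind of time- and frequency-domain features that can be extracted from motor signals before random-forest classification, as in the first stage of the study above. The exact feature set, the IEEE DataPort signals, and the labels are not reproduced here, so the synthetic signals and feature choices are illustrative assumptions.

```python
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)

def extract_features(sig, fs=1000):
    """A few common time-domain and frequency-domain descriptors."""
    spectrum = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(sig.size, d=1 / fs)
    centroid = float((freqs * spectrum).sum() / spectrum.sum())
    return [
        float(np.sqrt(np.mean(sig ** 2))),   # RMS
        float(kurtosis(sig)),                # impulsiveness
        float(skew(sig)),                    # asymmetry
        float(sig.max() - sig.min()),        # peak-to-peak
        centroid,                            # spectral centroid
    ]

def make_signal(broken_bar, n=2048, fs=1000):
    """Healthy rotor: clean 50 Hz component; faulty: added sidebands + noise."""
    t = np.arange(n) / fs
    sig = np.sin(2 * np.pi * 50 * t) + 0.05 * rng.normal(size=n)
    if broken_bar:
        sig += 0.3 * np.sin(2 * np.pi * 46 * t) + 0.3 * np.sin(2 * np.pi * 54 * t)
    return sig

X = np.array([extract_features(make_signal(label)) for label in (0, 1) * 100])
y = np.array([0, 1] * 100)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[:150], y[:150])
print("toy test accuracy:", clf.score(X[150:], y[150:]))
```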
9.
Sensors (Basel) ; 22(20)2022 Oct 14.
Article in English | MEDLINE | ID: mdl-36298176

ABSTRACT

Affective, emotional, and physiological state (AFFECT) detection and recognition by capturing human signals is a fast-growing area that has been applied across numerous domains. The research aim is to review publications on how techniques that use brain and biometric sensors can be used for AFFECT recognition, consolidate the findings, provide a rationale for the current methods, compare the effectiveness of existing methods, and quantify how likely they are to address the issues/challenges in the field. In efforts to better achieve the key goals of Society 5.0, Industry 5.0, and human-centered design, the recognition of emotional, affective, and physiological states is progressively becoming an important matter and offers tremendous potential for knowledge growth and progress in these and other related fields. In this research, a review of AFFECT-recognition brain and biometric sensors, methods, and applications was performed, based on Plutchik's wheel of emotions. Due to the immense variety of existing sensors and sensing systems, this study aimed to provide an analysis of the available sensors that can be used to define human AFFECT, and to classify them based on the type of sensing area and their efficiency in real implementations. Based on statistical and multiple-criteria analysis across 169 nations, our outcomes identify a connection between a nation's success, its number of published Web of Science articles, and its frequency of citation on AFFECT recognition. The principal conclusions present how this research contributes to the big picture in the field under analysis and explore forthcoming study trends.


Subjects
Emotions, Recognition (Psychology), Humans, Emotions/physiology, Biometry, Artificial Intelligence
10.
Sensors (Basel) ; 22(20)2022 Oct 21.
Article in English | MEDLINE | ID: mdl-36298415

ABSTRACT

Human ideas and sentiments are mirrored in facial expressions. They give the spectator a plethora of social cues, such as the viewer's focus of attention, intention, motivation, and mood, which can help develop better interactive solutions on online platforms. This could be helpful while teaching children, cultivating a better interactive connection between teachers and students, given the increasing shift toward online education platforms due to the COVID-19 pandemic. To address this, the authors propose kids' emotion recognition based on visual cues, with a justified reasoning model of explainable AI. The authors used two datasets: the first is the LIRIS Children Spontaneous Facial Expression Video Database, and the second is an author-created novel dataset of emotions displayed by children aged 7 to 10. Prior work on the LIRIS dataset achieved only 75% accuracy and no study has worked further on it; the authors achieve the highest accuracy of 89.31% on LIRIS and 90.98% on their own dataset. The authors also observe that facial structure differs between children and adults, and that children do not always express a specific emotion with the same facial expressions as adults. Hence, the authors used 468 3D landmark points and created two additional versions of the selected datasets, LIRIS-Mesh and Authors-Mesh. In total, four dataset types were used, namely LIRIS, the authors' dataset, LIRIS-Mesh, and Authors-Mesh, and a comparative analysis was performed using seven different CNN models. The authors not only compared all dataset types across the different CNN models but also used explainable artificial intelligence (XAI) to show, for every CNN and dataset combination, how test images are perceived by the deep-learning models, which helps localize the features contributing to particular emotions. The authors used three XAI methods, namely Grad-CAM, Grad-CAM++, and SoftGrad, which help users further establish the appropriate reason for emotion detection by knowing the contribution of each feature.


Subjects
COVID-19, Deep Learning, Adult, Child, Animals, Humans, Artificial Intelligence, Pandemics, Emotions
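A minimal Grad-CAM sketch in PyTorch of the kind of explainability named above: gradients of the predicted class score with respect to the last convolutional feature map weight the channels of that map into a coarse heat map. The tiny CNN and random input are placeholders, not the study's models or child-face data.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyEmotionCNN(nn.Module):
    def __init__(self, n_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):
        fmap = self.features(x)                      # (B, 32, H, W)
        logits = self.head(fmap.mean(dim=(2, 3)))    # global average pooling
        return fmap, logits

def grad_cam(model, image, target_class):
    fmap, logits = model(image)
    fmap.retain_grad()                               # keep gradients on the feature map
    logits[0, target_class].backward()
    weights = fmap.grad.mean(dim=(2, 3), keepdim=True)   # per-channel importance
    cam = F.relu((weights * fmap).sum(dim=1))            # weighted sum, positive part only
    cam = cam / (cam.max() + 1e-8)                       # normalize to [0, 1]
    return cam.squeeze(0).detach()

model = TinyEmotionCNN()
image = torch.rand(1, 3, 64, 64, requires_grad=True)
heatmap = grad_cam(model, image, target_class=2)
print("heat map shape:", heatmap.shape)   # (64, 64): no downsampling in this tiny CNN
```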
11.
Appl Math Model ; 112: 282-303, 2022 Dec.
Article in English | MEDLINE | ID: mdl-35946032

ABSTRACT

This paper presents a bi-level blood supply chain network under uncertainty during the COVID-19 pandemic outbreak using a Stackelberg game theory technique. A new two-phase bi-level mixed-integer linear programming model is developed in which the total costs are minimized and the utility of donors is maximized. To cope with the uncertain nature of some of the input parameters, a novel mixed possibilistic-robust-fuzzy programming approach is developed. The data from a real case study is utilized to show the applicability and efficiency of the proposed model. Finally, some sensitivity analyses are performed on the important parameters and some managerial insights are suggested.

12.
Appl Intell (Dordr) ; 52(12): 13729-13762, 2022.
Article in English | MEDLINE | ID: mdl-35677730

ABSTRACT

Every year, earthquakes leave millions of people affected and thousands of victims. Therefore, proper preparedness and response planning is necessary. The objectives of this paper are i) minimizing the expected value of the total costs of the relief supply chain, ii) minimizing the maximum number of unsatisfied demands for relief staff, and iii) minimizing the total probability of unsuccessful evacuation along routes. In this paper, a scenario-based stochastic multi-objective location-allocation-routing model is proposed for a real humanitarian relief logistics problem that focuses on both pre- and post-disaster situations in the presence of uncertainty. To cope with demand uncertainty, a simulation approach is used. The proposed model integrates these two phases simultaneously. Both strategic and operational decisions (pre-disaster and post-disaster), fairness in evacuation, relief item distribution (including commodities and relief workers), and victim evacuation (including injured people, corpses, and homeless people) are considered simultaneously. The presented model is solved using the epsilon-constraint method for small- and medium-scale problems and three metaheuristic algorithms for the large-scale problem (case study). Empirical results illustrate that the model can be used to locate shelters and relief distribution centers, determine appropriate routes, and allocate resources in uncertain, real-life disaster situations.
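A minimal sketch of the epsilon-constraint idea used for the small and medium instances: optimize one objective while the others are bounded by epsilon values, then vary the epsilons to trace a Pareto set. The two-objective toy model below (opening cost vs. unmet demand, with hypothetical shelter capacities) is an illustrative assumption, not the paper's relief model.

```python
from pulp import LpMinimize, LpProblem, LpVariable, lpSum, value, PULP_CBC_CMD

def solve_with_epsilon(eps_unmet):
    """Minimize opening cost subject to 'unmet demand <= eps' (epsilon-constraint)."""
    prob = LpProblem("relief_toy", LpMinimize)
    open_shelter = [LpVariable(f"open_{i}", cat="Binary") for i in range(3)]
    unmet = LpVariable("unmet_demand", lowBound=0)

    fixed_cost = [40, 55, 70]     # hypothetical cost of opening each shelter
    capacity = [30, 45, 60]       # hypothetical people each shelter can serve
    demand = 100

    prob += lpSum(c * o for c, o in zip(fixed_cost, open_shelter))   # objective 1: cost
    prob += unmet >= demand - lpSum(cap * o for cap, o in zip(capacity, open_shelter))
    prob += unmet <= eps_unmet                                       # objective 2 as a constraint
    prob.solve(PULP_CBC_CMD(msg=False))
    return value(prob.objective), value(unmet)

for eps in (0, 25, 70):
    cost, unmet = solve_with_epsilon(eps)
    print(f"eps={eps:>3}  cost={cost:.0f}  unmet demand={unmet:.0f}")
```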

13.
Appl Intell (Dordr) ; 52(15): 17652-17667, 2022.
Article in English | MEDLINE | ID: mdl-35400845

ABSTRACT

The spread of COVID-19 has had a serious impact on both people's work and their lives. With the decrease in physical social contact and rising anxiety about the pandemic, social media has become the primary way for people to access information related to COVID-19. Social media is rife with rumors and fake news, causing great damage to society. The current Chinese datasets related to the epidemic are scarce, imbalanced, and noisy, and have therefore been of limited help for fake news detection. Besides, classification accuracy is also affected by the easy loss of edge characteristics in long text data. In this paper, a long text feature extraction network with data augmentation (LTFE) is proposed, which improves the learning performance of the classifier by optimizing the data feature structure. In the encoding stage, Twice-Masked Language Modeling for Fine-tuning (TMLM-F) and Data Alignment that Preserves Edge Characteristics (DA-PEC) are proposed to extract the classification features of the Chinese dataset. Between the TMLM-F and DA-PEC processes, attention is used to capture the dependencies between words and generate corresponding vector representations. The experimental results illustrate that this method is effective for the detection of Chinese fake news pertinent to the pandemic.
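A minimal sketch of the masked-language-modeling-style masking that fine-tuning schemes such as TMLM-F build on: a fraction of token ids is replaced by a [MASK] id and the model is trained to recover the originals. The abstract does not specify the "twice-masked" scheduling or the edge-preserving alignment, so only generic single-pass masking is shown; the vocabulary and mask ids are assumptions typical of a Chinese BERT vocabulary.

```python
import torch

def mask_tokens(token_ids, mask_id, mask_prob=0.15, seed=0):
    """Standard MLM-style masking: labels keep the original ids only at
    masked positions (-100 elsewhere, the usual 'ignore' label)."""
    g = torch.Generator().manual_seed(seed)
    tokens = token_ids.clone()
    labels = torch.full_like(tokens, -100)
    masked = torch.rand(tokens.shape, generator=g) < mask_prob
    labels[masked] = tokens[masked]
    tokens[masked] = mask_id
    return tokens, labels

mask_id = 103                                   # assumed [MASK] id
token_ids = torch.randint(200, 21128, (2, 16))  # random ids standing in for Chinese subwords
masked_input, labels = mask_tokens(token_ids, mask_id)
print(masked_input[0])
print(labels[0])
```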

14.
Neurocomputing (Amst) ; 457: 40-66, 2021 Oct 07.
Article in English | MEDLINE | ID: mdl-34149184

ABSTRACT

The unprecedented surge of a novel coronavirus in December 2019, named COVID-19 by the World Health Organization, has had a serious impact on the health and socioeconomic activities of the public all over the world. Since its origin, the number of infected and deceased cases has been growing exponentially in almost all the affected countries. The rapid spread of the novel coronavirus across the world has resulted in the scarcity of medical resources and overburdened hospitals. As a result, researchers and technocrats across the world are continuously working on efficient strategies that may assist governments and healthcare systems in controlling and managing the spread of the COVID-19 pandemic. Therefore, this study provides an extensive review of the ongoing strategies such as diagnosis, prediction, drug and vaccine development, and preventive measures used in combating COVID-19, along with the technologies used and their limitations. Moreover, this review provides a comparative analysis of the distinct types of data, emerging technologies, approaches used in the diagnosis and prediction of COVID-19, statistics of contact-tracing apps, and vaccine production platforms used during the COVID-19 pandemic. Finally, the study highlights some challenges and pitfalls observed in the systematic review, which may assist researchers in developing more efficient strategies for controlling and managing the spread of COVID-19.

15.
Eng Appl Artif Intell ; 100: 104188, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33619424

ABSTRACT

In the pharmaceutical industry, a growing concern with sustainability has become a strict consideration during the COVID-19 pandemic, and there is a lack of good mathematical models in the field. In this research, a production-distribution-inventory-allocation-location problem in a sustainable medical supply chain network is designed to fill this gap. The distribution of medicines related to COVID-19 patients and the periods of production and delivery of medicine according to the perishability of some medicines are also considered. The model is a multi-objective, multi-level, multi-product, and multi-period problem for a sustainable medical supply chain network. Three hybrid metaheuristic algorithms, namely ant colony optimization, the fish swarm algorithm, and the firefly algorithm, each hybridized with variable neighborhood search, are suggested to solve the sustainable medical supply chain network model. The response surface method is used to tune the parameters, since metaheuristic algorithms are sensitive to input parameters. Six assessment metrics are used to assess the quality of the Pareto frontiers obtained by the metaheuristic algorithms on the considered problems. A real case study is used, and the empirical results indicate the superiority of the hybrid fish swarm algorithm with variable neighborhood search.
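A minimal sketch of the variable neighborhood search (VNS) loop that the three metaheuristics are hybridized with: shake in the k-th neighborhood, run a local search, and either move and reset k or enlarge the neighborhood. The bit-flip neighborhoods and the toy penalized-knapsack objective are illustrative assumptions standing in for the supply chain model.

```python
import random

random.seed(0)

VALUES = [6, 5, 8, 9, 6, 7, 3, 2, 5, 4]
WEIGHTS = [2, 3, 6, 7, 5, 9, 4, 1, 3, 2]
CAPACITY = 20

def objective(x):
    """Toy penalized knapsack cost (to minimize), standing in for the model's objectives."""
    value = sum(v for v, xi in zip(VALUES, x) if xi)
    weight = sum(w for w, xi in zip(WEIGHTS, x) if xi)
    return -value + 10 * max(0, weight - CAPACITY)

def shake(x, k):
    """k-th neighborhood: flip k randomly chosen binary decisions."""
    y = x[:]
    for i in random.sample(range(len(y)), k):
        y[i] = 1 - y[i]
    return y

def local_search(x):
    """First-improvement single-bit-flip descent."""
    improved = True
    while improved:
        improved = False
        for i in range(len(x)):
            y = x[:]; y[i] = 1 - y[i]
            if objective(y) < objective(x):
                x, improved = y, True
                break
    return x

def vns(n=10, k_max=4, iters=50):
    best = [random.randint(0, 1) for _ in range(n)]
    for _ in range(iters):
        k = 1
        while k <= k_max:
            candidate = local_search(shake(best, k))
            if objective(candidate) < objective(best):
                best, k = candidate, 1   # move and restart neighborhoods
            else:
                k += 1                   # try a larger neighborhood
    return best, objective(best)

solution, cost = vns()
print("best solution:", solution, "cost:", cost)
```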

16.
J Transl Med ; 18(1): 205, 2020 05 19.
Article in English | MEDLINE | ID: mdl-32430070

ABSTRACT

The COVID-19 pandemic has become the leading societal concern. The pandemic has shown that the public health concern is not only a medical problem, but also affects society as a whole; so, it has also become the leading scientific concern. We discuss in this treatise the importance of bringing the world's scientists together to find effective solutions for controlling the pandemic. By applying novel research frameworks, interdisciplinary collaboration promises to manage the pandemic's consequences and prevent recurrences of similar pandemics.


Subjects
Biomedical Research/organization & administration, Coronavirus Infections/epidemiology, Integrated Delivery of Health Care/organization & administration, Emergencies, Health Services Needs and Demand, Pandemics, Viral Pneumonia/epidemiology, Betacoronavirus/pathogenicity, Biomedical Research/methods, COVID-19, Coronavirus Infections/therapy, Coronavirus Infections/virology, Integrated Delivery of Health Care/methods, History of the 21st Century, Humans, Interdisciplinary Communication, Interdisciplinary Studies, Viral Pneumonia/therapy, Viral Pneumonia/virology, Public Health/history, Public Health/standards, SARS-CoV-2
17.
ScientificWorldJournal ; 2014: 872929, 2014.
Article in English | MEDLINE | ID: mdl-24711739

ABSTRACT

Existing opinion mining studies have focused on and explored only two types of reviews, namely regular and comparative. There is a visible gap in determining the useful review types from the customers' and designers' perspectives. Based on the Technology Acceptance Model (TAM) and statistical measures, we examine users' perceptions of different review types and their effects on behavioral intention towards using an online review system. Using a sample of users (N = 400) and designers (N = 106), the current research studies three review types, A (regular), B (comparative), and C (suggestive), in relation to perceived usefulness, perceived ease of use, and behavioral intention. The study reveals that a positive perception of the use of suggestive reviews improves users' decision making in business intelligence. The results also show that type C (suggestive reviews) could be considered a new useful review type in addition to types A and B.


Subjects
Behavior, Intention, Perception, Humans, Internet, Statistical Models, Pilot Projects
18.
PeerJ Comput Sci ; 10: e1769, 2024.
Article in English | MEDLINE | ID: mdl-38686011

ABSTRACT

Object detection methods based on deep learning have been used in a variety of sectors including banking, healthcare, e-governance, and academia. In recent years, much attention has been paid to research on text detection and recognition from different scenes or images in unstructured document processing. The article's novelty lies in the detailed discussion and implementation of various transfer learning-based backbone architectures for printed text recognition. In this research article, the authors compared the ResNet50, ResNet50V2, ResNet152V2, Inception, Xception, and VGG19 backbone architectures, with preprocessing techniques such as data resizing, normalization, and noise removal, on a standard OCR Kaggle dataset. Further, the top three backbone architectures were selected based on the accuracy achieved, and hyperparameter tuning was then performed to obtain more accurate results. Xception performed well compared with the ResNet, Inception, VGG19, and MobileNet architectures, achieving high evaluation scores with 98.90% accuracy and a minimum loss of 0.19. As per existing research in this domain, transfer learning-based backbone architectures applied to printed or handwritten text recognition are not yet well represented in the literature. We split the dataset into 80 percent for training and 20 percent for testing, trained the different backbone architectures with the same number of epochs, and found that the Xception architecture achieved higher accuracy than the others. In addition, the ResNet50V2 model gave higher accuracy (96.92%) than the ResNet152V2 model (96.34%).
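A minimal Keras sketch of the kind of transfer-learning setup compared in the article: a frozen Xception backbone with a small classification head for printed-character recognition. The input size, number of classes, and training settings are illustrative assumptions, not the article's exact configuration.

```python
import tensorflow as tf

NUM_CLASSES = 36   # e.g., digits plus uppercase letters; an assumption, not the article's split

# Pretrained Xception backbone without its ImageNet head, frozen for feature extraction.
backbone = tf.keras.applications.Xception(
    weights="imagenet", include_top=False, input_shape=(96, 96, 3), pooling="avg"
)
backbone.trainable = False

model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=10)   # datasets come from the 80/20 split
```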

19.
Diagnostics (Basel) ; 14(14)2024 Jul 16.
Article in English | MEDLINE | ID: mdl-39061671

ABSTRACT

Background: Diagnosing lung diseases accurately is crucial for proper treatment. Convolutional neural networks (CNNs) have advanced medical image processing, but challenges remain in their accurate explainability and reliability. This study combines U-Net with attention and Vision Transformers (ViTs) to enhance lung disease segmentation and classification. We hypothesize that Attention U-Net will enhance segmentation accuracy and that ViTs will improve classification performance. The explainability methodologies will shed light on model decision-making processes, aiding in clinical acceptance. Methodology: A comparative approach was used to evaluate deep learning models for segmenting and classifying lung illnesses using chest X-rays. The Attention U-Net model is used for segmentation, and architectures consisting of four CNNs and four ViTs were investigated for classification. Methods like Gradient-weighted Class Activation Mapping plus plus (Grad-CAM++) and Layer-wise Relevance Propagation (LRP) provide explainability by identifying crucial areas influencing model decisions. Results: The results support the conclusion that ViTs are outstanding in identifying lung disorders. Attention U-Net obtained a Dice Coefficient of 98.54% and a Jaccard Index of 97.12%. ViTs outperformed CNNs in classification tasks by 9.26%, reaching an accuracy of 98.52% with MobileViT. An 8.3% increase in accuracy was seen while moving from raw data classification to segmented image classification. Techniques like Grad-CAM++ and LRP provided insights into the decision-making processes of the models. Conclusions: This study highlights the benefits of integrating Attention U-Net and ViTs for analyzing lung diseases, demonstrating their importance in clinical settings. Emphasizing explainability clarifies deep learning processes, enhancing confidence in AI solutions and perhaps enhancing clinical acceptance for improved healthcare results.
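A minimal sketch of the two segmentation metrics reported above, the Dice coefficient and the Jaccard index, computed on binary masks; the toy masks are illustrative.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-8):
    """Dice = 2 * |A and B| / (|A| + |B|) for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def jaccard_index(pred, target, eps=1e-8):
    """IoU = |A and B| / |A or B| for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)

pred = np.zeros((64, 64), dtype=int);   pred[16:48, 16:48] = 1    # predicted lung mask
target = np.zeros((64, 64), dtype=int); target[20:52, 16:48] = 1  # ground-truth mask
print("Dice:", round(dice_coefficient(pred, target), 4),
      "Jaccard:", round(jaccard_index(pred, target), 4))
```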

20.
Int J Cardiovasc Imaging ; 40(6): 1283-1303, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38678144

ABSTRACT

The quantification of carotid plaque has been routinely used to predict cardiovascular risk in cardiovascular disease (CVD) and coronary artery disease (CAD). The aim was to determine how well carotid plaque features predict the likelihood of CAD and cardiovascular (CV) events using deep learning (DL), and to compare DL against the machine learning (ML) paradigm. The participants in this study consisted of 459 individuals who had undergone coronary angiography, contrast-enhanced ultrasonography, and focused carotid B-mode ultrasound. Each patient was tracked for thirty days. The measurements on these patients consisted of maximum plaque height (MPH), total plaque area (TPA), carotid intima-media thickness (cIMT), and intraplaque neovascularization (IPN). CAD risk and CV event stratification were performed by applying eight types of DL-based models. Univariate and multivariate analyses were also conducted to identify the most significant risk predictors. The DL models' effectiveness was evaluated by the area-under-the-curve measurement, while CV event prediction was evaluated using the Cox proportional hazards model (CPHM) and compared against the DL-based concordance index (c-index). IPN showed a substantial ability to predict CV events (p < 0.0001). The best DL system improved by 21% (0.929 vs. 0.762) over the best ML system. DL-based CV event prediction showed a ~17% increase in c-index compared to the CPHM (0.86 vs. 0.73). CAD and CV incidents were linked to IPN and carotid imaging characteristics. For survival analysis and CAD prediction, the DL-based system performs better than ML-based models.


Subjects
Carotid Artery Diseases, Carotid Intima-Media Thickness, Coronary Artery Disease, Deep Learning, Heart Disease Risk Factors, Atherosclerotic Plaque, Predictive Value of Tests, Humans, Risk Assessment, Male, Female, Middle Aged, Aged, Carotid Artery Diseases/diagnostic imaging, Carotid Artery Diseases/mortality, Carotid Artery Diseases/complications, Prognosis, Coronary Artery Disease/diagnostic imaging, Coronary Artery Disease/mortality, Time Factors, Canada/epidemiology, Coronary Angiography, Carotid Arteries/diagnostic imaging, Computer-Assisted Image Interpretation, Risk Factors, Decision Support Techniques
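A minimal sketch of the concordance index (c-index) used above to compare the DL survival model with the Cox model: among comparable patient pairs, it is the fraction in which the model assigns the higher risk to the patient who experiences the event earlier. Censoring handling is simplified and the toy follow-up data are illustrative.

```python
import numpy as np

def concordance_index(event_time, predicted_risk, event_observed):
    """Pairwise c-index: a pair (i, j) is comparable when the earlier time is an
    observed event; it is concordant when that patient also has the higher risk."""
    concordant, comparable = 0.0, 0
    n = len(event_time)
    for i in range(n):
        for j in range(n):
            if event_time[i] < event_time[j] and event_observed[i]:
                comparable += 1
                if predicted_risk[i] > predicted_risk[j]:
                    concordant += 1.0
                elif predicted_risk[i] == predicted_risk[j]:
                    concordant += 0.5        # ties in predicted risk count half
    return concordant / comparable

# Toy 30-day follow-up: event times (days), 1 = CV event observed, and model risk scores
times = np.array([5, 12, 18, 25, 30, 30])
events = np.array([1, 1, 0, 1, 0, 0])
risks = np.array([0.9, 0.7, 0.4, 0.5, 0.2, 0.3])
print("c-index:", round(concordance_index(times, risks, events), 3))
```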