Results 1 - 20 of 3,100
1.
BMC Oral Health ; 24(1): 772, 2024 Jul 10.
Article in English | MEDLINE | ID: mdl-38987714

ABSTRACT

Integrating artificial intelligence (AI) into medical and dental applications can be challenging due to clinicians' distrust of computer predictions and the potential risks associated with erroneous outputs. We introduce the idea of using AI to trigger second opinions in cases where there is a disagreement between the clinician and the algorithm. By keeping the AI prediction hidden throughout the diagnostic process, we minimize the risks associated with distrust and erroneous predictions, relying solely on human predictions. The experiment involved 3 experienced dentists, 25 dental students, and 290 patients treated for advanced caries across 6 centers. We developed an AI model to predict pulp status following advanced caries treatment. Clinicians were asked to perform the same prediction without the assistance of the AI model. The second opinion framework was tested in a 1000-trial simulation. The average F1-score of the clinicians increased significantly from 0.586 to 0.645.
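The second-opinion framework described in this abstract can be sketched with a toy simulation: the AI prediction stays hidden and is used only to decide when a second clinician is consulted, so the final call is always human. All labels below are illustrative, not study data.

```python
def f1_score(y_true, y_pred):
    """Binary F1 from paired label lists."""
    tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Illustrative pulp-status labels (1 = favorable outcome) -- not study data.
y_true   = [1, 1, 1, 1, 0, 0, 0, 0]
clin_1st = [1, 0, 1, 0, 0, 1, 0, 0]   # first clinician's prediction
ai       = [1, 1, 1, 1, 0, 0, 0, 1]   # hidden AI prediction (never shown)
clin_2nd = [1, 1, 1, 1, 0, 0, 0, 0]   # second opinion, consulted on demand

# AI-clinician disagreement silently triggers the second opinion;
# the final decision is always a human one.
final = [c2 if c1 != a else c1 for c1, a, c2 in zip(clin_1st, ai, clin_2nd)]

print(round(f1_score(y_true, clin_1st), 3))  # first clinician alone
print(round(f1_score(y_true, final), 3))     # with triggered second opinions
```

Because the AI output never reaches the clinician directly, an erroneous AI prediction can at worst trigger an unnecessary (human) second look.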


Subject(s)
Artificial Intelligence , Dental Caries , Humans , Dental Caries/therapy , Referral and Consultation , Patient Care Planning , Algorithms
2.
Front Neurosci ; 18: 1387196, 2024.
Article in English | MEDLINE | ID: mdl-39015378

ABSTRACT

Abnormal β-amyloid (Aβ) accumulation in the brain is an early indicator of Alzheimer's disease (AD) and is typically assessed through invasive procedures such as PET (positron emission tomography) or CSF (cerebrospinal fluid) assays. As new anti-Alzheimer's treatments can now successfully target amyloid pathology, there is a growing interest in predicting Aβ positivity (Aβ+) from less invasive, more widely available types of brain scans, such as T1-weighted (T1w) MRI. Here we compare multiple approaches to infer Aβ+ from standard anatomical MRI: (1) classical machine learning algorithms, including logistic regression, XGBoost, and shallow artificial neural networks; (2) deep learning models based on 2D and 3D convolutional neural networks (CNNs); (3) a hybrid ANN-CNN, combining the strengths of shallow and deep neural networks; (4) transfer learning models based on CNNs; and (5) 3D Vision Transformers. All models were trained on paired MRI/PET data from 1,847 elderly participants (mean age: 75.1 ± 7.6 years; 863 females/984 males; 661 healthy controls, 889 with mild cognitive impairment (MCI), and 297 with dementia), scanned as part of the Alzheimer's Disease Neuroimaging Initiative. We evaluated each model's balanced accuracy and F1 scores. While further tests on more diverse data are warranted, deep learning models trained on standard MRI showed promise for estimating Aβ+ status, at least in people with MCI. This may offer a potential screening option before resorting to more invasive procedures.

3.
ACS Appl Mater Interfaces ; 16(28): 36444-36452, 2024 Jul 17.
Article in English | MEDLINE | ID: mdl-38963298

ABSTRACT

Metal-organic frameworks (MOFs) are among the most promising hydrogen-storage materials due to their rich specific surface area, adjustable topological and pore structures, and modifiable functional groups. In this work, we developed automated parallel computational workflows for high-throughput screening of ∼11,600 MOFs from the CoRE database and discovered 69 top-performing MOF candidates with a work capacity greater than 1.00 wt % at 298.5 K under a pressure swing between 100 and 0.1 bar, which is at least twice that of MOF-5. In particular, ZITRUP, OQFAJ01, WANHOL, and VATYIZ showed excellent hydrogen storage performance of 4.48, 3.16, 2.19, and 2.16 wt %, respectively. We analyzed the relationships between pore-limiting diameter, largest cavity diameter, void fraction, open metal sites, metallic or nonmetallic elements, and deliverable capacity, and found that not only the geometrical and physical features of the crystal but also the chemical properties of the adsorbate sites determine the H2 storage capacity of MOFs at room temperature. Notably, we propose modified crystal graph convolutional neural networks that incorporate the obtained geometrical and physical features into the convolutional high-dimensional feature vectors of periodic crystal structures for predicting H2 storage performance. This improves the prediction accuracy of the neural network from a mean absolute error (MAE) of 0.064 wt % to 0.047 wt % and reduces the computation time to about 10^-4 that of high-throughput computational screening. This work opens a new avenue toward high-throughput screening of MOFs for H2 adsorption capacity, which can be extended to the screening and discovery of other functional materials.
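The pressure-swing screening criterion from this abstract can be illustrated with a minimal filter: a candidate passes if its uptake difference between 100 bar and 0.1 bar exceeds 1.00 wt %. The uptake values and names below are invented for illustration; they are not from the CoRE database.

```python
# Hypothetical absolute H2 uptakes (wt %) at 100 bar and 0.1 bar, 298.5 K.
uptakes = {
    "MOF-A": (5.1, 0.6),
    "MOF-B": (1.2, 0.5),
    "MOF-C": (0.9, 0.1),
}

THRESHOLD = 1.00  # minimum usable (work) capacity in wt %

def work_capacity(u_high, u_low):
    """Deliverable capacity for a 100 -> 0.1 bar pressure swing."""
    return u_high - u_low

survivors = sorted(
    name for name, (hi, lo) in uptakes.items()
    if work_capacity(hi, lo) > THRESHOLD
)
print(survivors)
```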

4.
Sensors (Basel) ; 24(13)2024 Jun 21.
Article in English | MEDLINE | ID: mdl-39000829

ABSTRACT

This paper presents a new deep-learning architecture designed to enhance the spatial synchronization between CMOS and event cameras by harnessing their complementary characteristics. While CMOS cameras produce high-quality imagery, they struggle in rapidly changing environments, a limitation that event cameras overcome due to their superior temporal resolution and motion clarity. However, effective integration of these two technologies relies on achieving precise spatial alignment, a challenge unaddressed by current algorithms. Our architecture leverages a dynamic graph convolutional neural network (DGCNN) to process event data directly, improving synchronization accuracy. We found that synchronization precision strongly correlates with the spatial concentration and density of events, with denser distributions yielding better alignment results. Our empirical results demonstrate that areas with denser event clusters enhance calibration accuracy, whereas calibration errors increase in scenarios where events are more uniformly distributed. This research pioneers scene-based synchronization between CMOS and event cameras, paving the way for advancements in mixed-modality visual systems. The implications are significant for applications requiring detailed visual and temporal information, setting new directions for the future of visual perception technologies.

5.
Sensors (Basel) ; 24(13)2024 Jun 25.
Article in English | MEDLINE | ID: mdl-39000903

ABSTRACT

The South-to-North Water Diversion Project in China is an extensive inter-basin water transfer project, for which ensuring the safe operation and maintenance of infrastructure poses a fundamental challenge. In this context, structural health monitoring is crucial for the safe and efficient operation of hydraulic infrastructure. Currently, most health monitoring systems for hydraulic infrastructure rely on commercial software or algorithms that only run on desktop computers. This study developed for the first time a lightweight convolutional neural network (CNN) model specifically for early detection of structural damage in water supply canals and deployed it as a tiny machine learning (TinyML) application on a low-power microcontroller unit (MCU). The model uses damage images of the supply canals that we collected as input and the damage types as output. With data augmentation techniques to enhance the training dataset, the deployed model is only 7.57 KB in size and demonstrates an accuracy of 94.17 ± 1.67% and a precision of 94.47 ± 1.46%, outperforming other commonly used CNN models in terms of performance and energy efficiency. Moreover, each inference consumes only 5610.18 µJ of energy, allowing a standard 225 mAh button cell to run continuously for nearly 11 years and perform approximately 4,945,055 inferences. This research not only confirms the feasibility of deploying real-time supply canal surface condition monitoring on low-power, resource-constrained devices but also provides practical technical solutions for improving infrastructure security.
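A quick back-of-the-envelope check on the lifetime figures reported in this abstract: dividing 11 years by the ~4.95 million inferences implies roughly one inference every 70 s. The per-inference energy is as reported; the duty cycle is our arithmetic inference from those numbers, not a figure stated in the abstract.

```python
ENERGY_PER_INFERENCE_UJ = 5610.18   # reported, microjoules
N_INFERENCES = 4_945_055            # reported total over the battery life
YEARS = 11                          # reported battery lifetime

seconds = YEARS * 365.25 * 24 * 3600
interval_s = seconds / N_INFERENCES                        # implied inference period
total_energy_j = N_INFERENCES * ENERGY_PER_INFERENCE_UJ * 1e-6

print(f"implied interval: {interval_s:.1f} s between inferences")
print(f"total inference energy: {total_energy_j / 1000:.1f} kJ")
```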

6.
Sensors (Basel) ; 24(13)2024 Jun 28.
Article in English | MEDLINE | ID: mdl-39000977

ABSTRACT

(1) Background: The objective of this study was to predict the vascular health status of elderly women during exercise using pulse wave data and temporal convolutional neural networks (TCNs); (2) Methods: A total of 492 healthy elderly women aged 60-75 years were recruited for this cross-sectional study. Vascular endothelial function was assessed non-invasively using flow-mediated dilation (FMD). Pulse wave characteristics were quantified using photoplethysmography (PPG) sensors, and motion-induced noise in the PPG signals was mitigated with a recursive least squares (RLS) adaptive filtering algorithm. A fixed-load cycling exercise protocol was employed. A TCN was constructed to classify FMD into "optimal", "impaired", and "at risk" levels; (3) Results: The TCN achieved average accuracies of 79.3%, 84.8%, and 83.2% in predicting FMD at the "optimal", "impaired", and "at risk" levels, respectively. An analysis of variance (ANOVA) comparison demonstrated that the accuracy of the TCN in predicting FMD at the impaired and at-risk levels was significantly higher than that of Long Short-Term Memory (LSTM) networks and Random Forest algorithms; (4) Conclusions: Pulse wave data collected during exercise, combined with a TCN, predicted the vascular health status of elderly women with high accuracy, particularly at the impaired and at-risk FMD levels. This indicates that the integration of exercise pulse wave data with TCNs can serve as an effective tool for assessing and monitoring the vascular health of elderly women.
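The core operation of a TCN is the causal dilated 1-D convolution, in which the output at time t depends only on inputs at or before t. A minimal NumPy sketch of that building block (not the study's actual implementation):

```python
import numpy as np

def causal_dilated_conv(x, w, dilation=1):
    """y[t] = sum_i w[i] * x[t - i*dilation], with zero left-padding,
    so no future sample leaks into the output (causality)."""
    k = len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), np.asarray(x, float)])
    return np.array([
        sum(w[i] * xp[t + pad - i * dilation] for i in range(k))
        for t in range(len(x))
    ])

x = [1.0, 2.0, 3.0, 4.0]
print(causal_dilated_conv(x, w=[1.0, 1.0], dilation=2))  # y[t] = x[t] + x[t-2]
```

Stacking such layers with exponentially growing dilations is what gives a TCN a long receptive field over a pulse-wave sequence at modest depth.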


Subject(s)
Exercise , Neural Networks, Computer , Photoplethysmography , Pulse Wave Analysis , Humans , Female , Photoplethysmography/methods , Aged , Pulse Wave Analysis/methods , Exercise/physiology , Middle Aged , Cross-Sectional Studies , Algorithms
7.
Sensors (Basel) ; 24(13)2024 Jun 28.
Article in English | MEDLINE | ID: mdl-39000985

ABSTRACT

(1) Background: The objective of this study was to recognize tai chi movements using inertial measurement units (IMUs) and temporal convolutional neural networks (TCNs) and to provide precise interventions for elderly people. (2) Methods: This study consisted of two parts: firstly, 70 skilled tai chi practitioners were used for movement recognition; secondly, 60 elderly males were used for an intervention study. IMU data were collected from skilled tai chi practitioners performing Bafa Wubu, and TCN models were constructed and trained to classify these movements. Elderly participants were divided into a precision intervention group and a standard intervention group, with the former receiving weekly real-time IMU feedback. Outcomes measured included balance, grip strength, quality of life, and depression. (3) Results: The TCN model demonstrated high accuracy in identifying tai chi movements, with percentages ranging from 82.6% to 94.4%. After eight weeks of intervention, both groups showed significant improvements in grip strength, quality of life, and depression. However, only the precision intervention group showed a significant increase in balance and higher post-intervention scores compared with the standard intervention group. (4) Conclusions: This study successfully employed IMUs and TCNs to identify tai chi movements and provide targeted feedback to older participants. Real-time IMU feedback can enhance health outcome indicators in elderly males.


Subject(s)
Movement , Neural Networks, Computer , Quality of Life , Tai Ji , Humans , Tai Ji/methods , Aged , Male , Movement/physiology , Hand Strength/physiology , Postural Balance/physiology , Female , Depression/therapy
8.
Sensors (Basel) ; 24(13)2024 Jul 05.
Article in English | MEDLINE | ID: mdl-39001152

ABSTRACT

The search for structural and microstructural defects using simple human vision is associated with significant errors in identifying voids, large pores, and violations of the integrity and compactness of particle packing in the micro- and macrostructure of concrete. Computer vision methods, in particular convolutional neural networks, have proven to be reliable tools for the automatic detection of defects during visual inspection of building structures. The study's objective is to create and compare computer vision algorithms that use convolutional neural networks to identify and analyze damaged sections in concrete samples from different structures. Networks of the following architectures were selected: U-Net, LinkNet, and PSPNet. The analyzed images are photos of concrete samples obtained in laboratory tests to assess quality in terms of defects in the integrity and compactness of the structure. During implementation, changes in quality metrics such as macro-averaged precision, recall, and F1-score, as well as IoU (Jaccard coefficient) and accuracy, were monitored. The best metrics were demonstrated by the U-Net model supplemented by a cellular automaton algorithm: precision = 0.91, recall = 0.90, F1 = 0.91, IoU = 0.84, and accuracy = 0.90. The developed segmentation algorithms are universal and show high quality in highlighting areas of interest under any shooting conditions and for different volumes of defective zones, regardless of their localization. Automation of the damage-area calculation, with a recommendation in a "critical/uncritical" format, can be used to assess the condition of concrete in various types of structures, adjust the formulation, and change the technological parameters of production.
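The quality metrics monitored in this study can all be computed directly from a pair of binary segmentation masks. A minimal sketch with toy 3×3 masks (not study data):

```python
import numpy as np

pred = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 1]])  # predicted defect mask
true = np.array([[1, 0, 0], [0, 1, 0], [0, 1, 1]])  # ground-truth mask

tp = int(np.sum((pred == 1) & (true == 1)))
fp = int(np.sum((pred == 1) & (true == 0)))
fn = int(np.sum((pred == 0) & (true == 1)))
tn = int(np.sum((pred == 0) & (true == 0)))

precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
iou = tp / (tp + fp + fn)            # Jaccard coefficient
accuracy = (tp + tn) / (tp + fp + fn + tn)

print(precision, recall, f1, iou, round(accuracy, 3))
```

Note that IoU is always the strictest of these scores: it ignores true negatives, so a model cannot inflate it by predicting mostly background.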

9.
Diagnostics (Basel) ; 14(13)2024 Jun 26.
Article in English | MEDLINE | ID: mdl-39001248

ABSTRACT

Deep learning utilizing convolutional neural networks (CNNs) stands out among state-of-the-art procedures in computer-aided medical diagnosis. The method proposed in this paper consists of two key stages. In the first stage, the proposed deep sequential CNN model preprocesses images to isolate regions of interest from skin lesions and extracts features, capturing the relevant patterns and detecting multiple lesions. The second stage incorporates a web tool to visualize the model's predictions and support patient health diagnosis. The proposed model was thoroughly trained, validated, and tested on the HAM10000 dataset. The model achieved an accuracy of 96.25% in classifying skin lesions. The results, validated by evaluation methods and user feedback, indicate a substantial improvement over current state-of-the-art methods for skin lesion classification (malignant/benign). In comparison with other models, the sequential CNN surpasses CNN transfer learning (87.9%), VGG 19 (86%), ResNet-50 + VGG-16 (94.14%), Inception v3 (90%), Vision Transformers (RGB images) (92.14%), and the Entropy-NDOELM method (95.7%). The findings demonstrate the potential of deep learning and sequential CNNs in disease detection and classification, potentially revolutionizing melanoma detection and thus improving patient care.

10.
Diagnostics (Basel) ; 14(13)2024 Jun 29.
Article in English | MEDLINE | ID: mdl-39001283

ABSTRACT

The rapid advancement of artificial intelligence (AI) and robotics has led to significant progress in various medical fields including interventional radiology (IR). This review focuses on the research progress and applications of AI and robotics in IR, including deep learning (DL), machine learning (ML), and convolutional neural networks (CNNs) across specialties such as oncology, neurology, and cardiology, aiming to explore potential directions in future interventional treatments. To ensure the breadth and depth of this review, we implemented a systematic literature search strategy, selecting research published within the last five years. We conducted searches in databases such as PubMed and Google Scholar to find relevant literature. Special emphasis was placed on selecting large-scale studies to ensure the comprehensiveness and reliability of the results. This review summarizes the latest research directions and developments, ultimately analyzing their corresponding potential and limitations. It furnishes essential information and insights for researchers, clinicians, and policymakers, potentially propelling advancements and innovations within the domains of AI and IR. Finally, our findings indicate that although AI and robotics technologies are not yet widely applied in clinical settings, they are evolving across multiple aspects and are expected to significantly improve the processes and efficacy of interventional treatments.

11.
Diagnostics (Basel) ; 14(13)2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39001292

ABSTRACT

Breast cancer diagnosis from histopathology images is often time consuming and prone to human error, impacting treatment and prognosis. Deep learning diagnostic methods offer the potential for improved accuracy and efficiency in breast cancer detection and classification. However, they struggle with limited data and subtle variations within and between cancer types. Attention mechanisms provide feature refinement capabilities that have shown promise in overcoming such challenges. To this end, this paper proposes the Efficient Channel Spatial Attention Network (ECSAnet), an architecture built on EfficientNetV2 and augmented with a convolutional block attention module (CBAM) and additional fully connected layers. ECSAnet was fine-tuned using the BreakHis dataset, employing Reinhard stain normalization and image augmentation techniques to minimize overfitting and enhance generalizability. In testing, ECSAnet outperformed AlexNet, DenseNet121, EfficientNetV2-S, InceptionNetV3, ResNet50, and VGG16 in most settings, achieving accuracies of 94.2% at 40×, 92.96% at 100×, 88.41% at 200×, and 89.42% at 400× magnifications. The results highlight the effectiveness of CBAM in improving classification accuracy and the importance of stain normalization for generalizability.
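Reinhard stain normalization, used in this study's preprocessing, matches the per-channel mean and standard deviation of a source image to those of a reference image in the Lab color space. The sketch below shows only the core mean/std matching step on arbitrary channel data; the RGB-to-Lab conversion used in a full pipeline is omitted.

```python
import numpy as np

def reinhard_channel(src, ref, eps=1e-8):
    """Shift/scale src so its mean and std match those of ref."""
    src = np.asarray(src, float)
    ref = np.asarray(ref, float)
    return (src - src.mean()) / (src.std() + eps) * ref.std() + ref.mean()

src_channel = np.array([1.0, 2.0, 3.0, 4.0])      # e.g. one Lab channel of a slide
ref_channel = np.array([10.0, 20.0, 30.0, 40.0])  # reference slide statistics
out = reinhard_channel(src_channel, ref_channel)
print(out)
```

Matching first- and second-order statistics per channel is what makes slides scanned under different staining protocols look comparable to the classifier.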

12.
Article in English | MEDLINE | ID: mdl-39001913

ABSTRACT

PURPOSE: To develop a convolutional neural network (CNN)-based model for classifying videostroboscopic images of patients with sulcus, benign vocal fold (VF) lesions, and healthy VFs, to improve clinicians' accuracy in diagnosing sulcus during videostroboscopy. MATERIALS AND METHODS: Videostroboscopies of 433 individuals who were diagnosed with sulcus (91), who were diagnosed with benign VF diseases (i.e., polyp, nodule, papilloma, cyst, or pseudocyst [311]), or who were healthy (33) were analyzed. After extracting 91,159 frames from the videostroboscopies, a CNN-based model was created and tested. The healthy and sulcus groups underwent binary classification. In the second phase of the study, benign VF lesions were added to the training set, and multiclassification was executed across all groups. The proposed CNN-based model's results were compared with assessments by five laryngology experts. RESULTS: In the binary classification phase, the CNN-based model achieved 98% accuracy, 98% recall, 97% precision, and a 97% F1 score for classifying sulcus and healthy VFs. During the multiclassification phase, when evaluated on a subset of frames encompassing all included groups, the CNN-based model demonstrated greater accuracy than the five laryngologists (76% versus 72%, 68%, 72%, 63%, and 72%). CONCLUSION: A CNN-based model can serve as a significant aid in the diagnosis of sulcus, a VF disease that presents notable challenges in the diagnostic process. Further research could assess the practicality of implementing this approach in real-time clinical practice.

13.
Photodiagnosis Photodyn Ther ; : 104269, 2024 Jul 11.
Article in English | MEDLINE | ID: mdl-39002835

ABSTRACT

BACKGROUND: The early detection of non-melanoma skin cancer (NMSC) is essential to ensure patients receive the most effective treatment. Diagnostic screening tools for NMSC are crucial due to high confusion rates with other types of skin lesions, such as actinic keratosis. Nevertheless, current means of diagnosing and screening patients rely either on visual criteria, which are often conditioned by subjectivity and experience, or on highly invasive, slow, and costly methods, such as histological diagnosis. The objective of the present study is to test whether classification accuracy improves in the near-infrared region of the electromagnetic spectrum, as opposed to the shorter wavelengths used in previous research. METHODS: This study utilizes near-infrared hyperspectral imaging within the range of 900.6 to 1454.8 nm. Images were captured for a total of 125 patients, including 66 patients with basal cell carcinoma, 42 with cutaneous squamous cell carcinoma, and 17 with actinic keratosis, to differentiate between healthy and unhealthy skin lesions. A combination of hybrid convolutional neural networks (for feature extraction) and support vector machine algorithms (as a final activation layer) was employed for analysis. In addition, we tested whether transfer learning is feasible from networks trained on shorter wavelengths of the electromagnetic spectrum. RESULTS: The implemented method achieved a general accuracy of over 80%, with some tasks reaching over 90%. F1 scores were also generally above the optimal threshold of 0.8. The best results were obtained when detecting actinic keratosis; however, differentiation between the two types of malignant lesions was often more difficult. These results demonstrate the potential of near-infrared hyperspectral imaging combined with advanced machine learning techniques for distinguishing NMSC from other skin lesions. Transfer learning was unsuccessful in improving the training of these algorithms.
CONCLUSIONS: We have shown that the near-infrared region of the electromagnetic spectrum is highly useful for the identification and study of non-melanoma skin lesions. While the results are promising, further research is required to develop more robust algorithms that can minimize the impact of noise in these datasets before clinical application is feasible.

14.
Sensors (Basel) ; 24(13)2024 Jun 24.
Article in English | MEDLINE | ID: mdl-39000868

ABSTRACT

Diabetes has emerged as a worldwide health crisis, affecting approximately 537 million adults. Maintaining blood glucose requires careful observation of diet, physical activity, and adherence to medications if necessary. Diet monitoring historically involves keeping food diaries; however, this process can be labor-intensive, and recollection of food items may introduce errors. Automated technologies such as food image recognition systems (FIRS) can make use of computer vision and mobile cameras to reduce the burden of keeping diaries and improve diet tracking. These tools provide various levels of diet analysis, and some offer further suggestions for improving the nutritional quality of meals. The current study is a systematic review of mobile computer vision-based approaches for food classification, volume estimation, and nutrient estimation. Relevant articles published over the last two decades are evaluated, and both future directions and issues related to FIRS are explored.


Subject(s)
Diabetes Mellitus , Smartphone , Humans , Diet Records , Blood Glucose/analysis
15.
J Transl Med ; 22(1): 618, 2024 Jul 03.
Article in English | MEDLINE | ID: mdl-38961476

ABSTRACT

BACKGROUND: Cell-free DNA (cfDNA)-based assays hold great potential in detecting early cancer signals, yet determining the tissue of origin (TOO) of a cancer signal remains a challenging task. Here, we investigated the contribution of a methylation atlas to TOO detection in low-depth cfDNA samples. METHODS: We constructed a tumor-specific methylation atlas (TSMA) using whole-genome bisulfite sequencing (WGBS) data from five types of tumor tissue (breast, colorectal, gastric, liver, and lung cancer) and paired white blood cells (WBC). TSMA was used with a non-negative least squares (NNLS) deconvolution algorithm to identify the abundance of tumor tissue types in a WGBS sample. We showed that TSMA worked well with tumor tissue but struggled with cfDNA samples due to the overwhelming amount of WBC-derived DNA. To construct a model for TOO, we adopted a multi-modal strategy, using as inputs the combination of deconvolution scores from TSMA with other features of cfDNA. RESULTS: Our final model comprised a graph convolutional neural network using deconvolution scores and genome-wide methylation density features, which achieved an accuracy of 69% in a held-out validation dataset of 239 low-depth cfDNA samples. CONCLUSIONS: We have demonstrated that our TSMA, in combination with other cfDNA features, can improve TOO detection in low-depth cfDNA samples.
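The NNLS deconvolution step can be sketched with a tiny synthetic atlas: each column of A holds one tissue's reference methylation profile, b is the observed mixture, and multiplicative updates drive x toward the non-negative least-squares solution. The atlas values and fractions below are invented for illustration; the study's own pipeline may differ.

```python
import numpy as np

# Toy atlas: rows = genomic regions, columns = tissue types.
A = np.array([[0.9, 0.1],
              [0.2, 0.8],
              [0.5, 0.5]])
true_frac = np.array([0.3, 0.7])
b = A @ true_frac                     # observed mixed methylation profile

# Multiplicative-update NNLS (Lee-Seung style): x stays non-negative
# because every update multiplies by a ratio of non-negative terms.
x = np.full(2, 0.5)
for _ in range(5000):
    x *= (A.T @ b) / (A.T @ A @ x + 1e-12)

x /= x.sum()                          # report as tissue fractions
print(np.round(x, 3))
```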


Subject(s)
DNA Methylation , Genome, Human , Neoplasms , Neural Networks, Computer , Humans , DNA Methylation/genetics , Neoplasms/genetics , Neoplasms/blood , Neoplasms/diagnosis , Cell-Free Nucleic Acids/blood , Cell-Free Nucleic Acids/genetics , Organ Specificity/genetics , Algorithms
16.
J Med Imaging (Bellingham) ; 11(4): 045501, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38988989

ABSTRACT

Purpose: Radiologists are tasked with visually scrutinizing large amounts of data produced by 3D volumetric imaging modalities. Small signals can go unnoticed during the 3D search because they are hard to detect in the visual periphery. Recent advances in machine learning and computer vision have led to effective computer-aided detection (CADe) support systems with the potential to mitigate perceptual errors. Approach: Sixteen nonexpert observers searched through digital breast tomosynthesis (DBT) phantoms and single cross-sectional slices of the DBT phantoms. The 3D/2D searches occurred with and without a convolutional neural network (CNN)-based CADe support system. The model provided observers with bounding boxes superimposed on the image stimuli while they looked for a small microcalcification signal and a large mass signal. Eye gaze positions were recorded and correlated with changes in the area under the ROC curve (AUC). Results: The CNN-CADe improved the 3D search for the small microcalcification signal (ΔAUC = 0.098, p = 0.0002) and the 2D search for the large mass signal (ΔAUC = 0.076, p = 0.002). The CNN-CADe benefit in 3D for the small signal was markedly greater than in 2D (ΔΔAUC = 0.066, p = 0.035). Analysis of individual differences suggests that those who explored the least with eye movements benefited the most from the CNN-CADe (r = -0.528, p = 0.036). However, for the large signal, the 2D benefit was not significantly greater than the 3D benefit (ΔΔAUC = 0.033, p = 0.133). Conclusion: The CNN-CADe brings unique performance benefits to the 3D (versus 2D) search of small signals by reducing errors caused by the underexploration of the volumetric data.

17.
J Neural Eng ; 2024 Jul 10.
Article in English | MEDLINE | ID: mdl-38986464

ABSTRACT

Eye-tracking research has proven valuable in understanding numerous cognitive functions. Recently, Frey et al. provided an exciting deep learning method for learning eye movements from functional magnetic resonance imaging (fMRI) data. It employed multi-step co-registration of fMRI into the group template to obtain the eyeball signal, and thus required additional templates and was time-consuming. To resolve this issue, in this paper, we propose a framework named MRGazer for predicting eye gaze points from fMRI in individual space. MRGazer consists of an eyeball extraction module and a residual network-based eye gaze prediction module. Compared to the previous method, the proposed framework skips the fMRI co-registration step, simplifies the processing protocol, and achieves end-to-end eye gaze regression. The proposed method achieved superior performance in eye fixation regression (Euclidean error, EE = 2.04°) compared with the co-registration-based method (EE = 2.89°), and delivered results within a shorter time (~0.02 second/volume) than the prior method (~0.3 second/volume). The code is available at https://github.com/ustc-bmec/MRGazer.

18.
Methods Mol Biol ; 2780: 303-325, 2024.
Article in English | MEDLINE | ID: mdl-38987475

ABSTRACT

Antibodies are a class of proteins that recognize and neutralize pathogens by binding to their antigens. They are the most significant category of biopharmaceuticals for both diagnostic and therapeutic applications. Understanding how antibodies interact with their antigens plays a fundamental role in drug and vaccine design and helps to elucidate complex antigen-binding mechanisms. Computational methods for predicting antibody-antigen interaction sites are of great value given the overall cost of experimental methods, and machine learning and deep learning techniques have obtained promising results. In this work, we predict antibody interface sites by applying HSS-PPI, a hybrid method defined to predict the interface sites of general proteins. The approach abstracts proteins in terms of a hierarchical representation and uses a graph convolutional network to classify amino acids as interface or non-interface. Moreover, we equipped the amino acids with different sets of physicochemical features, together with structural ones, to describe the residues. Analyzing the results, we observe that the structural features play a fundamental role in the amino acid descriptions. We compare the obtained performances, evaluated using standard metrics, with those obtained with an SVM with 3D Zernike descriptors, Parapred, Paratome, and Antibody i-Patch.
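The graph-convolution step at the heart of such residue classifiers can be sketched in a few lines: each residue (node) aggregates features from its neighbors through a normalized adjacency matrix before a learned linear map and nonlinearity. The graph, features, and weights below are arbitrary illustrations, not the HSS-PPI model itself.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One Kipf-Welling graph convolution: ReLU(D^-1/2 (A+I) D^-1/2 X W)."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # symmetric degree normalization
    return np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W)

# Path graph on 3 "residues": 0-1-2.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], float)
X = np.eye(3)                 # one-hot node features
W = np.full((3, 2), 0.5)      # toy weight matrix
H = gcn_layer(A, X, W)
print(H.shape)
```

Stacking several such layers lets each residue's representation absorb information from progressively larger structural neighborhoods before the final interface/non-interface classification.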


Subject(s)
Computational Biology , Computational Biology/methods , Antigens/immunology , Binding Sites, Antibody , Antibodies/immunology , Antibodies/chemistry , Humans , Antigen-Antibody Complex/chemistry , Antigen-Antibody Complex/immunology , Protein Binding , Machine Learning , Databases, Protein , Algorithms
19.
IEEE Trans Hum Mach Syst ; 54(3): 317-324, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38974222

ABSTRACT

Ultrasound imaging or sonomyography has been found to be a robust modality for measuring muscle activity due to its ability to image deep-seated muscles directly while providing superior spatiotemporal specificity compared to surface electromyography-based techniques. Quantifying the morphological changes during muscle activity involves computationally expensive approaches for tracking muscle anatomical structures or extracting features from brightness-mode (B-mode) images and amplitude-mode (A-mode) signals. This paper uses an offline regression convolutional neural network (CNN) called SonoMyoNet to estimate continuous isometric force from sparse ultrasound scanlines. SonoMyoNet learns features from a few equispaced scanlines selected from B-mode images and utilizes the learned features to estimate continuous isometric force accurately. The performance of SonoMyoNet was evaluated by varying the number of scanlines to simulate the placement of multiple single-element ultrasound transducers in a wearable system. Results showed that SonoMyoNet could accurately predict isometric force with just four scanlines and is immune to speckle noise and shifts in the scanline location. Thus, the proposed network reduces the computational load involved in feature tracking algorithms and estimates muscle force from the global features of sparse ultrasound images.

20.
Int Immunopharmacol ; 138: 112608, 2024 Jul 08.
Article in English | MEDLINE | ID: mdl-38981221

ABSTRACT

BACKGROUND: Abdominal aortic aneurysm (AAA) poses a significant health risk and is influenced by various compositional features. This study aimed to develop an artificial intelligence-driven multiomics predictive model for AAA subtypes to identify heterogeneous immune cell infiltration and predict disease progression. Additionally, we investigated neutrophil heterogeneity in patients with different AAA subtypes to elucidate the relationship between the immune microenvironment and AAA pathogenesis. METHODS: This study enrolled 517 patients with AAA, who were clustered using the k-means algorithm to identify AAA subtypes and stratify risk. We utilized residual convolutional neural network 200 to annotate and extract contrast-enhanced computed tomography angiography images of AAA. A precise predictive model for AAA subtypes was established using clinical, imaging, and immunological data. We performed a comparative analysis of neutrophil levels in the different subgroups, together with immune cell infiltration analysis, to explore the associations between neutrophil levels and AAA. Quantitative polymerase chain reaction, Western blotting, and enzyme-linked immunosorbent assay were performed to elucidate the interplay between CXCL1, neutrophil activation, and the nuclear factor (NF)-κB pathway in AAA pathogenesis. Furthermore, the effect of silencing CXCL1 with small interfering RNA was investigated. RESULTS: Two distinct AAA subtypes were identified, one of which was clinically more severe and more likely to require surgical intervention. The CNN effectively detected AAA-associated lesion regions on computed tomography angiography, and the predictive model demonstrated excellent ability to discriminate between patients with the two identified AAA subtypes (area under the curve, 0.927). Neutrophil activation, AAA pathology, CXCL1 expression, and the NF-κB pathway were significantly correlated. CXCL1, NF-κB, IL-1β, and IL-8 were upregulated in AAA, and CXCL1 silencing downregulated NF-κB, IL-1β, and IL-8. CONCLUSION: The predictive model for AAA subtypes demonstrated accurate and reliable risk stratification and clinical management. CXCL1 overexpression activated neutrophils through the NF-κB pathway, contributing to AAA development. This pathway may, therefore, be a therapeutic target in AAA.
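The k-means step used here to derive patient subtypes can be sketched with plain Lloyd's algorithm on toy 2-D "patient feature" vectors; the data points and choice of k below are illustrative only.

```python
import numpy as np

def kmeans(points, k, n_iter=20, seed=0):
    """Plain Lloyd's algorithm: assign to nearest centroid, then re-average."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(n_iter):
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        centers = np.array([points[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

# Two well-separated toy clusters standing in for patient feature vectors.
pts = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
                [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
labels, centers = kmeans(pts, k=2)
print(labels)
```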
