Results 1 - 20 of 59
1.
Network ; 34(4): 250-281, 2023.
Article in English | MEDLINE | ID: mdl-37534974

ABSTRACT

The rapid advancement of technologies such as stream processing, deep learning, and artificial intelligence plays a prominent and vital role in detecting heart rate with a prediction model. However, existing methods cannot handle high-dimensional datasets or exploit deep feature learning to improve performance. This work therefore proposes a real-time heart rate prediction model that couples K-nearest neighbours (KNN) with the principal component analysis (PCA) algorithm and a weighted random forest for feature fusion (the KPCA-WRF approach), together with a deep CNN feature-learning framework. Feature selection from the fused features was optimized by ant colony optimization (ACO) and particle swarm optimization (PSO) to enhance the fused features extracted by the deep CNN. The optimized features were then reduced to a low-dimensional space using PCA, and the most significant heart rate features were identified by capturing the nearest similar data points with the KNN algorithm. The fused features were then classified to aid the training process: weighted values were assigned to the tuned hyperparameters (feature-matrix forms), and the weighted feature representations were propagated through the random forest algorithm under K-fold cross-validation.
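A minimal sketch of the pipeline stages named above (PCA reduction, a KNN stage, and a random forest under K-fold validation) on synthetic stand-in features; this illustrates the technique, not the authors' KPCA-WRF code, and all data shapes and parameters are assumptions:

```python
# Illustrative sketch, NOT the authors' KPCA-WRF implementation.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 128))          # stand-in for fused deep-CNN features
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # stand-in heart-rate class labels

X_low = PCA(n_components=16).fit_transform(X)        # reduce to low dimensions

knn = KNeighborsClassifier(n_neighbors=5).fit(X_low, y)  # nearest-neighbour stage
print("KNN training accuracy:", knn.score(X_low, y))

rf = RandomForestClassifier(n_estimators=200, random_state=0)
print("RF 5-fold CV accuracy:", cross_val_score(rf, X_low, y, cv=5).mean())
```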


Subjects
Artificial Intelligence, Support Vector Machine, Heart Rate, Algorithms, Machine Learning
2.
J Bus Res ; 156: 113480, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36506475

ABSTRACT

Vaccination offers health, economic, and social benefits. However, three major issues-vaccine quality, demand forecasting, and trust among stakeholders-persist in the vaccine supply chain (VSC), leading to inefficiencies. The COVID-19 pandemic has exacerbated weaknesses in the VSC, while presenting opportunities to apply digital technologies to manage it. For the first time, this study establishes an intelligent VSC management system that provides decision support during the COVID-19 pandemic. The system combines blockchain, the internet of things (IoT), and machine learning to effectively address the three issues in the VSC. The transparency of blockchain ensures trust among stakeholders. Real-time monitoring of vaccine status by the IoT ensures vaccine quality. Machine learning predicts vaccine demand and conducts sentiment analysis on vaccine reviews to help companies improve vaccine quality. The present study also reveals implications for the management of supply chains, businesses, and government.

3.
Appl Energy ; 313: 118848, 2022 May 01.
Article in English | MEDLINE | ID: mdl-35250149

ABSTRACT

This paper proposes a time-series stochastic socioeconomic model for analyzing the impact of the pandemic on the regulated electricity distribution market. The proposed methodology combines the optimized tariff model (a socioeconomic market model) with the random walk concept (a risk assessment technique) to ensure robustness and accuracy. The model enables analysis of both past and future impacts of the pandemic, which is essential to prepare regulatory agencies in advance and allow enough time for the development of efficient public policies. Application to six Brazilian concession areas demonstrates that consumers have been, and will continue to be, heavily affected, mainly because of the high electricity tariffs introduced during the pandemic, which exceed the natural market trend. In contrast, the model demonstrates that the pandemic did not, and will not, significantly harm power distribution companies, mainly because of the loan granted by the regulatory agency, known as the COVID-account. Socioeconomic welfare losses averaging 500 MR$/month (millions of Brazilian reais per month) are estimated for the equivalent concession area, i.e., the sum of the six analyzed concession areas. Furthermore, this paper proposes a stochastic optimization problem to mitigate the impact of the pandemic on the electricity market over time, considering the interests of consumers, power distribution companies, and the government. Results demonstrate its success: the tariffs provided by the algorithm compensate for the reduction in demand while increasing the socioeconomic welfare of the market.
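The random-walk risk assessment can be illustrated with a short Monte Carlo simulation over tariff trajectories; the drift and volatility values below are assumptions for illustration, not figures from the paper:

```python
# Hedged sketch of random-walk risk assessment: simulate many stochastic
# tariff-index paths and read off percentile bands. Parameters are assumed.
import numpy as np

rng = np.random.default_rng(42)
n_paths, n_months = 10_000, 24
drift, vol = 0.002, 0.03                         # assumed monthly drift/volatility
steps = rng.normal(drift, vol, size=(n_paths, n_months))
paths = 100.0 * np.cumprod(1.0 + steps, axis=1)  # index starting at 100

p5, p50, p95 = np.percentile(paths[:, -1], [5, 50, 95])
print(f"24-month tariff index: median {p50:.1f}, 90% band [{p5:.1f}, {p95:.1f}]")
```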

4.
Inf Sci (N Y) ; 592: 389-401, 2022 May.
Article in English | MEDLINE | ID: mdl-36532848

ABSTRACT

Chest X-ray (CXR) imaging is a low-cost, easy-to-use imaging alternative that can be used to diagnose or screen for pulmonary abnormalities due to infectious diseases: COVID-19, pneumonia, and tuberculosis (TB). Not limited to the binary decisions (with respect to healthy cases) reported in the state-of-the-art literature, we also consider non-healthy CXR screening using a lightweight deep neural network (DNN) with a reduced number of epochs and parameters. On three diverse, publicly accessible, and fully categorized datasets, for non-healthy versus healthy CXR screening, the proposed DNN produced the following accuracies: 99.87% on COVID-19 versus healthy, 99.55% on pneumonia versus healthy, and 99.76% on TB versus healthy. When considering non-healthy CXR screening, we obtained the following accuracies: 98.89% on COVID-19 versus pneumonia, 98.99% on COVID-19 versus TB, and 100% on pneumonia versus TB. To analyze precisely how well the proposed DNN worked, we compared it with well-known DNNs such as ResNet50, ResNet152V2, MobileNetV2, and InceptionV3. Our results are comparable with the current state of the art, and as the proposed network is lightweight, it could potentially be used for mass screening in resource-constrained regions.
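The paper's exact architecture is not given here, so the following is a hedged PyTorch sketch of what a reduced-parameter CXR classifier might look like; all layer sizes are assumptions:

```python
# Hedged sketch of a lightweight CXR classifier; layer sizes are assumed,
# not taken from the paper.
import torch
import torch.nn as nn

class LightCXRNet(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = LightCXRNet()
print(sum(p.numel() for p in model.parameters()))        # small parameter count
print(model(torch.randn(1, 1, 224, 224)).shape)          # torch.Size([1, 2])
```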

5.
Knowl Based Syst ; 253: 109539, 2022 Oct 11.
Article in English | MEDLINE | ID: mdl-35915642

ABSTRACT

Alongside the currently used nasal swab testing, the COVID-19 pandemic situation would gain noticeable advantages from low-cost tests that are available anytime, anywhere, at large scale, and with real-time answers. A novel approach for COVID-19 assessment is adopted here, discriminating negative subjects from positive or recovered subjects. The scope is to identify potential discriminating features, highlight mid- and short-term effects of COVID-19 on the voice, and compare two custom algorithms. A pool of 310 subjects took part in the study; recordings were collected in a low-noise, controlled setting employing three different vocal tasks. Binary classifications followed, using two different custom algorithms. The first was based on the coupling of boosting and bagging, with an AdaBoost classifier using random forest learners. A feature selection process was employed for training, identifying a subset of features acting as clinically relevant biomarkers. The other approach was centered on two custom CNN architectures applied to mel-spectrograms, with custom knowledge-based data augmentation. Performances, evaluated on an independent test set, were comparable: AdaBoost and CNN differentiated COVID-19-positive from negative subjects with accuracies of 100% and 95%, respectively, and recovered from negative individuals with accuracies of 86.1% and 75%, respectively. This study highlights the possibility of identifying COVID-19-positive subjects, foreseeing a tool for on-site screening, while also considering recovered subjects and the effects of COVID-19 on the voice. The two proposed novel architectures allow for the identification of biomarkers and demonstrate the ongoing relevance of traditional ML versus deep learning in speech analysis.
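The boosting-plus-bagging coupling (AdaBoost over random forest learners) can be sketched with scikit-learn; the synthetic features below merely stand in for the selected voice biomarkers:

```python
# Hedged sketch of AdaBoost with random-forest base learners; dummy features
# stand in for the acoustic biomarkers selected in the study.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(310, 40))           # 310 subjects x 40 acoustic features
y = rng.integers(0, 2, size=310)         # COVID-positive vs negative labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)
clf = AdaBoostClassifier(
    estimator=RandomForestClassifier(n_estimators=50, random_state=1),
    n_estimators=20, random_state=1,     # 'base_estimator' in older scikit-learn
).fit(X_tr, y_tr)
print(clf.score(X_te, y_te))
```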

6.
Comput Electr Eng ; 100: 107971, 2022 May.
Article in English | MEDLINE | ID: mdl-35399912

ABSTRACT

The coronavirus pandemic has affected people all over the world and posed a great challenge to international health systems. To aid early detection of coronavirus disease 2019 (COVID-19), this study proposes a real-time detection system based on the Internet of Things framework. The system collects real-time data from users to determine potential coronavirus cases, analyses treatment responses for people who have been treated, and accurately collects and analyses the datasets. Artificial intelligence-based algorithms are an alternative decision-making solution for extracting valuable information from clinical data. This study develops a deep learning optimisation system that can work with imbalanced datasets to improve the classification of patients. A synthetic minority oversampling technique (SMOTE) is applied to solve the imbalance problem, and a recursive feature elimination algorithm is used to determine the most effective features. After data balancing and feature extraction, the data are split into training and testing sets for validating all models. The experimental predictive results indicate good stability and compatibility of the models with the data, providing a maximum accuracy of 98% and precision of 97%. Finally, the developed models are demonstrated to handle data bias and achieve high classification accuracy for patients with COVID-19. The findings of this study may help healthcare organisations properly prioritise assets.
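A compact sketch of the balancing-then-selection step described above, using SMOTE from imbalanced-learn followed by recursive feature elimination; the synthetic data and parameter choices are assumptions:

```python
# Hedged sketch: SMOTE oversampling for the minority class, then RFE.
# Synthetic data stands in for the clinical dataset.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
X = rng.normal(size=(1000, 30))
y = (rng.random(1000) < 0.1).astype(int)         # ~10% positives: imbalanced

X_bal, y_bal = SMOTE(random_state=7).fit_resample(X, y)
selector = RFE(LogisticRegression(max_iter=1000), n_features_to_select=10)
X_sel = selector.fit_transform(X_bal, y_bal)
print(X_sel.shape, selector.support_.sum())       # 10 retained features
```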

7.
Cogn Neurodyn ; 18(3): 907-918, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38826653

ABSTRACT

EEG is the most common test for diagnosing a seizure, as it presents information about the electrical activity of the brain. Automatic seizure detection remains challenging because conventional methods suffer from inefficient feature selection, increased computational complexity and time, and lower accuracy. The situation calls for a practical framework that achieves better performance in detecting seizures effectively. Hence, this study proposes a modified Blackman bandpass filter with greedy particle swarm optimization (MBBF-GPSO) combined with a convolutional neural network (CNN) for effective seizure detection. Unwanted signals (noise) are eliminated by the MBBF, as it possesses better stopband attenuation, and only the optimized features are selected using GPSO. To enhance the efficacy of obtaining optimal solutions in GPSO, time- and frequency-domain features are extracted to complement it. Through this process, optimized features are attained by MBBF-GPSO. The CNN layer is then employed to obtain the classification output using the objective function; CNNs are employed here for their ability to automatically learn distinct features for each class. These advantages enable the proposed system to achieve better performance in seizure detection, as confirmed through performance and comparative analysis.
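The Blackman bandpass stage can be approximated with a standard Blackman-windowed FIR design in SciPy; the paper's specific modification is not described here, so the passband and sampling rate below are assumptions:

```python
# Hedged sketch of a Blackman-windowed bandpass FIR for EEG denoising;
# the paper's "modified" design is unspecified, so this is the standard form.
import numpy as np
from scipy.signal import firwin, filtfilt

fs = 256.0                                   # assumed EEG sampling rate (Hz)
taps = firwin(numtaps=101, cutoff=[0.5, 40.0], pass_zero=False,
              window="blackman", fs=fs)      # assumed 0.5-40 Hz passband

t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # toy signal
filtered = filtfilt(taps, [1.0], eeg)        # zero-phase bandpass filtering
print(filtered.shape)
```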

8.
Front Plant Sci ; 15: 1349209, 2024.
Article in English | MEDLINE | ID: mdl-38993936

ABSTRACT

Counting nematodes is a labor-intensive and time-consuming task, yet it is a pivotal step in various quantitative nematological studies: preparation of initial and final population densities in pot, micro-plot, and field trials for different management-related objectives, including sampling and locating nematode infestation foci. Nematologists have long battled with the complexities of nematode counting, leading to several research initiatives aimed at automating this process. However, these endeavors have primarily focused on identifying single-class objects within individual images. To enhance the practicality of this technology, there is a pressing need for an algorithm that can not only detect but also classify multiple classes of objects concurrently. This study tackles this challenge by developing a user-friendly graphical user interface (GUI) that comprises multiple deep learning algorithms, allowing simultaneous recognition and categorization of nematode eggs and second-stage juveniles of Meloidogyne spp. In total, 650 images of eggs and 1339 images of juveniles were generated using two distinct imaging systems, resulting in 8655 eggs and 4742 Meloidogyne juveniles annotated using bounding boxes and segmentation, respectively. The deep learning models were developed by leveraging the convolutional neural network (CNN) architecture known as YOLOv8x. Our results showed that the models correctly identified eggs as eggs and Meloidogyne juveniles as Meloidogyne juveniles in 94% and 93% of instances, respectively. The models demonstrated a correlation coefficient higher than 0.70 between model predictions and observations on unseen images. Our study has showcased the potential utility of these models in future practical applications. The GUI is made freely available to the public through the author's GitHub repository (https://github.com/bresilla/nematode_counting). While this study currently focuses on one genus, there are plans to expand the GUI's capabilities to include other economically significant genera of plant-parasitic nematodes. Achieving these objectives, including enhancing the models' accuracy on different imaging systems, may necessitate collaboration among multiple nematology teams and laboratories, rather than being the work of a single entity. With the increasing interest among nematologists in harnessing machine learning, the authors are confident in the potential development of a universal automated nematode counting system accessible to all. This paper aims to serve as a framework and catalyst for initiating global collaboration toward this important goal.
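Inference with the YOLOv8 family used by the authors can be sketched with the ultralytics package; the weights file and input below are placeholders, not the authors' released model:

```python
# Hedged sketch of YOLOv8 multi-class detection via the ultralytics package.
# 'yolov8x.pt' is the generic pretrained checkpoint, not the nematode model;
# the blank array stands in for a micrograph.
import numpy as np
from ultralytics import YOLO

model = YOLO("yolov8x.pt")                        # placeholder weights
frame = np.zeros((640, 640, 3), dtype=np.uint8)   # stand-in for a micrograph
results = model(frame)
for r in results:
    for box, cls in zip(r.boxes.xyxy, r.boxes.cls):
        print(r.names[int(cls)], box.tolist())    # class label + bounding box
```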

9.
Front Physiol ; 15: 1412985, 2024.
Article in English | MEDLINE | ID: mdl-39156824

ABSTRACT

In recent years, deep learning-based semantic segmentation has been widely applied to medical image segmentation, leading to the development of numerous models. Convolutional neural networks (CNNs) have achieved milestones in medical image analysis. In particular, deep neural networks based on U-shaped architectures and skip connections have been extensively employed in various medical imaging tasks. U-Net, characterized by its encoder-decoder architecture, pioneering skip connections, and multi-scale features, has served as a fundamental network architecture for many modifications. However, U-Net cannot fully utilize all the information from the encoder layers in the decoder layers. U-Net++ connects intermediate parameters of different dimensions through nested and dense skip connections, but it only partially alleviates this disadvantage and greatly increases the model parameters. In this paper, a novel BFNet is proposed that utilizes all feature maps from the encoder at every layer of the decoder and reconnects them with the current encoder layer. This allows the decoder to better learn the positional information of segmentation targets and improves the learning of boundary information and abstract semantics in the current encoder layer. Our proposed method improves accuracy by 1.4 percentage points. Besides enhancing accuracy, BFNet also reduces the number of network parameters. All these advantages are demonstrated on our dataset. We also discuss how different loss functions influence the model and some possible improvements.
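The core BFNet idea, a decoder stage that consumes all encoder feature maps rather than only the matching one, can be sketched in PyTorch as follows; this is a conceptual illustration under assumed tensor shapes, not the published implementation:

```python
# Conceptual sketch, not the published BFNet code: one decoder stage that
# fuses ALL encoder feature maps (resized to its resolution).
import torch
import torch.nn.functional as F

def fuse_all_encoder_maps(decoder_feat, encoder_feats):
    """Concatenate every encoder map with the current decoder features."""
    h, w = decoder_feat.shape[-2:]
    resized = [F.interpolate(e, size=(h, w), mode="bilinear",
                             align_corners=False) for e in encoder_feats]
    return torch.cat([decoder_feat, *resized], dim=1)

enc = [torch.randn(1, c, s, s) for c, s in [(16, 128), (32, 64), (64, 32)]]
dec = torch.randn(1, 64, 64, 64)
print(fuse_all_encoder_maps(dec, enc).shape)   # channels: 64+16+32+64 = 176
```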

10.
Front Neurol ; 14: 1217796, 2023.
Article in English | MEDLINE | ID: mdl-37941573

ABSTRACT

Background: Rapid and accurate triage of acute ischemic stroke (AIS) is essential for early revascularization and improved patient outcomes. Response to acute reperfusion therapies varies significantly based on the patient-specific cerebrovascular anatomy that governs cerebral blood flow. We present an end-to-end machine learning approach for automatic stroke triage. Methods: Employing a validated convolutional neural network (CNN) segmentation model for image processing, we extract each patient's cerebrovasculature and its morphological features from baseline non-invasive angiography scans. These features are used to automatically detect the presence and site of occlusion and, for the first time, to estimate collateral circulation without manual intervention. We then use the extracted cerebrovascular features, along with commonly used clinical and imaging parameters, to predict the 90-day functional outcome for each patient. Results: The CNN model achieved a segmentation accuracy of 94% based on the Dice similarity coefficient (DSC). The automatic stroke detection algorithm had a sensitivity and specificity of 92% and 94%, respectively. The models for occlusion site detection and automatic collateral grading reached 96% and 87.2% accuracy, respectively. Incorporating the automatically extracted cerebrovascular features significantly improved the 90-day outcome prediction accuracy from 0.63 to 0.83. Conclusion: The fast, automatic, and comprehensive model presented here can improve stroke diagnosis, aid collateral assessment, and enhance prognostication for treatment decisions using cerebrovascular morphology.
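The Dice similarity coefficient (DSC) used to score the segmentation model is straightforward to compute for binary masks; a small NumPy sketch:

```python
# Dice similarity coefficient for binary masks (toy example).
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

a = np.zeros((64, 64)); a[10:40, 10:40] = 1   # toy ground-truth mask
b = np.zeros((64, 64)); b[15:45, 15:45] = 1   # toy predicted mask
print(round(dice(a, b), 3))
```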

11.
Soft comput ; 27(11): 7513-7523, 2023.
Article in English | MEDLINE | ID: mdl-36475038

ABSTRACT

The outbreak of coronavirus disease 2019 (COVID-19) occurred at the end of 2019, and it continued to be a source of misery for millions of people and companies well into 2020. As the globe recovers from the pandemic and intends to return to a level of normalcy, there is a surge of concern among all persons, especially those who wish to resume in-person activities. According to studies, wearing a face mask greatly decreases the likelihood of viral transmission and gives a sense of security. However, manually tracking compliance with this regulation is not feasible; technology is the key here. We present a deep learning-based system that can detect instances of improper face mask use. Our system uses a dual-stage convolutional neural network architecture to recognize masked and unmasked faces. This will aid in tracking safety breaches, promoting face mask use, and maintaining a safe working environment. In this paper, we propose a variant of a multi-face detection model that can target a group of people and identify whether or not they are wearing masks.
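A hedged sketch of the dual-stage pattern (stage 1 localizes faces, stage 2 classifies each crop): a stock OpenCV Haar cascade stands in for the paper's detector, and `mask_classifier` is a hypothetical placeholder for the trained CNN:

```python
# Hedged sketch of a dual-stage detect-then-classify pipeline; the Haar
# cascade and placeholder classifier are stand-ins, not the paper's models.
import cv2
import numpy as np

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def mask_classifier(face_crop):          # hypothetical stage-2 CNN placeholder
    return "mask"                        # would return 'mask' / 'no_mask'

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a real frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.1, 5):
    label = mask_classifier(frame[y:y + h, x:x + w])
    print((x, y, w, h), label)
```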

12.
Ophthalmol Sci ; 3(2): 100254, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36691594

ABSTRACT

Objective: To develop automated algorithms for the detection of posterior vitreous detachment (PVD) using OCT imaging. Design: Evaluation of a diagnostic test or technology. Subjects: Overall, 42 385 consecutive OCT images (865 volumetric OCT scans) obtained with Heidelberg Spectralis from 865 eyes from 464 patients at an academic retina clinic between October 2020 and December 2021 were retrospectively reviewed. Methods: We developed a customized computer vision algorithm based on image filtering and edge detection to detect the posterior vitreous cortex for the determination of PVD status. A second deep learning (DL) image classification model based on convolutional neural networks and the ResNet-50 architecture was also trained to identify PVD status from OCT images. The training dataset consisted of 674 OCT volume scans (33 026 OCT images), while the validation testing set consisted of 73 OCT volume scans (3577 OCT images). Overall, 118 OCT volume scans (5782 OCT images) were used as a separate external testing dataset. Main Outcome Measures: Accuracy, sensitivity, specificity, F1-scores, and areas under the receiver operating characteristic curve (AUROCs) were measured to assess the performance of the automated algorithms. Results: Both the customized computer vision algorithm and the DL model results were largely in agreement with the PVD status labeled by trained graders. The DL approach achieved an accuracy of 90.7% and an F1-score of 0.932 with a sensitivity of 100% and a specificity of 74.5% for PVD detection from an OCT volume scan. The AUROC was 89% at the image level and 96% at the volume level for the DL model. The customized computer vision algorithm attained an accuracy of 89.5% and an F1-score of 0.912 with a sensitivity of 91.9% and a specificity of 86.1% on the same task. Conclusions: Both the computer vision algorithm and the DL model applied on OCT imaging enabled reliable detection of PVD status, demonstrating the potential for OCT-based automated PVD status classification to assist with vitreoretinal surgical planning. Financial Disclosures: Proprietary or commercial disclosure may be found after the references.
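The reported metrics (accuracy, sensitivity, specificity, F1-score, AUROC) can be reproduced from predictions with scikit-learn; the toy labels and scores below are for illustration only:

```python
# Computing the evaluation metrics reported above on toy predictions.
import numpy as np
from sklearn.metrics import (accuracy_score, f1_score, roc_auc_score,
                             confusion_matrix)

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_prob = np.array([0.9, 0.2, 0.8, 0.6, 0.4, 0.1, 0.7, 0.55])
y_pred = (y_prob >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("accuracy:", accuracy_score(y_true, y_pred))
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
print("F1:", f1_score(y_true, y_pred), "AUROC:", roc_auc_score(y_true, y_prob))
```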

13.
JID Innov ; 3(1): 100150, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36655135

ABSTRACT

Artificial intelligence (AI) has recently made great advances in image classification and malignancy prediction in the field of dermatology. However, understanding the applicability of AI in clinical dermatology practice remains challenging owing to variability in models, image data, database characteristics, and outcome metrics. This systematic review aims to provide a comprehensive overview of the dermatology literature on convolutional neural networks. Furthermore, the review summarizes the current landscape of image datasets, transfer learning approaches, challenges, and limitations within the current AI literature, as well as current regulatory pathways for approval of models as clinical decision support tools.

14.
JACC Asia ; 3(1): 1-14, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36873752

ABSTRACT

Percutaneous coronary intervention has become a standard treatment strategy for patients with coronary artery disease, with continuous, rapid progress in technology and techniques. The application of artificial intelligence, and deep learning in particular, is currently boosting the development of interventional solutions, improving the efficiency and objectivity of diagnosis and treatment. The ever-growing amount of data and computing power, together with cutting-edge algorithms, paves the way for the integration of deep learning into clinical practice, which has revolutionized the interventional workflow in image processing, interpretation, and navigation. This review discusses the development of deep learning algorithms and their corresponding evaluation metrics, together with their clinical applications. Advanced deep learning algorithms create new opportunities for precise diagnosis and tailored treatment with a high degree of automation, reduced radiation, and enhanced risk stratification. Generalization, interpretability, and regulatory issues are remaining challenges that need to be addressed through joint efforts from the multidisciplinary community.

15.
Comput Struct Biotechnol J ; 21: 644-654, 2023.
Article in English | MEDLINE | ID: mdl-36659917

ABSTRACT

N6-methyladenine (6mA) plays a critical role in various epigenetic processes including DNA replication, DNA repair, silencing, transcription, and diseases such as cancer. To understand such epigenetic mechanisms, 6mA has been detected by high-throughput technologies on a genome-wide scale at single-base resolution, together with conventional methods such as immunoprecipitation, mass spectrometry, and capillary electrophoresis, but these experimental approaches are time-consuming and laborious. To overcome these problems, we have developed a CNN-based 6mA site predictor, named CNN6mA, which introduces two new architectures: a position-specific 1-D convolutional layer and a cross-interactive network. In the position-specific 1-D convolutional layer, position-specific filters with different window sizes were applied to an inquiry sequence instead of sharing the same filters over all positions, in order to extract position-specific features at different levels. The cross-interactive network explores the relationships between all the nucleotide patterns within the inquiry sequence. Consequently, CNN6mA outperformed the existing state-of-the-art models in many species and created a contribution score vector that intelligibly interprets the prediction mechanism. The source codes and web application for CNN6mA are freely accessible at https://github.com/kuratahiroyuki/CNN6mA.git and http://kurata35.bio.kyutech.ac.jp/CNN6mA/, respectively.
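A position-specific 1-D convolution, i.e., one that does not share filter weights across sequence positions, can be sketched in PyTorch as below; dimensions are illustrative assumptions, and the authors' actual implementation is in their repository:

```python
# Conceptual sketch of a position-specific 1-D convolution (no weight
# sharing across positions); shapes are assumed, see the CNN6mA repo for
# the authors' real code.
import torch
import torch.nn as nn

class PositionSpecificConv1d(nn.Module):
    def __init__(self, length: int, in_ch: int, out_ch: int, k: int):
        super().__init__()
        self.k, self.out_len = k, length - k + 1
        # one independent filter bank per output position
        self.weight = nn.Parameter(
            torch.randn(self.out_len, out_ch, in_ch * k) * 0.01)

    def forward(self, x):                      # x: (batch, in_ch, length)
        patches = x.unfold(2, self.k, 1)       # (batch, in_ch, out_len, k)
        patches = patches.permute(0, 2, 1, 3).flatten(2)  # (b, out_len, in_ch*k)
        return torch.einsum("bpf,pof->bpo", patches, self.weight)

x = torch.randn(2, 4, 41)                      # one-hot DNA window, length 41
print(PositionSpecificConv1d(41, 4, 8, 5)(x).shape)   # torch.Size([2, 37, 8])
```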

16.
Ophthalmol Sci ; 3(1): 100233, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36545260

ABSTRACT

Purpose: To compare the diagnostic accuracy and explainability of a Vision Transformer deep learning technique, Data-efficient image Transformer (DeiT), and ResNet-50, trained on fundus photographs from the Ocular Hypertension Treatment Study (OHTS), to detect primary open-angle glaucoma (POAG) and identify the salient areas of the photographs most important for each model's decision-making process. Design: Evaluation of a diagnostic technology. Subjects, Participants, and Controls: Overall, 66 715 photographs from 1636 OHTS participants and an additional 5 external datasets of 16 137 photographs of healthy and glaucoma eyes. Methods: Data-efficient image Transformer models were trained to detect 5 ground-truth OHTS POAG classifications: OHTS end point committee POAG determinations because of disc changes (model 1), visual field (VF) changes (model 2), or either disc or VF changes (model 3), and Reading Center determinations based on discs (model 4) and VFs (model 5). The best-performing DeiT models were compared with ResNet-50 models on OHTS and the 5 external datasets. Main Outcome Measures: Diagnostic performance was compared using areas under the receiver operating characteristic curve (AUROC) and sensitivities at fixed specificities. The explainability of the DeiT and ResNet-50 models was compared by evaluating the attention maps derived directly from DeiT against 3 gradient-weighted class activation map strategies. Results: Compared with our best-performing ResNet-50 models, the DeiT models demonstrated similar performance on the OHTS test sets for all 5 ground-truth POAG labels; AUROC ranged from 0.82 (model 5) to 0.91 (model 1). Data-efficient image Transformer AUROC was consistently higher than that of ResNet-50 on the 5 external datasets. For example, AUROC for the main OHTS end point (model 3) was between 0.08 and 0.20 higher in the DeiT than ResNet-50 models. The saliency maps from the DeiT highlight localized areas of the neuroretinal rim, suggesting important rim features for classification. The same maps in the ResNet-50 models show a more diffuse, generalized distribution around the optic disc. Conclusions: Vision Transformers have the potential to improve generalizability and explainability in deep learning models, detecting eye disease and possibly other medical conditions that rely on imaging for clinical diagnosis and management.
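Both architecture families compared above are available in the timm library; the sketch below loads standard DeiT and ResNet-50 variants as stand-ins (these are generic architecture identifiers, not the study's trained models):

```python
# Hedged sketch: loading generic DeiT and ResNet-50 architectures via timm;
# these are NOT the OHTS-trained weights.
import timm
import torch

deit = timm.create_model("deit_base_patch16_224", pretrained=False, num_classes=2)
resnet = timm.create_model("resnet50", pretrained=False, num_classes=2)

x = torch.randn(1, 3, 224, 224)            # fundus photograph placeholder
print(deit(x).shape, resnet(x).shape)      # both: torch.Size([1, 2])
```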

17.
Ophthalmol Sci ; 3(1): 100222, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36325476

ABSTRACT

Purpose: Two novel deep learning methods using a convolutional neural network (CNN) and a recurrent neural network (RNN) have recently been developed to forecast future visual fields (VFs). Although the original evaluations of these models focused on overall accuracy, it was not assessed whether they can accurately identify patients with progressive glaucomatous vision loss to aid clinicians in preventing further decline. We evaluated these 2 prediction models for potential biases in overestimating or underestimating VF changes over time. Design: Retrospective observational cohort study. Participants: All available and reliable Swedish Interactive Thresholding Algorithm Standard 24-2 VFs from Massachusetts Eye and Ear Glaucoma Service collected between 1999 and 2020 were extracted. Because of the methods' respective needs, the CNN data set included 54 373 samples from 7472 patients, and the RNN data set included 24 430 samples from 1809 patients. Methods: The CNN and RNN methods were reimplemented. A fivefold cross-validation procedure was performed on each model, and pointwise mean absolute error (PMAE) was used to measure prediction accuracy. Test data were stratified into categories based on the severity of VF progression to investigate the models' performances on predicting worsening cases. The models were additionally compared with a no-change model that uses the baseline VF (for the CNN) and the last-observed VF (for the RNN) for its prediction. Main Outcome Measures: PMAE in predictions. Results: The overall PMAE 95% confidence intervals were 2.21 to 2.24 decibels (dB) for the CNN and 2.56 to 2.61 dB for the RNN, which were close to the original studies' reported values. However, both models exhibited large errors in identifying patients with worsening VFs and often failed to outperform the no-change model. Pointwise mean absolute error values were higher in patients with greater changes in mean sensitivity (for the CNN) and mean total deviation (for the RNN) between baseline and follow-up VFs. Conclusions: Although our evaluation confirms the low overall PMAEs reported in the original studies, our findings also reveal that both models severely underpredict worsening of VF loss. Because the accurate detection and projection of glaucomatous VF decline is crucial in ophthalmic clinical practice, we recommend that this consideration is explicitly taken into account when developing and evaluating future deep learning models.
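Pointwise mean absolute error (PMAE), the accuracy measure used above, averages the absolute prediction error across VF test locations; a small NumPy sketch, assuming the 52 non-blind-spot locations of a 24-2 field:

```python
# PMAE: mean absolute difference between predicted and measured sensitivity
# across VF test locations (dB). Values below are synthetic.
import numpy as np

def pmae(pred: np.ndarray, actual: np.ndarray) -> float:
    return float(np.mean(np.abs(pred - actual)))

rng = np.random.default_rng(3)
actual = rng.uniform(0, 35, size=52)           # measured sensitivities (dB)
pred = actual + rng.normal(0, 2.3, size=52)    # forecast with ~2.3 dB error
print(round(pmae(pred, actual), 2), "dB")
```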

18.
Comput Struct Biotechnol J ; 21: 238-250, 2023.
Article in English | MEDLINE | ID: mdl-36544476

ABSTRACT

The process of designing biomolecules, in particular proteins, is witnessing a rapid change in available tooling and approaches, moving from design through physicochemical force fields to producing plausible, complex sequences fast via end-to-end differentiable statistical models. To achieve conditional and controllable protein design, researchers at the interface of artificial intelligence and biology leverage advances in natural language processing (NLP) and computer vision techniques, coupled with advances in computing hardware, to learn patterns from growing biological databases, curated annotations thereof, or both. Once learned, these patterns can be used to provide novel insights into mechanistic biology and the design of biomolecules. However, navigating and understanding the practical applications of the many recent protein design tools is complex. To facilitate this, we 1) document recent advances in deep learning (DL)-assisted protein design from the last three years, 2) present a practical pipeline that allows one to go from de novo-generated sequences to their predicted properties and web-powered visualization within minutes, and 3) leverage it to suggest a generated protein sequence that might be used to engineer a biosynthetic gene cluster to produce a molecular glue-like compound. Lastly, we discuss challenges and highlight opportunities for the protein design field.

19.
JHEP Rep ; 5(4): 100664, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36908748

ABSTRACT

Background & Aims: Patterns of liver HBV antigen expression have been described but not quantified at single-cell resolution. We applied quantitative techniques to liver biopsies from individuals with chronic hepatitis B and evaluated sampling heterogeneity, effects of disease stage, and nucleos(t)ide (NUC) treatment, and correlations between liver and peripheral viral biomarkers. Methods: Hepatocytes positive for HBV core and HBsAg were quantified using a novel four-plex immunofluorescence assay and image analysis. Biopsies were analysed from HBeAg-positive (n = 39) and HBeAg-negative (n = 75) participants before and after NUC treatment. To evaluate sampling effects, duplicate biopsies collected at the same time point were compared. Serum or plasma samples were evaluated for levels of HBV DNA, HBsAg, hepatitis B core-related antigen (HBcrAg), and HBV RNA. Results: Diffusely distributed individual HBV core+ cells and foci of HBsAg+ cells were the most common staining patterns. Hepatocytes positive for both HBV core and HBsAg were rare. Paired biopsies revealed large local variation in HBV staining within participants, which was confirmed in a large liver resection. NUC treatment was associated with a >100-fold lower median frequency of HBV core+ cells in HBeAg-positive and HBeAg-negative participants, whereas reductions in HBsAg+ cells were not statistically significant. The frequency of HBV core+ hepatocytes was lower in HBeAg-negative participants than in HBeAg-positive participants at all time points evaluated. Total HBV+ hepatocyte burden correlated with HBcrAg, HBV DNA, and HBV RNA only in baseline HBeAg-positive samples. Conclusions: Reductions in HBV core+ hepatocytes were associated with HBeAg-negative status and NUC treatment. Variation in HBV positivity within individual livers was extensive. Correlations between the liver and the periphery were found only between biomarkers likely indicative of cccDNA (HBV core+ and HBcrAg, HBV DNA, and RNA). Impact and Implications: HBV infects liver hepatocyte cells, and its genome can exist in two forms that express different sets of viral proteins: a circular genome called cccDNA that can express all viral proteins, including the HBV core and HBsAg proteins, or a linear fragment that inserts into the host genome typically to express HBsAg, but not HBV core. We used new techniques to determine the percentage of hepatocytes expressing the HBV core and HBsAg proteins in a large set of liver biopsies. We find that abundance and patterns of expression differ across patient groups and even within a single liver and that NUC treatment greatly reduces the number of core-expressing hepatocytes.

20.
J Clin Exp Hepatol ; 13(1): 149-161, 2023.
Article in English | MEDLINE | ID: mdl-36647407

ABSTRACT

Artificial intelligence (AI) is the computer-mediated design of algorithms to support or emulate human intelligence. AI in hepatology has shown tremendous promise for planning appropriate management and hence improving treatment outcomes. The field of AI is in a very early phase, with limited clinical use. AI tools such as machine learning, deep learning, and 'big data' are in a continuous phase of evolution and are presently being applied to clinical and basic research. In this review, we summarize various AI applications in hepatology, their pitfalls, and AI's future implications. Different AI models and algorithms are under study, using clinical, laboratory, endoscopic, and imaging parameters to diagnose and manage liver diseases and mass lesions. AI has helped to reduce human errors and improve treatment protocols. Further research and validation are required for the future use of AI in hepatology.
