ABSTRACT
BACKGROUND & AIMS: Several models have recently been developed to predict risk of hepatocellular carcinoma (HCC) in patients with chronic hepatitis B (CHB). Our aims were to develop and validate an artificial intelligence-assisted prediction model of HCC risk. METHODS: Using a gradient-boosting machine (GBM) algorithm, a model was developed using 6,051 patients with CHB who received entecavir or tenofovir therapy from 4 hospitals in Korea. Two external validation cohorts were independently established: Korean (5,817 patients from 14 Korean centers) and Caucasian (1,640 from 11 Western centers) PAGE-B cohorts. The primary outcome was HCC development. RESULTS: In the derivation cohort and the 2 validation cohorts, cirrhosis was present in 26.9%-50.2% of patients at baseline. A model using 10 parameters at baseline was derived and showed good predictive performance (c-index 0.79). This model showed significantly better discrimination than previous models (PAGE-B, modified PAGE-B, REACH-B, and CU-HCC) in both the Korean (c-index 0.79 vs. 0.64-0.74; all p <0.001) and Caucasian validation cohorts (c-index 0.81 vs. 0.57-0.79; all p <0.05 except modified PAGE-B, p = 0.42). A calibration plot showed a satisfactory calibration function. When the patients were grouped into 4 risk groups, the minimal-risk group (11.2% of the Korean cohort and 8.8% of the Caucasian cohort) had a less than 0.5% risk of HCC during 8 years of follow-up. CONCLUSIONS: This GBM-based model provides the best predictive power for HCC risk in Korean and Caucasian patients with CHB treated with entecavir or tenofovir. LAY SUMMARY: Risk scores have been developed to predict the risk of hepatocellular carcinoma (HCC) in patients with chronic hepatitis B. We developed and validated a new risk prediction model using machine learning algorithms in 13,508 antiviral-treated patients with chronic hepatitis B. Our new model, based on 10 common baseline characteristics, demonstrated superior performance in risk stratification compared with previous risk scores. This model also identified a group of patients at minimal risk of developing HCC, who could be indicated for less intensive HCC surveillance.
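A minimal sketch of the modelling approach described above — gradient boosting on baseline clinical parameters, evaluated with Harrell's c-index — assuming scikit-learn and lifelines are available; the column names, hyperparameters, and cohort handling are hypothetical and not those of the study.

```python
# Sketch: gradient-boosting HCC risk model evaluated by c-index (hypothetical columns).
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from lifelines.utils import concordance_index

FEATURES = ["age", "sex", "platelets", "albumin", "bilirubin",
            "alt", "hbeag", "hbv_dna", "cirrhosis", "afp"]   # 10 hypothetical baseline parameters

def fit_and_validate(derivation: pd.DataFrame, validation: pd.DataFrame) -> float:
    model = GradientBoostingClassifier(n_estimators=500, learning_rate=0.01, max_depth=3)
    model.fit(derivation[FEATURES], derivation["hcc_event"])     # 1 = HCC developed
    risk = model.predict_proba(validation[FEATURES])[:, 1]       # predicted HCC risk
    # Harrell's c-index: higher predicted risk should correspond to earlier HCC development.
    return concordance_index(validation["followup_years"], -risk, validation["hcc_event"])
```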
Subjects
Artificial Intelligence/standards; Carcinoma, Hepatocellular/physiopathology; Hepatitis B, Chronic/complications; Adult; Antiviral Agents/pharmacology; Antiviral Agents/therapeutic use; Artificial Intelligence/statistics & numerical data; Asian People/ethnology; Asian People/statistics & numerical data; Carcinoma, Hepatocellular/etiology; Cohort Studies; Computer Simulation/standards; Computer Simulation/statistics & numerical data; Female; Follow-Up Studies; Guanine/analogs & derivatives; Guanine/pharmacology; Guanine/therapeutic use; Hepatitis B, Chronic/physiopathology; Humans; Liver Neoplasms/complications; Liver Neoplasms/physiopathology; Male; Middle Aged; Republic of Korea/ethnology; Tenofovir/pharmacology; Tenofovir/therapeutic use; White People/ethnology; White People/statistics & numerical data
ABSTRACT
Recent neuroscience studies demonstrate that a deeper understanding of brain function requires a deeper understanding of behavior. Detailed behavioral measurements are now often collected using video cameras, resulting in an increased need for computer vision algorithms that extract useful information from video data. Here we introduce a new video analysis tool that combines the output of supervised pose estimation algorithms (e.g. DeepLabCut) with unsupervised dimensionality reduction methods to produce interpretable, low-dimensional representations of behavioral videos that extract more information than pose estimates alone. We demonstrate this tool by extracting interpretable behavioral features from videos of three different head-fixed mouse preparations, as well as a freely moving mouse in an open field arena, and show how these interpretable features can facilitate downstream behavioral and neural analyses. We also show how the behavioral features produced by our model improve the precision and interpretation of these downstream analyses compared to using the outputs of either fully supervised or fully unsupervised methods alone.
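A minimal sketch of the general idea — feeding supervised pose estimates into an unsupervised dimensionality reduction step — using PCA as a simple stand-in for the unsupervised model described above; the file name and keypoint layout are hypothetical and simplified.

```python
# Sketch: combine pose-estimation output (e.g. DeepLabCut keypoints) with unsupervised
# dimensionality reduction to obtain low-dimensional behavioral features per video frame.
import pandas as pd
from sklearn.decomposition import PCA

poses = pd.read_csv("mouse_keypoints.csv")                 # frames x (keypoint x/y coordinates)
coords = poses.to_numpy(dtype=float)
coords = (coords - coords.mean(axis=0)) / (coords.std(axis=0) + 1e-8)   # z-score each coordinate

pca = PCA(n_components=8)                                   # low-dimensional behavioral representation
behavior_features = pca.fit_transform(coords)               # frames x 8 latent features
print(pca.explained_variance_ratio_.cumsum())               # variance captured by the latents
```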
Subjects
Algorithms; Artificial Intelligence/statistics & numerical data; Behavior, Animal; Video Recording; Animals; Computational Biology; Computer Simulation; Markov Chains; Mice; Models, Statistical; Neural Networks, Computer; Supervised Machine Learning/statistics & numerical data; Unsupervised Machine Learning/statistics & numerical data; Video Recording/statistics & numerical data
ABSTRACT
Leveraging artificial intelligence (AI) approaches in animal health (AH) makes it possible to address highly complex issues such as those encountered in quantitative and predictive epidemiology, animal/human precision-based medicine, or the study of host × pathogen interactions. AI may contribute (i) to diagnosis and disease case detection, (ii) to more reliable predictions and reduced errors, (iii) to representing complex biological systems more realistically and rendering computing codes more readable to non-computer scientists, (iv) to speeding up decisions and improving accuracy in risk analyses, and (v) to better-targeted interventions and the anticipation of negative effects. In turn, challenges in AH may stimulate AI research due to the specificity of AH systems, data, constraints, and analytical objectives. Based on a literature review of scientific papers at the interface between AI and AH covering the period 2009-2019, and on interviews with French researchers positioned at this interface, the present study describes the main AH areas where various AI approaches are currently mobilised and how AI may help renew AH research issues and remove methodological or conceptual barriers. After presenting the possible obstacles and levers, we propose several recommendations to better grasp the challenge represented by the AH/AI interface. With the development of several recent concepts promoting a global and multisectoral perspective in the field of health, AI should help draw the different disciplines in AH towards more transversal and integrative research.
Subjects
Artificial Intelligence/statistics & numerical data; Delivery of Health Care/methods; Veterinary Medicine/methods; Animals; Veterinary Medicine/instrumentation
ABSTRACT
Artificial intelligence (AI) utilizes computer algorithms to carry out tasks with human-like intelligence. Convolutional neural networks, a type of deep learning AI, can classify basal cell carcinoma, seborrheic keratosis, and conventional nevi, highlighting the potential for deep learning algorithms to improve diagnostic workflow in dermatopathology of highly routine diagnoses. Additionally, convolutional neural networks can support the diagnosis of melanoma and may help predict disease outcomes. Capabilities of machine learning in dermatopathology can extend beyond clinical diagnosis to education and research. Intelligent tutoring systems can teach visual diagnoses in inflammatory dermatoses, with measurable cognitive effects on learners. Natural language interfaces can instruct dermatopathology trainees to produce diagnostic reports that capture relevant detail for diagnosis in compliance with guidelines. Furthermore, deep learning can power computation- and population-based research. However, there are many limitations of deep learning that need to be addressed before broad incorporation into clinical practice. The current potential of AI in dermatopathology is to supplement diagnosis, and dermatopathologist guidance is essential for the development of useful deep learning algorithms. Herein, the recent progress of AI in dermatopathology is reviewed with emphasis on how deep learning can influence diagnosis, education, and research.
Subjects
Artificial Intelligence/statistics & numerical data; Dermatology/education; Pathology/education; Skin Neoplasms/diagnosis; Algorithms; Carcinoma, Basal Cell/diagnosis; Carcinoma, Basal Cell/pathology; Deep Learning/statistics & numerical data; Dermatology/instrumentation; Diagnosis, Differential; Diagnostic Tests, Routine/instrumentation; Humans; Keratosis, Seborrheic/diagnosis; Keratosis, Seborrheic/pathology; Machine Learning/statistics & numerical data; Melanoma/diagnosis; Melanoma/pathology; Neural Networks, Computer; Nevus/diagnosis; Nevus/pathology; Observer Variation; Pathology/instrumentation; Research/instrumentation; Skin Neoplasms/pathology
ABSTRACT
INTRODUCTION: The SARS-CoV-2 pandemic has led to one of the most critical and boundless waves of publications in the history of modern science. The necessity to find and pursue relevant information and quantify its quality is broadly acknowledged. Modern information retrieval techniques combined with artificial intelligence (AI) appear as one of the key strategies for COVID-19 living evidence management. Nevertheless, most AI projects that retrieve COVID-19 literature still require manual tasks. METHODS: In this context, we present a novel, automated search platform, called Risklick AI, which aims to automatically gather COVID-19 scientific evidence and enables scientists, policy makers, and healthcare professionals to find the most relevant information tailored to their question of interest in real time. RESULTS: Here, we compare the capacity of Risklick AI to find COVID-19-related clinical trials and scientific publications with that of clinicaltrials.gov and PubMed in the field of pharmacology and clinical intervention. DISCUSSION: The results demonstrate that Risklick AI finds COVID-19 references more effectively, in terms of both precision and recall, than the baseline platforms. Hence, Risklick AI could become a useful alternative assistant to scientists fighting the COVID-19 pandemic.
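A minimal sketch of how such a retrieval comparison can be quantified, with precision and recall computed against a hand-curated set of relevant references; the identifiers below are hypothetical.

```python
# Sketch: precision/recall of a retrieval platform against a gold-standard reference set.
def precision_recall(retrieved: set, relevant: set) -> tuple:
    true_positives = len(retrieved & relevant)
    precision = true_positives / len(retrieved) if retrieved else 0.0
    recall = true_positives / len(relevant) if relevant else 0.0
    return precision, recall

relevant = {"NCT00000001", "NCT00000002", "PMID00000003"}    # manually curated relevant items
retrieved = {"NCT00000001", "NCT00000002", "PMID99999999"}   # hits returned by one platform
print(precision_recall(retrieved, relevant))                 # e.g. (0.67, 0.67)
```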
Subjects
Artificial Intelligence/trends; COVID-19/therapy; Data Interpretation, Statistical; Drug Development/trends; Evidence-Based Medicine/trends; Pharmacology/trends; Artificial Intelligence/statistics & numerical data; COVID-19/diagnosis; COVID-19/epidemiology; Clinical Trials as Topic/statistics & numerical data; Drug Development/statistics & numerical data; Evidence-Based Medicine/statistics & numerical data; Humans; Pharmacology/statistics & numerical data; Registries
ABSTRACT
While artificial agents (AA) such as artificial intelligence are being extensively developed, the popular belief that AA will someday surpass human intelligence is growing. The present research examined whether this common belief translates into negative psychological and behavioral consequences when individuals find that an AA performs better than they do on cognitive and intellectual tasks. In two studies, participants were led to believe that an AA performed better or worse than them on a cognitive inhibition task (Study 1) and on an intelligence task (Study 2). Results indicated that being outperformed by an AA increased participants' subsequent performance as long as they did not experience psychological discomfort towards the AA or self-threat. Psychological implications in terms of motivation and potential threat, as well as prerequisites for future interactions between humans and AAs, are further discussed.
Subjects
Artificial Intelligence/statistics & numerical data; Attitude to Computers; Inhibition, Psychological; Intelligence/physiology; Research Subjects/psychology; Research Subjects/statistics & numerical data; Self Concept; Adult; Aged; Aged, 80 and over; Female; Humans; Male; Middle Aged
Subjects
Artificial Intelligence/statistics & numerical data; Artificial Intelligence/standards; Goals; Research/statistics & numerical data; Research/trends; Artificial Intelligence/economics; Artificial Intelligence/ethics; Authorship; Child; China; Datasets as Topic; Electronic Health Records; Humans; Personnel Selection; Population Density; Research/economics; Research/standards; Research Personnel/standards; Research Personnel/supply & distribution; Social Change; Time Factors; United States
ABSTRACT
PURPOSE: To investigate the feasibility of training an artificial intelligence (AI) model on a publicly available AI platform to diagnose polypoidal choroidal vasculopathy (PCV) using indocyanine green angiography (ICGA). METHODS: Two AI-based methods were trained on a publicly available AI platform using a data set of 430 ICGA images of normal, neovascular age-related macular degeneration (nvAMD), and PCV eyes. The one-step method distinguished normal, nvAMD, and PCV images simultaneously. The two-step method identified normal and abnormal ICGA images in the first step and diagnosed PCV from the abnormal ICGA images in the second step. The method with the higher performance was compared with retinal specialists and ophthalmology residents on the performance of diagnosing PCV. RESULTS: The two-step method performed better: its precision and recall were both 0.911 at the first step and both 0.783 at the second step. For the test data set, the two-step method distinguished normal from abnormal images with an accuracy of 1.00 and diagnosed PCV with an accuracy of 0.83, which was comparable to retinal specialists and superior to ophthalmology residents. CONCLUSION: In this evaluation of ICGA images from normal, nvAMD, and PCV eyes, the models trained on a publicly available AI platform performed comparably to retinal specialists for diagnosing PCV. Publicly available AI platforms might help everyone, including ophthalmologists without AI-related resources, especially those in less developed areas, to conduct future studies.
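A minimal sketch of the two-step strategy's control flow, assuming two already-trained classifiers exposing a `predict` method; the models themselves (trained on the public AI platform) are placeholders here, not part of the study's code.

```python
# Sketch: two-step diagnosis. Step 1 separates normal from abnormal ICGA images;
# step 2 diagnoses PCV among the abnormal ones. Models are hypothetical placeholders.
def two_step_diagnosis(image, step1_model, step2_model) -> str:
    if step1_model.predict(image) == "normal":       # step 1: normal vs. abnormal
        return "normal"
    label = step2_model.predict(image)               # step 2: PCV vs. other abnormal (e.g. nvAMD)
    return "PCV" if label == "PCV" else "nvAMD/other"
```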
Subjects
Artificial Intelligence/statistics & numerical data; Choroid Diseases/diagnosis; Choroid/blood supply; Fluorescein Angiography/methods; Machine Learning/statistics & numerical data; Polyps/diagnosis; Tomography, Optical Coherence/methods; Fundus Oculi; Humans; ROC Curve; Retrospective Studies
ABSTRACT
Accurate detection and quantification of hepatic fibrosis remain essential for assessing the severity of non-alcoholic fatty liver disease (NAFLD) and its response to therapy in clinical practice and research studies. Our aim was to develop an integrated artificial intelligence-based automated tool to detect and quantify hepatic fibrosis and assess its architectural pattern in NAFLD liver biopsies. Digital images of trichrome-stained slides of liver biopsies from patients with NAFLD and different severities of fibrosis were used. Two expert liver pathologists semi-quantitatively assessed the severity of fibrosis in these biopsies and, using a web applet, provided a total of 987 annotations of different fibrosis types for developing, training and testing supervised machine learning models to detect fibrosis. The collagen proportionate area (CPA) was measured and correlated with each pathologist's semi-quantitative fibrosis score. Models were created and tested to detect each of six potential fibrosis patterns. There was good to excellent correlation between CPA and the pathologists' scores of fibrosis stage. The coefficient of determination (R2) of automated CPA with the pathologist stages ranged from 0.60 to 0.86. There was considerable overlap in the calculated CPA across different fibrosis stages. For identification of fibrosis patterns, the models' areas under the receiver operating characteristic curve were 78.6% for detection of periportal fibrosis, 83.3% for pericellular fibrosis, 86.4% for portal fibrosis and >90% for detection of normal fibrosis, bridging fibrosis, and presence of nodules/cirrhosis. In conclusion, an integrated automated tool could accurately quantify hepatic fibrosis and determine its architectural patterns in NAFLD liver biopsies.
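A minimal sketch of two measurements described above — collagen proportionate area from a segmented trichrome image, and its coefficient of determination against pathologist stages — with hypothetical per-biopsy values and scikit-learn for the regression.

```python
# Sketch: CPA from a binary collagen mask, and R^2 of CPA against fibrosis stage.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

def collagen_proportionate_area(collagen_mask: np.ndarray, tissue_mask: np.ndarray) -> float:
    """Fraction of tissue pixels classified as collagen (trichrome-positive)."""
    return collagen_mask[tissue_mask].mean()

cpa = np.array([0.02, 0.05, 0.09, 0.15, 0.28]).reshape(-1, 1)   # hypothetical per-biopsy CPA
stage = np.array([0, 1, 2, 3, 4])                               # pathologist fibrosis stage

fit = LinearRegression().fit(cpa, stage)
print(r2_score(stage, fit.predict(cpa)))                         # R^2 between CPA and stage
```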
Subjects
Artificial Intelligence/statistics & numerical data; Collagen/analysis; Liver Cirrhosis/pathology; Non-alcoholic Fatty Liver Disease/pathology; Automation/methods; Azo Compounds/metabolism; Biopsy; Clinical Trials as Topic; Collagen/metabolism; Eosine Yellowish-(YS)/metabolism; Fibrosis/classification; Fibrosis/pathology; Humans; Image Processing, Computer-Assisted/methods; Liver/pathology; Methyl Green/metabolism; Organ Dysfunction Scores; Pathologists/statistics & numerical data; Portal Vein/physiopathology; Practice Patterns, Physicians'/standards; Severity of Illness Index; Supervised Machine Learning/statistics & numerical data
ABSTRACT
The current COVID-19 pandemic is having a major impact on our daily lives. Social distancing is one of the measures that has been implemented with the aim of slowing the spread of the disease, but it is difficult for blind people to comply with. In this paper, we present a system that helps blind people maintain physical distance from other persons using a combination of RGB and depth cameras. We use a real-time semantic segmentation algorithm on the RGB camera to detect where persons are and use the depth camera to assess the distance to them; we then provide audio feedback through bone-conducting headphones if a person is closer than 1.5 m. Our system warns the user only if persons are nearby but does not react to non-person objects such as walls, trees or doors; thus, it is not intrusive, and it is possible to use it in combination with other assistive devices. We have tested our prototype system on one blind and four blindfolded persons, and found that the system is precise, easy to use, and imposes a low cognitive load.
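A minimal sketch of the warning rule, assuming a boolean person mask from the RGB segmentation model and a depth image aligned to it in metres; the 10th-percentile distance statistic and the audio call are illustrative choices, not the exact implementation.

```python
# Sketch: trigger an audio warning when a segmented person is closer than 1.5 m.
import numpy as np

SAFE_DISTANCE_M = 1.5

def person_too_close(person_mask: np.ndarray, depth_m: np.ndarray) -> bool:
    """person_mask: boolean HxW mask from semantic segmentation; depth_m: HxW depth in metres."""
    distances = depth_m[person_mask & (depth_m > 0)]        # ignore invalid (zero) depth readings
    return distances.size > 0 and np.percentile(distances, 10) < SAFE_DISTANCE_M

# if person_too_close(mask, depth): play_warning_tone()     # hypothetical audio feedback call
```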
Subjects
Artificial Intelligence; Betacoronavirus; Blindness/rehabilitation; COVID-19/prevention & control; Coronavirus Infections/prevention & control; Pandemics/prevention & control; Pneumonia, Viral/prevention & control; Sensory Aids; Wearable Electronic Devices; Acoustics; Adult; Algorithms; Artificial Intelligence/statistics & numerical data; Blindness/psychology; Color Vision; Computer Systems/statistics & numerical data; Coronavirus Infections/epidemiology; Equipment Design; Female; Germany/epidemiology; Humans; Image Processing, Computer-Assisted/statistics & numerical data; Male; Physical Distancing; Pneumonia, Viral/epidemiology; Robotics; SARS-CoV-2; Semantics; Smart Glasses/statistics & numerical data; Visually Impaired Persons/rehabilitation; Wearable Electronic Devices/statistics & numerical data
ABSTRACT
For the first time, a field programmable transistor array (FPTA) was used to evolve robot control circuits directly in analog hardware. Controllers were successfully incrementally evolved for a physical robot engaged in a series of visually guided behaviours, including finding a target in a complex environment where the goal was hidden from most locations. Circuits for recognising spoken commands were also evolved, and these were used in conjunction with the controllers to enable voice control of the robot, triggering behavioural switching. Poor-quality visual sensors were deliberately used to test the ability of evolved analog circuits to deal with noisy, uncertain data in real time. Visual features were coevolved with the controllers to automatically achieve dimensionality reduction and feature extraction and selection in an integrated way. An efficient new method was developed for simulating the robot in its visual environment, which allowed controllers to be evaluated in a simulation connected to the FPTA. The controllers then transferred seamlessly to the real world. The circuit replication issue was also addressed in experiments where circuits were evolved to function correctly in multiple areas of the FPTA. A methodology was developed to analyse the evolved circuits, providing insights into their operation. Comparative experiments demonstrated the superior evolvability of the transistor array medium.
Subjects
Robotics/instrumentation; Transistors, Electronic; Algorithms; Artificial Intelligence/statistics & numerical data; Avoidance Learning; Computer Simulation; Equipment Design; Genetic Phenomena; Humans; Neural Networks, Computer; Robotics/statistics & numerical data; Speech Recognition Software; Transistors, Electronic/statistics & numerical data
ABSTRACT
The objective of this article is to discuss the inherent bias involved with artificial intelligence-based decision support systems for healthcare. In this article, the authors describe some relevant work published in this area. A proposed overview of solutions is also presented. The authors believe that the information presented in this article will enhance the readers' understanding of this inherent bias and add to the discussion on this topic. Finally, the authors discuss an overview of the need to implement transdisciplinary solutions that can be used to mitigate this bias.
Subjects
Artificial Intelligence/statistics & numerical data; Bias; Decision Support Systems, Clinical/statistics & numerical data; Humans
ABSTRACT
Across Canada, healthcare leaders are exploring the potential of artificial intelligence and advanced analytics to transform the healthcare system. This report shares a summary of the current state of healthcare analytics across major hospitals and public healthcare agencies in Canada. We present information on the current level of investment, data governance maturity, analytics talent and tools and models being leveraged across the nation. The findings point to an opportunity for enhanced collaboration in advanced analytics and the adoption of nascent artificial intelligence technologies in healthcare. The recommendations will help drive adoption in Canada, ultimately improving the patient experience and promoting better health outcomes for Canadians.
Subjects
Artificial Intelligence/trends; Delivery of Health Care/organization & administration; Hospital Administration/methods; Artificial Intelligence/statistics & numerical data; Data Management/methods; Delivery of Health Care/methods; Hospitals; Humans; Surveys and Questionnaires
ABSTRACT
OBJECTIVES: In Japan, endoscopic resection (ER) is often used to treat esophageal squamous cell carcinoma (ESCC) when invasion depths are diagnosed as EP-SM1, whereas ESCC cases deeper than SM2 are treated by surgical operation or chemoradiotherapy. Therefore, it is crucial to determine the invasion depth of ESCC via preoperative endoscopic examination. Recently, rapid progress has been made in the application of artificial intelligence (AI) with deep learning in medical fields. In this study, we demonstrate the diagnostic ability of AI to measure ESCC invasion depth. METHODS: We retrospectively collected 1751 training images of ESCC at the Cancer Institute Hospital, Japan. We developed an AI diagnostic system of convolutional neural networks using deep learning techniques with these images. Subsequently, 291 test images were prepared and reviewed by the AI diagnostic system and 13 board-certified endoscopists to evaluate diagnostic accuracy. RESULTS: The AI diagnostic system detected 95.5% (279/291) of the ESCC in the test images in 10 s, analyzed the 279 images and correctly estimated the invasion depth of ESCC with a sensitivity of 84.1% and an accuracy of 80.9% in 6 s. The accuracy score of this system exceeded those of 12 of the 13 board-certified endoscopists, and its area under the curve (AUC) was greater than the AUCs of all the endoscopists. CONCLUSIONS: The AI diagnostic system demonstrated higher diagnostic accuracy for ESCC invasion depth than the endoscopists and can therefore potentially be used in ESCC diagnostics.
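A minimal sketch of the reported evaluation metrics (sensitivity, accuracy, AUC) computed with scikit-learn from per-image labels and model scores; the labels and scores below are synthetic, not the study data.

```python
# Sketch: sensitivity, accuracy, and AUC from binary labels and model scores.
import numpy as np
from sklearn.metrics import accuracy_score, recall_score, roc_auc_score

y_true = np.array([1, 1, 1, 0, 0, 1, 0, 1])                 # 1 = deep invasion (synthetic labels)
y_score = np.array([0.9, 0.8, 0.3, 0.2, 0.4, 0.7, 0.6, 0.95])
y_pred = (y_score >= 0.5).astype(int)

print("sensitivity:", recall_score(y_true, y_pred))
print("accuracy:   ", accuracy_score(y_true, y_pred))
print("AUC:        ", roc_auc_score(y_true, y_score))
```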
Subjects
Artificial Intelligence/statistics & numerical data; Endoscopic Mucosal Resection/instrumentation; Esophageal Neoplasms/pathology; Esophageal Squamous Cell Carcinoma/surgery; Aged; Aged, 80 and over; Area Under Curve; Deep Learning; Endoscopic Mucosal Resection/methods; Esophageal Squamous Cell Carcinoma/diagnosis; Female; Humans; Japan/epidemiology; Male; Middle Aged; Neoplasm Invasiveness; Neural Networks, Computer; Outcome Assessment, Health Care; Preoperative Care/methods; Reproducibility of Results; Retrospective Studies; Sensitivity and Specificity
ABSTRACT
BACKGROUND: Deep learning has achieved tremendous success in numerous artificial intelligence applications and is unsurprisingly penetrating various biomedical domains. High-throughput omics data in the form of molecular profile matrices, such as transcriptomes and metabolomes, have long existed as a valuable resource for facilitating diagnosis of patient statuses/stages. It is therefore timely to compare deep learning neural networks against classical machine learning methods on matrix-formed omics data in terms of classification accuracy and robustness. RESULTS: Using 37 high-throughput omics datasets, covering transcriptomes and metabolomes, we evaluated the classification power of deep learning compared to traditional machine learning methods. Representative deep learning methods, Multi-Layer Perceptrons (MLP) and Convolutional Neural Networks (CNN), were deployed and explored in seeking optimal architectures for the best classification performance. Together with five classical supervised classification methods (Linear Discriminant Analysis, Multinomial Logistic Regression, Naïve Bayes, Random Forest, Support Vector Machine), MLP and CNN were comparatively tested on the 37 datasets to predict disease stages or to discriminate diseased samples from normal samples. MLPs achieved the highest overall accuracy among all methods tested. More thorough analyses revealed that single-hidden-layer MLPs with ample hidden units outperformed deeper MLPs. Furthermore, MLP was one of the most robust methods against imbalanced class composition and inaccurate class labels. CONCLUSION: Our results indicate that shallow MLPs (of one or two hidden layers) with ample hidden neurons are sufficient to achieve superior and robust classification performance when exploiting numerical matrix-formed omics data for diagnostic purposes. Specific observations regarding optimal network width, class imbalance tolerance, and inaccurate labeling tolerance will inform future improvement of neural network applications on functional genomics data.
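A minimal sketch of the best-performing configuration identified above — a single-hidden-layer MLP with ample hidden units applied to a samples × features omics matrix — using scikit-learn and synthetic data in place of a real transcriptome or metabolome matrix.

```python
# Sketch: shallow (one wide hidden layer) MLP classifier on a samples x features omics matrix.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5000))        # e.g. 200 samples x 5000 transcripts (synthetic)
y = rng.integers(0, 2, size=200)        # diseased vs. normal labels (synthetic)

mlp = MLPClassifier(hidden_layer_sizes=(512,), max_iter=500)   # single wide hidden layer
print(cross_val_score(mlp, X, y, cv=5).mean())                 # chance-level on random data
```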
Subjects
Deep Learning/trends; Gene Expression Profiling/statistics & numerical data; Machine Learning/trends; Neural Networks, Computer; Algorithms; Artificial Intelligence/statistics & numerical data; Bayes Theorem; Deep Learning/statistics & numerical data; Gene Expression Profiling/methods; Humans; Logistic Models; Machine Learning/statistics & numerical data; Metabolome/genetics; Support Vector Machine/statistics & numerical data; Support Vector Machine/trends
ABSTRACT
Biologically inspired deep convolutional neural networks (CNNs), trained for computer vision tasks, have been found to predict cortical responses with remarkable accuracy. However, the internal operations of these models remain poorly understood, and the factors that account for their success are unknown. Here we develop a set of techniques for using CNNs to gain insights into the computational mechanisms underlying cortical responses. We focused on responses in the occipital place area (OPA), a scene-selective region of dorsal occipitoparietal cortex. In a previous study, we showed that fMRI activation patterns in the OPA contain information about the navigational affordances of scenes; that is, information about where one can and cannot move within the immediate environment. We hypothesized that this affordance information could be extracted using a set of purely feedforward computations. To test this idea, we examined a deep CNN with a feedforward architecture that had been previously trained for scene classification. We found that responses in the CNN to scene images were highly predictive of fMRI responses in the OPA. Moreover, the CNN accounted for the portion of OPA variance relating to the navigational affordances of scenes. The CNN could thus serve as an image-computable candidate model of affordance-related responses in the OPA. We then ran a series of in silico experiments on this model to gain insights into its internal operations. These analyses showed that the computation of affordance-related features relied heavily on visual information at high spatial frequencies and cardinal orientations, both of which have previously been identified as low-level stimulus preferences of scene-selective visual cortex. These computations also exhibited a strong preference for information in the lower visual field, which is consistent with known retinotopic biases in the OPA. Visualizations of feature selectivity within the CNN suggested that affordance-based responses encoded features that define the layout of the spatial environment, such as boundary-defining junctions and large extended surfaces. Together, these results map the sensory functions of the OPA onto a fully quantitative model that provides insights into its visual computations. More broadly, they advance integrative techniques for understanding visual cortex across multiple levels of analysis: from the identification of cortical sensory functions to the modeling of their underlying algorithms.
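A minimal sketch of the encoding-model logic, in which features from a pretrained scene-classification CNN predict OPA voxel responses via regularized linear regression; the feature matrix and fMRI responses below are random stand-ins, not the study data.

```python
# Sketch: CNN-feature encoding model of voxel responses (ridge regression, held-out R^2).
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

n_images, n_features, n_voxels = 300, 4096, 150
cnn_features = np.random.rand(n_images, n_features)     # stand-in for CNN layer activations
opa_responses = np.random.rand(n_images, n_voxels)      # stand-in for OPA fMRI patterns

X_tr, X_te, y_tr, y_te = train_test_split(cnn_features, opa_responses,
                                          test_size=0.2, random_state=0)
encoder = RidgeCV(alphas=[1.0, 10.0, 100.0]).fit(X_tr, y_tr)
print("held-out R^2:", encoder.score(X_te, y_te))        # near zero for random stand-in data
```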
Subjects
Artificial Intelligence/statistics & numerical data; Neural Networks, Computer; Visual Cortex/physiology; Algorithms; Brain Mapping; Computational Biology; Humans; Magnetic Resonance Imaging; Models, Neurological; Occipital Lobe/physiology; Photic Stimulation; Spatial Navigation/physiology; Visual Fields/physiology; Visual Perception/physiology
ABSTRACT
BACKGROUND: Informed estimates claim that 80% to 99% of alarms set off in hospital units are false or clinically insignificant, representing a cacophony of sounds that do not signal real danger to patients. These false alarms can lead to an alert overload that causes a health care provider to miss important events that could be harmful or even life-threatening. As health care units become more dependent on monitoring devices for patient care purposes, the alarm fatigue issue has to be addressed as a major concern for the health care team as well as a way to enhance patient safety. OBJECTIVE: The main goal of this paper was to propose a feasible solution to the alarm fatigue problem by using an automatic reasoning mechanism to decide how to notify members of the health care team. The aim was to reduce the number of notifications sent by determining whether or not to group a set of alarms that occur over a short period of time and deliver them together, without compromising patient safety. METHODS: This paper describes: (1) a model for supporting reasoning algorithms that decide how to notify caregivers to avoid alarm fatigue; (2) an architecture for health systems that support patient monitoring and notification capabilities; and (3) a reasoning algorithm that specifies how to notify caregivers by deciding whether to aggregate a group of alarms to avoid alarm fatigue. RESULTS: Experiments demonstrated that the reasoning system can reduce the number of notifications received by caregivers by up to 99.3% (582/586) of the total alarms generated. Our experiments were evaluated using a dataset comprising patient monitoring data and vital signs recorded during 32 surgical cases in which patients underwent anesthesia at the Royal Adelaide Hospital. We present the results of our algorithm with graphs generated in the R language, showing whether the algorithm decided to deliver an alarm immediately or after a delay. CONCLUSIONS: The experimental results strongly suggest that this reasoning algorithm is a useful strategy for avoiding alarm fatigue. Although we evaluated our algorithm in an experimental environment, we tried to reproduce the context of a clinical environment by using real-world patient data. Our future work is to reproduce the evaluation study under more realistic clinical conditions by increasing the number of patients, monitoring parameters, and types of alarm.
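A minimal sketch of the aggregation idea: alarms arriving within a short window are grouped into a single notification unless one of them is critical. The window length, the criticality rule, and the data structure are hypothetical, not the rules evaluated in the study.

```python
# Sketch: group non-critical alarms arriving within a time window into one notification.
from dataclasses import dataclass
from typing import List

@dataclass
class Alarm:
    timestamp: float      # seconds since start of monitoring
    parameter: str        # e.g. "SpO2", "HR"
    critical: bool        # critical alarms are never delayed

def notify_plan(alarms: List[Alarm], window_s: float = 60.0) -> List[List[Alarm]]:
    """Return groups of alarms; each group becomes one notification to the caregiver."""
    groups: List[List[Alarm]] = []
    for alarm in sorted(alarms, key=lambda a: a.timestamp):
        if alarm.critical or not groups or alarm.timestamp - groups[-1][0].timestamp > window_s:
            groups.append([alarm])            # deliver immediately / start a new group
        else:
            groups[-1].append(alarm)          # aggregate with the pending notification
    return groups
```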
Subjects
Adaptation, Psychological/physiology; Artificial Intelligence/statistics & numerical data; Fatigue/therapy; Monitoring, Physiologic/methods; Algorithms; Clinical Alarms; Humans; Reproducibility of Results
ABSTRACT
Artificial intelligence (AI) has the potential to improve the accuracy of screening for valvular and congenital heart disease by auscultation. However, despite recent advances in signal processing and classification algorithms focused on heart sounds, clinical acceptance of this technology has been limited, in part due to a lack of objective performance data. We hypothesized that a heart murmur detection algorithm could be quantitatively and objectively evaluated by a virtual clinical trial. All cases from the Johns Hopkins Cardiac Auscultatory Recording Database (CARD) with either a pathologic murmur, an innocent murmur or no murmur were selected. The test algorithm, developed independently of CARD, analyzed each recording using an automated batch processing protocol. 3180 heart sound recordings from 603 outpatient visits were selected from CARD. Algorithm estimation of heart rate was similar to the gold standard. Sensitivity and specificity for detection of pathologic cases were 93% (CI 90-95%) and 81% (CI 75-85%), respectively, with an accuracy of 88% (CI 85-91%). Performance varied according to the algorithm's certainty measure, the age of the patient, heart rate, murmur intensity, location of recording on the chest and pathologic diagnosis. To our knowledge, this is the first reported comprehensive and objective evaluation of an AI-based murmur detection algorithm. The test algorithm performed well in this virtual clinical trial. This strategy can be used to efficiently compare the performance of other algorithms against the same dataset and improve understanding of the potential clinical usefulness of AI-assisted auscultation.
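A minimal sketch of the headline performance measures — sensitivity and specificity with 95% confidence intervals — using the Wilson interval from statsmodels; the counts below are illustrative placeholders, not the CARD results.

```python
# Sketch: sensitivity and specificity with 95% Wilson confidence intervals.
from statsmodels.stats.proportion import proportion_confint

def rate_with_ci(successes: int, total: int):
    rate = successes / total
    low, high = proportion_confint(successes, total, alpha=0.05, method="wilson")
    return rate, (low, high)

print("sensitivity:", rate_with_ci(successes=93, total=100))   # pathologic cases detected
print("specificity:", rate_with_ci(successes=81, total=100))   # non-pathologic correctly rejected
```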