1.
Artif Intell Med; 155: 102934, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39088883

ABSTRACT

BACKGROUND: Melanoma is a serious risk to human health, and early identification is vital for treatment success. Deep learning (DL) has the potential to detect cancer using imaging technologies, and many studies provide evidence that DL algorithms can achieve high accuracy in melanoma diagnostics. OBJECTIVES: To critically assess the performance of different DL models in diagnosing melanoma from dermatoscopic images and to discuss the relationship between dermatologists and DL. METHODS: Ovid-Medline, Embase, IEEE Xplore, and the Cochrane Library were systematically searched from inception until 7 December 2021. Studies that reported the performance of diagnostic DL models in detecting melanoma from dermatoscopic images were included if they had specific outcomes and histopathologic confirmation. Binary diagnostic accuracy data and contingency tables were extracted to analyze the outcomes of interest, which included sensitivity (SEN), specificity (SPE), and area under the curve (AUC). Subgroup analyses were performed according to human-machine comparison and cooperation. The study was registered in PROSPERO, CRD42022367824. RESULTS: 2309 records were initially retrieved, of which 37 studies met our inclusion criteria and 27 provided sufficient data for meta-analytical synthesis. The pooled SEN was 82% (range 77-86%) and the pooled SPE was 87% (range 84-90%), with an AUC of 0.92 (range 0.89-0.94). In the human-machine comparison, pooled AUCs were 0.87 (0.84-0.90) for DL and 0.83 (0.79-0.86) for dermatologists. When compared with dermatologists of different experience levels, pooled AUCs were 0.90 (0.87-0.93) for DL, 0.80 (0.76-0.83) for junior dermatologists, and 0.88 (0.85-0.91) for senior dermatologists. In the human-machine cooperation analyses, pooled AUCs were 0.88 (0.85-0.91) for DL, 0.76 (0.72-0.79) for unassisted dermatologists, and 0.87 (0.84-0.90) for DL-assisted dermatologists. CONCLUSIONS: Evidence suggests that DL algorithms are as accurate as senior dermatologists in melanoma diagnostics. Therefore, DL could be used to support dermatologists in diagnostic decision-making. However, further high-quality, large-scale multicenter studies are required to address the specific challenges associated with medical AI-based diagnostics.
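
To make the pooled estimates above concrete, the sketch below derives sensitivity and specificity from per-study 2x2 contingency tables and combines them with simple inverse-variance weighting on the logit scale. The study counts are invented for illustration, and this fixed-effect pooling is a simplification of the bivariate random-effects models typically used in diagnostic meta-analyses, not the authors' actual method.

```python
import math

# Hypothetical 2x2 contingency tables per study: (TP, FN, FP, TN).
# These counts are invented for illustration only.
studies = [
    (80, 20, 15, 85),
    (45, 10, 12, 90),
    (120, 25, 30, 160),
]

def logit(p):
    return math.log(p / (1 - p))

def inv_logit(x):
    return 1 / (1 + math.exp(-x))

def pool(logits, variances):
    # Fixed-effect inverse-variance pooling on the logit scale.
    weights = [1 / v for v in variances]
    pooled = sum(w * m for w, m in zip(weights, logits)) / sum(weights)
    return inv_logit(pooled)

sens_logits, sens_vars, spec_logits, spec_vars = [], [], [], []
for tp, fn, fp, tn in studies:
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    sens_logits.append(logit(sens))
    spec_logits.append(logit(spec))
    # Approximate variance of a logit-transformed proportion.
    sens_vars.append(1 / tp + 1 / fn)
    spec_vars.append(1 / tn + 1 / fp)

print(f"Pooled sensitivity: {pool(sens_logits, sens_vars):.2%}")
print(f"Pooled specificity: {pool(spec_logits, spec_vars):.2%}")
```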


Subjects
Deep Learning, Dermoscopy, Melanoma, Skin Neoplasms, Humans, Dermoscopy/methods, Melanoma/diagnosis, Melanoma/pathology, Skin Neoplasms/diagnosis, Skin Neoplasms/pathology, Skin/diagnostic imaging, Skin/pathology
2.
Ergonomics; 67(1): 81-94, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37074777

ABSTRACT

Lane Departure Warning Systems (LDWS) generate a warning in case of imminent lane departure. LDWS have proven to be effective, and the associated human-machine cooperation has been modelled. In this study, LDWS acceptance and their impact on visual and steering behaviour were investigated over 6 weeks for novice and experienced drivers. Unprovoked lane departures were analysed across three driving tasks of gradually increasing demand. These observations were compared to a baseline condition without automation. The number of lane departures and their duration were dramatically reduced by LDWS, and a narrower visual spread of search during lane departure events was recorded. The findings confirmed LDWS effectiveness and suggested that these benefits are supported by visuo-attentional guidance. No specific influence of driving experience on LDWS was found, suggesting that similar cognitive processes are engaged with or without driving experience. Drivers' acceptance of LDWS declined after automation use, but LDWS effectiveness remained stable during prolonged use. Practitioner summary: Lane Departure Warning Systems (LDWS) have been designed to prevent lane departure crashes. Here, a 6-week assessment of LDWS showed a major drop in the number of lane departure events, which increased over time. LDWS effectiveness is supported by the guidance of drivers' visual attention during lane departure events.
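
The abstract does not describe how the simulated LDWS decided when to warn; a common criterion is a time-to-line-crossing check against the lane boundary. The sketch below illustrates that idea only; the signal names, lane width, and 1.0 s threshold are assumptions, not parameters from the study.

```python
# Hypothetical lane-departure warning check based on time-to-line-crossing (TLC).
LANE_HALF_WIDTH_M = 1.75   # half of an assumed 3.5 m lane
TLC_THRESHOLD_S = 1.0      # warn when the line would be crossed within 1 s

def should_warn(lateral_offset_m: float, lateral_speed_mps: float) -> bool:
    """Return True when the vehicle is predicted to cross the lane line soon.

    lateral_offset_m: signed distance from lane centre (positive = right).
    lateral_speed_mps: signed lateral speed (positive = drifting right).
    """
    if lateral_speed_mps == 0:
        return False
    # Distance remaining to the lane line in the direction of drift.
    if lateral_speed_mps > 0:
        distance_to_line = LANE_HALF_WIDTH_M - lateral_offset_m
    else:
        distance_to_line = LANE_HALF_WIDTH_M + lateral_offset_m
    if distance_to_line <= 0:
        return True  # already on or over the line
    tlc = distance_to_line / abs(lateral_speed_mps)
    return tlc <= TLC_THRESHOLD_S

print(should_warn(lateral_offset_m=1.2, lateral_speed_mps=0.6))  # True (~0.9 s)
print(should_warn(lateral_offset_m=0.2, lateral_speed_mps=0.3))  # False
```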


Subjects
Automobile Driving, Humans, Automobile Driving/psychology, Traffic Accidents/prevention & control, Longitudinal Studies, Reaction Time, Automation
3.
Sensors (Basel); 23(19), 2023 Sep 28.
Article in English | MEDLINE | ID: mdl-37836979

ABSTRACT

Forward collision warning systems (FCWSs) monitor the road ahead and warn drivers when the time to collision reaches a certain threshold. Using a driving simulator, this study compared the effects of FCWSs on near-collision events and on visual and driving behaviors between novice drivers (unlicensed drivers) and experienced drivers (holding a driving license for at least four years). The experimental drives lasted about six hours spread over six consecutive weeks. Visual behaviors (e.g., mean number of fixations) and driving behaviors (e.g., braking reaction times) were collected during unprovoked near-collision events occurring during a car-following task, with (FCWS group) or without FCWS (No Automation group). FCWS presence drastically reduced the number of near-collision events and enhanced visual behaviors during those events. Unexpectedly, brake reaction times were significantly longer with FCWS, suggesting a cognitive cost associated with the warning process. Still, the FCWS showed a slight safety benefit for novice drivers, attributed to the assistance it provides in analyzing the situation. Outside the warning events, FCWS presence also affected car-following behaviors: drivers adopted an extra safety margin, possibly to prevent incidental triggering of warnings. The data shed light on the nature of the cognitive processes associated with FCWSs. Altogether, the findings support the general efficiency of FCWSs, observed through a massive reduction in the number of near-collision events, and point toward the need for further investigations.
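
The trigger condition described in the first sentence (warn when the time to collision falls below a threshold) can be written in a few lines; the 2.5 s threshold and the example speeds below are assumptions for illustration, not the study's parameters.

```python
# Hypothetical forward-collision warning trigger based on time-to-collision (TTC).
TTC_THRESHOLD_S = 2.5

def time_to_collision(gap_m: float, ego_speed_mps: float, lead_speed_mps: float):
    """TTC is defined only while the ego vehicle is closing on the lead vehicle."""
    closing_speed = ego_speed_mps - lead_speed_mps
    if closing_speed <= 0:
        return None  # not closing: no collision predicted
    return gap_m / closing_speed

def should_warn(gap_m, ego_speed_mps, lead_speed_mps):
    ttc = time_to_collision(gap_m, ego_speed_mps, lead_speed_mps)
    return ttc is not None and ttc <= TTC_THRESHOLD_S

# Ego at 25 m/s closing on a lead vehicle at 15 m/s with a 20 m gap -> TTC = 2.0 s -> warn.
print(should_warn(gap_m=20, ego_speed_mps=25, lead_speed_mps=15))  # True
print(should_warn(gap_m=60, ego_speed_mps=25, lead_speed_mps=24))  # False (TTC = 60 s)
```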

4.
Sensors (Basel); 22(23), 2022 Nov 23.
Article in English | MEDLINE | ID: mdl-36501792

ABSTRACT

The Shared Control (SC) cooperation scheme, in which the driver and the automated driving system control the vehicle together, has been gaining attention over the years as a promising option to improve road safety. As a result, advanced interaction methods can be investigated to enhance user experience, acceptance, and trust. From this perspective, not only is the development of algorithms and system applications needed; it is also essential to evaluate the system with real drivers, assess its impact on road safety, and understand how drivers accept and are willing to use this technology. In this sense, the contribution of this work is an experimental study evaluating whether a previously developed shared control system can improve overtaking performance on roads with oncoming traffic. The evaluation is performed in a Driver-in-the-Loop (DiL) simulator with 13 real drivers. The system based on SC is compared against a vehicle with conventional SAE-L2 functionalities. The evaluation includes both objective and subjective assessments. Results show that SC proved to be the best solution for assisting the driver during overtaking in terms of safety and acceptance. The SC system's longer and smoother control transitions benefit cooperative driving. The System Usability Scale (SUS) and the System Acceptance Scale (SAS) questionnaires show that the SC system was perceived as better in terms of usability, usefulness, and satisfaction.
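
The abstract does not detail the authors' shared-control law; a common way to realise SC, and to obtain the smooth control transitions mentioned above, is to blend driver and automation commands with a time-varying authority weight. The sketch below illustrates that generic scheme; the torque values and ramp rate are invented for illustration.

```python
# Generic shared-control blending of driver and automation steering commands.
def blend_steering(driver_torque: float, automation_torque: float,
                   authority: float) -> float:
    """authority in [0, 1]: 0 = driver alone, 1 = automation alone."""
    authority = max(0.0, min(1.0, authority))
    return (1.0 - authority) * driver_torque + authority * automation_torque

def ramp_authority(current: float, target: float, step: float = 0.05) -> float:
    """Move the authority weight toward its target gradually, avoiding abrupt handovers."""
    if current < target:
        return min(target, current + step)
    return max(target, current - step)

# Example: the automation hands control back to the driver over successive cycles.
authority = 1.0
for _ in range(5):
    authority = ramp_authority(authority, target=0.0)
    print(round(blend_steering(driver_torque=2.0, automation_torque=0.5,
                               authority=authority), 2))
```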


Subjects
Traffic Accidents, Automobile Driving, Humans, Traffic Accidents/prevention & control, Automation, Algorithms, Technology
6.
JMIR Med Inform; 9(12): e33049, 2021 Dec 08.
Article in English | MEDLINE | ID: mdl-34889764

ABSTRACT

BACKGROUND: Deep learning (DL)-based artificial intelligence may have different diagnostic characteristics than human experts in medical diagnosis. As a data-driven knowledge system, DL is considered to be more susceptible than clinicians to bias from the heterogeneous population incidence of the clinical world. Conversely, by experiencing limited numbers of cases, human experts may exhibit large interindividual variability. Thus, understanding how the 2 groups classify given data differently is an essential step toward the cooperative use of DL in clinical applications. OBJECTIVE: This study aimed to evaluate and compare the differential effects of clinical experience on otoendoscopic image diagnosis in both computers and physicians, exemplified by the class imbalance problem, and to guide clinicians when utilizing decision support systems. METHODS: We used digital otoendoscopic images of patients who visited the outpatient clinic of the Department of Otorhinolaryngology at Severance Hospital, Seoul, South Korea, from January 2013 to June 2019, for a total of 22,707 otoendoscopic images. We excluded similar images, and 7500 otoendoscopic images were selected for labeling. We built a DL-based image classification model to classify a given image into 6 disease categories. Two test sets of 300 images were populated: a balanced and an imbalanced test set. We included 14 clinicians (otolaryngologists and nonotolaryngology specialists, including general practitioners) and 13 DL-based models. We used accuracy (overall and per class) and kappa statistics to compare the results of individual physicians and the ML models. RESULTS: Our ML models had consistently high accuracies (balanced test set: mean 77.14%, SD 1.83%; imbalanced test set: mean 82.03%, SD 3.06%), equivalent to those of otolaryngologists (balanced: mean 71.17%, SD 3.37%; imbalanced: mean 72.84%, SD 6.41%) and far better than those of nonotolaryngologists (balanced: mean 45.63%, SD 7.89%; imbalanced: mean 44.08%, SD 15.83%). However, the ML models suffered from class imbalance problems; this was mitigated by data augmentation, particularly for low-incidence classes, but rare disease classes still had low per-class accuracies. Human physicians, despite being less affected by prevalence, showed high interphysician variability (ML models: kappa=0.83, SD 0.02; otolaryngologists: kappa=0.60, SD 0.07). CONCLUSIONS: Even though ML models deliver excellent performance in classifying ear disease, physicians and ML models have their own strengths. ML models have consistently high accuracy while considering only the given image, but show bias toward prevalence, whereas human physicians have varying performance but do not show bias toward prevalence and may also consider information beyond the images. To deliver the best patient care given the shortage of otolaryngologists, our ML model can play a cooperative role for clinicians with diverse expertise, as long as it is kept in mind that models consider only images and can remain biased toward prevalent diseases even after data augmentation.
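
The comparison metrics named above (overall accuracy, per-class accuracy, and the kappa statistic) can be computed as in the sketch below. The toy labels and three classes are invented for illustration; the study used 6 ear-disease categories and far more images.

```python
from collections import defaultdict

# Illustrative true vs. predicted labels for a 3-class toy problem.
y_true = ["otitis", "otitis", "normal", "normal", "normal", "cerumen"]
y_pred = ["otitis", "normal", "normal", "normal", "otitis", "cerumen"]

def overall_accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def per_class_accuracy(y_true, y_pred):
    hits, totals = defaultdict(int), defaultdict(int)
    for t, p in zip(y_true, y_pred):
        totals[t] += 1
        hits[t] += (t == p)
    return {c: hits[c] / totals[c] for c in totals}

def cohens_kappa(y_true, y_pred):
    """Agreement between two label sequences, corrected for chance agreement."""
    n = len(y_true)
    po = overall_accuracy(y_true, y_pred)
    classes = set(y_true) | set(y_pred)
    pe = sum((y_true.count(c) / n) * (y_pred.count(c) / n) for c in classes)
    return (po - pe) / (1 - pe)

print(overall_accuracy(y_true, y_pred))
print(per_class_accuracy(y_true, y_pred))
print(round(cohens_kappa(y_true, y_pred), 3))
```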

7.
Int J Med Robot; 17(2): e2231, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33470010

ABSTRACT

BACKGROUND: Traditional craniotomy depends primarily on the experience of the surgeon. However, the accuracy of manual operation is limited, and it carries certain surgical risks. The interaction methods of current robot-assisted craniotomy are unnatural and not adapted to the surgeon's operating style. In this research, we built a hands-on, synergistic robotic craniotomy system based on human-machine collaboration. Safe isometric surfaces and virtual constraint methods are combined to achieve a highly accurate, efficient, minimally invasive, and safe craniotomy. MATERIALS AND METHODS: Fifteen three-dimensional (3D)-printed beagle skull models were used to evaluate the system accuracy and the related image-guidance process. The method mainly comprises the design of the surgical plan, a strategy based on motion constraints and safe isometric surfaces, and an impedance control method built on a position inner loop for human-machine collaboration. A trajectory-tracking experiment was performed under human-machine collaboration, followed by an experiment on the 3D-printed beagle skull models in which the robot drilled and milled the skull; accuracy was evaluated by computed tomographic (CT) scanning after the operation. RESULTS: The 3D-printed beagle skull model experiment shows that the average errors for the top surface and the bottom surface, and the angle error, were 0.81 ± 0.15 mm, 0.89 ± 0.12 mm, and 1.74° ± 0.16°, respectively. The average milling position errors for the top and bottom surfaces were 0.87 ± 0.19 mm and 0.93 ± 0.22 mm, respectively. CONCLUSION: The performance of the robot system was evaluated and verified using the 3D-printed beagle model experiment. The proposed collaborative surgical robot system is feasible and can complete a craniotomy with improved accuracy and surgical safety.
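
The impedance control built on a position inner loop mentioned above is commonly realised as an admittance-style loop: the force applied by the surgeon is filtered through a mass-spring-damper relation into a compliant position offset that the robot's position controller then tracks. The sketch below illustrates that generic scheme only; all gains, signals, and limits are illustrative assumptions, not the authors' parameters.

```python
# Generic admittance-style loop for a hands-on surgical robot: surgeon force in,
# compliant position offset out, tracked by a position inner loop.
class AdmittanceFilter:
    def __init__(self, mass=2.0, damping=40.0, stiffness=200.0, dt=0.001):
        self.m, self.b, self.k, self.dt = mass, damping, stiffness, dt
        self.x = 0.0   # position offset from the planned trajectory (m)
        self.v = 0.0   # offset velocity (m/s)

    def step(self, applied_force: float) -> float:
        """Integrate m*a + b*v + k*x = F and return the new position offset."""
        a = (applied_force - self.b * self.v - self.k * self.x) / self.m
        self.v += a * self.dt
        self.x += self.v * self.dt
        return self.x

filt = AdmittanceFilter()
planned_position = 0.05  # metres along a hypothetical milling path
for _ in range(1000):    # 1 s of the surgeon pushing with 5 N
    offset = filt.step(applied_force=5.0)

# Commanded position = planned trajectory + compliant offset, bounded by a
# virtual constraint (playing the role of the safe isometric surface above).
command = planned_position + min(offset, 0.01)
print(round(command, 4))
```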


Subjects
Robotic Surgical Procedures, Animals, Craniotomy, Dogs, Motion (Physics), Skull
8.
Ergonomics; 61(12): 1601-1612, 2018 Dec.
Article in English | MEDLINE | ID: mdl-30010501

ABSTRACT

A seminal work by Sheridan and Verplank depicted 10 levels of automation, ranging from no automation to automation that acts completely autonomously without human support. These levels of automation were later complemented with a four-stage model of human information processing. Next, human-machine cooperation-centred models and associated cooperation modes were introduced. The objective of the experiment was to test which human-machine theories better describe automation use. The participants were asked to choose repeatedly between four automation types (i.e., no automation, warning, co-action, function delegation) to complete three multi-attribute task battery tasks. The results showed that the participants favoured automation types offering the best quality of human-machine interaction rather than the most effective automation type. Unlike human-machine cooperation models, technology-centred models could not accurately predict automation selection. The most advanced automation was not the most selected. Practitioner Summary: The experiment examined how people select different automation types to complete the multi-attribute task battery, which emulates recreational aircraft pilot tasks. Automation performance was not the main criterion explaining automation use, as people tend to select an automation type based on the quality of the human-machine cooperation.


Subjects
Aircraft, Automation, Man-Machine Systems, Pilots, Adolescent, Adult, Choice Behavior, Cooperative Behavior, Female, Humans, Male, Mental Processes, Middle Aged, Young Adult