1.
Ultrasound Med Biol ; 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38692940

ABSTRACT

OBJECTIVE: We present a statistical characterisation of fetal anatomies in obstetric ultrasound video sweeps where the transducer followed a fixed trajectory on the maternal abdomen. METHODS: Large-scale, frame-level manual annotations of fetal anatomies (head, spine, abdomen, pelvis, femur) were used to compute common frame-level anatomy detection patterns expected for breech, cephalic, and transverse fetal presentations, with respect to video sweep paths. The patterns, termed statistical heatmaps, quantify the expected anatomies seen in a simple obstetric ultrasound video sweep protocol. In this study, a total of 760 unique manual annotations from 365 unique pregnancies were used. RESULTS: We provide a qualitative interpretation of the heatmaps assessing the transducer sweep paths with respect to different fetal presentations and suggest ways in which the heatmaps can be applied in computational research (e.g., as a machine learning prior). CONCLUSION: The heatmap parameters are freely available to other researchers (https://github.com/agleed/calopus_statistical_heatmaps).
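As a sketch of how such frame-level heatmaps can be computed (this is our illustration, not the authors' released code; the resampling of sweeps to a fixed number of bins and all names below are assumptions, so consult the linked repository for the actual parameters):

```python
import numpy as np

ANATOMIES = ["head", "spine", "abdomen", "pelvis", "femur"]

def sweep_heatmap(annotations, n_bins=100):
    """Average frame-level anatomy annotations from many sweeps into an
    (n_anatomies x n_bins) presence-probability heatmap.

    annotations: list of (n_frames x n_anatomies) binary arrays, one per
    video sweep; each sweep is resampled to a common normalised length so
    that videos of different durations can be pooled.
    """
    heatmap = np.zeros((len(ANATOMIES), n_bins))
    for video in annotations:
        n_frames = video.shape[0]
        # map each output bin to the nearest source frame
        idx = np.linspace(0, n_frames - 1, n_bins).round().astype(int)
        heatmap += video[idx].T
    return heatmap / len(annotations)

# toy example: two short sweeps with random per-frame annotations
rng = np.random.default_rng(0)
sweeps = [rng.integers(0, 2, size=(30, 5)), rng.integers(0, 2, size=(45, 5))]
H = sweep_heatmap(sweeps)
print(H.shape)  # (5, 100), rows ordered as ANATOMIES
```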

2.
Br J Anaesth ; 132(5): 1049-1062, 2024 May.
Article in English | MEDLINE | ID: mdl-38448269

ABSTRACT

BACKGROUND: Artificial intelligence (AI) for ultrasound scanning in regional anaesthesia is a rapidly developing interdisciplinary field. There is a risk that work could be undertaken in parallel by different elements of the community but with a lack of knowledge transfer between disciplines, leading to repetition and diverging methodologies. This scoping review aimed to identify and map the available literature on the accuracy and utility of AI systems for ultrasound scanning in regional anaesthesia. METHODS: A literature search was conducted using Medline, Embase, CINAHL, IEEE Xplore, and ACM Digital Library. Clinical trial registries, a registry of doctoral theses, regulatory authority databases, and websites of learned societies in the field were searched. Online commercial sources were also reviewed. RESULTS: In total, 13,014 sources were identified; 116 were included for full-text review. A marked change in AI techniques was noted in 2016-17, from which point on the predominant technique used was deep learning. Methods of evaluating accuracy are variable, meaning it is impossible to compare the performance of one model with another. Evaluations of utility are more comparable, but predominantly gained from the simulation setting with limited clinical data on efficacy or safety. Study methodology and reporting lack standardisation. CONCLUSIONS: There is a lack of structure to the evaluation of accuracy and utility of AI for ultrasound scanning in regional anaesthesia, which hinders rigorous appraisal and clinical uptake. A framework for consistent evaluation is needed to inform model evaluation, allow comparison between approaches/models, and facilitate appropriate clinical adoption.


Subject(s)
Conduction Anesthesia, Artificial Intelligence, Humans, Ultrasonography, Computer Simulation, Factual Databases
3.
PLoS Negl Trop Dis ; 18(3): e0012033, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38507368

ABSTRACT

BACKGROUND: Abdominal ultrasound imaging is an important method for hepatic schistosomiasis diagnosis and staging. Several ultrasound staging systems have been proposed, each attempting to standardise schistosomal periportal fibrosis (PPF) diagnosis. This review aims to establish the role of ultrasound in the diagnosis and staging of schistosomal PPF, and to map the evolution of ultrasound staging systems over time, focusing on internal validation and external reproducibility. METHODS: A systematic search was undertaken on 21 December 2022 in the following databases: PubMed/MEDLINE (1946-present), Embase (1974-present), Global Health (1973-present), Global Index Medicus (1901-present), Web of Science Core Collection-Science Citation Index Expanded (1900-present), and the Cochrane Central Register of Controlled Trials (1996-present). Case reports, systematic reviews and meta-analyses, and studies exclusively using transient or shear-wave elastography were excluded. Variables extracted included study design, study population, schistosomal PPF characteristics, and diagnostic methods. The PRISMA-ScR (2018) guidelines were followed to inform the structure of the scoping analysis. RESULTS: The initial search yielded 573 unique articles, of which 168 were removed after screening titles and abstracts, 43 were not retrieved because full texts were not available online or through inter-library loans, and 170 were excluded during full-text review, leaving 192 studies eligible for extraction. Of the extracted studies that reported a study year, 61.8% (76/123) were conducted after 2000. Over half of all extracted studies (59.4%; 114/192) were conducted in Brazil (26.0%; 50/192), China (18.8%; 36/192) or Egypt (14.6%; 28/192). By schistosome species, 77.6% (149/192) of studies considered S. mansoni and 21.4% (41/192) considered S. japonicum. The ultrasound staging systems used took three forms: measurement-based, feature-based and image pattern-based. The Niamey protocol, a measurement and image pattern-based system, was the most used among the staging systems (32.8%; 63/192), despite being the most recently proposed, in 1996. The second most used was the Cairo protocol (20.8%; 40/192). Of the studies using the Niamey protocol, 77.8% (49/63) used only the image-patterns element. Where ultrasound technology was specified, studies after 2000 were more likely to use convex transducers (43.4%; 33/76) than studies conducted before 2000 (32.7%; 16/49). Reporting on ultrasound-based hepatic diagnoses and their association with clinical severity was poor. Just over half of studies (56.2%; 108/192) reported the personnel acquiring the ultrasound images. A small number (9.4%; 18/192) of studies detailed their methods of image quality assurance, and 13.0% (25/192) referenced, discussed or quantified the inter- or intra-observer variation of the staging system used. CONCLUSIONS: The exclusive use of image patterns in many studies despite the lack of specific acquisition guidance, the increasing number of studies over time that conduct ultrasound staging of schistosomal PPF, and the advances in ultrasound technology since 2000 all indicate a need to update the Niamey protocol. The update should simplify and prioritise what is to be assessed, advise on who is to conduct the ultrasound examination, and specify procedures for improved standardisation and external reproducibility.


Subject(s)
Point-of-Care Systems, Schistosomiasis, Humans, Reproducibility of Results, Liver Cirrhosis/diagnostic imaging, Ultrasonography, Schistosomiasis/diagnostic imaging
4.
Ultrasound Med Biol ; 50(6): 805-816, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38467521

ABSTRACT

OBJECTIVE: Automated medical image analysis solutions should closely mimic complete human actions to be useful in clinical practice. More often, however, an automated image analysis solution represents only part of a human task, which restricts its practical utility. In the case of ultrasound-based fetal biometry, an automated solution should ideally recognize key fetal structures during freehand video guidance, select a standard plane from the video stream, and perform biometry. A complete automated solution should automate all three subactions. METHODS: In this article, we consider how to automate the complete human action of first-trimester biometry measurement from real-world freehand ultrasound. In the proposed hybrid convolutional neural network (CNN) architecture design, a classification regression-based guidance model detects and tracks fetal anatomical structures (using visual cues) in the ultrasound video. Several high-quality standard planes that contain the mid-sagittal view of the fetus are sampled at multiple time stamps (using a custom-designed confident-frame detector) based on the estimated probability values associated with predicted anatomical structures that define the biometry plane. Automated semantic segmentation is performed on the selected frames to extract fetal anatomical landmarks. A crown-rump length (CRL) estimate is calculated as the mean CRL from these multiple frames. RESULTS: Our fully automated method has a high correlation with clinical expert CRL measurement (Pearson's ρ = 0.92, R² = 0.84) and a low mean absolute error of 0.834 weeks for fetal age estimation on a test data set of 42 videos. CONCLUSION: A novel algorithm for standard plane detection employs a quality detection mechanism defined by clinical standards, ensuring precise biometric measurements.
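A minimal sketch of the multi-frame CRL pooling step described above (the confident-frame detector is a custom network in the paper; the probability threshold below is a hypothetical stand-in):

```python
import numpy as np

def mean_crl(frame_probs, frame_crls, threshold=0.9):
    """Pool CRL measurements over confidently detected standard planes.

    frame_probs: per-frame probability that the frame is a valid
                 mid-sagittal biometry plane (from the guidance model).
    frame_crls:  per-frame CRL measurements in mm (from segmentation).
    Returns the mean CRL over frames whose probability exceeds the
    threshold, mirroring the paper's multi-frame averaging step.
    """
    frame_probs = np.asarray(frame_probs)
    frame_crls = np.asarray(frame_crls)
    keep = frame_probs >= threshold
    if not keep.any():          # no confident frame found in this video
        return None
    return float(frame_crls[keep].mean())

print(mean_crl([0.95, 0.4, 0.97], [61.2, 55.0, 62.0]))  # 61.6
```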


Subject(s)
Biometry, First Pregnancy Trimester, Prenatal Ultrasonography, Humans, Prenatal Ultrasonography/methods, Female, Pregnancy, Biometry/methods, Computer-Assisted Image Processing/methods, Neural Networks (Computer), Fetus/diagnostic imaging, Fetus/anatomy & histology
6.
Med Image Anal ; 90: 102977, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37778101

ABSTRACT

In obstetric sonography, the quality of acquisition of ultrasound scan video is crucial for accurate (manual or automated) biometric measurement and fetal health assessment. However, the nature of fetal ultrasound involves free-hand probe manipulation and this can make it challenging to capture high-quality videos for fetal biometry, especially for the less-experienced sonographer. Manually checking the quality of acquired videos would be time-consuming, subjective and requires a comprehensive understanding of fetal anatomy. Thus, it would be advantageous to develop an automatic quality assessment method to support video standardization and improve diagnostic accuracy of video-based analysis. In this paper, we propose a general and purely data-driven video-based quality assessment framework which directly learns a distinguishable feature representation from high-quality ultrasound videos alone, without anatomical annotations. Our solution effectively utilizes both spatial and temporal information of ultrasound videos. The spatio-temporal representation is learned by a bi-directional reconstruction between the video space and the feature space, enhanced by a key-query memory module proposed in the feature space. To further improve performance, two additional modalities are introduced in training which are the sonographer gaze and optical flow derived from the video. Two different clinical quality assessment tasks in fetal ultrasound are considered in our experiments, i.e., measurement of the fetal head circumference and cerebellar diameter; in both of these, low-quality videos are detected by the large reconstruction error in the feature space. Extensive experimental evaluation demonstrates the merits of our approach.
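One plausible rendering of the final decision rule (the paper flags low quality via large feature-space reconstruction error; the mean-plus-k-sigma threshold below is our assumption, not the paper's):

```python
import numpy as np

def flag_low_quality(recon_errors, train_errors, k=3.0):
    """Flag clips whose feature-space reconstruction error is anomalous.

    The model is trained on high-quality videos only, so a large
    reconstruction error signals a low-quality acquisition. Here the
    threshold is mean + k*std of errors on held-out high-quality clips.
    """
    thr = np.mean(train_errors) + k * np.std(train_errors)
    return np.asarray(recon_errors) > thr, thr

flags, thr = flag_low_quality([0.11, 0.92],
                              train_errors=[0.08, 0.10, 0.12, 0.09])
print(flags, round(thr, 3))  # [False  True] 0.142
```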


Subject(s)
Fetus, Prenatal Ultrasonography, Pregnancy, Female, Humans, Prenatal Ultrasonography/methods, Fetus/diagnostic imaging, Ultrasonography
7.
Nature ; 623(7985): 106-114, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37880365

ABSTRACT

Maturation of the human fetal brain should follow precisely scheduled structural growth and folding of the cerebral cortex for optimal postnatal function [1]. We present a normative digital atlas of fetal brain maturation based on a prospective international cohort of healthy pregnant women [2], selected using World Health Organization recommendations for growth standards [3]. Their fetuses were accurately dated in the first trimester, with satisfactory growth and neurodevelopment from early pregnancy to 2 years of age [4,5]. The atlas was produced using 1,059 optimal quality, three-dimensional ultrasound brain volumes from 899 of the fetuses and an automated analysis pipeline [6-8]. The atlas corresponds structurally to published magnetic resonance images [9], but with finer anatomical details in deep grey matter. The between-study site variability represented less than 8.0% of the total variance of all brain measures, supporting pooling data from the eight study sites to produce patterns of normative maturation. We have thereby generated an average representation of each cerebral hemisphere between 14 and 31 weeks' gestation with quantification of intracranial volume variability and growth patterns. Emergent asymmetries were detectable from as early as 14 weeks, with peak asymmetries in regions associated with language development and functional lateralization between 20 and 26 weeks' gestation. These patterns were validated in 1,487 three-dimensional brain volumes from 1,295 different fetuses in the same cohort. We provide a unique spatiotemporal benchmark of fetal brain maturation from a large cohort with normative postnatal growth and neurodevelopment.


Subject(s)
Brain, Fetal Development, Fetus, Preschool Child, Female, Humans, Pregnancy, Brain/anatomy & histology, Brain/embryology, Brain/growth & development, Fetus/embryology, Gestational Age, Gray Matter/anatomy & histology, Gray Matter/embryology, Gray Matter/growth & development, Healthy Volunteers, Internationality, Magnetic Resonance Imaging, Organ Size, Prospective Studies, World Health Organization, Three-Dimensional Imaging, Ultrasonography
8.
Med Image Anal ; 90: 102981, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37863638

ABSTRACT

In this work, we exploit multi-task learning to jointly predict the two decision-making processes of gaze movement and probe manipulation that an experienced sonographer would perform in routine obstetric scanning. A multimodal guidance framework, Multimodal-GuideNet, is proposed to detect the causal relationship between a real-world ultrasound video signal, synchronized gaze, and probe motion. The association between the multi-modality inputs is learned and shared through a modality-aware spatial graph that leverages useful cross-modal dependencies. By estimating the probability distribution of probe and gaze movements in real scans, the predicted guidance signals also allow for inter- and intra-sonographer variation and avoid prescribing a fixed scanning path. We validate the new multi-modality approach on three types of obstetric scanning examinations, and the results consistently outperform single-task learning under various guidance policies. To simulate a sonographer's attention on multi-structure images, we also explore multi-step estimation in gaze guidance; its visual results show that the prediction allows multiple gaze centers that are substantially aligned with underlying anatomical structures.


Subject(s)
Attention, Learning, Female, Pregnancy, Humans, Prenatal Ultrasonography, Ultrasonography
9.
Med Image Anal ; 90: 102935, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37716198

ABSTRACT

What makes few-shot learning desirable in medical image analysis is its efficient use of the labelled support image data to classify or segment new classes, a task that otherwise requires substantially more training images and expert annotations. This work describes a fully 3D prototypical few-shot segmentation algorithm, such that the trained networks can be effectively adapted to clinically interesting structures that are absent in training, using only a few labelled images from a different institute. First, to compensate for the widely recognised spatial variability between institutions in episodic adaptation of novel classes, a novel spatial registration mechanism is integrated into prototypical learning, consisting of a segmentation head and a spatial alignment module. Second, to assist the training with observed imperfect alignment, a support mask conditioning module is proposed to further utilise the annotation available from the support images. Extensive experiments are presented in an application of segmenting eight anatomical structures important for interventional planning, using a data set of 589 pelvic T2-weighted MR images, acquired at seven institutes. The results demonstrate the efficacy of each of the 3D formulation, the spatial registration, and the support mask conditioning, all of which made positive contributions independently or collectively. Compared with the previously proposed 2D alternatives, the few-shot segmentation performance was improved with statistical significance, regardless of whether the support data come from the same or different institutes.
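For orientation, a minimal prototypical segmentation step in 3D (masked average pooling plus cosine similarity); the paper's contributions (spatial registration and support mask conditioning) sit on top of a basic scheme like this, and the threshold rule below is purely illustrative:

```python
import numpy as np

def masked_average_prototype(feats, mask):
    """Prototype = masked average pooling of support features.

    feats: (C, D, H, W) feature volume from the support image.
    mask:  (D, H, W) binary mask of the novel class.
    """
    w = mask[None].astype(feats.dtype)
    return (feats * w).sum(axis=(1, 2, 3)) / (w.sum() + 1e-8)

def segment_query(feats_q, prototype, tau=0.1):
    """Label a query voxel foreground when its feature is close (in
    cosine similarity) to the class prototype; a single-prototype
    nearest-neighbour rule."""
    f = feats_q / (np.linalg.norm(feats_q, axis=0, keepdims=True) + 1e-8)
    p = prototype / (np.linalg.norm(prototype) + 1e-8)
    sim = np.einsum("cdhw,c->dhw", f, p)
    return sim > 1 - tau

rng = np.random.default_rng(0)
feats_s = rng.normal(size=(16, 4, 8, 8))
mask_s = rng.integers(0, 2, (4, 8, 8))
proto = masked_average_prototype(feats_s, mask_s)
pred = segment_query(rng.normal(size=(16, 4, 8, 8)), proto)
print(pred.shape)  # (4, 8, 8) binary prediction volume
```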

10.
Article in English | MEDLINE | ID: mdl-37665699

ABSTRACT

Monitoring the healthy development of a fetus requires accurate and timely identification of different maternal-fetal structures as they grow. To facilitate this objective in an automated fashion, we propose a deep-learning-based image classification architecture, the COMFormer, to classify maternal-fetal and brain anatomical structures present in 2-D fetal ultrasound (US) images. The proposed architecture classifies the two subcategories separately: maternal-fetal (abdomen, brain, femur, thorax, mother's cervix (MC), and others) and brain anatomical structures (trans-thalamic (TT), trans-cerebellum (TC), trans-ventricular (TV), and non-brain (NB)). Our proposed architecture relies on a transformer-based approach that leverages spatial and global features using a newly designed residual cross-variance attention block. This block introduces an advanced cross-covariance attention (XCA) mechanism to capture a long-range representation from the input using spatial (e.g., shape, texture, intensity) and global features. To build COMFormer, we used a large publicly available dataset (BCNatal) consisting of 12,400 images from 1,792 subjects. Experimental results show that COMFormer outperforms recent CNN- and transformer-based models, achieving 95.64% and 96.33% classification accuracy on maternal-fetal and brain anatomy, respectively.
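The residual cross-variance attention block builds on the cross-covariance attention (XCA) idea, in which attention is computed between channels rather than between tokens, so its cost scales with channel count rather than sequence length. A minimal numpy rendering of plain XCA (not the paper's full residual block) might look like:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def xca(q, k, v, tau=1.0):
    """Cross-covariance attention: a (d x d) attention map over
    channels instead of the usual (N x N) map over tokens.

    q, k, v: (N, d) token matrices; tau is a temperature (learnable in
    the original XCA formulation)."""
    qn = q / (np.linalg.norm(q, axis=0, keepdims=True) + 1e-8)  # per-channel L2 norm
    kn = k / (np.linalg.norm(k, axis=0, keepdims=True) + 1e-8)
    attn = softmax((kn.T @ qn) / tau, axis=-1)                  # (d, d)
    return v @ attn                                             # (N, d)

rng = np.random.default_rng(0)
q, k, v = (rng.normal(size=(196, 64)) for _ in range(3))
print(xca(q, k, v).shape)  # (196, 64)
```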


Subject(s)
Brain, Prenatal Ultrasonography, Female, Pregnancy, Humans, Brain/diagnostic imaging, Ultrasonography, Electric Power Supplies, Femur
11.
Proc Mach Learn Res ; 210: 184-198, 2023.
Article in English | MEDLINE | ID: mdl-37252341

ABSTRACT

We present a method for classifying human skill at fetal ultrasound scanning from eye-tracking and pupillary data of sonographers. Human skill characterization for this clinical task typically groups clinicians into skill levels, such as expert and beginner, based on years of professional experience; experts typically have more than 10 years and beginners between 0-5 years. In some cases, the groupings also include trainees who are not yet fully-qualified professionals. Prior work has considered eye movements, which necessitates separating eye-tracking data into events such as fixations and saccades. Our method makes no prior assumptions about the relationship between years of experience and skill, and does not require the separation of eye-tracking data into such events. Our best performing skill classification model achieves an F1 score of 98% and 70% for the expert and trainee classes, respectively. We also show that years of experience, used as a direct proxy for skill, is significantly correlated with sonographer expertise.
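For reference, the reported per-class F1 scores can be computed as follows (the labels and predictions below are invented purely for illustration):

```python
from sklearn.metrics import f1_score

# Per-class F1, the metric the paper reports (98% for experts, 70% for
# trainees). Toy data only.
y_true = ["expert", "expert", "trainee", "expert", "trainee", "beginner"]
y_pred = ["expert", "expert", "trainee", "expert", "beginner", "beginner"]

print(f1_score(y_true, y_pred,
               labels=["expert", "trainee", "beginner"],
               average=None))  # one F1 score per class, in label order
```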

12.
NPJ Digit Med ; 6(1): 36, 2023 Mar 09.
Article in English | MEDLINE | ID: mdl-36894653

ABSTRACT

Accurate estimation of gestational age is an essential component of good obstetric care and informs clinical decision-making throughout pregnancy. As the date of the last menstrual period is often unknown or uncertain, ultrasound measurement of fetal size is currently the best method for estimating gestational age. The calculation assumes an average fetal size at each gestational age. The method is accurate in the first trimester, but less so in the second and third trimesters as growth deviates from the average and variation in fetal size increases. Consequently, fetal ultrasound late in pregnancy has a wide margin of error of at least ±2 weeks' gestation. Here, we utilise state-of-the-art machine learning methods to estimate gestational age using only image analysis of standard ultrasound planes, without any measurement information. The machine learning model is based on ultrasound images from two independent datasets: one for training and internal validation, and another for external validation. During validation, the model was blinded to the ground truth of gestational age (based on a reliable last menstrual period date and confirmatory first-trimester fetal crown-rump length). We show that this approach compensates for increases in size variation and is accurate even in cases of intrauterine growth restriction. Our best machine-learning based model estimates gestational age with a mean absolute error of 3.0 days (95% CI, 2.9-3.2) and 4.3 days (95% CI, 4.1-4.5) in the second and third trimesters, respectively, which outperforms current ultrasound-based clinical biometry at these gestational ages. Our method for dating the pregnancy in the second and third trimesters is, therefore, more accurate than published methods.
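A sketch of the headline metric (MAE in days with a 95% CI); the paper does not state how its intervals were obtained, so the bootstrap below is just one standard choice, and the data are synthetic:

```python
import numpy as np

def mae_days(pred_ga, true_ga, n_boot=2000, seed=0):
    """Mean absolute error of gestational-age estimates in days, with a
    95% bootstrap confidence interval."""
    err = np.abs(np.asarray(pred_ga) - np.asarray(true_ga))
    rng = np.random.default_rng(seed)
    boots = [err[rng.integers(0, len(err), len(err))].mean()
             for _ in range(n_boot)]
    return err.mean(), np.percentile(boots, [2.5, 97.5])

rng = np.random.default_rng(1)
true = rng.uniform(98, 280, 500)        # 14-40 weeks, expressed in days
pred = true + rng.normal(0, 4, 500)     # synthetic estimator with ~4-day error
mae, ci = mae_days(pred, true)
print(round(mae, 1), np.round(ci, 1))
```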

13.
Med Image Anal ; 86: 102768, 2023 May.
Article in English | MEDLINE | ID: mdl-36857945

ABSTRACT

While Generative Adversarial Networks (GANs) can now reliably produce realistic images in a multitude of imaging domains, they are ill-equipped to model the thin, stochastic textures present in many large 3D fluorescence microscopy (FM) images acquired in biological research. This is especially problematic in neuroscience, where the lack of ground truth data impedes the development of automated image analysis algorithms for neurons and neural populations. We therefore propose an unpaired mesh-to-image translation methodology for generating volumetric FM images of neurons that come with paired ground truths. We start by learning unique FM styles efficiently through a Gramian-based discriminator. Then, we stylize 3D voxelized meshes of previously reconstructed neurons by successively generating slices. As a result, we effectively create a synthetic microscope and can acquire realistic FM images of neurons with control over the image content and imaging configurations. We demonstrate the feasibility of our architecture and its superior performance compared to state-of-the-art image translation architectures through a variety of texture-based metrics, unsupervised segmentation accuracy, and an expert opinion test. In this study, we use two synthetic FM datasets and two newly acquired FM datasets of retinal neurons.
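The Gramian-based discriminator presumably builds on the classic Gram-matrix texture statistic; a minimal version of that statistic follows (our reading of the abstract, not the paper's architecture):

```python
import numpy as np

def gram_matrix(feats):
    """Gramian (channel co-activation) matrix of a feature map, the
    standard texture/style descriptor.

    feats: (C, H, W) feature map -> (C, C) Gram matrix, normalised by
    the number of spatial positions."""
    c, h, w = feats.shape
    f = feats.reshape(c, h * w)
    return (f @ f.T) / (h * w)

feats = np.random.default_rng(0).normal(size=(8, 16, 16))
print(gram_matrix(feats).shape)  # (8, 8)
```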


Subject(s)
Microscopy, Surgical Mesh, Humans, Three-Dimensional Imaging/methods, Computer-Assisted Image Processing/methods, Neurons
14.
Ultrasound Med Biol ; 49(1): 106-121, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36241588

ABSTRACT

Ultrasound-based assistive tools are aimed at reducing the high skill needed to interpret a scan by providing automatic image guidance. This may encourage uptake of ultrasound (US) clinical assessments in rural settings in low- and middle-income countries (LMICs), where well-trained sonographers can be scarce. This paper describes a new method that automatically generates an assistive video overlay to provide image guidance to a user to assess placenta location. The user captures US video by following a sweep protocol that scans a U-shape on the lower maternal abdomen. The sweep trajectory is simple and easy to learn. We initially explore a 2-D embedding of placenta shapes, mapping manually segmented placentas in US video frames to a 2-D space. We map 2013 frames from 11 videos. This provides insight into the spectrum of placenta shapes that appear when using the sweep protocol. We propose classification of the placenta shapes from three observed clusters: complex, tip and rectangular. We use this insight to design an effective automatic segmentation algorithm, combining a U-Net with a CRF-RNN module to enhance segmentation performance with respect to placenta shape. The U-Net + CRF-RNN algorithm automatically segments the placenta and maternal bladder. We assess segmentation performance using both area and shape metrics. We report results comparable to the state-of-the-art for automatic placenta segmentation on the Dice metric, achieving 0.83 ± 0.15 evaluated on 2127 frames from 10 videos. We also qualitatively evaluate 78,308 frames from 135 videos, assessing if the anatomical outline is correctly segmented. We found that addition of the CRF-RNN improves over a baseline U-Net when faced with a complex placenta shape, which we observe in our 2-D embedding, up to 14% with respect to the percentage shape error. From the segmentations, an assistive video overlay is automatically constructed that (i) highlights the placenta and bladder, (ii) determines the lower placenta edge and highlights this location as a point and (iii) labels a 2-cm clearance on the lower placenta edge. The 2-cm clearance is chosen to satisfy current clinical guidelines. We propose to assess the placenta location by comparing the 2-cm region and the bottom of the bladder, which represents a coarse localization of the cervix. Anatomically, the bladder must sit above the cervix region. We present proof-of-concept results for the video overlay.
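For reference, the Dice metric on which the 0.83 ± 0.15 segmentation result is reported (toy masks below):

```python
import numpy as np

def dice(pred, gt, eps=1e-8):
    """Dice overlap between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    return 2.0 * np.logical_and(pred, gt).sum() / (pred.sum() + gt.sum() + eps)

a = np.zeros((64, 64), dtype=bool)
a[10:40, 10:40] = True          # "ground truth" placenta mask
b = np.zeros((64, 64), dtype=bool)
b[15:45, 15:45] = True          # shifted "prediction"
print(round(dice(a, b), 3))     # 0.694
```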


Subject(s)
Algorithms, Computer-Assisted Image Processing, Pregnancy, Female, Humans, Computer-Assisted Image Processing/methods, Ultrasonography, Urinary Bladder/diagnostic imaging, Placenta/diagnostic imaging
15.
IEEE Trans Med Imaging ; 42(5): 1301-1313, 2023 May.
Article in English | MEDLINE | ID: mdl-36455084

ABSTRACT

Obstetric ultrasound assessment of fetal anatomy in the first trimester of pregnancy is one of the less explored fields in obstetric sonography because of the paucity of guidelines on anatomical screening and the limited availability of data. This paper, for the first time, examines imaging proficiency and practices of first-trimester ultrasound scanning through analysis of full-length ultrasound video scans. Findings from this study provide insights to inform the development of more effective user-machine interfaces and targeted assistive technologies, as well as improvements in workflow protocols for first-trimester scanning. Specifically, this paper presents an automated framework to model operator clinical workflow from full-length routine first-trimester fetal ultrasound scan videos. The 2D+t convolutional neural network-based architecture proposed for video annotation incorporates transfer learning and spatio-temporal (2D+t) modelling to automatically partition an ultrasound video into semantically meaningful temporal segments based on the fetal anatomy detected in the video. The model results in a cross-validation A1 accuracy of 96.10%, F1 = 0.95, precision = 0.94 and recall = 0.95. Automated semantic partitioning of unlabelled video scans (n = 250) achieves a high correlation with expert annotations (ρ = 0.95, p = 0.06). Clinical workflow patterns, operator skill and its variability can be derived from the resulting representation using the detected anatomy labels, their order, and their distribution. It is shown that nuchal translucency (NT) is the most difficult standard plane to acquire, and most operators struggle to localize high-quality frames. Furthermore, it is found that newly qualified operators spend 25.56% more time on key biometry tasks than experienced operators.
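The semantic partitioning step reduces to grouping per-frame predictions into contiguous runs; a minimal sketch (in the paper the per-frame labels come from the 2D+t network, mocked here with a hand-written list):

```python
from itertools import groupby

def partition_video(frame_labels):
    """Collapse per-frame anatomy predictions into contiguous temporal
    segments (label, start_frame, end_frame), the representation from
    which workflow patterns can be read off."""
    segments, start = [], 0
    for label, run in groupby(frame_labels):
        n = len(list(run))
        segments.append((label, start, start + n - 1))
        start += n
    return segments

print(partition_video(["head", "head", "NT", "NT", "NT", "abdomen"]))
# [('head', 0, 1), ('NT', 2, 4), ('abdomen', 5, 5)]
```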


Subject(s)
Nuchal Translucency Measurement, Prenatal Ultrasonography, Pregnancy, Female, Humans, First Pregnancy Trimester, Workflow, Prenatal Ultrasonography/methods, Nuchal Translucency Measurement/methods, Machine Learning
16.
Br J Anaesth ; 130(2): 226-233, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36088136

ABSTRACT

BACKGROUND: Ultrasound-guided regional anaesthesia relies on the visualisation of key landmark, target, and safety structures on ultrasound. However, this can be challenging, particularly for inexperienced practitioners. Artificial intelligence (AI) is increasingly being applied to medical image interpretation, including ultrasound. In this exploratory study, we evaluated ultrasound scanning performance by non-experts in ultrasound-guided regional anaesthesia, with and without the use of an assistive AI device. METHODS: Twenty-one anaesthetists, all non-experts in ultrasound-guided regional anaesthesia, underwent a standardised teaching session in ultrasound scanning for six peripheral nerve blocks. All then performed a scan for each block; half of the scans were performed with AI assistance and half without. Experts assessed acquisition of the correct block view and correct identification of sono-anatomical structures on each view. Participants reported scan confidence, experts provided a global rating score of scan performance, and scans were timed. RESULTS: Experts assessed 126 ultrasound scans. Participants acquired the correct block view in 56/62 (90.3%) scans with the device compared with 47/62 (75.8%) without (P=0.031; two data points were lost). Correct identification of sono-anatomical structures on the view was 188/212 (88.7%) with the device compared with 161/208 (77.4%) without (P=0.002). There was no significant overall difference in participant confidence, expert global performance score, or scan time. CONCLUSIONS: Use of an assistive AI device was associated with improved ultrasound image acquisition and interpretation. Such technology holds potential to augment performance of ultrasound scanning for regional anaesthesia by non-experts, potentially expanding patient access to these techniques. CLINICAL TRIAL REGISTRATION: NCT05156099.


Subject(s)
Conduction Anesthesia, Nerve Block, Humans, Nerve Block/methods, Artificial Intelligence, Interventional Ultrasonography/methods, Conduction Anesthesia/methods, Ultrasonography
17.
Br J Anaesth ; 130(2): 217-225, 2023 Feb.
Article in English | MEDLINE | ID: mdl-35987706

ABSTRACT

BACKGROUND: Ultrasound is used to identify anatomical structures during regional anaesthesia and to guide needle insertion and injection of local anaesthetic. ScanNav Anatomy Peripheral Nerve Block (Intelligent Ultrasound, Cardiff, UK) is an artificial intelligence-based device that produces a colour overlay on real-time B-mode ultrasound to highlight anatomical structures of interest. We evaluated the accuracy of the artificial-intelligence colour overlay and its perceived influence on the risk of adverse events or block failure. METHODS: Ultrasound-guided regional anaesthesia experts acquired 720 videos from 40 volunteers (across nine anatomical regions) without using the device. The artificial-intelligence colour overlay was subsequently applied. Three further experts independently reviewed each video (alongside the original unmodified video) to assess the accuracy of the colour overlay in relation to key anatomical structures (true positive/negative and false positive/negative) and the potential for the highlighting to modify the perceived risk of adverse events (needle trauma to nerves, arteries, pleura, and peritoneum) or block failure. RESULTS: The artificial-intelligence models identified the structure of interest in 93.5% of cases (1519/1624), with a false-negative rate of 3.0% (48/1624) and a false-positive rate of 3.5% (57/1624). Highlighting was judged to reduce the risk of unwanted needle trauma to nerves, arteries, pleura, and peritoneum in 62.9-86.4% of cases (302/480 to 345/400), and to increase the risk in 0.0-1.7% (0/160 to 8/480). Risk of block failure was reported to be reduced in 81.3% of scans (585/720) and to be increased in 1.8% (13/720). CONCLUSIONS: Artificial intelligence-based devices can potentially aid image acquisition and interpretation in ultrasound-guided regional anaesthesia. Further studies are necessary to demonstrate their effectiveness in supporting training and clinical practice. CLINICAL TRIAL REGISTRATION: NCT04906018.
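The headline rates follow directly from the reported counts (a simplified tally; the paper's full true/false positive/negative bookkeeping is richer than this):

```python
def review_rates(identified, false_neg, false_pos, total):
    """Headline rates from the expert review, each expressed as a
    fraction of all structure assessments (1,624 in the study)."""
    return {"identified": identified / total,
            "false_negative": false_neg / total,
            "false_positive": false_pos / total}

print(review_rates(identified=1519, false_neg=48, false_pos=57, total=1624))
# {'identified': 0.935..., 'false_negative': 0.029..., 'false_positive': 0.035...}
```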


Subject(s)
Conduction Anesthesia, Nerve Block, Humans, Nerve Block/methods, Artificial Intelligence, Interventional Ultrasonography/methods, Conduction Anesthesia/methods, Ultrasonography
18.
JMIR Hum Factors ; 9(4): e34823, 2022 Dec 27.
Article in English | MEDLINE | ID: mdl-36574278

ABSTRACT

BACKGROUND: Ultrasound for gestational age (GA) assessment is not routinely available in resource-constrained settings, particularly in rural and remote locations. The TraCer device combines a handheld wireless ultrasound probe and a tablet with artificial intelligence (AI)-enabled software that obtains GA from videos of the fetal head by automated measurements of the fetal transcerebellar diameter and head circumference. OBJECTIVE: The aim of this study was to assess the perceptions of pregnant women, their families, and health care workers regarding the feasibility and acceptability of the TraCer device in an appropriate setting. METHODS: A descriptive study using qualitative methods was conducted in two public health facilities in Kilifi county in coastal Kenya prior to introduction of the new technology. Study participants were shown a video role-play of the use of TraCer at a typical antenatal clinic visit. Data were collected through 6 focus group discussions (N=52) and 18 in-depth interviews. RESULTS: Overall, TraCer was found to be highly acceptable to women, their families, and health care workers, and its implementation at health care facilities was considered to be feasible. Its introduction was predicted to reduce anxiety regarding fetal well-being, increase antenatal care attendance, increase confidence by women in their care providers, as well as save time and cost by reducing unnecessary referrals. TraCer was felt to increase the self-image of health care workers and reduce time spent providing antenatal care. Some participants expressed hesitancy toward the new technology, indicating the need to test its performance over time before full acceptance by some users. The preferred cadre of health care professionals to use the device was antenatal clinic nurses. Important implementation considerations included adequate staff training and the need to ensure sustainability and consistency of the service. Misconceptions were common, with a tendency to overestimate the device's diagnostic capability and an expectation that it would provide complete reassurance of fetal and maternal well-being rather than primarily an assessment of GA. CONCLUSIONS: This study shows a positive attitude toward TraCer and highlights the potential role of this innovation that uses AI-enabled automation to assess GA. Clarity of messaging about the tool and its role in pregnancy is essential to address misconceptions and prevent misuse. Further research on clinical validation and related usability and safety evaluations is recommended.

19.
Med Image Anal ; 82: 102630, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36223683

ABSTRACT

In this work, we present a novel gaze-assisted natural language processing (NLP)-based video captioning model to describe routine second-trimester fetal ultrasound scan videos in a vocabulary of spoken sonography. The primary novelty of our multi-modal approach is that the learned video captioning model is built using a combination of ultrasound video, tracked gaze and textual transcriptions from speech recordings. The textual captions that describe the spatio-temporal scan video content are learnt from sonographer speech recordings. The generation of captions is assisted by sonographer gaze-tracking information reflecting their visual attention while performing live-imaging and interpreting a frozen image. To evaluate the effect of adding, or withholding, different forms of gaze on the video model, we compare spatio-temporal deep networks trained using three multi-modal configurations, namely: (1) a gaze-less neural network with only text and video as input, (2) a neural network additionally using real sonographer gaze in the form of attention maps, and (3) a neural network using automatically-predicted gaze in the form of saliency maps instead. We assess algorithm performance through established general text-based metrics (BLEU, ROUGE-L, F1 score), a domain-specific metric (ARS), and metrics that consider the richness and efficiency of the generated captions with respect to the scan video. Results show that the proposed gaze-assisted models can generate richer and more diverse captions for clinical fetal ultrasound scan videos than those without gaze at the expense of the perceived sentence structure. The results also show that the generated captions are similar to sonographer speech in terms of discussing the visual content and the scanning actions performed.
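As an illustration of the text-based evaluation, BLEU for a generated caption can be computed with NLTK (the sentences below are invented; the paper additionally reports ROUGE-L, F1 and a domain-specific ARS metric):

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Toy scoring of a generated caption against a sonographer transcription.
reference = "this is the fetal head at the trans ventricular plane".split()
candidate = "fetal head shown at the trans ventricular plane".split()

smooth = SmoothingFunction().method1  # avoids zero scores on short texts
print(sentence_bleu([reference], candidate, smoothing_function=smooth))
```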


Subject(s)
Algorithms, Neural Networks (Computer), Humans, Pregnancy, Female, Prenatal Ultrasonography
20.
Reg Anesth Pain Med ; 47(12): 762-772, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36283714

ABSTRACT

Recent recommendations describe a set of core anatomical structures to identify on ultrasound for the performance of basic blocks in ultrasound-guided regional anesthesia (UGRA). This project aimed to generate consensus recommendations for core structures to identify during the performance of intermediate and advanced blocks. An initial longlist of structures was refined by an international panel of key opinion leaders in UGRA over a three-round Delphi process. All rounds were conducted virtually and anonymously. Blocks were considered twice in each round: for "orientation scanning" (the dynamic process of acquiring the final view) and for "block view" (which visualizes the block site and is maintained for needle insertion/injection). A "strong recommendation" was made if ≥75% of participants rated any structure as "definitely include" in any round. A "weak recommendation" was made if >50% of participants rated it as "definitely include" or "probably include" for all rounds, but the criterion for strong recommendation was never met. Structures which did not meet either criterion were excluded. Forty-one participants were invited and 40 accepted; 38 completed all three rounds. Participants considered the ultrasound scanning for 19 peripheral nerve blocks across all three rounds. Two hundred and seventy-four structures were reviewed for both orientation scanning and block view; a "strong recommendation" was made for 60 structures on orientation scanning and 44 on the block view. A "weak recommendation" was made for 107 and 62 structures, respectively. These recommendations are intended to help standardize teaching and research in UGRA and support widespread and consistent practice.
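The consensus rules translate directly into a small decision procedure; a sketch using made-up rating fractions:

```python
def classify_structure(round_ratings, strong=0.75, weak=0.50):
    """Apply the Delphi consensus rules described above.

    round_ratings: one dict per Delphi round giving the fraction of
    participants answering 'definitely include' and 'probably include'
    for a structure."""
    if any(r["definitely"] >= strong for r in round_ratings):
        return "strong recommendation"
    if all(r["definitely"] + r["probably"] > weak for r in round_ratings):
        return "weak recommendation"
    return "exclude"

rounds = [{"definitely": 0.60, "probably": 0.20},
          {"definitely": 0.55, "probably": 0.10},
          {"definitely": 0.58, "probably": 0.15}]
print(classify_structure(rounds))  # weak recommendation
```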


Subject(s)
Conduction Anesthesia, Interventional Ultrasonography, Humans, Ultrasonography, Peripheral Nerves/diagnostic imaging