2.
Echocardiography ; 32(2): 302-9, 2015 Feb.
Article in English | MEDLINE | ID: mdl-24924997

ABSTRACT

BACKGROUND: Three-dimensional fusion echocardiography (3DFE) is a novel postprocessing approach that utilizes imaging data acquired from multiple 3D acquisitions. We assessed image quality, endocardial border definition, and cardiac wall motion in patients using 3DFE compared to standard 3D images (3D) and results obtained with contrast echocardiography (2DC). METHODS: Twenty-four patients (mean age 66.9 ± 13 years, 17 males, 7 females) undergoing 2DC had three, noncontrast, 3D apical volumes acquired at rest. Images were fused using an automated image fusion approach. Quality of the 3DFE was compared to both 3D and 2DC based on contrast-to-noise ratio (CNR) and endocardial border definition. We then compared clinical wall-motion score index (WMSI) calculated from 3DFE and 3D to those obtained from 2DC images. RESULTS: Fused 3D volumes had significantly improved CNR (8.92 ± 1.35 vs. 6.59 ± 1.19, P < 0.0005) and segmental image quality (2.42 ± 0.99 vs. 1.93 ± 1.18, P < 0.005) compared to unfused 3D acquisitions. Levels achieved were closer to scores for 2D contrast images (CNR: 9.04 ± 2.21, P = 0.6; segmental image quality: 2.91 ± 0.37, P < 0.005). WMSI calculated from fused 3D volumes did not differ significantly from those obtained from 2D contrast echocardiography (1.06 ± 0.09 vs. 1.07 ± 0.15, P = 0.69), whereas unfused images produced significantly more variable results (1.19 ± 0.30). This was confirmed by a better intraclass correlation coefficient (ICC 0.72; 95% CI 0.32-0.88) relative to comparisons with unfused images (ICC 0.56; 95% CI 0.02-0.81). CONCLUSION: 3DFE significantly improves left ventricular image quality compared to unfused 3D in a patient population and allows noncontrast assessment of wall motion that approaches that achieved with 2D contrast echocardiography.
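A central quantity in the comparison above is the contrast-to-noise ratio between the ventricular cavity and the myocardium. The study's exact ROI definition and noise model are not given here, so the sketch below only illustrates how such a CNR could be computed from two hypothetical region masks:

```python
import numpy as np

def contrast_to_noise_ratio(image, cavity_mask, myocardium_mask):
    """Illustrative CNR: difference of regional means over myocardial signal spread.

    The regions, and the use of the myocardial standard deviation as the noise term,
    are assumptions for this sketch; the paper's definition may differ.
    """
    cavity = image[cavity_mask]
    myo = image[myocardium_mask]
    return np.abs(cavity.mean() - myo.mean()) / (myo.std() + 1e-9)

# toy usage with synthetic data
rng = np.random.default_rng(0)
img = rng.normal(50.0, 5.0, size=(128, 128))
img[40:80, 40:80] += 30.0                       # brighter "cavity" region
cavity = np.zeros_like(img, dtype=bool)
cavity[40:80, 40:80] = True
print(f"CNR = {contrast_to_noise_ratio(img, cavity, ~cavity):.2f}")
```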


Subject(s)
Contrast Media; Echocardiography, Three-Dimensional/methods; Heart Ventricles/diagnostic imaging; Image Processing, Computer-Assisted/methods; Ventricular Dysfunction, Left/diagnostic imaging; Aged; Echocardiography/methods; Female; Humans; Image Enhancement; Male; Observer Variation; Phospholipids; Reproducibility of Results; Sulfur Hexafluoride
3.
Ultrasound Med Biol ; 50(5): 703-711, 2024 05.
Article in English | MEDLINE | ID: mdl-38350787

ABSTRACT

OBJECTIVE: The aim of this study was to address the challenges posed by the manual labeling of fetal ultrasound images by introducing an unsupervised approach, the fetal ultrasound semantic clustering (FUSC) method. The primary objective was to automatically cluster a large volume of ultrasound images into various fetal views, reducing or eliminating the need for labor-intensive manual labeling. METHODS: The FUSC method was developed using a substantial data set comprising 88,063 images. The methodology involves an unsupervised clustering approach to categorize ultrasound images into diverse fetal views. The method's effectiveness was further evaluated on an additional, unseen data set consisting of 8187 images. The evaluation included assessment of clustering purity, and the entire process is detailed to provide insights into the method's performance. RESULTS: The FUSC method exhibited notable success, achieving >92% clustering purity on the evaluation data set of 8187 images. The results signify the feasibility of automatically clustering fetal ultrasound images without relying on manual labeling. The study showcases the potential of this approach in handling the large volume of ultrasound scans encountered in clinical practice, with implications for improving efficiency and accuracy in fetal ultrasound imaging. CONCLUSION: The findings of this investigation suggest that the FUSC method holds significant promise for the field of fetal ultrasound imaging. By automating the clustering of ultrasound images, this approach has the potential to reduce the manual labeling burden, making the process more efficient. The results pave the way for advanced automated labeling solutions, contributing to the enhancement of clinical practices in fetal ultrasound imaging. Our code is available at https://github.com/BioMedIA-MBZUAI/FUSC.
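Clustering purity, the metric reported above, assigns each cluster its majority ground-truth view and counts how many images agree with that label. A small self-contained sketch (the cluster assignments and view labels here are made-up placeholders, not the FUSC outputs):

```python
import numpy as np
from collections import Counter

def clustering_purity(cluster_ids, true_labels):
    """Fraction of samples whose cluster's majority label matches their own label."""
    cluster_ids = np.asarray(cluster_ids)
    true_labels = np.asarray(true_labels)
    correct = 0
    for c in np.unique(cluster_ids):
        labels_in_cluster = true_labels[cluster_ids == c]
        correct += Counter(labels_in_cluster).most_common(1)[0][1]
    return correct / len(true_labels)

# toy example: 3 clusters over 8 images with known fetal views
clusters = [0, 0, 0, 1, 1, 2, 2, 2]
views = ["head", "head", "abdomen", "femur", "femur", "heart", "heart", "head"]
print(clustering_purity(clusters, views))  # 0.75
```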


Subject(s)
Semantics; Ultrasonography, Prenatal; Pregnancy; Female; Humans; Pregnancy Trimester, Second; Ultrasonography, Prenatal/methods; Supervised Machine Learning; Cluster Analysis
4.
Sci Rep ; 14(1): 16464, 2024 07 16.
Article in English | MEDLINE | ID: mdl-39013934

ABSTRACT

The spread of antimicrobial resistance (AMR) leads to challenging complications, loss of human lives, and wasted medical resources, and is expected to worsen if the problem is not controlled. From a machine learning perspective, data-driven models could aid clinicians and microbiologists by anticipating resistance. Our study serves as the first attempt to harness deep learning (DL) techniques and the multimodal data available in electronic health records (EHR) for predicting AMR. In this work, we extensively preprocess the MIMIC-IV database to produce separate structured input sources for time-invariant and time-series data, customized to the AMR task. A multimodality fusion approach then merges the two modalities with clinical notes to determine resistance with respect to a given antibiotic or pathogen. By efficiently predicting AMR, our approach lays the foundation for deploying multimodal DL techniques in clinical practice, leveraging existing patient data.
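The fusion described above, static patient features, a time series, and a clinical-note representation merged before classification, can be pictured as a simple late-fusion network. The encoder choices, input sizes, and single-logit output below are assumptions for illustration, not the architecture used in the study:

```python
import torch
import torch.nn as nn

class MultimodalAMRClassifier(nn.Module):
    """Illustrative late fusion of static features, a time series and a note embedding."""
    def __init__(self, n_static=32, n_series_feats=16, n_note_dim=768, hidden=64):
        super().__init__()
        self.static_enc = nn.Sequential(nn.Linear(n_static, hidden), nn.ReLU())
        self.series_enc = nn.GRU(n_series_feats, hidden, batch_first=True)
        self.note_enc = nn.Sequential(nn.Linear(n_note_dim, hidden), nn.ReLU())
        self.head = nn.Linear(3 * hidden, 1)   # one logit: resistant vs. susceptible

    def forward(self, static_x, series_x, note_x):
        s = self.static_enc(static_x)
        _, h = self.series_enc(series_x)       # final hidden state summarizes the series
        n = self.note_enc(note_x)
        return self.head(torch.cat([s, h.squeeze(0), n], dim=-1))

model = MultimodalAMRClassifier()
logits = model(torch.randn(4, 32), torch.randn(4, 24, 16), torch.randn(4, 768))
print(logits.shape)  # torch.Size([4, 1])
```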


Subject(s)
Anti-Bacterial Agents; Electronic Health Records; Humans; Anti-Bacterial Agents/pharmacology; Anti-Bacterial Agents/therapeutic use; Deep Learning; Drug Resistance, Bacterial; Machine Learning
5.
Med Image Anal ; 92: 103047, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38157647

ABSTRACT

Nuclear detection, segmentation and morphometric profiling are essential in helping us further understand the relationship between histology and patient outcome. To drive innovation in this area, we set up a community-wide challenge using the largest available dataset of its kind to assess nuclear segmentation and cellular composition. Our challenge, named CoNIC, stimulated the development of reproducible algorithms for cellular recognition with real-time result inspection on public leaderboards. We conducted an extensive post-challenge analysis based on the top-performing models using 1,658 whole-slide images of colon tissue. With around 700 million detected nuclei per model, associated features were used for dysplasia grading and survival analysis, where we demonstrated that the challenge's improvement over the previous state-of-the-art led to significant boosts in downstream performance. Our findings also suggest that eosinophils and neutrophils play an important role in the tumour microenvironment. We release challenge models and WSI-level results to foster the development of further methods for biomarker discovery.


Subject(s)
Algorithms; Image Processing, Computer-Assisted; Humans; Image Processing, Computer-Assisted/methods; Cell Nucleus/pathology; Histological Techniques/methods
6.
Fetal Diagn Ther ; 34(3): 158-65, 2013.
Article in English | MEDLINE | ID: mdl-24051348

ABSTRACT

OBJECTIVE: To combine multiple 3D volumes of the same fetal femur into one composite image data set using image registration and wavelet-based fusion. Fused and single data sets were compared in terms of image quality and femur volume (FV) measurement repeatability. METHOD: In healthy pregnant volunteers, six volumes of the same femur were acquired and fused into a composite data set. Image quality scores were given to the fused and single data sets by an independent assessor in a blinded fashion; repeatability of FV measurement was assessed using coefficients of variation (CV), intraclass correlation coefficients (ICC) and Bland-Altman plots. RESULTS: Fusion was successful in 24 out of 25 cases. Median image quality score was 7/10 in fused data sets, compared to 6/10 in single data sets (p = 0.096). Repeatability of FV measurement was better in fused data sets (intraobserver CV 4.6% and ICC 0.987; interobserver CV 4.9%, ICC 0.985) compared to single ones (intraobserver CV 5.8%, ICC 0.977; interobserver CV 10.0%, ICC 0.931). The measured FV was significantly higher in fused data sets (mean FV 1.7 vs. 1.3 ml, p < 0.001). CONCLUSION: Image registration and wavelet-based fusion can improve image quality and FV repeatability; it also results in an increased FV measurement.
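Wavelet-based fusion of already-registered volumes can be sketched as decomposing each input, merging coefficients, and reconstructing. The 2-D example below uses PyWavelets with a maximum-absolute-coefficient rule; the registration step, the 3-D handling, and the paper's actual fusion rule are not reproduced:

```python
import numpy as np
import pywt

def wavelet_fuse(img_a, img_b, wavelet="db2", level=2):
    """Fuse two registered images by keeping the larger-magnitude wavelet coefficient."""
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    fused = [np.where(np.abs(ca[0]) >= np.abs(cb[0]), ca[0], cb[0])]
    for da, db in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))
    return pywt.waverec2(fused, wavelet)

rng = np.random.default_rng(1)
a = rng.normal(size=(64, 64))
b = a + rng.normal(scale=0.3, size=(64, 64))    # second, noisier acquisition
print(wavelet_fuse(a, b).shape)                 # (64, 64)
```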


Subject(s)
Femur/diagnostic imaging; Fetal Growth Retardation/diagnostic imaging; Biomarkers; Female; Femur/embryology; Humans; Imaging, Three-Dimensional/methods; Pregnancy; Ultrasonography, Prenatal/methods; Vitamin D Deficiency/diagnosis
7.
Pac Symp Biocomput ; 28: 263-274, 2023.
Article in English | MEDLINE | ID: mdl-36540983

ABSTRACT

We have gained access to vast amounts of multi-omics data thanks to Next Generation Sequencing. However, it is challenging to analyse this data due to its high dimensionality and the fact that much of it is not annotated. Lack of annotated data is a significant problem in machine learning, and Self-Supervised Learning (SSL) methods are typically used to deal with limited labelled data. However, there is a lack of studies that use SSL methods to exploit inter-omics relationships on unlabelled multi-omics data. In this work, we develop a novel and efficient pre-training paradigm that consists of various SSL components, including but not limited to contrastive alignment, data recovery from corrupted samples, and using one type of omics data to recover other omic types. Our pre-training paradigm improves performance on downstream tasks with limited labelled data. We show that our approach outperforms the state-of-the-art method in cancer type classification on the TCGA pan-cancer dataset in a semi-supervised setting. Moreover, we show that the encoders that are pre-trained using our approach can be used as powerful feature extractors even without fine-tuning. Our ablation study shows that the method is not overly dependent on any pretext task component. The network architectures in our approach are designed to handle missing omic types and multiple datasets for pre-training and downstream training. Our pre-training paradigm can be extended to perform zero-shot classification of rare cancers.
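One of the listed components, contrastive alignment between omics views of the same sample, is commonly implemented as an InfoNCE-style objective over paired embeddings. The sketch below is a generic version of that idea; the encoders, temperature, and the other pretext tasks from the paper are not reproduced:

```python
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(z_a, z_b, temperature=0.1):
    """InfoNCE-style loss: row i of z_a should match row i of z_b (same sample)."""
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature        # pairwise cosine similarities
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# toy example: embeddings of two omic views (e.g. expression and methylation) for 8 samples
z_expr, z_meth = torch.randn(8, 128), torch.randn(8, 128)
print(contrastive_alignment_loss(z_expr, z_meth).item())
```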


Subject(s)
Multiomics; Neoplasms; Humans; Computational Biology; Neoplasms/genetics; High-Throughput Nucleotide Sequencing; Supervised Machine Learning
8.
Bioengineering (Basel) ; 10(7)2023 Jul 24.
Article in English | MEDLINE | ID: mdl-37508906

ABSTRACT

Medical image segmentation is a vital healthcare endeavor requiring precise and efficient models for appropriate diagnosis and treatment. Vision transformer (ViT)-based segmentation models have shown great performance in accomplishing this task. However, to build a powerful backbone, the self-attention block of ViT requires large-scale pre-training data. The present method of modifying pre-trained models entails updating all or some of the backbone parameters. This paper proposes a novel fine-tuning strategy for adapting a pre-trained transformer-based segmentation model to data from a new medical center. This method introduces a small number of learnable parameters, termed prompts, into the input space (less than 1% of model parameters) while keeping the rest of the model parameters frozen. Extensive studies employing data from new unseen medical centers show that prompt-based fine-tuning of medical segmentation models provides excellent performance on the new-center data with a negligible drop on the old centers. Additionally, our strategy delivers great accuracy with minimal re-training on new-center data, significantly decreasing the computational and time costs of fine-tuning pre-trained models. Our source code will be made publicly available.
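The core mechanism, freezing the backbone and learning only a few prompt tokens in the input space, can be sketched as a thin wrapper around a transformer encoder. The embedding size, prompt count, and stand-in backbone below are assumptions; the paper's segmentation model and injection points are not reproduced:

```python
import torch
import torch.nn as nn

class PromptTunedEncoder(nn.Module):
    """Prepend a few learnable prompt tokens to a frozen ViT-style encoder."""
    def __init__(self, frozen_encoder, embed_dim=384, n_prompts=8):
        super().__init__()
        self.encoder = frozen_encoder
        for p in self.encoder.parameters():
            p.requires_grad = False                 # backbone stays frozen
        self.prompts = nn.Parameter(torch.zeros(1, n_prompts, embed_dim))
        nn.init.trunc_normal_(self.prompts, std=0.02)

    def forward(self, patch_tokens):                # (batch, n_patches, embed_dim)
        b = patch_tokens.size(0)
        tokens = torch.cat([self.prompts.expand(b, -1, -1), patch_tokens], dim=1)
        return self.encoder(tokens)

# toy frozen "backbone": one transformer layer standing in for the pre-trained encoder
backbone = nn.TransformerEncoderLayer(d_model=384, nhead=6, batch_first=True)
model = PromptTunedEncoder(backbone)
out = model(torch.randn(2, 196, 384))
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(out.shape, trainable)                         # only the prompts are trainable
```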

9.
Med Image Anal ; 90: 102989, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37827111

ABSTRACT

The number of studies on deep learning for medical diagnosis is expanding, and these systems are often claimed to outperform clinicians. However, only a few systems have shown medical efficacy. From this perspective, we examine a wide range of deep learning algorithms for the assessment of glioblastoma, a common and lethal brain tumor in older adults. Surgery, chemotherapy, and radiation are the standard treatments for glioblastoma patients. The methylation status of the MGMT promoter, a specific genetic sequence found in the tumor, affects chemotherapy's effectiveness. MGMT promoter methylation improves chemotherapy response and survival in several cancers. MGMT promoter methylation is determined by a tumor tissue biopsy, which is then genetically tested. This lengthy and invasive procedure increases the risk of infection and other complications. Thus, researchers have used deep learning models to examine the tumor from brain MRI scans to determine the MGMT promoter's methylation state. We employ deep learning models and one of the largest public MRI datasets, of 585 participants, to predict the methylation status of the MGMT promoter in glioblastoma tumors using MRI scans. We test these models using Grad-CAM, occlusion sensitivity, feature visualizations, and training loss landscapes. Our results show no correlation between these two, indicating that external cohort data should be used to verify these models' performance to assure the accuracy and reliability of deep learning systems in cancer diagnosis.
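Occlusion sensitivity, one of the probes listed above, slides a masking patch over the input and records how much the prediction changes. A generic 2-D sketch with a placeholder scoring function (the study's 3-D MRI models and preprocessing are not included):

```python
import numpy as np

def occlusion_sensitivity(predict_fn, image, patch=16, stride=16, fill=0.0):
    """Heatmap of the prediction drop when each image patch is occluded."""
    base = predict_fn(image)
    h, w = image.shape
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    heatmap = np.zeros((rows, cols))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = fill
            heatmap[i, j] = base - predict_fn(occluded)   # large value = influential region
    return heatmap

# toy "model": the score is just the mean intensity of a central region
predict = lambda img: float(img[24:40, 24:40].mean())
img = np.random.default_rng(2).random((64, 64))
print(occlusion_sensitivity(predict, img).shape)          # (4, 4)
```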


Subject(s)
Brain Neoplasms; Deep Learning; Glioblastoma; Humans; Aged; Glioblastoma/diagnostic imaging; Glioblastoma/genetics; Methylation; Reproducibility of Results; DNA Modification Methylases/genetics; DNA Modification Methylases/metabolism; DNA Modification Methylases/therapeutic use; Brain Neoplasms/diagnostic imaging; Brain Neoplasms/genetics; Magnetic Resonance Imaging/methods; Tumor Suppressor Proteins/genetics; Tumor Suppressor Proteins/metabolism; Tumor Suppressor Proteins/therapeutic use; DNA Repair Enzymes/genetics; DNA Repair Enzymes/metabolism; DNA Repair Enzymes/therapeutic use
10.
NPJ Digit Med ; 6(1): 36, 2023 Mar 09.
Article in English | MEDLINE | ID: mdl-36894653

ABSTRACT

Accurate estimation of gestational age is an essential component of good obstetric care and informs clinical decision-making throughout pregnancy. As the date of the last menstrual period is often unknown or uncertain, ultrasound measurement of fetal size is currently the best method for estimating gestational age. The calculation assumes an average fetal size at each gestational age. The method is accurate in the first trimester, but less so in the second and third trimesters as growth deviates from the average and variation in fetal size increases. Consequently, fetal ultrasound late in pregnancy has a wide margin of error of at least ±2 weeks' gestation. Here, we utilise state-of-the-art machine learning methods to estimate gestational age using only image analysis of standard ultrasound planes, without any measurement information. The machine learning model is based on ultrasound images from two independent datasets: one for training and internal validation, and another for external validation. During validation, the model was blinded to the ground truth of gestational age (based on a reliable last menstrual period date and confirmatory first-trimester fetal crown rump length). We show that this approach compensates for increases in size variation and is even accurate in cases of intrauterine growth restriction. Our best machine-learning based model estimates gestational age with a mean absolute error of 3.0 (95% CI, 2.9-3.2) and 4.3 (95% CI, 4.1-4.5) days in the second and third trimesters, respectively, which outperforms current ultrasound-based clinical biometry at these gestational ages. Our method for dating the pregnancy in the second and third trimesters is, therefore, more accurate than published methods.
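The headline accuracy is a mean absolute error in days with a 95% confidence interval. The form of that statistic (not its value) can be reproduced by bootstrapping per-case absolute errors; the synthetic data below stand in for real predictions:

```python
import numpy as np

def mae_with_ci(y_true, y_pred, n_boot=2000, alpha=0.05, seed=0):
    """Mean absolute error with a percentile bootstrap confidence interval."""
    rng = np.random.default_rng(seed)
    errors = np.abs(np.asarray(y_true) - np.asarray(y_pred))
    boots = [errors[rng.integers(0, len(errors), len(errors))].mean()
             for _ in range(n_boot)]
    lo, hi = np.quantile(boots, [alpha / 2, 1 - alpha / 2])
    return errors.mean(), (lo, hi)

# synthetic example: true vs. predicted gestational age in days
rng = np.random.default_rng(3)
ga_true = rng.uniform(98, 280, size=500)          # roughly 14-40 weeks
ga_pred = ga_true + rng.normal(0, 4, size=500)    # hypothetical model error
mae, (lo, hi) = mae_with_ci(ga_true, ga_pred)
print(f"MAE = {mae:.1f} days (95% CI {lo:.1f}-{hi:.1f})")
```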

11.
Ultrasound Med Biol ; 47(12): 3470-3479, 2021 12.
Article in English | MEDLINE | ID: mdl-34538535

ABSTRACT

The aims of this work were to create a robust automatic software tool for measurement of the levator hiatal area on transperineal ultrasound (TPUS) volumes and to measure the potential reduction in variability and time taken for analysis in a clinical setting. The proposed tool automatically detects the C-plane (i.e., the plane of minimal hiatal dimensions) from a 3-D TPUS volume and subsequently uses the extracted plane to automatically segment the levator hiatus, using a convolutional neural network. The automatic pipeline was tested using 73 representative TPUS volumes. Reference hiatal outlines were obtained manually by two experts and compared with the pipeline's automated outlines. The Hausdorff distance, area, a clinical quality score, C-plane angle and C-plane Euclidean distance were used to evaluate C-plane detection and quantify levator hiatus segmentation accuracy. A visual Turing test was created to compare the performance of the software with that of the expert, based on the visual assessment of C-plane and hiatal segmentation quality. The overall time taken to extract the hiatal area with both measurement methods (i.e., manual and automatic) was measured. Each metric was calculated both for computer-observer differences and for inter- and intra-observer differences. The automatic method gave results similar to those of the expert when determining the hiatal outline from a TPUS volume. Indeed, the hiatal areas measured by the algorithm and by an expert were within the intra-observer variability. Similarly, the method identified the C-plane with an accuracy of 5.76 ± 5.06° and 6.46 ± 5.18 mm in comparison to the inter-observer variability of 9.39 ± 6.21° and 8.48 ± 6.62 mm. The visual Turing test suggested that the automatic method identified the C-plane position within the TPUS volume visually as well as the expert. The average time taken to identify the C-plane and segment the hiatal area manually was 2 min and 35 ± 17 s, compared with 35 ± 4 s for the automatic result. This study presents a method for automatically measuring the levator hiatal area using artificial intelligence-based methodologies whereby the C-plane within a TPUS volume is detected and subsequently traced for the levator hiatal outline. The proposed solution was determined to be accurate, relatively quick, robust and reliable and, importantly, to reduce the time and expertise required for pelvic floor disorder assessment.
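One of the agreement metrics above, the Hausdorff distance between manual and automatic hiatal outlines, has a direct SciPy implementation. The contours below are made-up ellipses used only to show the call:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff_distance(contour_a, contour_b):
    """Symmetric Hausdorff distance between two point sets (e.g., hiatal outlines)."""
    return max(directed_hausdorff(contour_a, contour_b)[0],
               directed_hausdorff(contour_b, contour_a)[0])

# toy contours: two slightly different ellipses sampled at 100 points (units: mm)
t = np.linspace(0, 2 * np.pi, 100)
manual = np.c_[30 * np.cos(t), 20 * np.sin(t)]
auto = np.c_[31 * np.cos(t), 19 * np.sin(t)]
print(f"Hausdorff distance = {hausdorff_distance(manual, auto):.2f} mm")
```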


Subject(s)
Pelvic Floor; Valsalva Maneuver; Artificial Intelligence; Humans; Imaging, Three-Dimensional; Pelvic Floor/diagnostic imaging; Ultrasonography
12.
J Med Imaging (Bellingham) ; 7(1): 014501, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31956665

ABSTRACT

Obstetric ultrasound is a fundamental ingredient of modern prenatal care with many applications including accurate dating of a pregnancy, identifying pregnancy-related complications, and diagnosis of fetal abnormalities. However, despite its many benefits, two factors currently prevent wide-scale uptake of this technology for point-of-care clinical decision-making in low- and middle-income country (LMIC) settings. First, there is a steep learning curve for scan proficiency, and second, there has been a lack of easy-to-use, affordable, and portable ultrasound devices. We introduce a framework toward addressing these barriers, enabled by recent advances in machine learning applied to medical imaging. The framework is designed to be realizable as a point-of-care ultrasound (POCUS) solution with an affordable wireless ultrasound probe, a smartphone or tablet, and automated machine-learning-based image processing. Specifically, we propose a machine-learning-based algorithm pipeline designed to automatically estimate the gestational age of a fetus from a short fetal ultrasound scan. We present proof-of-concept evaluation of accuracy of the key image analysis algorithms for automatic head transcerebellar plane detection, automatic transcerebellar diameter measurement, and estimation of gestational age on conventional ultrasound data simulating the POCUS task and discuss next steps toward translation via a first application on clinical ultrasound video from a low-cost ultrasound probe.

13.
J Med Imaging (Bellingham) ; 7(5): 057001, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32968691

ABSTRACT

Purpose: We present an original method for simulating realistic fetal neurosonography images, specifically generating third-trimester pregnancy ultrasound images from second-trimester images. Our method was developed using unpaired data, as pairwise data were not available. We also report original insights on the general appearance differences between second- and third-trimester fetal head transventricular (TV) plane images. Approach: We design a cycle-consistent adversarial network (Cycle-GAN) to simulate visually realistic third-trimester images from unpaired second- and third-trimester ultrasound images. Simulation realism is evaluated qualitatively by experienced sonographers who blindly graded real and simulated images. A quantitative evaluation is also performed whereby a validated deep-learning-based image recognition algorithm (ScanNav®) acts as the expert reference to allow hundreds of real and simulated images to be automatically analyzed and compared efficiently. Results: Qualitative evaluation shows that the human expert cannot tell the difference between real and simulated third-trimester scan images: 84.2% of the simulated third-trimester images could not be distinguished from the real third-trimester images. As a quantitative baseline, on 3000 images, the visibility drop of the choroid, CSP, and mid-line falx between real second- and real third-trimester scans was computed by ScanNav® and found to be 72.5%, 61.5%, and 67%, respectively. The visibility drop of the same structures between real second-trimester and simulated third-trimester images was found to be 77.5%, 57.7%, and 56.2%, respectively. Therefore, the real and simulated third-trimester images were considered to be visually similar to each other. Our evaluation also shows that the third-trimester simulations of a conventional GAN are much easier to distinguish, and the visibility drop of the structures is smaller than with our proposed method. Conclusions: The results confirm that it is possible to simulate realistic third-trimester images from second-trimester images using a modified Cycle-GAN, which may be useful for deep learning researchers with restricted availability of third-trimester scans but access to ample second-trimester images. We also show convincing simulation improvements, both qualitatively and quantitatively, using the Cycle-GAN method compared with a conventional GAN. Finally, the use of a machine-learning-based reference (in this case ScanNav®) for large-scale quantitative image analysis evaluation is, to our knowledge, also a first.
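The Cycle-GAN referred to above is trained with adversarial losses plus a cycle-consistency term that penalises an image for not returning to itself after mapping to the other trimester and back. The sketch below shows only that consistency term, with trivial convolutional stand-ins for the generators; the adversarial losses, discriminators, and real networks are omitted:

```python
import torch
import torch.nn as nn

def cycle_consistency_loss(g_ab, g_ba, real_a, real_b, weight=10.0):
    """L1 penalty for images that do not survive a full A->B->A (and B->A->B) cycle."""
    l1 = nn.L1Loss()
    rec_a = g_ba(g_ab(real_a))    # 2nd trimester -> simulated 3rd -> back to 2nd
    rec_b = g_ab(g_ba(real_b))    # 3rd trimester -> simulated 2nd -> back to 3rd
    return weight * (l1(rec_a, real_a) + l1(rec_b, real_b))

# toy generators standing in for the real Cycle-GAN generators
g_ab = nn.Conv2d(1, 1, kernel_size=3, padding=1)
g_ba = nn.Conv2d(1, 1, kernel_size=3, padding=1)
a = torch.rand(2, 1, 128, 128)    # batch of second-trimester image patches
b = torch.rand(2, 1, 128, 128)    # batch of third-trimester image patches
print(cycle_consistency_loss(g_ab, g_ba, a, b).item())
```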

14.
Phys Med Biol ; 64(18): 185010, 2019 09 17.
Article in English | MEDLINE | ID: mdl-31408850

ABSTRACT

The first-trimester fetal ultrasound scan is important to confirm fetal viability, to estimate the gestational age of the fetus, and to detect fetal anomalies early in pregnancy. First-trimester ultrasound images have a different appearance from second-trimester images, reflecting the earlier stage of fetal development. There is limited literature on automation of image-based assessment for this earlier trimester, and most of it focuses on one specific fetal anatomy. In this paper, we consider automation to support first-trimester fetal assessment of multiple fetal anatomies, including both visualization and measurements, from a single 3D ultrasound scan. We present a deep learning and image processing solution (i) to perform semantic segmentation of the whole fetus, (ii) to estimate plane orientation for standard biometry views, (iii) to localize and automatically estimate biometry, and (iv) to detect fetal limbs from a 3D first-trimester volume. Computational analysis methods were built using a real-world dataset (n = 44 volumes). An evaluation on a further independent clinical dataset (n = 21 volumes) showed that the automated methods approached human expert assessment of a 3D volume.


Subject(s)
Fetal Development; Fetus/diagnostic imaging; Gestational Age; Image Processing, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Ultrasonography, Prenatal/methods; Abdomen/diagnostic imaging; Algorithms; Female; Head/diagnostic imaging; Humans; Pregnancy; Pregnancy Trimester, First
15.
Med Image Anal ; 46: 1-14, 2018 05.
Article in English | MEDLINE | ID: mdl-29499436

ABSTRACT

Methods for aligning 3D fetal neurosonography images must be robust to (i) intensity variations, (ii) anatomical and age-specific differences within the fetal population, and (iii) the variations in fetal position. To this end, we propose a multi-task fully convolutional neural network (FCN) architecture to address the problem of 3D fetal brain localization, structural segmentation, and alignment to a referential coordinate system. Instead of treating these tasks as independent problems, we optimize the network by simultaneously learning features shared within the input data pertaining to the correlated tasks, and later branching out into task-specific output streams. Brain alignment is achieved by defining a parametric coordinate system based on skull boundaries, location of the eye sockets, and head pose, as predicted from intracranial structures. This information is used to estimate an affine transformation to align a volumetric image to the skull-based coordinate system. Co-alignment of 140 fetal ultrasound volumes (age range: 26.0 ± 4.4 weeks) was achieved with high brain overlap and low eye localization error, regardless of gestational age or head size. The automatically co-aligned volumes show good structural correspondence between fetal anatomies.
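The multi-task idea, a shared trunk whose features branch into task-specific streams, can be pictured with a toy 3-D network. The layer sizes, the three heads, and the 12-parameter affine output below are illustrative assumptions rather than the paper's architecture:

```python
import torch
import torch.nn as nn

class MultiTaskFetalBrainNet(nn.Module):
    """Shared 3-D trunk with heads for segmentation, localization and pose/affine output."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv3d(32, n_classes, 1)      # structural segmentation
        self.loc_head = nn.Conv3d(32, 1, 1)              # brain localization mask
        self.pose_head = nn.Sequential(                  # 12 affine parameters
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(32, 12))

    def forward(self, x):
        f = self.trunk(x)
        return self.seg_head(f), self.loc_head(f), self.pose_head(f)

net = MultiTaskFetalBrainNet()
seg, loc, pose = net(torch.randn(1, 1, 32, 32, 32))
print(seg.shape, loc.shape, pose.shape)
```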


Subject(s)
Brain/diagnostic imaging; Brain/embryology; Imaging, Three-Dimensional/methods; Neural Networks, Computer; Neuroimaging/methods; Ultrasonography, Prenatal/methods; Adult; Algorithms; Female; Gestational Age; Humans; Image Processing, Computer-Assisted/methods; Pregnancy
16.
Ultrasound Med Biol ; 43(12): 2925-2933, 2017 12.
Article in English | MEDLINE | ID: mdl-28958729

ABSTRACT

During routine ultrasound assessment of the fetal brain for biometry estimation and detection of fetal abnormalities, accurate imaging planes must be found by sonologists following a well-defined imaging protocol or clinical standard, which can be difficult for non-experts to do well. This assessment helps provide accurate biometry estimation and the detection of possible brain abnormalities. We describe a machine-learning method to assess automatically that transventricular ultrasound images of the fetal brain have been correctly acquired and meet the required clinical standard. We propose a deep learning solution, which breaks the problem down into three stages: (i) accurate localization of the fetal brain, (ii) detection of regions that contain structures of interest and (iii) learning the acoustic patterns in the regions that enable plane verification. We evaluate the developed methodology on a large real-world clinical data set of 2-D mid-gestation fetal images. We show that the automatic verification method approaches human expert assessment.


Subject(s)
Brain/diagnostic imaging; Brain/embryology; Image Processing, Computer-Assisted/methods; Machine Learning; Neural Networks, Computer; Ultrasonography, Prenatal/methods; Female; Humans; Pregnancy
17.
Avicenna J Med ; 7(1): 23-27, 2017.
Article in English | MEDLINE | ID: mdl-28182034

ABSTRACT

AIM OF THE STUDY: Coronary artery bypass graft surgery is the gold standard for the treatment of multivessel and left main coronary artery disease. However, there is considerable debate as to whether the left internal mammary artery (IMA) should be harvested as a pedicled or a skeletonized graft. This study was conducted to assess the difference in blood flow after the application of a topical vasodilator in skeletonized and pedicled IMA grafts. MATERIALS AND METHODS: In this study, each patient underwent either skeletonized (n = 25) or pedicled IMA harvesting (n = 25). The type of graft for each patient was decided randomly. Intraoperative variables such as conduit length and blood flow were measured by the operating surgeon. The length of the grafted IMA was carefully determined in vivo, with the proximal and distal ends attached, from the first rib to the IMA divergence. The IMA flow was measured on two separate occasions, before and after application of the topical vasodilator. Known cases of subclavian artery stenosis and previous sternal radiation were excluded from the study. RESULTS: The blood flow before the application of the topical vasodilator was similar in both groups (P = 0.227). However, the flow was significantly lower in the pedicled than in the skeletonized IMA after application of the vasodilator (P < 0.0001). Similarly, the length of the skeletonized graft was significantly greater than that of the pedicled graft (P < 0.0001). CONCLUSION: Our study indicates that skeletonization of the IMA results in increased graft length and blood flow after the application of a topical vasodilator. However, we recommend that long-term clinical trials be conducted to fully determine the long-term patency rates of skeletonized IMA grafts.

18.
Eur J Prev Cardiol ; 24(17): 1799-1806, 2017 11.
Article in English | MEDLINE | ID: mdl-28925747

ABSTRACT

Background: Ultrasound imaging is able to quantify carotid arterial wall structure for the assessment of cerebral and cardiovascular disease risks. We describe a protocol and quality assurance process to enable carotid imaging at large scale that has been developed for the UK Biobank Imaging Enhancement Study of 100,000 individuals. Design: An imaging protocol was developed to allow measurement of carotid intima-media thickness from the far wall of both common carotid arteries. Six quality assurance criteria were defined and a web-based interface (Intelligent Ultrasound) was developed to facilitate rapid assessment of images against each criterion. Results and conclusions: Excellent inter- and intra-observer agreement was obtained for image quality evaluations on a test dataset from 100 individuals. The image quality criteria were then applied in the UK Biobank Imaging Enhancement Study. Data from 2560 participants were evaluated. Feedback of results to the imaging team led to improvement in quality assurance, with quality assurance failures falling from 16.2% in the first two-month period examined to 6.4% in the last. Eighty per cent of participants had all carotid intima-media thickness images graded as being of acceptable quality, with at least one image acceptable for 98% of participants. Carotid intima-media thickness measures showed the expected associations with increasing age and gender. Carotid imaging can be performed consistently, with semi-automated quality assurance of all scans, in a limited timeframe within a large-scale multimodality imaging assessment. Routine feedback of quality control metrics to operators can improve the quality of the data collection.


Subject(s)
Carotid Arteries/diagnostic imaging; Carotid Artery Diseases/diagnostic imaging; Carotid Intima-Media Thickness/standards; Clinical Protocols/standards; Quality Assurance, Health Care/standards; Quality Improvement/standards; Quality Indicators, Health Care/standards; Aged; Data Collection/standards; Female; Humans; Male; Middle Aged; Observer Variation; Predictive Value of Tests; Prognosis; Program Development; Program Evaluation; Reproducibility of Results; United Kingdom
19.
IEEE J Biomed Health Inform ; 20(4): 1120-8, 2016 07.
Article in English | MEDLINE | ID: mdl-26011873

ABSTRACT

The parasagittal (PS) plane is a 2-D diagnostic plane used routinely in cranial ultrasonography of the neonatal brain. This paper develops a novel approach to find the PS plane in a 3-D fetal ultrasound scan to allow image-based biomarkers to be tracked from prebirth through the first weeks of postbirth life. We propose an accurate plane-finding solution based on regression forests (RF). The method initially localizes the fetal brain and its midline automatically. The midline on several axial slices is used to detect the midsagittal plane, which is used as a constraint in the proposed RF framework to detect the PS plane. The proposed learning algorithm guides the RF learning method in a novel way by: 1) using informative voxels and voxel informative strength as a weighting within the training stage objective function, and 2) introducing regularization of the RF by proposing a geometrical feature within the training stage. Results on clinical data indicate that the new automated method is more reproducible than manual plane finding obtained by two clinicians.
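The idea of weighting training samples by an "informative strength" has a simple analogue in off-the-shelf tooling: per-sample weights passed to a random forest regressor. The scikit-learn sketch below, on synthetic features, illustrates only that weighting mechanism, not the paper's bespoke RF framework or its geometric regularization:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)
X = rng.normal(size=(1000, 10))                          # stand-in voxel features
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=1000)     # stand-in regression target

# hypothetical "informative strength": trust some samples more than others
informative_strength = 1.0 / (0.1 + np.abs(rng.normal(size=1000)))

forest = RandomForestRegressor(n_estimators=100, random_state=0)
forest.fit(X, y, sample_weight=informative_strength)
print(forest.predict(X[:3]))
```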


Subject(s)
Brain/diagnostic imaging; Fetus/diagnostic imaging; Imaging, Three-Dimensional/methods; Ultrasonography, Prenatal/methods; Female; Humans; Pregnancy; Regression Analysis; Signal Processing, Computer-Assisted
20.
J Pak Med Assoc ; 55(10): 439-43, 2005 Oct.
Article in English | MEDLINE | ID: mdl-16304853

ABSTRACT

OBJECTIVE: To prepare good-quality screening cell reagents, according to standards, at the Armed Forces Institute of Transfusion (AFIT). METHODS: Random group O donors, seronegative for HBsAg, HCV and HIV, were selected if they resided in Rawalpindi or Islamabad and could be contacted. A microcolumn gel technique was used to identify R1R1, R1wr, R2R2 and rr phenotypes with or without the K antigen. Repeat samples from these donors were phenotyped for the minimum antigens required for reagent cells. Sets of three donors each were assembled on the basis of Rh and K antigens and homozygosity for the E, Fya, Fyb, Jka, Jkb, S, and s antigens. The selected cells were added to a preservative suspension containing neomycin and chloramphenicol, dispensed as an 8% cell suspension, and labeled. Cells underwent quality control testing over a 35-day shelf life, and their efficacy was compared with commercial cells. RESULTS: Cells of the required phenotypes were prepared according to UK guidelines and AABB standards with minor exceptions. The reagent cells were of excellent quality, as confirmed by multiple quality control procedures, and were comparable to commercial cells in efficacy. The cost saving was significant. CONCLUSION: AFIT can introduce a type-and-screen policy and a Maximum Surgical Blood Ordering Schedule using indigenously prepared cells of good quality at an affordable price. This will enhance the serological safety of recipients and bring AFIT closer to adopting standard pretransfusion testing practice.


Subject(s)
ABO Blood-Group System; Autoantibodies/analysis; Blood Grouping and Crossmatching/standards; Erythrocytes/immunology; Hospitals, Military; Autoantibodies/immunology; Follow-Up Studies; Humans; Pakistan; Practice Guidelines as Topic; Reproducibility of Results