Results 1 - 20 of 108
1.
Phys Med Biol ; 69(15), 2024 Jul 19.
Article in English | MEDLINE | ID: mdl-38981593

ABSTRACT

Objective. Head and neck radiotherapy planning requires electron densities from different tissues for dose calculation. Dose calculation from imaging modalities such as MRI remains an unsolved problem because this modality does not provide information about electron density. Approach. We propose a generative adversarial network (GAN) approach that synthesizes CT (sCT) images from T1-weighted MRI acquisitions in head and neck cancer patients. Our contribution is to exploit new features relevant to multimodal image synthesis, thereby improving the quality of the generated CT images. More precisely, we propose a dual-branch generator based on the U-Net architecture and an augmented multi-planar branch. The augmented branch learns specific 3D dynamic features, which describe dynamic image shape variations and are extracted from different viewpoints of the volumetric input MRI. The architecture of the proposed model relies on an end-to-end convolutional U-Net embedding network. Results. The proposed model achieves a mean absolute error (MAE) of 18.76 ± 5.167 in the target Hounsfield unit (HU) space on sagittal head and neck acquisitions, with a mean structural similarity (MSSIM) of 0.95 ± 0.09 and a Fréchet inception distance (FID) of 145.60 ± 8.38. The model yields a MAE of 26.83 ± 8.27 when generating specific primary tumor regions on axial patient acquisitions, with a Dice score of 0.73 ± 0.06 and a FID of 122.58 ± 7.55. The improvement of our model over other state-of-the-art GAN approaches is 3.8% on a tumor test set. On both sagittal and axial acquisitions, the model yields the best peak signal-to-noise ratios of 27.89 ± 2.22 and 26.08 ± 2.95 when synthesizing MRI from CT input. Significance. The proposed model synthesizes both sagittal and axial CT tumor images used for radiotherapy treatment planning in head and neck cancer cases.
The performance analysis across different imaging metrics and evaluation strategies demonstrates the effectiveness of our dual CT synthesis model in producing high-quality sCT images compared with other state-of-the-art approaches. Our model could improve clinical tumor analysis, although further clinical validation remains to be explored.
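The HU-space error metrics quoted above are straightforward to reproduce. A minimal numpy sketch follows; the function names and the ~4000 HU dynamic range used for PSNR are our assumptions for illustration, not values from the paper:

```python
import numpy as np

def mae_hu(sct, ct, mask=None):
    """Mean absolute error in Hounsfield units between a synthetic CT
    and the reference CT, optionally restricted to a body mask."""
    diff = np.abs(np.asarray(sct, float) - np.asarray(ct, float))
    if mask is not None:
        diff = diff[mask]
    return diff.mean()

def psnr_hu(sct, ct, data_range=4000.0):
    """Peak signal-to-noise ratio, assuming a ~4000 HU dynamic range."""
    mse = np.mean((np.asarray(sct, float) - np.asarray(ct, float)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

ct  = np.zeros((4, 4))          # toy reference CT
sct = np.full((4, 4), 10.0)     # toy synthetic CT, uniformly off by 10 HU
print(mae_hu(sct, ct))          # 10.0
```

On real volumes these metrics would be computed inside a body or tumor mask, as the abstract's region-specific figures suggest.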


Subject(s)
Head and Neck Neoplasms; Imaging, Three-Dimensional; Magnetic Resonance Imaging; Tomography, X-Ray Computed; Humans; Magnetic Resonance Imaging/methods; Head and Neck Neoplasms/diagnostic imaging; Head and Neck Neoplasms/radiotherapy; Imaging, Three-Dimensional/methods; Multimodal Imaging/methods; Neural Networks, Computer
2.
Can J Cardiol ; 2024 Jun 15.
Article in English | MEDLINE | ID: mdl-38885787

ABSTRACT

The potential of artificial intelligence (AI) in medicine lies in its ability to enhance clinicians' capacity to analyse medical images, thereby improving diagnostic precision and accuracy and thus enhancing current tests. However, the integration of AI within health care is fraught with difficulties. Heterogeneity among health care system applications, reliance on proprietary closed-source software, and rising cybersecurity threats pose significant challenges. Moreover, before their deployment in clinical settings, AI models must demonstrate their effectiveness across a wide range of scenarios and must be validated by prospective studies, but doing so requires testing in an environment mirroring the clinical workflow, which is difficult to achieve without dedicated software. Finally, the use of AI techniques in health care raises significant legal and ethical issues, such as the protection of patient privacy, the prevention of bias, and the monitoring of the device's safety and effectiveness for regulatory compliance. This review describes challenges to AI integration in health care and provides guidelines on how to move forward. We describe an open-source solution that we developed, called PACS-AI, which integrates AI models into the Picture Archiving and Communication System (PACS). This approach aims to expand the evaluation of AI models by facilitating their integration and validation with existing medical imaging databases. PACS-AI may overcome many current barriers to AI deployment and offer a pathway toward responsible, fair, and effective deployment of AI models in health care. In addition, we propose a list of criteria and guidelines that AI researchers should adopt when publishing a medical AI model to enhance standardisation and reproducibility.

3.
Can J Cardiol ; 2024 May 31.
Article in English | MEDLINE | ID: mdl-38825181

ABSTRACT

Large language models (LLMs) have emerged as powerful tools in artificial intelligence, demonstrating remarkable capabilities in natural language processing and generation. In this article, we explore the potential applications of LLMs in enhancing cardiovascular care and research. We discuss how LLMs can be used to simplify complex medical information, improve patient-physician communication, and automate tasks such as summarising medical articles and extracting key information. In addition, we highlight the role of LLMs in categorising and analysing unstructured data, such as medical notes and test results, which could revolutionise data handling and interpretation in cardiovascular research. However, we also emphasise the limitations and challenges associated with LLMs, including potential biases, reasoning opacity, and the need for rigorous validation in medical contexts. This review provides a practical guide for cardiovascular professionals to understand and harness the power of LLMs while navigating their limitations. We conclude by discussing the future directions and implications of LLMs in transforming cardiovascular care and research.

4.
NPJ Digit Med ; 7(1): 138, 2024 May 23.
Article in English | MEDLINE | ID: mdl-38783037

ABSTRACT

The coronary angiogram is the gold standard for evaluating the severity of coronary artery disease stenoses. Presently, the assessment is conducted visually by cardiologists, a method that lacks standardization. This study introduces DeepCoro, a ground-breaking AI-driven pipeline that integrates advanced vessel tracking and a video-based Swin3D model trained and validated on a dataset of 182,418 coronary angiography videos spanning 5 years. DeepCoro achieved a notable precision of 71.89% in identifying coronary artery segments and demonstrated a mean absolute error of 20.15% (95% CI: 19.88-20.40) and a classification AUROC of 0.8294 (95% CI: 0.8215-0.8373) in stenosis percentage prediction compared to traditional cardiologist assessments. When compared to two expert interventional cardiologists, DeepCoro achieved lower variability than the clinical reports (19.09%; 95% CI: 18.55-19.58 vs 21.00%; 95% CI: 20.20-21.76). In addition, DeepCoro can be fine-tuned to a different modality type. When fine-tuned on quantitative coronary angiography assessments, DeepCoro attained an even lower mean absolute error of 7.75% (95% CI: 7.37-8.07), underscoring the reduced variability inherent to this method. This study establishes DeepCoro as an innovative, adaptable video-based tool for coronary artery disease analysis, significantly enhancing the precision and reliability of stenosis assessment.
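The two headline metrics above, MAE on predicted stenosis percentage and AUROC for an obstructive/non-obstructive classification, can be sketched in plain numpy. The 70% "obstructive" threshold and the toy values below are our illustrative assumptions (the sketch also omits tie handling in the rank-based AUROC):

```python
import numpy as np

def mean_absolute_error(pred, true):
    return float(np.mean(np.abs(np.asarray(pred, float) - np.asarray(true, float))))

def auroc(scores, labels):
    """AUROC via the rank-sum (Mann-Whitney U) formulation; assumes no ties."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, int)
    order = scores.argsort()
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# toy stenosis percentages vs. a hypothetical 70% obstructive threshold
pred = [30.0, 55.0, 80.0, 90.0]
true = [25.0, 60.0, 75.0, 95.0]
labels = [int(t >= 70) for t in true]
print(mean_absolute_error(pred, true))  # 5.0
print(auroc(pred, labels))              # 1.0
```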

5.
Int J Comput Assist Radiol Surg ; 19(6): 1103-1111, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38573566

ABSTRACT

PURPOSE: Cancer confirmation in the operating room (OR) is crucial to improving local control in cancer therapies. Histopathological analysis remains the gold standard, but there is a lack of real-time in situ cancer confirmation to support the assessment of margins or remnant tissue. Raman spectroscopy (RS), as a label-free optical technique, has proven its power in cancer detection and, when integrated into a robotic assistance system, can positively impact the efficiency of procedures and the quality of life of patients by avoiding potential recurrence. METHODS: A workflow is proposed in which a 6-DOF robotic system (optical camera + MECA500 robotic arm) assists the characterization of fresh tissue samples using RS. Three calibration methods are compared for the robot, and temporal efficiency is compared with standard hand-held analysis. For healthy/cancerous tissue discrimination, a 1D convolutional neural network is proposed and tested on three ex vivo datasets (brain, breast, and prostate) containing processed RS and histopathology ground truth. RESULTS: The robot achieves a minimum error of 0.20 mm (0.12) on a set of 30 test landmarks and demonstrates significant time reduction in 4 of the 5 proposed tasks. The proposed classification model can identify brain, breast, and prostate cancer with an accuracy of 0.83 (0.02), 0.93 (0.01), and 0.71 (0.01), respectively. CONCLUSION: Automated RS analysis with deep learning demonstrates promising classification performance compared to commonly used support vector machines. Robotic assistance in tissue characterization can contribute to highly accurate, rapid, and robust biopsy analysis in the OR. These two elements are an important step toward real-time cancer confirmation using RS and OR integration.
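The building block of the 1D-CNN spectral classifier mentioned above is a one-dimensional convolution over the Raman spectrum. A hedged numpy sketch of that single layer follows; the spectrum, kernel, and function names are invented for illustration and are not the authors' trained network:

```python
import numpy as np

def conv1d(signal, kernel, stride=1):
    """Valid-mode 1D cross-correlation (what deep learning frameworks
    call 'convolution'), the basic layer of a 1D-CNN over spectra."""
    k = len(kernel)
    n = (len(signal) - k) // stride + 1
    return np.array([np.dot(signal[i * stride:i * stride + k], kernel)
                     for i in range(n)])

def relu(x):
    return np.maximum(x, 0.0)

spectrum = np.zeros(16)
spectrum[8] = 1.0                      # synthetic spectrum with one sharp peak
kernel = np.array([-1.0, 2.0, -1.0])   # second-difference filter: responds to peaks
features = relu(conv1d(spectrum, kernel))
print(features.shape)                  # (14,)
print(features.argmax())               # 7: response centred on the peak
```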


Subject(s)
Breast Neoplasms; Prostatic Neoplasms; Robotic Surgical Procedures; Spectrum Analysis, Raman; Humans; Spectrum Analysis, Raman/methods; Prostatic Neoplasms/pathology; Prostatic Neoplasms/diagnosis; Robotic Surgical Procedures/methods; Breast Neoplasms/pathology; Male; Female; Operating Rooms; Biopsy/methods; Brain Neoplasms/pathology; Brain Neoplasms/diagnosis
6.
Sci Robot ; 9(87): eadh8702, 2024 Feb 14.
Article in English | MEDLINE | ID: mdl-38354257

ABSTRACT

Using external actuation sources to navigate untethered drug-eluting microrobots in the bloodstream offers great promise for improving the selectivity of drug delivery, especially in oncology, but current field forces are difficult to maintain with enough strength inside the human body (>70-centimeter-diameter range) to achieve this operation. Here, we present an algorithm to predict the optimal patient position with respect to gravity during endovascular microrobot navigation. Magnetic resonance navigation, using magnetic field gradients in clinical magnetic resonance imaging (MRI), is combined with the algorithm to improve the targeting efficiency of magnetic microrobots (MMRs). Using a dedicated microparticle injector, a high-precision MRI-compatible balloon inflation system, and a clinical MRI, MMRs were successfully steered into targeted lobes via the hepatic arteries of living pigs. The distribution ratio of the microrobots (roughly 2000 MMRs per pig) increased from 47.7% to 86.4% in the right liver lobe and from 52.2% to 84.1% in the left lobe. After passing through multiple vascular bifurcations, the number of MMRs reaching four different target liver lobes showed a 1.7- to 2.6-fold increase in the navigation groups compared with the control group. Simulations on 19 patients with hepatocellular carcinoma (HCC) demonstrated that the proposed technique can meet the need for hepatic embolization in patients with HCC. Our technology offers a selectable direction for actuator-based navigation of microrobots at the human scale.


Subject(s)
Carcinoma, Hepatocellular; Liver Neoplasms; Robotics; Humans; Animals; Swine; Hepatic Artery/diagnostic imaging; Liver Neoplasms/diagnostic imaging
7.
Radiology ; 309(1): e230659, 2023 10.
Article in English | MEDLINE | ID: mdl-37787678

ABSTRACT

Background Screening for nonalcoholic fatty liver disease (NAFLD) is suboptimal due to the subjective interpretation of US images. Purpose To evaluate the agreement and diagnostic performance of radiologists and a deep learning model in grading hepatic steatosis in NAFLD at US, with biopsy as the reference standard. Materials and Methods This retrospective study included patients with NAFLD and control patients without hepatic steatosis who underwent abdominal US and contemporaneous liver biopsy from September 2010 to October 2019. Six readers visually graded steatosis on US images twice, 2 weeks apart. Reader agreement was assessed with use of κ statistics. Three deep learning techniques applied to B-mode US images were used to classify dichotomized steatosis grades. Classification performance of human radiologists and the deep learning model for dichotomized steatosis grades (S0, S1, S2, and S3) was assessed with area under the receiver operating characteristic curve (AUC) on a separate test set. Results The study included 199 patients (mean age, 53 years ± 13 [SD]; 101 men). On the test set (n = 52), radiologists had fair interreader agreement (0.34 [95% CI: 0.31, 0.37]) for classifying steatosis grades S0 versus S1 or higher, while AUCs were between 0.49 and 0.84 for radiologists and 0.85 (95% CI: 0.83, 0.87) for the deep learning model. For S0 or S1 versus S2 or S3, radiologists had fair interreader agreement (0.30 [95% CI: 0.27, 0.33]), while AUCs were between 0.57 and 0.76 for radiologists and 0.73 (95% CI: 0.71, 0.75) for the deep learning model. For S2 or lower versus S3, radiologists had fair interreader agreement (0.37 [95% CI: 0.33, 0.40]), while AUCs were between 0.52 and 0.81 for radiologists and 0.67 (95% CI: 0.64, 0.69) for the deep learning model. Conclusion Deep learning approaches applied to B-mode US images provided comparable performance with human readers for detection and grading of hepatic steatosis. Published under a CC BY 4.0 license. 
Supplemental material is available for this article. See also the editorial by Tuthill in this issue.
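The interreader agreement figures above are Cohen's κ on dichotomized steatosis grades. A minimal numpy sketch of the statistic (toy reader gradings, not the study's data):

```python
import numpy as np

def cohens_kappa(r1, r2):
    """Cohen's kappa for two readers' categorical ratings:
    observed agreement corrected for chance agreement."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    cats = np.union1d(r1, r2)
    po = np.mean(r1 == r2)                                        # observed
    pe = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in cats)   # by chance
    return (po - pe) / (1 - pe)

# toy dichotomized grades (e.g., S0 vs. S1 or higher) for two readers
reader_a = [0, 0, 1, 1, 1, 0, 1, 0]
reader_b = [0, 1, 1, 1, 0, 0, 1, 0]
print(cohens_kappa(reader_a, reader_b))  # 0.5 (moderate agreement)
```

Values around 0.3-0.4, as reported in the abstract, fall in the conventional "fair" agreement band.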


Subject(s)
Deep Learning; Elasticity Imaging Techniques; Non-alcoholic Fatty Liver Disease; Male; Humans; Middle Aged; Non-alcoholic Fatty Liver Disease/diagnostic imaging; Non-alcoholic Fatty Liver Disease/pathology; Liver/diagnostic imaging; Liver/pathology; Retrospective Studies; Elasticity Imaging Techniques/methods; ROC Curve; Biopsy/methods
8.
J Transl Med ; 21(1): 507, 2023 07 27.
Article in English | MEDLINE | ID: mdl-37501197

ABSTRACT

BACKGROUND: Finding a noninvasive radiomic surrogate of tumor immune features could help identify patients more likely to respond to novel immune checkpoint inhibitors. In particular, CD73 is an ectonucleotidase that catalyzes the breakdown of extracellular AMP into immunosuppressive adenosine, which can be blocked by therapeutic antibodies. High CD73 expression in colorectal cancer liver metastasis (CRLM) resected with curative intent is associated with early recurrence and shorter patient survival. The aim of this study was therefore to evaluate whether machine learning analysis of preoperative liver CT scans could estimate high vs low CD73 expression in CRLM and whether such a radiomic score would have prognostic significance. METHODS: We trained an Attentive Interpretable Tabular Learning (TabNet) model to predict, from preoperative CT images, stratified expression levels of CD73 (CD73High vs. CD73Low) assessed by immunofluorescence (IF) on tissue microarrays. Radiomic features were extracted from 160 segmented CRLM of 122 patients with matched IF data, preprocessed, and used to train the predictive model. We applied five-fold cross-validation and validated the performance on a hold-out test set. RESULTS: TabNet provided areas under the receiver operating characteristic curve of 0.95 (95% CI 0.87 to 1.0) and 0.79 (0.65 to 0.92) on the training and hold-out test sets respectively, and outperformed other machine learning models. The TabNet-derived score, termed rad-CD73, was positively correlated with CD73 histological expression in matched CRLM (Spearman's ρ = 0.6004; P < 0.0001). The median time to recurrence (TTR) and disease-specific survival (DSS) after CRLM resection in rad-CD73High vs rad-CD73Low patients was 13.0 vs 23.6 months (P = 0.0098) and 53.4 vs 126.0 months (P = 0.0222), respectively.
The prognostic value of rad-CD73 was independent of the standard clinical risk score, for both TTR (HR = 2.11, 95% CI 1.30 to 3.45, P < 0.005) and DSS (HR = 1.88, 95% CI 1.11 to 3.18, P = 0.020). CONCLUSIONS: Our findings reveal promising results for non-invasive CT-scan-based prediction of CD73 expression in CRLM and warrant further validation as to whether rad-CD73 could assist oncologists as a biomarker of prognosis and response to immunotherapies targeting the adenosine pathway.
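The correlation reported between rad-CD73 and histological CD73 expression is a Spearman rank correlation. A hedged numpy sketch (no tie correction; the toy scores below are ours, not study data):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.
    This simple version assumes no tied values."""
    def rank(v):
        order = np.argsort(v)
        r = np.empty(len(v))
        r[order] = np.arange(1, len(v) + 1)
        return r
    rx, ry = rank(np.asarray(x, float)), rank(np.asarray(y, float))
    return float(np.corrcoef(rx, ry)[0, 1])

# hypothetical radiomic scores vs. immunofluorescence CD73 intensities
rad_score = [0.1, 0.4, 0.35, 0.8, 0.7]
cd73_if   = [2.0, 5.0, 4.0, 9.0, 7.0]
print(spearman_rho(rad_score, cd73_if))  # 1.0: perfectly monotone toy data
```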


Subject(s)
Colorectal Neoplasms; Liver Neoplasms; Humans; Adenosine; Liver Neoplasms/diagnostic imaging; Prognosis; Retrospective Studies; Tomography, X-Ray Computed; 5'-Nucleotidase
9.
Phys Med Biol ; 68(12), 2023 06 15.
Article in English | MEDLINE | ID: mdl-37257456

ABSTRACT

Objective. Multi-parametric MR image synthesis is an effective approach for several clinical applications where specific modalities may be unavailable to reach a diagnosis. While technical and practical conditions limit the acquisition of new modalities for a patient, multimodal image synthesis combines multiple modalities to synthesize the desired modality. Approach. In this paper, we propose a new multi-parametric magnetic resonance imaging (MRI) synthesis model, which generates the target MRI modality from two other available modalities in pathological MR images. We first adopt a contrastive learning approach that trains an encoder network to extract a suitable feature representation of the target space. Secondly, we build a synthesis network that generates the target image from a common feature space that approximately matches the contrastive learned space of the target modality. We incorporate a bidirectional feature learning strategy that learns a multimodal feature matching function, in two opposite directions, to transform the augmented multichannel input into the learned target space. Overall, our training synthesis loss is expressed as the combination of a reconstruction loss and a bidirectional triplet loss using a pair of features. Main results. Compared to other state-of-the-art methods, the proposed model achieved average improvement rates of 3.9% and 3.6% on the IXI and BraTS'18 datasets, respectively. On the tumor BraTS'18 dataset, our model records the highest Dice score of 0.793 (0.04) for preserving the synthesized tumor regions in the segmented images. Significance. Validation of the proposed model on two public datasets confirms its efficiency in generating different MR contrasts and preserving tumor areas in the synthesized images. In addition, the model can flexibly generate head and neck CT images from MR acquisitions.
In future work, we plan to validate the model using interventional iMRI contrasts for MR-guided neurosurgery and radiotherapy applications. Clinical measurements will be collected during surgery to evaluate the model's performance.
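The triplet loss that appears in the training objective above has a standard margin-based form. This is a generic sketch on toy feature vectors, not the authors' exact bidirectional formulation:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Margin-based triplet loss on feature vectors: pull the positive
    (e.g., synthesized-contrast features) toward the anchor
    (target-contrast features) and push the negative away."""
    d_pos = np.sum((anchor - positive) ** 2)   # squared anchor-positive distance
    d_neg = np.sum((anchor - negative) ** 2)   # squared anchor-negative distance
    return max(0.0, d_pos - d_neg + margin)

a = np.array([1.0, 0.0])
p = np.array([1.0, 0.1])   # close to the anchor
n = np.array([-1.0, 0.0])  # far from the anchor
print(triplet_loss(a, p, n))  # 0.0: the margin is already satisfied
```

In training, this term would be combined with a reconstruction loss, mirroring the combined objective described in the abstract.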


Subject(s)
Deep Learning; Multiparametric Magnetic Resonance Imaging; Humans; Magnetic Resonance Imaging/methods; Machine Learning; Image Processing, Computer-Assisted/methods
10.
IEEE Trans Med Imaging ; 42(6): 1603-1618, 2023 06.
Article in English | MEDLINE | ID: mdl-37018252

ABSTRACT

Real-time motion management for image-guided radiation therapy plays an important role in accurate dose delivery. Forecasting future 4D deformations from in-plane image acquisitions is fundamental for accurate dose delivery and tumor targeting. However, anticipating visual representations is challenging and faces hurdles such as prediction from limited dynamics and the high dimensionality inherent in complex deformations. Also, existing 3D tracking approaches typically need both template and search volumes as inputs, which are not available during real-time treatments. In this work, we propose an attention-based temporal prediction network in which features extracted from input images are treated as tokens for the predictive task. Moreover, we employ a set of learnable queries, conditioned on prior knowledge, to predict future latent representations of deformations. Specifically, the conditioning scheme is based on estimated time-wise prior distributions computed from future images available during the training stage. Finally, we propose a new framework to address the problem of temporal 3D local tracking using cine 2D images as inputs, employing latent vectors as gating variables to refine the motion fields over the tracked region. The tracker module is anchored on a 4D motion model, which provides both the latent vectors and the volumetric motion estimates to be refined. Our approach avoids auto-regression and leverages spatial transformations to generate the forecasted images. The tracking module reduces the error by 63% compared to a conditional-based transformer 4D motion model, yielding a mean error of 1.5 ± 1.1 mm. Furthermore, for the studied cohort of abdominal 4D MRI images, the proposed method predicts future deformations with a mean geometrical error of 1.2 ± 0.7 mm.
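The learnable-query readout described above rests on scaled dot-product attention: queries attend over image-feature tokens to produce predicted latent vectors. A generic numpy sketch (token and query sizes are invented; this is not the authors' exact architecture):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # numerically stable
    return e / e.sum(axis=axis, keepdims=True)

def attention(queries, keys, values):
    """Scaled dot-product attention: each query forms a convex combination
    of the value vectors, weighted by query-key similarity."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ values

rng = np.random.default_rng(0)
tokens  = rng.normal(size=(8, 16))   # 8 feature tokens from input cine frames
queries = rng.normal(size=(2, 16))   # 2 learnable queries (here: random stand-ins)
out = attention(queries, tokens, tokens)
print(out.shape)  # (2, 16): one predicted latent vector per query
```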


Subject(s)
Magnetic Resonance Imaging; Radiotherapy, Image-Guided; Humans; Magnetic Resonance Imaging/methods; Radiotherapy, Image-Guided/methods; Motion; Abdomen
11.
Int J Comput Assist Radiol Surg ; 18(6): 971-979, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37103727

ABSTRACT

PURPOSE: During MR-guided neurosurgical procedures, several factors may limit the acquisition of additional MR sequences, which are needed by neurosurgeons to adjust surgical plans or ensure complete tumor resection. Automatically synthesized MR contrasts generated from other available heterogeneous MR sequences could alleviate timing constraints. METHODS: We propose a new multimodal MR synthesis approach that leverages a combination of MR modalities presenting glioblastomas to generate an additional modality. The proposed learning approach relies on a least-squares GAN (LSGAN) with an unsupervised contrastive learning strategy. We incorporate a contrastive encoder, which extracts an invariant contrastive representation from augmented pairs of the generated and real target MR contrasts. This contrastive representation describes a pair of features for each input channel, regularizing the generator to be invariant to high-frequency orientations. Moreover, when training the generator, we add to the LSGAN loss another term, reformulated as the combination of a reconstruction loss and a novel perception loss based on a pair of features. RESULTS: When compared to other multimodal MR synthesis approaches evaluated on the BraTS'18 brain dataset, the model yields the highest Dice score with [Formula: see text] and achieves the lowest variation of information of [Formula: see text], with a probabilistic Rand index of [Formula: see text] and a global consistency error of [Formula: see text]. CONCLUSION: The proposed model generates reliable MR contrasts with enhanced tumors on the synthesized image using a brain tumor dataset (BraTS'18). In future work, we will perform a clinical evaluation of residual tumor segmentations during MR-guided neurosurgeries, where limited MR contrasts will be acquired during the procedure.
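The LSGAN objective named above replaces the usual cross-entropy GAN losses with least-squares terms. A minimal numpy sketch of the two losses on toy discriminator outputs (real/fake target labels 1 and 0 are the standard LSGAN choice):

```python
import numpy as np

def lsgan_d_loss(d_real, d_fake):
    """Least-squares discriminator loss: push outputs on real images
    toward 1 and outputs on generated images toward 0."""
    return 0.5 * np.mean((d_real - 1.0) ** 2) + 0.5 * np.mean(d_fake ** 2)

def lsgan_g_loss(d_fake):
    """Least-squares generator loss: push discriminator outputs on
    generated images toward 1."""
    return 0.5 * np.mean((d_fake - 1.0) ** 2)

d_real = np.array([0.9, 1.0])   # toy discriminator scores on real contrasts
d_fake = np.array([0.1, 0.2])   # toy scores on synthesized contrasts
print(round(lsgan_d_loss(d_real, d_fake), 4))  # 0.015
print(round(lsgan_g_loss(d_fake), 4))          # 0.3625
```

In the paper's setting this adversarial term is combined with the reconstruction and perception losses described in the abstract.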


Subject(s)
Deep Learning; Image Processing, Computer-Assisted; Humans; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Least-Squares Analysis; Brain
12.
Clin Transl Radiat Oncol ; 39: 100590, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36935854

ABSTRACT

Head and neck radiotherapy induces significant toxicity, and its efficacy and tolerance vary widely across patients. Advancements in radiotherapy delivery techniques, along with the increased quality and frequency of image guidance, offer a unique opportunity to individualize radiotherapy based on imaging biomarkers, with the aim of improving radiation efficacy while reducing its toxicity. Various artificial intelligence models integrating clinical data and radiomics have shown encouraging results for predicting toxicity and cancer control outcomes in head and neck cancer radiotherapy. Clinical implementation of these models could enable individualized risk-based therapeutic decision making, but the reliability of the current studies is limited. Understanding, validating, and expanding these models to larger multi-institutional data sets and testing them in the context of clinical trials are needed to ensure safe clinical implementation. This review summarizes the current state of the art of machine learning models for prediction of head and neck cancer radiotherapy outcomes.

13.
Dev Neurosci ; 45(4): 210-222, 2023.
Article in English | MEDLINE | ID: mdl-36822171

ABSTRACT

Macrocephaly has been associated with neurodevelopmental disorders; however, it has mainly been studied in pathological or high-risk populations, and little is known about its impact, as an isolated trait, on brain development in the general population. Electroencephalographic (EEG) power spectral density (PSD) and signal complexity have been shown to be sensitive to neurodevelopment and its alterations. We aimed to investigate the impact of macrocephaly, as an isolated trait, on the EEG signal as measured by PSD and multiscale entropy during the first year of life. We recorded high-density EEG resting-state activity of 74 healthy full-term infants, 50 controls (26 girls) and 24 macrocephalic (12 girls), aged between 3 and 11 months. We used linear regression models to assess group and age effects on EEG PSD and signal complexity. Sex and brain volume measures, obtained via 3D transfontanellar ultrasound, were also included in the models to evaluate their contribution. Our results showed lower PSD in the low alpha (8-10 Hz) frequency band and lower complexity in the macrocephalic group compared to the control group. In addition, we found an increase in low alpha (8.5-10 Hz) PSD and in the complexity index with age. These findings suggest that macrocephaly as an isolated trait has a significant impact on brain activity during the first year of life.
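Band-limited PSD measures like the low-alpha power above can be estimated from a periodogram. A minimal numpy sketch on a synthetic 9 Hz rhythm (a single-epoch periodogram; the study would use a more robust estimator over many epochs and channels):

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Power in [f_lo, f_hi] Hz from a periodogram PSD estimate."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * len(signal))
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[band].sum()

fs = 256                                 # Hz, a typical EEG sampling rate
t = np.arange(0, 4, 1.0 / fs)            # 4 s epoch
eeg = np.sin(2 * np.pi * 9 * t)          # synthetic 9 Hz "low alpha" rhythm
low_alpha = band_power(eeg, fs, 8.0, 10.0)
delta     = band_power(eeg, fs, 1.0, 4.0)
print(low_alpha > 100 * delta)           # True: power concentrated in 8-10 Hz
```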


Subject(s)
Electroencephalography; Megalencephaly; Female; Humans; Infant; Entropy; Electroencephalography/methods; Brain
14.
Opt Express ; 31(1): 396-410, 2023 Jan 02.
Article in English | MEDLINE | ID: mdl-36606975

ABSTRACT

Intra-arterial catheter guidance is instrumental to the success of minimally invasive procedures such as percutaneous transluminal angioplasty. However, traditional device tracking methods, such as electromagnetic or infrared sensors, exhibit drawbacks such as magnetic interference or line-of-sight requirements. In this work, shape sensing of bends of different curvatures and lengths is demonstrated both asynchronously and in real time using optical frequency domain reflectometry (OFDR) with a polymer-extruded optical fiber triplet with enhanced backscattering properties. Simulations on digital phantoms showed that reconstruction accuracy is on the order of the interrogator's spatial resolution (millimeters) with sensing lengths of less than 1 m and a high SNR.
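The core of fiber shape sensing is integrating measured curvature along the fiber into a reconstructed centreline. A simplified planar (2D) sketch of that integration follows; the real triplet-based method recovers a 3D curve with torsion, so this is only the idea, not the authors' algorithm:

```python
import numpy as np

def shape_from_curvature(curvature, ds):
    """Integrate per-segment curvature (1/mm) into a planar (x, y)
    centreline: curvature -> tangent angle -> position (forward Euler)."""
    theta = np.concatenate([[0.0], np.cumsum(curvature * ds)])
    x = np.concatenate([[0.0], np.cumsum(np.cos(theta[:-1]) * ds)])
    y = np.concatenate([[0.0], np.cumsum(np.sin(theta[:-1]) * ds)])
    return x, y

# constant curvature 1/R over a quarter circle of radius 100 mm
R, n = 100.0, 1000
ds = (np.pi / 2) * R / n
x, y = shape_from_curvature(np.full(n, 1.0 / R), ds)
print(round(x[-1]), round(y[-1]))  # endpoint near (100, 100), as expected
```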


Subject(s)
Cannula; Optical Fibers; Catheters, Indwelling; Phantoms, Imaging; Polymers
15.
Ann Biomed Eng ; 51(5): 1028-1039, 2023 May.
Article in English | MEDLINE | ID: mdl-36580223

ABSTRACT

Four-dimensional (4D) flow magnetic resonance imaging (MRI) is a leading-edge imaging technique with numerous medical applications. In vitro 4D flow MRI can offer advantages over in vivo acquisitions, especially in accurately controlling the flow rate (gold standard), removing patient- and user-specific variations, and minimizing animal testing. Here, a complete testing method and a respiratory-motion-simulating platform are proposed for in vitro validation of 4D flow MRI. A silicone phantom based on the hepatic arteries of a living pig was made. Under free breathing, a human volunteer's liver motion (inferior-superior direction) was tracked using a pencil-beam MRI navigator, then extracted and converted into velocity-distance pairs to program the respiratory-motion-simulating platform. With a displacement magnitude of about 1.3 cm, the difference between the motions obtained from the volunteer and our platform is ≤ 1 mm, which is within the positioning error of the MRI navigator. The influence of the platform on the MRI signal-to-noise ratio can be eliminated even if the actuator is placed in the MRI room. The 4D flow measurement errors are respectively 0.4% (stationary phantom), 9.4% (gating window = 3 mm), 27.3% (gating window = 4 mm), and 33.1% (gating window = 7 mm). Vessel resolution decreased as the gating window increased. The low-cost simulation system, assembled from commercially available components, is easy to duplicate.
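Converting a navigator-tracked displacement trace into the velocity-distance pairs that drive the motion platform amounts to finite differencing. A hedged sketch (the sampling rate and toy trace are our assumptions; the actual conversion details are not given in the abstract):

```python
import numpy as np

def velocity_distance_pairs(displacement_mm, dt):
    """Convert a liver displacement trace (mm, sampled every dt seconds)
    into per-sample (velocity, distance) commands for a motion platform."""
    disp = np.asarray(displacement_mm, float)
    dist = np.diff(disp)       # incremental travel per sample (mm)
    vel = dist / dt            # commanded velocity (mm/s)
    return np.column_stack([vel, dist])

# hypothetical 1 Hz samples of an inferior-superior liver motion trace
trace = [0.0, 4.0, 10.0, 13.0, 13.0, 8.0, 2.0, 0.0]
pairs = velocity_distance_pairs(trace, dt=1.0)
print(pairs.shape)   # (7, 2): one (velocity, distance) pair per interval
print(pairs[1])      # [6. 6.]: 6 mm travelled at 6 mm/s in the second interval
```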


Subject(s)
Imaging, Three-Dimensional; Magnetic Resonance Imaging; Humans; Animals; Swine; Imaging, Three-Dimensional/methods; Magnetic Resonance Imaging/methods; Abdomen; Motion; Liver; Phantoms, Imaging
16.
IEEE Trans Biomed Eng ; 70(5): 1692-1703, 2023 05.
Article in English | MEDLINE | ID: mdl-36441884

ABSTRACT

OBJECTIVE: Minimally invasive revascularization procedures such as percutaneous transluminal angioplasty seek to treat occlusions in peripheral arteries. However, their ability to treat long occlusions is hampered by the difficulty of monitoring the location of intravascular devices such as guidewires with fluoroscopy, which requires continuous radiation and lacks the capacity to measure physiological characteristics such as laminar blood flow close to occlusions. Fiber optic technologies provide a means of tracking by measuring fibers under strain; however, they are limited to known geometrical models and are not used to measure external variations. METHODS: We present a navigation framework based on optical frequency domain reflectometry (OFDR) using fully distributed optical sensor gratings enhanced by ultraviolet exposure to track the three-dimensional shape and surrounding blood flow of intravascular guidewires. To process the strain information provided by the continuous gratings, a dual-branch model learning spatio-temporal features predicts the output measures from scattered wavelength distributions. The first network determines the 3D shape of the guidewire using the input backscattered wavelength shift data in combination with prior segmentations, while a second network (a graph temporal convolution network) produces estimates of vascular flow velocities using ground-truth 4D-flow MRI acquisitions.
RESULTS: Experiments performed on synthetic and animal models, as well as in a preliminary human trial, show the capability of the model to generate accurate 3D shape tracking and blood flow velocity differences below 2 cm/s, thus providing realistic physiological and anatomical properties for intravascular techniques. CONCLUSION AND SIGNIFICANCE: The study demonstrates the feasibility of using the device clinically; it could be integrated within revascularization workflows for treating arterial occlusions, since the navigation framework involves minimal manual intervention.


Subject(s)
Endovascular Procedures; Optical Fibers; Animals; Humans; Arteries; Fiber Optic Technology; Blood Flow Velocity
18.
Phys Med Biol ; 67(24), 2022 12 13.
Article in English | MEDLINE | ID: mdl-36223780

ABSTRACT

Objective. Multi-parametric magnetic resonance imaging (mpMRI) has become an important tool for the detection of prostate cancer over the past two decades. Despite the high sensitivity of MRI for tissue characterization, it often suffers from a lack of specificity. Several well-established pre-processing tools are publicly available for improving image quality and removing both intra- and inter-patient variability in order to increase the diagnostic accuracy of MRI. To date, most of these pre-processing tools have largely been assessed individually. In this study we present a systematic evaluation of a multi-step mpMRI pre-processing pipeline to automate tumor localization within the prostate using a previously trained model. Approach. The study was conducted on 31 treatment-naïve prostate cancer patients with a PI-RADS-v2 compliant mpMRI examination. Multiple methods were compared for each pre-processing step: (1) bias field correction, (2) normalization, and (3) deformable multi-modal registration. Optimal parameter values were estimated for each step on the basis of relevant individual metrics. Tumor localization was then carried out via a model-based approach that takes both mpMRI and prior clinical knowledge features as input. A sequential optimization approach was adopted for determining the optimal parameters and techniques in each step of the pipeline. Main results. The application of bias field correction alone increased the accuracy of tumor localization (area under the curve (AUC) = 0.77; p-value = 0.004) over unprocessed data (AUC = 0.74). Adding normalization to the pre-processing pipeline further improved the diagnostic accuracy of the model to an AUC of 0.85 (p-value = 0.00012). Multi-modal registration of apparent diffusion coefficient images to T2-weighted images improved the alignment of tumor locations in all but one patient, resulting in a slight decrease in accuracy (AUC = 0.84; p-value = 0.30). Significance. Overall, our findings suggest that the combined effect of multiple pre-processing steps with optimal values has the ability to improve the quantitative classification of prostate cancer using mpMRI. Clinical trials: NCT03378856 and NCT03367702.
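The per-step evaluation and greedy selection described in the abstract can be sketched in NumPy. This is an illustrative reconstruction, not the study's code: `zscore_normalize`, `auc`, and `pick_best_variant` are hypothetical names, and the actual bias field correction and deformable registration steps (which require dedicated imaging libraries) are omitted.

```python
import numpy as np

def zscore_normalize(img, mask):
    """Step (2), normalization: zero mean / unit variance inside a region of interest."""
    vals = img[mask]
    return (img - vals.mean()) / (vals.std() + 1e-8)

def auc(scores, labels):
    """Rank-based AUC (Mann-Whitney U), the metric used to score tumor localization."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    ranks = np.empty(len(scores))
    ranks[np.argsort(scores)] = np.arange(1, len(scores) + 1)
    n_pos, n_neg = labels.sum(), (~labels).sum()
    return (ranks[labels].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def pick_best_variant(variants, voxels, labels, mask):
    """Sequential optimization, one step at a time: keep whichever candidate
    method for the current pre-processing step yields the highest AUC."""
    return max(variants, key=lambda fn: auc(fn(voxels, mask)[mask], labels[mask]))
```

In the study this greedy loop runs over the three steps in order (bias field correction, then normalization, then registration), carrying the winner of each step forward.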


Subject(s)
Multiparametric Magnetic Resonance Imaging , Prostatic Neoplasms , Male , Humans , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/pathology , Magnetic Resonance Imaging/methods , Multiparametric Magnetic Resonance Imaging/methods , Prostate/pathology , Probability , Retrospective Studies
19.
Med Image Anal ; 82: 102624, 2022 11.
Article in English | MEDLINE | ID: mdl-36208571

ABSTRACT

An important challenge and limiting factor in deep learning methods for medical image segmentation is the lack of annotated data available to properly train models. For the specific task of tumor segmentation, the process entails clinicians labeling every slice of volumetric scans for every patient, which becomes prohibitive at the scale of datasets required to train neural networks to optimal performance. To address this, we propose a novel semi-supervised framework that allows training any segmentation (encoder-decoder) model using only information readily available in radiological data, namely the presence of a tumor in the image, in addition to a few annotated images. Specifically, we conjecture that a generative model performing domain translation on this weak label - healthy vs. diseased scans - helps achieve tumor segmentation. The proposed GenSeg method first disentangles tumoral tissue from healthy "background" tissue. The latent representation is separated into (1) the common background information across both domains, and (2) the unique tumoral information. GenSeg then achieves diseased-to-healthy image translation by decoding a healthy version of the image from just the common representation, together with a residual image that allows adding the tumors back. The same decoder that produces this residual tumor image also outputs a tumor segmentation. Implicit data augmentation is achieved by re-using the same framework for healthy-to-diseased image translation, where a residual tumor image is produced from a prior distribution. By performing both image translation and segmentation simultaneously, GenSeg allows training on only partially annotated datasets. To test the framework, we trained U-Net-like architectures using GenSeg and evaluated their performance on three variants of a synthetic task, as well as on two benchmark datasets: brain tumor segmentation in MRI (derived from BraTS) and liver metastasis segmentation in CT (derived from LiTS). Our method outperforms baseline semi-supervised (autoencoder and mean teacher) and supervised segmentation methods, with improvements of 8-14% in Dice score on the brain task and 5-8% on the liver task when only 1% of the training images were annotated. These results show the proposed framework is well suited to training deep segmentation models when a large portion of the available data is unlabeled and unpaired, a common issue in tumor segmentation.
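In GenSeg the healthy translation and the residual both come from learned decoders; the residual arithmetic that links translation to segmentation can nonetheless be sketched in a few lines of NumPy. This is a toy illustration under stated assumptions: a hand-made "healthy prediction" stands in for the decoder, and `residual_segmentation` is an illustrative name, not the authors' code.

```python
import numpy as np

def residual_segmentation(diseased, healthy_pred, thresh=0.1):
    """Sketch of the residual idea: the tumor is what remains after subtracting
    the diseased image's healthy translation, and thresholding the residual
    magnitude doubles as a binary segmentation."""
    residual = diseased - healthy_pred
    seg = np.abs(residual) > thresh
    return residual, seg

# Toy demo: a flat "healthy" background with one bright (hypothetical) lesion.
healthy = np.zeros((8, 8))
diseased = healthy.copy()
diseased[2:4, 2:4] = 1.0
residual, seg = residual_segmentation(diseased, healthy)
# Adding the residual back to the healthy image reconstructs the diseased one,
# which is the consistency that lets one decoder serve both tasks.
```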


Subject(s)
Deep Learning , Humans , Image Processing, Computer-Assisted/methods , Neoplasm, Residual , Neural Networks, Computer , Magnetic Resonance Imaging
20.
J Biomed Opt ; 27(9)2022 09.
Article in English | MEDLINE | ID: mdl-36045491

ABSTRACT

SIGNIFICANCE: The diagnosis of prostate cancer (PCa) and focal treatment by brachytherapy are limited by the lack of precise intraoperative information to target tumors during biopsy collection and radiation seed placement. Image-guidance techniques could improve the safety and diagnostic yield of biopsy collection as well as increase the efficacy of radiotherapy. AIM: To estimate the accuracy of PCa detection using in situ Raman spectroscopy (RS) in a pilot in-human clinical study and to assess biochemical differences between in vivo and ex vivo measurements. APPROACH: A new miniature RS fiber-optics system equipped with an electromagnetic (EM) tracker was guided by trans-rectal ultrasound imaging, fused with preoperative magnetic resonance imaging, to acquire 49 spectra in situ (in vivo) from 18 PCa patients. In addition, 179 spectra were acquired ex vivo in fresh prostate samples from 14 patients who underwent radical prostatectomy. Two machine-learning models were trained to discriminate cancer from normal prostate tissue from the in situ and ex vivo datasets, respectively. RESULTS: A support vector machine (SVM) model was trained on the in situ dataset and its performance was evaluated using leave-one-patient-out cross-validation on 28 normal prostate measurements and 21 in-tumor measurements. The model achieved 86% sensitivity and 72% specificity. Similarly, an SVM model was trained on the ex vivo dataset of 152 normal prostate measurements and 27 tumor measurements, showing reduced cancer detection performance mostly attributable to spatial registration inaccuracies between probe measurements and histology assessment. A qualitative comparison between in situ and ex vivo measurements demonstrated a one-to-one correspondence and similar ratios between the main Raman bands (e.g., amide I-II bands, phenylalanine).
CONCLUSIONS: PCa detection can be achieved using RS and machine learning models for image-guidance applications using in situ measurements during prostate biopsy procedures.
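The leave-one-patient-out protocol used to evaluate the SVM can be sketched in NumPy. This is an illustrative reconstruction, not the study's code: the SVM itself is omitted, and `leave_one_patient_out` and `sensitivity_specificity` are hypothetical helper names. The key property is that all spectra from one patient are held out together, so no patient contributes to both training and testing in the same fold.

```python
import numpy as np

def leave_one_patient_out(patient_ids):
    """Yield (train, test) index arrays, holding out every spectrum
    belonging to one patient per fold."""
    ids = np.asarray(patient_ids)
    for pid in np.unique(ids):
        yield np.flatnonzero(ids != pid), np.flatnonzero(ids == pid)

def sensitivity_specificity(pred, truth):
    """Fraction of tumor spectra detected, and of normal spectra correctly rejected."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    sens = (pred & truth).sum() / truth.sum()
    spec = (~pred & ~truth).sum() / (~truth).sum()
    return sens, spec
```

A grouped split like this is essential with repeated measurements per patient: pooling spectra into ordinary k-fold cross-validation would leak patient-specific signal between train and test sets and inflate the reported accuracy.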


Subject(s)
Prostate , Prostatic Neoplasms , Biopsy , Humans , Image-Guided Biopsy/methods , Magnetic Resonance Imaging/methods , Male , Prostate/diagnostic imaging , Prostate/pathology , Prostate/surgery , Prostatectomy/methods , Prostatic Neoplasms/pathology , Spectrum Analysis, Raman/methods