Results 1-20 of 108
1.
Dev Neurosci; 45(4): 210-222, 2023.
Article in English | MEDLINE | ID: mdl-36822171

ABSTRACT

Macrocephaly has been associated with neurodevelopmental disorders; however, it has mainly been studied in the context of pathological or high-risk populations, and little is known about its impact, as an isolated trait, on brain development in the general population. Electroencephalographic (EEG) power spectral density (PSD) and signal complexity have been shown to be sensitive to neurodevelopment and its alterations. We aimed to investigate the impact of macrocephaly, as an isolated trait, on the EEG signal as measured by PSD and multiscale entropy during the first year of life. We recorded high-density EEG resting-state activity of 74 healthy full-term infants aged between 3 and 11 months: 50 controls (26 girls) and 24 macrocephalic infants (12 girls). We used linear regression models to assess group and age effects on EEG PSD and signal complexity. Sex and brain volume measures, obtained via 3D transfontanellar ultrasound, were also included in the models to evaluate their contribution. Our results showed lower PSD of the low alpha (8-10 Hz) frequency band and lower complexity in the macrocephalic group compared to the control group. In addition, we found an increase in low alpha (8.5-10 Hz) PSD and in the complexity index with age. These findings suggest that macrocephaly as an isolated trait has a significant impact on brain activity during the first year of life.
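
As context for the two EEG measures named above, here is a minimal, self-contained sketch (not the authors' pipeline) of Welch band power in a low-alpha band and a coarse-grained multiscale sample entropy on a synthetic signal; the sampling rate, band edges, scales, and SampEn parameters are illustrative assumptions.

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, fmin=8.0, fmax=10.0):
    """Average Welch PSD of x within [fmin, fmax] Hz."""
    f, pxx = welch(x, fs=fs, nperseg=2 * int(fs))
    sel = (f >= fmin) & (f <= fmax)
    return pxx[sel].mean()

def sample_entropy(x, m=2, r_factor=0.2):
    """SampEn(m, r) with r = r_factor * std(x); brute-force O(N^2) version."""
    x = np.asarray(x, float)
    r = r_factor * x.std()
    def pair_count(mm):
        emb = np.lib.stride_tricks.sliding_window_view(x, mm)
        d = np.abs(emb[:, None, :] - emb[None, :, :]).max(axis=2)  # Chebyshev distance
        return ((d <= r).sum() - len(emb)) / 2                     # matched pairs, excluding self-matches
    b, a = pair_count(m), pair_count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.nan

def complexity_index(x, scales=range(1, 6)):
    """Mean SampEn over coarse-grained copies of x (a simple multiscale-entropy summary)."""
    values = []
    for s in scales:
        n = len(x) // s
        values.append(sample_entropy(x[: n * s].reshape(n, s).mean(axis=1)))
    return np.nanmean(values)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fs = 100.0
    t = np.arange(0, 10, 1 / fs)
    eeg = np.sin(2 * np.pi * 9 * t) + 0.5 * rng.standard_normal(t.size)  # toy 9 Hz "alpha" plus noise
    print("low-alpha power:", band_power(eeg, fs))
    print("complexity index:", complexity_index(eeg))
```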


Subjects
Electroencephalography, Megalencephaly, Female, Humans, Infant, Entropy, Electroencephalography/methods, Brain
2.
Radiology; 309(1): e230659, 2023 10.
Article in English | MEDLINE | ID: mdl-37787678

ABSTRACT

Background Screening for nonalcoholic fatty liver disease (NAFLD) is suboptimal due to the subjective interpretation of US images. Purpose To evaluate the agreement and diagnostic performance of radiologists and a deep learning model in grading hepatic steatosis in NAFLD at US, with biopsy as the reference standard. Materials and Methods This retrospective study included patients with NAFLD and control patients without hepatic steatosis who underwent abdominal US and contemporaneous liver biopsy from September 2010 to October 2019. Six readers visually graded steatosis on US images twice, 2 weeks apart. Reader agreement was assessed with use of κ statistics. Three deep learning techniques applied to B-mode US images were used to classify dichotomized steatosis grades. Classification performance of human radiologists and the deep learning model for dichotomized steatosis grades (S0, S1, S2, and S3) was assessed with area under the receiver operating characteristic curve (AUC) on a separate test set. Results The study included 199 patients (mean age, 53 years ± 13 [SD]; 101 men). On the test set (n = 52), radiologists had fair interreader agreement (0.34 [95% CI: 0.31, 0.37]) for classifying steatosis grades S0 versus S1 or higher, while AUCs were between 0.49 and 0.84 for radiologists and 0.85 (95% CI: 0.83, 0.87) for the deep learning model. For S0 or S1 versus S2 or S3, radiologists had fair interreader agreement (0.30 [95% CI: 0.27, 0.33]), while AUCs were between 0.57 and 0.76 for radiologists and 0.73 (95% CI: 0.71, 0.75) for the deep learning model. For S2 or lower versus S3, radiologists had fair interreader agreement (0.37 [95% CI: 0.33, 0.40]), while AUCs were between 0.52 and 0.81 for radiologists and 0.67 (95% CI: 0.64, 0.69) for the deep learning model. Conclusion Deep learning approaches applied to B-mode US images provided comparable performance with human readers for detection and grading of hepatic steatosis. Published under a CC BY 4.0 license. Supplemental material is available for this article. See also the editorial by Tuthill in this issue.
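
A hedged sketch of the two evaluation metrics quoted above, on made-up labels: Cohen's kappa for pairwise reader agreement on a dichotomized steatosis grade, and AUC for a model's predicted probability of grade S1 or higher against the biopsy label.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, roc_auc_score

rng = np.random.default_rng(42)
truth = rng.integers(0, 2, size=52)                                # biopsy label: 0 = S0, 1 = S1 or higher (toy)
reader_a = np.where(rng.random(52) < 0.80, truth, 1 - truth)       # reader grades with some disagreement
reader_b = np.where(rng.random(52) < 0.75, truth, 1 - truth)
model_prob = np.clip(truth * 0.7 + rng.normal(0.15, 0.2, 52), 0, 1)  # toy model probabilities

print("reader A vs B kappa:", cohen_kappa_score(reader_a, reader_b))
print("model AUC vs biopsy:", roc_auc_score(truth, model_prob))
```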


Subjects
Deep Learning, Elasticity Imaging Techniques, Non-alcoholic Fatty Liver Disease, Male, Humans, Middle Aged, Non-alcoholic Fatty Liver Disease/diagnostic imaging, Non-alcoholic Fatty Liver Disease/pathology, Liver/diagnostic imaging, Liver/pathology, Retrospective Studies, Elasticity Imaging Techniques/methods, ROC Curve, Biopsy/methods
3.
J Transl Med; 21(1): 507, 2023 07 27.
Article in English | MEDLINE | ID: mdl-37501197

ABSTRACT

BACKGROUND: Finding a noninvasive radiomic surrogate of tumor immune features could help identify patients more likely to respond to novel immune checkpoint inhibitors. In particular, CD73 is an ectonucleotidase that catalyzes the breakdown of extracellular AMP into immunosuppressive adenosine, which can be blocked by therapeutic antibodies. High CD73 expression in colorectal cancer liver metastases (CRLM) resected with curative intent is associated with early recurrence and shorter patient survival. The aim of this study was therefore to evaluate whether machine learning analysis of the preoperative liver CT scan could estimate high vs low CD73 expression in CRLM and whether such a radiomic score would have prognostic significance. METHODS: We trained an Attentive Interpretable Tabular Learning (TabNet) model to predict, from preoperative CT images, stratified expression levels of CD73 (CD73High vs. CD73Low) assessed by immunofluorescence (IF) on tissue microarrays. Radiomic features were extracted from 160 segmented CRLM of 122 patients with matched IF data, preprocessed, and used to train the predictive model. We applied five-fold cross-validation and validated the performance on a hold-out test set. RESULTS: TabNet provided areas under the receiver operating characteristic curve of 0.95 (95% CI 0.87 to 1.0) and 0.79 (0.65 to 0.92) on the training and hold-out test sets, respectively, and outperformed other machine learning models. The TabNet-derived score, termed rad-CD73, was positively correlated with CD73 histological expression in matched CRLM (Spearman's ρ = 0.6004; P < 0.0001). The median time to recurrence (TTR) and disease-specific survival (DSS) after CRLM resection in rad-CD73High vs rad-CD73Low patients was 13.0 vs 23.6 months (P = 0.0098) and 53.4 vs 126.0 months (P = 0.0222), respectively. The prognostic value of rad-CD73 was independent of the standard clinical risk score, for both TTR (HR = 2.11, 95% CI 1.30 to 3.45, P < 0.005) and DSS (HR = 1.88, 95% CI 1.11 to 3.18, P = 0.020). CONCLUSIONS: Our findings reveal promising results for noninvasive CT-based prediction of CD73 expression in CRLM and warrant further validation of whether rad-CD73 could assist oncologists as a biomarker of prognosis and response to immunotherapies targeting the adenosine pathway.
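
For illustration only, a stratified five-fold cross-validation loop reporting AUC on a synthetic radiomic feature matrix; a gradient-boosting classifier stands in for the TabNet model used in the study (a TabNet implementation such as the pytorch-tabnet package would slot into the same loop), and the features and labels are fabricated.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(160, 100))                               # 160 lesions x 100 radiomic features (toy)
y = (X[:, 0] + 0.5 * rng.normal(size=160) > 0).astype(int)    # 1 = CD73-high, 0 = CD73-low (toy)

aucs = []
for train_idx, test_idx in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
    clf = GradientBoostingClassifier().fit(X[train_idx], y[train_idx])
    aucs.append(roc_auc_score(y[test_idx], clf.predict_proba(X[test_idx])[:, 1]))
print(f"cross-validated AUC: {np.mean(aucs):.2f} +/- {np.std(aucs):.2f}")
```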


Assuntos
Neoplasias Colorretais , Neoplasias Hepáticas , Humanos , Adenosina , Neoplasias Hepáticas/diagnóstico por imagem , Prognóstico , Estudos Retrospectivos , Tomografia Computadorizada por Raios X , 5'-Nucleotidase
4.
Opt Express; 31(1): 396-410, 2023 Jan 02.
Article in English | MEDLINE | ID: mdl-36606975

ABSTRACT

Intra-arterial catheter guidance is instrumental to the success of minimally invasive procedures such as percutaneous transluminal angioplasty. However, traditional device tracking methods, such as electromagnetic or infrared sensors, exhibit drawbacks such as magnetic interference or line-of-sight requirements. In this work, shape sensing of bends of different curvatures and lengths is demonstrated both asynchronously and in real time using optical frequency domain reflectometry (OFDR) with a polymer-extruded optical fiber triplet with enhanced backscattering properties. Simulations on digital phantoms showed that reconstruction accuracy is of the order of the interrogator's spatial resolution (millimeters) for sensing lengths of less than 1 m and a high SNR.
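
A minimal shape-reconstruction sketch under common assumptions, not the authors' OFDR processing chain: three fibers 120° apart at a radial offset d experience bend strains eps_i = -d*(kx*cos(phi_i) + ky*sin(phi_i)), so the curvature vector can be recovered per sensing point and the tangent integrated into a 3D centerline. The fiber offset, segment length, and test bend are illustrative values.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

PHI = np.deg2rad([0.0, 120.0, 240.0])          # angular positions of the fiber triplet
D = 70e-6                                      # radial offset of each fiber (m), assumed

def strains_to_curvature(eps):
    """Least-squares (kx, ky) from the three bend strains at one sensing point."""
    A = -D * np.column_stack([np.cos(PHI), np.sin(PHI)])
    k, *_ = np.linalg.lstsq(A, eps, rcond=None)
    return k                                    # curvature components (1/m)

def reconstruct(curvatures, ds):
    """Integrate curvature samples into 3D positions, starting along +z."""
    pos, frame, pts = np.zeros(3), np.eye(3), [np.zeros(3)]
    for kx, ky in curvatures:
        kappa = np.hypot(kx, ky)
        if kappa > 1e-12:
            axis_local = np.array([-ky, kx, 0.0]) / kappa          # bending axis in the local frame
            frame = frame @ R.from_rotvec(axis_local * kappa * ds).as_matrix()
        pos = pos + frame[:, 2] * ds            # advance along the local tangent (z column)
        pts.append(pos.copy())
    return np.array(pts)

if __name__ == "__main__":
    ds, n = 1e-3, 500                           # 1 mm sampling over 0.5 m
    true_k = np.tile([2.0, 0.0], (n, 1))        # constant 2 1/m bend -> 0.5 m radius arc
    eps = np.array([-D * (k @ np.array([np.cos(PHI), np.sin(PHI)])) for k in true_k])
    shape = reconstruct([strains_to_curvature(e) for e in eps], ds)
    print("endpoint (m):", shape[-1])           # traces a circular arc in the x-z plane
```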


Subjects
Cannula, Optical Fibers, Indwelling Catheters, Imaging Phantoms, Polymers
5.
J Appl Clin Med Phys; 23(8): e13655, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35661390

ABSTRACT

PURPOSE: External radiation therapy planning is a highly complex and tedious process, as it involves treating large target volumes, prescribing several dose levels, and avoiding irradiation of critical structures such as organs at risk close to the tumor target. This requires highly trained dosimetrists and physicists to generate a personalized plan and adapt it as treatment evolves, thus affecting the overall tumor control and patient outcomes. Our aim is to achieve accurate dose predictions for head and neck (H&N) cancer patients on a challenging in-house dataset that reflects realistic variability, and to further compare and validate the method on a public dataset. METHODS: We propose a three-dimensional (3D) deep neural network that combines a hierarchically dense architecture with an attention U-net (HDA U-net). We investigate a domain knowledge objective, incorporating a weighted mean squared error (MSE) with a dose-volume histogram (DVH) loss function. The proposed HDA U-net using the MSE-DVH loss function is compared with two state-of-the-art U-net variants on two radiotherapy datasets of H&N cases. These include reference dose plans, computed tomography (CT) information, organs at risk (OARs), and planning target volume (PTV) delineations. All models were evaluated using coverage, homogeneity, and conformity metrics as well as mean dose error and DVH curves. RESULTS: Overall, the proposed architecture outperformed the comparative state-of-the-art methods, reaching 0.95 (0.98) on D95 coverage, 1.06 (1.07) on the maximum dose value, 0.10 (0.08) on homogeneity, 0.53 (0.79) on conformity index, and attaining the lowest mean dose error on PTVs of 1.7% (1.4%) for the in-house (public) dataset. The improvements are statistically significant (p < 0.05) for the homogeneity and maximum dose value compared with the closest baseline. All models offer near real-time prediction, measured between 0.43 and 0.88 s per volume. CONCLUSION: The proposed method achieved similar performance on both realistic in-house data and public data compared to the attention U-net with a DVH loss, and outperformed other methods such as HD U-net and HDA U-net with standard MSE losses. The use of the DVH objective for training showed consistent improvements over the baselines on most metrics, supporting its added benefit in H&N cancer cases. The quick prediction time of the proposed method allows for real-time applications, providing physicians with a method to generate an objective end goal for the dosimetrist to use as a reference for planning. This could considerably reduce the number of iterations between the two experts, thus reducing the overall treatment planning time.
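
As a sketch of the kind of objective described above, the following PyTorch snippet combines a voxel-wise weighted MSE with one common differentiable DVH approximation (a sigmoid-binned dose-volume histogram); the bin edges, sigmoid steepness, weights, and toy volumes are assumptions, not the paper's settings.

```python
import torch

def soft_dvh(dose, mask, bins, beta=5.0):
    """Differentiable DVH: fraction of masked voxels receiving at least each bin edge."""
    d = dose[mask > 0]                                    # voxels of one structure (PTV or OAR)
    return torch.sigmoid(beta * (d[None, :] - bins[:, None])).mean(dim=1)

def mse_dvh_loss(pred, target, masks, bins, voxel_weight=1.0, dvh_weight=1.0):
    loss = voxel_weight * torch.mean((pred - target) ** 2)
    for m in masks:                                       # one binary mask per structure
        loss = loss + dvh_weight * torch.mean(
            (soft_dvh(pred, m, bins) - soft_dvh(target, m, bins)) ** 2)
    return loss

if __name__ == "__main__":
    torch.manual_seed(0)
    pred = torch.rand(1, 64, 64, 64, requires_grad=True)  # predicted dose volume (normalized)
    target = torch.rand(1, 64, 64, 64)                    # reference plan dose
    ptv = (torch.rand(1, 64, 64, 64) > 0.9).float()       # toy PTV mask
    bins = torch.linspace(0, 1, 50)                       # normalized dose bin edges
    loss = mse_dvh_loss(pred, target, [ptv], bins)
    loss.backward()                                       # gradients flow back to the dose prediction
    print("loss:", float(loss))
```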


Subjects
Head and Neck Neoplasms, Intensity-Modulated Radiotherapy, Head and Neck Neoplasms/radiotherapy, Humans, Organs at Risk, Radiotherapy Dosage, Computer-Assisted Radiotherapy Planning/methods, Intensity-Modulated Radiotherapy/methods
6.
Radiographics; 41(5): 1427-1445, 2021.
Article in English | MEDLINE | ID: mdl-34469211

ABSTRACT

Deep learning is a class of machine learning methods that has been successful in computer vision. Unlike traditional machine learning methods that require hand-engineered feature extraction from input images, deep learning methods learn the image features by which to classify data. Convolutional neural networks (CNNs), the core of deep learning methods for imaging, are multilayered artificial neural networks with weighted connections between neurons that are iteratively adjusted through repeated exposure to training data. These networks have numerous applications in radiology, particularly in image classification, object detection, semantic segmentation, and instance segmentation. The authors provide an update on a recent primer on deep learning for radiologists, and they review terminology, data requirements, and recent trends in the design of CNNs; illustrate building blocks and architectures adapted to computer vision tasks, including generative architectures; and discuss training and validation, performance metrics, visualization, and future directions. Familiarity with the key concepts described will help radiologists understand advances of deep learning in medical imaging and facilitate clinical adoption of these techniques. Online supplemental material is available for this article. ©RSNA, 2021.
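
A minimal PyTorch sketch of the CNN building blocks discussed above (convolution, nonlinearity, pooling, fully connected head) on a toy single-channel image classifier; the layer sizes and two-class output are arbitrary choices for illustration.

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)   # sized for 64x64 inputs

    def forward(self, x):
        x = self.features(x)                  # learned image features
        return self.classifier(x.flatten(1))  # class scores (logits)

if __name__ == "__main__":
    model = TinyCNN()
    logits = model(torch.randn(4, 1, 64, 64))   # batch of 4 grayscale 64x64 images
    print(logits.shape)                         # torch.Size([4, 2])
```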


Subjects
Deep Learning, Diagnostic Imaging, Humans, Computer-Assisted Image Processing, Machine Learning, Neural Networks (Computer), Radiologists
7.
PLoS Med; 17(8): e1003281, 2020 08.
Article in English | MEDLINE | ID: mdl-32797086

ABSTRACT

BACKGROUND: Prostate cancer (PC) is the most frequently diagnosed cancer in North American men. Pathologists are in critical need of accurate biomarkers to characterize PC, particularly to confirm the presence of intraductal carcinoma of the prostate (IDC-P), an aggressive histopathological variant for which therapeutic options are now available. Our aim was to identify IDC-P with Raman micro-spectroscopy (RµS) and machine learning technology following a protocol suitable for routine clinical histopathology laboratories. METHODS AND FINDINGS: We used RµS to differentiate IDC-P from PC, as well as PC and IDC-P from benign tissue on formalin-fixed paraffin-embedded first-line radical prostatectomy specimens (embedded in tissue microarrays [TMAs]) from 483 patients treated in 3 Canadian institutions between 1993 and 2013. The main measures were the presence or absence of IDC-P and of PC, regardless of the clinical outcomes. The median age at radical prostatectomy was 62 years. Most of the specimens from the first cohort (Centre hospitalier de l'Université de Montréal) were of Gleason score 3 + 3 = 6 (51%) while most of the specimens from the 2 other cohorts (University Health Network and Centre hospitalier universitaire de Québec-Université Laval) were of Gleason score 3 + 4 = 7 (51% and 52%, respectively). Most of the 483 patients were pT2 stage (44%-69%), and pT3a (22%-49%) was more frequent than pT3b (9%-12%). To investigate the prostate tissue of each patient, 2 consecutive sections of each TMA block were cut. The first section was transferred onto a glass slide to perform immunohistochemistry with H&E counterstaining for cell identification. The second section was placed on an aluminum slide, dewaxed, and then used to acquire an average of 7 Raman spectra per specimen (between 4 and 24 Raman spectra, 4 acquisitions/TMA core). Raman spectra of each cell type were then analyzed to retrieve tissue-specific molecular information and to generate classification models using machine learning technology. Models were trained and cross-validated using data from 1 institution. Accuracy, sensitivity, and specificity were 87% ± 5%, 86% ± 6%, and 89% ± 8%, respectively, to differentiate PC from benign tissue, and 95% ± 2%, 96% ± 4%, and 94% ± 2%, respectively, to differentiate IDC-P from PC. The trained models were then tested on Raman spectra from 2 independent institutions, reaching accuracies, sensitivities, and specificities of 84% and 86%, 84% and 87%, and 81% and 82%, respectively, to diagnose PC, and of 85% and 91%, 85% and 88%, and 86% and 93%, respectively, for the identification of IDC-P. IDC-P could further be differentiated from high-grade prostatic intraepithelial neoplasia (HGPIN), a pre-malignant intraductal proliferation that can be mistaken as IDC-P, with accuracies, sensitivities, and specificities > 95% in both training and testing cohorts. As we used stringent criteria to diagnose IDC-P, the main limitation of our study is the exclusion of borderline, difficult-to-classify lesions from our datasets. CONCLUSIONS: In this study, we developed classification models for the analysis of RµS data to differentiate IDC-P, PC, and benign tissue, including HGPIN. RµS could be a next-generation histopathological technique used to reinforce the identification of high-risk PC patients and lead to more precise diagnosis of IDC-P.
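
A generic spectra-classification sketch with the metrics reported above (accuracy, sensitivity, specificity); the scaler-PCA-SVM pipeline is a stand-in rather than the study's exact model, and the spectra and labels are synthetic.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, accuracy_score

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 1000))          # 300 spectra x 1000 wavenumber bins (toy)
y = rng.integers(0, 2, size=300)          # 1 = IDC-P, 0 = PC (toy labels)
X[y == 1, 400:420] += 1.0                 # fake band difference between the classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=1)
clf = make_pipeline(StandardScaler(), PCA(n_components=20), SVC()).fit(X_tr, y_tr)
y_hat = clf.predict(X_te)

tn, fp, fn, tp = confusion_matrix(y_te, y_hat).ravel()
print("accuracy:", accuracy_score(y_te, y_hat))
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
```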


Assuntos
Carcinoma Intraductal não Infiltrante/diagnóstico por imagem , Aprendizado de Máquina/normas , Microscopia Óptica não Linear/normas , Neoplasias da Próstata/diagnóstico por imagem , Idoso , Canadá/epidemiologia , Carcinoma Intraductal não Infiltrante/epidemiologia , Carcinoma Intraductal não Infiltrante/patologia , Estudos de Casos e Controles , Estudos de Coortes , Humanos , Masculino , Pessoa de Meia-Idade , Microscopia Óptica não Linear/métodos , Neoplasias da Próstata/epidemiologia , Neoplasias da Próstata/patologia , Reprodutibilidade dos Testes , Estudos Retrospectivos
8.
J Digit Imaging; 33(4): 937-945, 2020 08.
Article in English | MEDLINE | ID: mdl-32193665

ABSTRACT

In developed countries, colorectal cancer is the second leading cause of cancer-related mortality. Chemotherapy is considered a standard treatment for colorectal liver metastases (CLM). Among patients who develop CLM, the assessment of patient response to chemotherapy is often required to determine the need for second-line chemotherapy and eligibility for surgery. However, while FOLFOX-based regimens are typically used for CLM treatment, the identification of responsive patients remains elusive. Computer-aided diagnosis systems may provide insight into the classification of liver metastases identified on diagnostic images. In this paper, we propose a fully automated framework based on deep convolutional neural networks (DCNN) that first differentiates treated and untreated lesions to identify new lesions appearing on CT scans, followed by a fully connected neural network that predicts, from untreated lesions on pre-treatment computed tomography (CT) of patients with CLM undergoing chemotherapy, their response to a first-line FOLFOX and bevacizumab regimen. The ground truth for assessment of treatment response was the histopathology-determined tumor regression grade. Our DCNN approach, trained on 444 lesions from 202 patients, achieved accuracies of 91% for differentiating treated and untreated lesions and 78% for predicting the response to the FOLFOX-based chemotherapy regimen. Experimental results showed that our method outperformed traditional machine learning algorithms and may allow for the early detection of non-responsive patients.
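
A hedged sketch of the general pattern rather than the paper's architecture: a 2D CNN backbone from torchvision, adapted to single-channel CT lesion patches with a single-logit head; the same pattern could serve either stage described above (treated vs. untreated, responder vs. non-responder). The patch size and head are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

model = resnet18(weights=None)                                   # untrained backbone is enough for the sketch
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)  # 1-channel CT input
model.fc = nn.Linear(model.fc.in_features, 1)                    # single logit, e.g. P(responder)

patches = torch.randn(8, 1, 96, 96)                              # batch of toy lesion patches
prob = torch.sigmoid(model(patches)).squeeze(1)
print(prob.shape)                                                # torch.Size([8])
```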


Subjects
Liver Neoplasms, Colorectal Neoplasms/diagnostic imaging, Colorectal Neoplasms/drug therapy, Humans, Liver Neoplasms/diagnostic imaging, Liver Neoplasms/drug therapy, Liver Neoplasms/secondary, Machine Learning, Neural Networks (Computer), X-Ray Computed Tomography
9.
Opt Express; 27(10): 13895-13909, 2019 May 13.
Article in English | MEDLINE | ID: mdl-31163847

ABSTRACT

We propose a novel device, termed Random Optical Grating by Ultraviolet or ultrafast laser Exposure (ROGUE): a new type of fiber Bragg grating (FBG) exhibiting a weak reflection over a large bandwidth that is independent of the length of the grating. This FBG is fabricated simply by dithering the phase randomly during the writing process. The grating has enhanced backscatter, several orders of magnitude above the typical Rayleigh backscatter of standard SMF-28 optical fiber. The grating is used in distributed sensing with optical frequency domain reflectometry (OFDR), allowing a significant increase in signal-to-noise ratio for strain and temperature measurement. This enhancement results in significantly lower strain or temperature noise levels and accuracy errors, without sacrificing spatial resolution. Using this method, we show a sensor with a backscatter level 50 dB higher than that of standard unexposed SMF-28, which can thus compensate for increased loss in the system.
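
An exploratory sketch, not the fabrication recipe: a piecewise-uniform transfer-matrix model of a weak grating whose phase is randomly dithered from segment to segment, to illustrate how random phase spreads a weak reflection over a broad band; the coupling coefficient, segment length, and effective index are assumed values.

```python
import numpy as np

def grating_reflectivity(wavelengths, kappa, seg_len, phases, n_eff=1.45, lam_bragg=1550e-9):
    """|r|^2 vs wavelength for a chain of uniform grating segments with given phases."""
    R = np.empty_like(wavelengths)
    for i, lam in enumerate(wavelengths):
        delta = 2 * np.pi * n_eff * (1 / lam - 1 / lam_bragg)     # detuning from the Bragg condition
        T = np.eye(2, dtype=complex)
        for phi in phases:
            k = kappa * np.exp(1j * phi)                          # segment coupling with random phase
            g = np.sqrt(kappa**2 - delta**2 + 0j)
            c, s = np.cosh(g * seg_len), np.sinh(g * seg_len)
            m = np.array([[c - 1j * delta / g * s, -1j * k / g * s],
                          [1j * np.conj(k) / g * s, c + 1j * delta / g * s]])
            T = T @ m
        R[i] = abs(T[1, 0] / T[0, 0]) ** 2
    return R

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    lams = np.linspace(1548e-9, 1552e-9, 400)
    phases = rng.uniform(0, 2 * np.pi, 200)                       # random phase dither over 200 segments
    R = grating_reflectivity(lams, kappa=50.0, seg_len=50e-6, phases=phases)
    print("mean reflectivity: %.2e, peak: %.2e" % (R.mean(), R.max()))
```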

10.
Analyst; 144(22): 6517-6532, 2019 Nov 04.
Article in English | MEDLINE | ID: mdl-31647061

ABSTRACT

Raman spectroscopy is a promising tool for neurosurgical guidance and cancer research. Quantitative analysis of the Raman signal from living tissues is, however, limited. Their molecular composition is convoluted and influenced by clinical factors, and access to data is limited. To ensure acceptance of this technology by clinicians and cancer scientists, we need to adapt the analytical methods to more closely model the Raman-generating process. Our objective is to use feature engineering to develop a new representation for spectral data, specifically tailored for brain diagnosis, that improves interpretability of the Raman signal while retaining enough information to accurately predict tissue content. The method consists of fitting Raman bands that consistently appear in the brain Raman literature, and of generating new features representing the pairwise interaction between bands and the interaction between bands and patient age. Our technique was applied to a dataset of 547 in situ Raman spectra from 65 patients undergoing glioma resection. It showed superior predictive capacity to a principal component analysis dimensionality reduction. After analysis through a Bayesian framework, we were able to identify the oncogenic processes that characterize glioma: increased nucleic acid content, overexpression of type IV collagen, and a shift in the primary metabolic engine. Our results demonstrate how this mathematical transformation of the Raman signal allows the first biological, statistically robust analysis of in vivo Raman spectra from brain tissue.
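
An illustrative sketch of the two steps described above under assumed band positions: Gaussian bands are fitted at fixed literature Raman shifts to obtain per-band intensities, which are then expanded into pairwise band-band and band-age interaction features; the band centers and the synthetic spectrum are placeholders.

```python
import numpy as np
from itertools import combinations
from scipy.optimize import curve_fit

BAND_CENTERS = np.array([1004.0, 1440.0, 1660.0])       # example Raman shifts (cm^-1), assumed

def bands_model(x, *params):
    """Sum of Gaussians with fixed centers; params = (amp_i, width_i) per band + offset."""
    y = np.full_like(x, params[-1])
    for i, c in enumerate(BAND_CENTERS):
        amp, width = params[2 * i], params[2 * i + 1]
        y = y + amp * np.exp(-0.5 * ((x - c) / width) ** 2)
    return y

def fit_band_intensities(shift, spectrum):
    p0 = [spectrum.max(), 15.0] * len(BAND_CENTERS) + [spectrum.min()]
    popt, _ = curve_fit(bands_model, shift, spectrum, p0=p0, maxfev=10000)
    return popt[0:-1:2]                                   # fitted amplitudes only

def interaction_features(intensities, age):
    pairs = [a * b for a, b in combinations(intensities, 2)]     # band x band interactions
    with_age = [a * age for a in intensities]                    # band x age interactions
    return np.concatenate([intensities, pairs, with_age])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    shift = np.linspace(800, 1800, 500)
    spectrum = bands_model(shift, 1.0, 12, 2.0, 20, 1.5, 18, 0.1) + 0.02 * rng.standard_normal(500)
    amps = fit_band_intensities(shift, spectrum)
    print(interaction_features(amps, age=55.0))
```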


Subjects
Brain Neoplasms/metabolism, Glioma/metabolism, Raman Spectrum Analysis/methods, Bayes Theorem, Brain Neoplasms/chemistry, Collagen Type IV/metabolism, Datasets as Topic, Female, Glioma/chemistry, Humans, Intraoperative Care, Light, Male, Middle Aged, Nucleic Acids/metabolism, Principal Component Analysis, Retrospective Studies
11.
Radiographics; 37(7): 2113-2131, 2017.
Article in English | MEDLINE | ID: mdl-29131760

ABSTRACT

Deep learning is a class of machine learning methods that are gaining success and attracting interest in many domains, including computer vision, speech recognition, natural language processing, and playing games. Deep learning methods produce a mapping from raw inputs to desired outputs (eg, image classes). Unlike traditional machine learning methods, which require hand-engineered feature extraction from inputs, deep learning methods learn these features directly from data. With the advent of large datasets and increased computing power, these methods can produce models with exceptional performance. These models are multilayer artificial neural networks, loosely inspired by biologic neural systems. Weighted connections between nodes (neurons) in the network are iteratively adjusted based on example pairs of inputs and target outputs by back-propagating a corrective error signal through the network. For computer vision tasks, convolutional neural networks (CNNs) have proven to be effective. Recently, several clinical applications of CNNs have been proposed and studied in radiology for classification, detection, and segmentation tasks. This article reviews the key concepts of deep learning for clinical radiologists, discusses technical requirements, describes emerging applications in clinical radiology, and outlines limitations and future directions in this field. Radiologists should become familiar with the principles and potential applications of deep learning in medical imaging. ©RSNA, 2017.
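
A toy sketch of the training loop described above: the weighted connections of a small two-layer network are iteratively adjusted by back-propagating the error between predictions and target outputs; the data and layer sizes are arbitrary.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(256, 10)                      # example inputs (e.g., 10 image-derived features)
y = (x[:, 0] > 0).long()                      # toy binary target

net = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.SGD(net.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    optimizer.zero_grad()
    loss = loss_fn(net(x), y)                 # corrective error signal
    loss.backward()                           # back-propagate gradients through the network
    optimizer.step()                          # adjust the weighted connections
print("final training loss:", float(loss))
```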


Subjects
Computer-Assisted Image Processing/methods, Learning, Neural Networks (Computer), Radiology Information Systems, Radiology/education, Algorithms, Humans, Machine Learning
12.
Eur Spine J; 25(10): 3104-3113, 2016 10.
Article in English | MEDLINE | ID: mdl-26851954

ABSTRACT

PURPOSE: The classification of three-dimensional (3D) spinal deformities remains an open question in adolescent idiopathic scoliosis. Recent studies have investigated pattern classification based on explicit clinical parameters. An emerging trend, however, seeks to simplify complex spine geometries and capture the predominant modes of variability of the deformation. The objective of this study is to perform a 3D characterization and morphology analysis of thoracic and thoraco/lumbar scoliotic spines (cross-sectional study). The presence of subgroups within all Lenke types is investigated by analyzing a simplified representation of the geometric 3D reconstruction of a patient's spine, in order to establish the basis for a new classification approach based on a machine learning algorithm. METHODS: Three-dimensional reconstructions of coronal and sagittal standing radiographs of 663 patients, for a total of 915 visits, covering all types of deformities in adolescent idiopathic scoliosis (single, double and triple curves) and reviewed by the 3D Classification Committee of the Scoliosis Research Society, were analyzed using a machine learning algorithm based on stacked auto-encoders. The codes produced for each 3D reconstruction were then grouped using an unsupervised clustering method. For each identified cluster, Cobb angle and orientation of the plane of maximum curvature in the thoracic and lumbar curves, axial rotation of the apical vertebrae, kyphosis (T4-T12), lordosis (L1-S1) and pelvic incidence were obtained. No assumptions were made regarding grouping tendencies in the data, nor was the number of clusters predefined. RESULTS: Eleven groups were revealed from the 915 visits, wherein the location of the main curve, kyphosis and lordosis were the three major discriminating factors, with slight overlap between groups. Two main groups emerged among the eleven clusters of patients: one with small thoracic deformities and large lumbar deformities, and the other with large thoracic deformities and small lumbar curvature. The main factor that allowed the identification of eleven distinct subgroups among the surgical patients (major curves) with Lenke type-1 to type-6 curves was the location of the apical vertebra, as identified by the planes of maximum curvature obtained in both thoracic and thoraco/lumbar segments. Both hypokyphotic and hyperkyphotic clusters were primarily composed of Lenke 1-4 curve type patients, while a hyperlordotic cluster was composed of Lenke 5 and 6 curve type patients. CONCLUSION: The stacked auto-encoder analysis technique helped to simplify the complex nature of 3D spine models while preserving the intrinsic properties that are typically measured with explicit parameters derived from the 3D reconstruction.
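
A hedged sketch of the general approach rather than the authors' exact network: an auto-encoder compresses flattened 3D spine reconstructions into a low-dimensional code, which is then grouped with an unsupervised clustering method; the input dimension is a toy choice and, unlike the study, the number of clusters is fixed here (to the eleven groups reported) for simplicity.

```python
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

torch.manual_seed(0)
X = torch.randn(915, 51)                       # 915 visits x (17 vertebrae * 3 coordinates), toy data

encoder = nn.Sequential(nn.Linear(51, 32), nn.ReLU(), nn.Linear(32, 8))
decoder = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 51))
optim = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

for epoch in range(200):                       # minimize reconstruction error
    optim.zero_grad()
    loss = nn.functional.mse_loss(decoder(encoder(X)), X)
    loss.backward()
    optim.step()

codes = encoder(X).detach().numpy()            # low-dimensional representation of each spine
labels = KMeans(n_clusters=11, n_init=10, random_state=0).fit_predict(codes)
print("cluster sizes:", [int((labels == k).sum()) for k in range(11)])
```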


Subjects
Three-Dimensional Imaging/methods, Lumbar Vertebrae/diagnostic imaging, Scoliosis/classification, Thoracic Vertebrae/diagnostic imaging, Adolescent, Algorithms, Analysis of Variance, Cross-Sectional Studies, Female, Humans, Incidence, Kyphosis/diagnostic imaging, Lordosis/diagnostic imaging, Male, Retrospective Studies, Scoliosis/diagnostic imaging
13.
Neuroimage; 98: 528-36, 2014 Sep.
Article in English | MEDLINE | ID: mdl-24780696

ABSTRACT

Spinal cord segmentation provides measures of atrophy and facilitates group analysis via inter-subject correspondence. Automating this procedure enables studies with large throughput and minimizes user bias. Although several automatic segmentation methods exist, they are often restricted in terms of image contrast and field of view. This paper presents a new automatic segmentation method (PropSeg) optimized for robustness, accuracy and speed. The algorithm is based on the propagation of a deformable model and is divided into three parts: first, an initialization step detects the spinal cord position and orientation using a circular Hough transform on multiple axial slices rostral and caudal to the starting plane and builds an initial elliptical tubular mesh. Second, a low-resolution deformable model is propagated along the spinal cord. To deal with highly variable contrast levels between the spinal cord and the cerebrospinal fluid, the deformation is coupled with a local contrast-to-noise adaptation at each iteration. Third, a refinement process and a global deformation are applied on the propagated mesh to provide an accurate segmentation of the spinal cord. Validation was performed in 15 healthy subjects and two patients with spinal cord injury, using T1- and T2-weighted images of the entire spinal cord and multiecho T2*-weighted images. Our method was compared against manual segmentation and against an active surface method. Results show high precision for all the MR sequences. Dice coefficients were 0.9 for the T1- and T2-weighted cohorts and 0.86 for the T2*-weighted images. The proposed method runs in less than 1 min on a standard computer and can be used to quantify morphological features such as cross-sectional area along the whole spinal cord.
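
A sketch of the initialization idea (a circular Hough transform locating a cord-like structure on an axial slice) together with the Dice coefficient used for validation; the slice below is a synthetic bright disk, not MRI data, and the radius range is an assumption.

```python
import numpy as np
from skimage.feature import canny
from skimage.transform import hough_circle, hough_circle_peaks

# synthetic axial slice: a bright disk (cord-like structure) on a noisy dark background
yy, xx = np.mgrid[0:128, 0:128]
slice_img = ((xx - 64) ** 2 + (yy - 70) ** 2 <= 8 ** 2).astype(float)
slice_img += 0.05 * np.random.default_rng(0).standard_normal(slice_img.shape)

edges = canny(slice_img, sigma=2.0)
radii = np.arange(5, 15)
accumulator = hough_circle(edges, radii)
_, cx, cy, r = hough_circle_peaks(accumulator, radii, total_num_peaks=1)
print("detected centre:", (int(cx[0]), int(cy[0])), "radius:", int(r[0]))

def dice(a, b):
    """Dice overlap between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

detected = (xx - cx[0]) ** 2 + (yy - cy[0]) ** 2 <= r[0] ** 2
print("Dice vs. ground truth:", dice(detected, slice_img > 0.5))
```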


Subjects
Computer-Assisted Image Processing, Magnetic Resonance Imaging/methods, Spinal Cord/anatomy & histology, Spinal Cord/pathology, Algorithms, Humans, Observer Variation, Spinal Cord Injuries/pathology
14.
Phys Med Biol; 2024 Jul 09.
Article in English | MEDLINE | ID: mdl-38981593

ABSTRACT

OBJECTIVE: Head and neck radiotherapy planning requires electron densities from different tissues for dose calculation. Dose calculation from imaging modalities such as MRI remains an unsolved problem, since this imaging modality does not provide information about electron density. Approach. We propose a generative adversarial network (GAN) approach that synthesizes CT (sCT) images from T1-weighted MRI acquisitions in head and neck cancer patients. Our contribution is to exploit new features that are relevant for improving multimodal image synthesis, and thus improving the quality of the generated CT images. More precisely, we propose a Dual-branch generator based on the U-Net architecture and on an augmented multi-planar branch. The augmented branch learns specific 3D dynamic features, which describe the dynamic image shape variations and are extracted from different viewpoints of the volumetric input MRI. The architecture of the proposed model relies on an end-to-end convolutional U-Net embedding network. Results. The proposed model achieves a mean absolute error (MAE) of 18.76 (5.167) in the target Hounsfield unit (HU) space on sagittal head and neck patient acquisitions, with a mean structural similarity (MSSIM) of 0.95 (0.09) and a Fréchet inception distance (FID) of 145.60 (8.38). The model yields a MAE of 26.83 (8.27) when generating specific primary tumor regions on axial patient acquisitions, with a Dice score of 0.73 (0.06) and an FID of 122.58 (7.55). The improvement of our model over other state-of-the-art GAN approaches is 3.8% on a tumor test set. On both sagittal and axial acquisitions, the model yields the best peak signal-to-noise ratios (PSNRs) of 27.89 (2.22) and 26.08 (2.95) to synthesize MRI from CT input. Significance. The proposed model synthesizes both sagittal and axial CT tumor images, used for radiotherapy treatment planning in head and neck cancer cases. The performance analysis across different imaging metrics and under different evaluation strategies demonstrates the effectiveness of our dual CT synthesis model in producing high-quality sCT images compared to other state-of-the-art approaches. Our model could improve clinical tumor analysis, although further clinical validation remains to be explored.
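
A hedged sketch of some of the image-similarity metrics quoted above (MAE in HU, SSIM, PSNR) computed between a synthetic CT slice and its reference; the arrays are random stand-ins for real slices, and FID is omitted because it requires a pretrained network.

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

rng = np.random.default_rng(0)
ct = rng.uniform(-1000, 2000, size=(256, 256))            # reference CT slice in HU (toy)
sct = ct + rng.normal(0, 30, size=ct.shape)               # synthetic CT with residual error

data_range = ct.max() - ct.min()
print("MAE (HU):", float(np.abs(sct - ct).mean()))
print("SSIM:", structural_similarity(ct, sct, data_range=data_range))
print("PSNR (dB):", peak_signal_noise_ratio(ct, sct, data_range=data_range))
```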

15.
Int J Comput Assist Radiol Surg; 19(6): 1103-1111, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38573566

ABSTRACT

PURPOSE: Cancer confirmation in the operating room (OR) is crucial to improve local control in cancer therapies. Histopathological analysis remains the gold standard, but there is a lack of real-time in situ cancer confirmation to support the assessment of margins or remnant tissue. Raman spectroscopy (RS), as a label-free optical technique, has proven its power in cancer detection and, when integrated into a robotic assistance system, can positively impact the efficiency of procedures and the quality of life of patients by avoiding potential recurrence. METHODS: A workflow is proposed where a 6-DOF robotic system (optical camera + MECA500 robotic arm) assists the characterization of fresh tissue samples using RS. Three calibration methods are compared for the robot, and the temporal efficiency is compared with standard hand-held analysis. For healthy/cancerous tissue discrimination, a 1D convolutional neural network is proposed and tested on three ex vivo datasets (brain, breast, and prostate) containing processed RS and histopathology ground truth. RESULTS: The robot achieves a minimum error of 0.20 mm (0.12) on a set of 30 test landmarks and demonstrates a significant time reduction in 4 of the 5 proposed tasks. The proposed classification model can identify brain, breast, and prostate cancer with accuracies of 0.83 (0.02), 0.93 (0.01), and 0.71 (0.01), respectively. CONCLUSION: Automated RS analysis with deep learning demonstrates promising classification performance compared to commonly used support vector machines. Robotic assistance in tissue characterization can contribute to highly accurate, rapid, and robust biopsy analysis in the OR. These two elements are an important step toward real-time cancer confirmation using RS and OR integration.
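
A minimal 1D convolutional network sketch for healthy-versus-cancer classification of processed Raman spectra, in the spirit of the model described above; the channel counts, kernel sizes, and spectrum length are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Spectrum1DCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):                       # x: (batch, 1, n_wavenumbers)
        return self.head(self.conv(x).squeeze(-1))

if __name__ == "__main__":
    model = Spectrum1DCNN()
    spectra = torch.randn(8, 1, 1024)           # batch of 8 preprocessed spectra
    print(model(spectra).shape)                 # torch.Size([8, 2])
```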


Subjects
Breast Neoplasms, Prostatic Neoplasms, Robotic Surgical Procedures, Raman Spectrum Analysis, Humans, Raman Spectrum Analysis/methods, Prostatic Neoplasms/pathology, Prostatic Neoplasms/diagnosis, Robotic Surgical Procedures/methods, Breast Neoplasms/pathology, Male, Female, Operating Rooms, Biopsy/methods, Brain Neoplasms/pathology, Brain Neoplasms/diagnosis
16.
NPJ Digit Med; 7(1): 138, 2024 May 23.
Article in English | MEDLINE | ID: mdl-38783037

ABSTRACT

The coronary angiogram is the gold standard for evaluating the severity of coronary artery disease stenoses. Presently, the assessment is conducted visually by cardiologists, a method that lacks standardization. This study introduces DeepCoro, a ground-breaking AI-driven pipeline that integrates advanced vessel tracking and a video-based Swin3D model that was trained and validated on a dataset comprised of 182,418 coronary angiography videos spanning 5 years. DeepCoro achieved a notable precision of 71.89% in identifying coronary artery segments and demonstrated a mean absolute error of 20.15% (95% CI: 19.88-20.40) and a classification AUROC of 0.8294 (95% CI: 0.8215-0.8373) in stenosis percentage prediction compared to traditional cardiologist assessments. When compared to two expert interventional cardiologists, DeepCoro achieved lower variability than the clinical reports (19.09%; 95% CI: 18.55-19.58 vs 21.00%; 95% CI: 20.20-21.76, respectively). In addition, DeepCoro can be fine-tuned to a different modality type. When fine-tuned on quantitative coronary angiography assessments, DeepCoro attained an even lower mean absolute error of 7.75% (95% CI: 7.37-8.07), underscoring the reduced variability inherent to this method. This study establishes DeepCoro as an innovative video-based, adaptable tool in coronary artery disease analysis, significantly enhancing the precision and reliability of stenosis assessment.
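
A hedged sketch, not the DeepCoro code: a video Swin transformer from torchvision adapted to regress a per-clip stenosis percentage; the clip shape, single-output head, and sigmoid scaling to 0-100% are assumptions for illustration.

```python
import torch
from torchvision.models.video import swin3d_t

model = swin3d_t(weights=None, num_classes=1)      # single regression output instead of class logits (assumed usage)
clip = torch.randn(2, 3, 16, 224, 224)             # (batch, channels, frames, height, width)
stenosis_pct = 100.0 * torch.sigmoid(model(clip)).squeeze(1)
print(stenosis_pct.shape)                          # torch.Size([2]) -> predicted % stenosis per clip
```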

17.
Can J Cardiol; 2024 May 31.
Article in English | MEDLINE | ID: mdl-38825181

ABSTRACT

Large language models (LLMs) have emerged as powerful tools in artificial intelligence, demonstrating remarkable capabilities in natural language processing and generation. In this article, we explore the potential applications of LLMs in enhancing cardiovascular care and research. We discuss how LLMs can be used to simplify complex medical information, improve patient-physician communication, and automate tasks such as summarising medical articles and extracting key information. In addition, we highlight the role of LLMs in categorising and analysing unstructured data, such as medical notes and test results, which could revolutionise data handling and interpretation in cardiovascular research. However, we also emphasise the limitations and challenges associated with LLMs, including potential biases, reasoning opacity, and the need for rigorous validation in medical contexts. This review provides a practical guide for cardiovascular professionals to understand and harness the power of LLMs while navigating their limitations. We conclude by discussing the future directions and implications of LLMs in transforming cardiovascular care and research.

18.
Can J Cardiol; 2024 Jun 15.
Article in English | MEDLINE | ID: mdl-38885787

ABSTRACT

The potential of artificial intelligence (AI) in medicine lies in its ability to enhance clinicians' capacity to analyse medical images, thereby improving diagnostic precision and accuracy and thus enhancing current tests. However, the integration of AI within health care is fraught with difficulties. Heterogeneity among health care system applications, reliance on proprietary closed-source software, and rising cybersecurity threats pose significant challenges. Moreover, before their deployment in clinical settings, AI models must demonstrate their effectiveness across a wide range of scenarios and must be validated by prospective studies, but doing so requires testing in an environment mirroring the clinical workflow, which is difficult to achieve without dedicated software. Finally, the use of AI techniques in health care raises significant legal and ethical issues, such as the protection of patient privacy, the prevention of bias, and the monitoring of device safety and effectiveness for regulatory compliance. This review describes challenges to AI integration in health care and provides guidelines on how to move forward. We describe an open-source solution that we developed, called PACS-AI, which integrates AI models into the picture archiving and communication system (PACS). This approach aims to broaden the evaluation of AI models by facilitating their integration and validation with existing medical imaging databases. PACS-AI may overcome many current barriers to AI deployment and offer a pathway toward responsible, fair, and effective deployment of AI models in health care. In addition, we propose a list of criteria and guidelines that AI researchers should adopt when publishing a medical AI model, to enhance standardisation and reproducibility.

19.
Sci Robot; 9(87): eadh8702, 2024 Feb 14.
Article in English | MEDLINE | ID: mdl-38354257

ABSTRACT

Using external actuation sources to navigate untethered drug-eluting microrobots in the bloodstream offers great promise in improving the selectivity of drug delivery, especially in oncology, but the current field forces are difficult to maintain with enough strength inside the human body (>70-centimeter-diameter range) to achieve this operation. Here, we present an algorithm to predict the optimal patient position with respect to gravity during endovascular microrobot navigation. Magnetic resonance navigation, using magnetic field gradients in clinical magnetic resonance imaging (MRI), is combined with the algorithm to improve the targeting efficiency of magnetic microrobots (MMRs). Using a dedicated microparticle injector, a high-precision MRI-compatible balloon inflation system, and a clinical MRI, MMRs were successfully steered into targeted lobes via the hepatic arteries of living pigs. The distribution ratio of the microrobots (roughly 2000 MMRs per pig) in the right liver lobe increased from 47.7 to 86.4% and increased in the left lobe from 52.2 to 84.1%. After passing through multiple vascular bifurcations, the number of MMRs reaching four different target liver lobes had a 1.7- to 2.6-fold increase in the navigation groups compared with the control group. Performing simulations on 19 patients with hepatocellular carcinoma (HCC) demonstrated that the proposed technique can meet the need for hepatic embolization in patients with HCC. Our technology offers selectable direction for actuator-based navigation of microrobots at the human scale.
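
A back-of-the-envelope sketch of why patient orientation relative to gravity matters here: it compares the magnetic gradient force available from a clinical MRI with the apparent weight of a magnetic microrobot in blood; every number below is a rough assumption, not a value from the paper.

```python
RHO_PARTICLE = 5000.0      # kg/m^3, assumed effective density of the magnetic microrobot
RHO_BLOOD = 1060.0         # kg/m^3, density of blood
M_SAT = 4.5e5              # A/m, assumed saturation magnetization of the magnetic material
GRADIENT = 0.04            # T/m, typical clinical MRI gradient amplitude
G = 9.81                   # m/s^2

magnetic_force_per_volume = M_SAT * GRADIENT                     # N/m^3 of steering force
apparent_weight_per_volume = (RHO_PARTICLE - RHO_BLOOD) * G      # N/m^3 of gravity minus buoyancy

print(f"magnetic steering force: {magnetic_force_per_volume:.0f} N/m^3")
print(f"apparent weight:         {apparent_weight_per_volume:.0f} N/m^3")
print("ratio (steering / weight):", round(magnetic_force_per_volume / apparent_weight_per_volume, 2))
```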


Subjects
Hepatocellular Carcinoma, Liver Neoplasms, Robotics, Humans, Animals, Swine, Hepatic Artery/diagnostic imaging, Liver Neoplasms/diagnostic imaging
20.
Magn Reson Med; 69(2): 553-62, 2013 Feb.
Article in English | MEDLINE | ID: mdl-22488794

ABSTRACT

There has been resurgent interest in intravoxel incoherent motion (IVIM) MR imaging to obtain perfusion as well as diffusion information on lesions, in which diffusion is modeled as Gaussian diffusion. However, it has been observed that this diffusion deviates from the expected monoexponential decay at high b-values, and the reported perfusion in the prostate is contrary to findings from dynamic contrast-enhanced (DCE) MRI studies and angiogenesis. The purpose of this work was therefore to evaluate the effect of different b-values on IVIM perfusion fractions (f) and diffusion coefficients (D) for prostate cancer detection. The results show that both parameters depended heavily on the b-values, and those derived without the highest b-value correlated best with the results from DCE-MRI studies; specifically, f was significantly elevated in tumors compared with normal tissue (7.2% vs. 3.7%), in accordance with the volume transfer constant (K(trans); 0.39 vs. 0.18 min(-1)) and plasma fractional volume (v(p); 8.4% vs. 3.4%). In conclusion, it is critical to choose an appropriate range of b-values or to include the non-Gaussian diffusion contribution in order to obtain unbiased IVIM measurements. These measurements could eliminate the need for DCE-MRI, which is especially relevant for patients who cannot receive intravenous gadolinium-based contrast media.
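
A sketch of the standard IVIM biexponential model and of how the fitted perfusion fraction f and diffusion coefficient D can shift when the highest b-value is excluded; the b-values, tissue parameters, noise level, and fit bounds are illustrative, not the study's protocol.

```python
import numpy as np
from scipy.optimize import curve_fit

def ivim(b, f, d_star, d):
    """S(b)/S0 = f*exp(-b*D*) + (1-f)*exp(-b*D)."""
    return f * np.exp(-b * d_star) + (1 - f) * np.exp(-b * d)

b_all = np.array([0, 10, 25, 50, 100, 200, 400, 600, 800, 1000], dtype=float)  # s/mm^2
rng = np.random.default_rng(0)
signal = ivim(b_all, f=0.07, d_star=0.02, d=0.0012) + 0.005 * rng.standard_normal(b_all.size)

bounds = ([0, 0.003, 0], [0.5, 0.1, 0.003])        # loose physiological-style bounds (assumed)
p_all, _ = curve_fit(ivim, b_all, signal, p0=[0.1, 0.01, 0.001], bounds=bounds)
p_low, _ = curve_fit(ivim, b_all[:-1], signal[:-1], p0=[0.1, 0.01, 0.001], bounds=bounds)

print("all b-values:      f=%.3f, D=%.5f" % (p_all[0], p_all[2]))
print("highest b dropped: f=%.3f, D=%.5f" % (p_low[0], p_low[2]))
```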


Subjects
Algorithms, Computer-Assisted Image Interpretation/methods, Three-Dimensional Imaging/methods, Magnetic Resonance Imaging/methods, Pathologic Neovascularization/pathology, Perfusion Imaging/methods, Prostatic Neoplasms/pathology, Aged, Humans, Image Enhancement/methods, Male, Middle Aged, Pathologic Neovascularization/etiology, Prostatic Neoplasms/complications, Reproducibility of Results, Sensitivity and Specificity, Tumor Burden