Results 1 - 20 of 112
1.
Dev Neurosci ; 45(4): 210-222, 2023.
Article in English | MEDLINE | ID: mdl-36822171

ABSTRACT

Macrocephaly has been associated with neurodevelopmental disorders; however, it has mainly been studied in pathological or high-risk populations, and little is known about its impact, as an isolated trait, on brain development in the general population. Electroencephalographic (EEG) power spectral density (PSD) and signal complexity have been shown to be sensitive to neurodevelopment and its alterations. We aimed to investigate the impact of macrocephaly, as an isolated trait, on the EEG signal as measured by PSD and multiscale entropy during the first year of life. We recorded high-density EEG resting-state activity of 74 healthy full-term infants aged between 3 and 11 months: 50 controls (26 girls) and 24 macrocephalic infants (12 girls). We used linear regression models to assess group and age effects on EEG PSD and signal complexity. Sex and brain volume measures, obtained via 3D transfontanellar ultrasound, were also included in the models to evaluate their contribution. Our results showed lower PSD in the low alpha (8-10 Hz) frequency band and lower complexity in the macrocephalic group compared to the control group. In addition, we found an increase in low alpha (8.5-10 Hz) PSD and in the complexity index with age. These findings suggest that macrocephaly as an isolated trait has a significant impact on brain activity during the first year of life.
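The two EEG measures named in this abstract, band-limited PSD and multiscale (sample) entropy, can be sketched briefly. This is an illustrative NumPy/SciPy sketch, not the authors' pipeline; the Welch window length and the entropy parameters (m = 2, r = 0.2 SD) are common defaults assumed here.

```python
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, band):
    """Average power spectral density within a frequency band (Welch)."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def sample_entropy(x, m=2, r_factor=0.2):
    """Negative log of the conditional probability that sequences matching
    for m points (Chebyshev distance <= r) also match for m + 1 points."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        dists = np.max(np.abs(templates[:, None] - templates[None, :]), axis=2)
        return (dists <= r).sum() - len(templates)  # exclude self-matches
    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, scales=(1, 2, 3)):
    """Coarse-grain the signal at each scale, then take sample entropy."""
    out = []
    for s in scales:
        n = len(x) // s
        coarse = np.asarray(x[:n * s]).reshape(n, s).mean(axis=1)
        out.append(sample_entropy(coarse))
    return out
```

A regular oscillation has low sample entropy, while broadband noise has high entropy, which is what makes the measure useful as a complexity index.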


Subjects
Electroencephalography; Megalencephaly; Female; Humans; Infant; Entropy; Electroencephalography/methods; Brain
2.
Radiology ; 309(1): e230659, 2023 10.
Article in English | MEDLINE | ID: mdl-37787678

ABSTRACT

Background Screening for nonalcoholic fatty liver disease (NAFLD) is suboptimal due to the subjective interpretation of US images. Purpose To evaluate the agreement and diagnostic performance of radiologists and a deep learning model in grading hepatic steatosis in NAFLD at US, with biopsy as the reference standard. Materials and Methods This retrospective study included patients with NAFLD and control patients without hepatic steatosis who underwent abdominal US and contemporaneous liver biopsy from September 2010 to October 2019. Six readers visually graded steatosis on US images twice, 2 weeks apart. Reader agreement was assessed with use of κ statistics. Three deep learning techniques applied to B-mode US images were used to classify dichotomized steatosis grades. Classification performance of human radiologists and the deep learning model for dichotomized steatosis grades (S0, S1, S2, and S3) was assessed with area under the receiver operating characteristic curve (AUC) on a separate test set. Results The study included 199 patients (mean age, 53 years ± 13 [SD]; 101 men). On the test set (n = 52), radiologists had fair interreader agreement (0.34 [95% CI: 0.31, 0.37]) for classifying steatosis grades S0 versus S1 or higher, while AUCs were between 0.49 and 0.84 for radiologists and 0.85 (95% CI: 0.83, 0.87) for the deep learning model. For S0 or S1 versus S2 or S3, radiologists had fair interreader agreement (0.30 [95% CI: 0.27, 0.33]), while AUCs were between 0.57 and 0.76 for radiologists and 0.73 (95% CI: 0.71, 0.75) for the deep learning model. For S2 or lower versus S3, radiologists had fair interreader agreement (0.37 [95% CI: 0.33, 0.40]), while AUCs were between 0.52 and 0.81 for radiologists and 0.67 (95% CI: 0.64, 0.69) for the deep learning model. Conclusion Deep learning approaches applied to B-mode US images provided performance comparable with that of human readers for the detection and grading of hepatic steatosis. Published under a CC BY 4.0 license.
Supplemental material is available for this article. See also the editorial by Tuthill in this issue.
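The two statistics reported above, interreader agreement (Cohen's κ) and AUC, have simple closed forms; a minimal NumPy sketch on toy data (not the study's readings):

```python
import numpy as np

def auc_mann_whitney(labels, scores):
    """AUC equals the probability that a random positive case is scored
    above a random negative case (ties count one half)."""
    labels = np.asarray(labels, bool)
    pos, neg = np.asarray(scores)[labels], np.asarray(scores)[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def cohens_kappa(a, b):
    """Agreement between two readers, corrected for chance agreement."""
    a, b = np.asarray(a), np.asarray(b)
    cats = np.union1d(a, b)
    po = (a == b).mean()                                  # observed agreement
    pe = sum((a == c).mean() * (b == c).mean() for c in cats)  # chance
    return (po - pe) / (1 - pe)
```

For example, `auc_mann_whitney([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])` gives 0.75: three of the four positive/negative score pairs are correctly ordered.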


Subjects
Deep Learning; Elasticity Imaging Techniques; Non-alcoholic Fatty Liver Disease; Male; Humans; Middle Aged; Non-alcoholic Fatty Liver Disease/diagnostic imaging; Non-alcoholic Fatty Liver Disease/pathology; Liver/diagnostic imaging; Liver/pathology; Retrospective Studies; Elasticity Imaging Techniques/methods; ROC Curve; Biopsy/methods
3.
J Transl Med ; 21(1): 507, 2023 07 27.
Article in English | MEDLINE | ID: mdl-37501197

ABSTRACT

BACKGROUND: Finding a noninvasive radiomic surrogate of tumor immune features could help identify patients more likely to respond to novel immune checkpoint inhibitors. In particular, CD73 is an ectonucleotidase that catalyzes the breakdown of extracellular AMP into immunosuppressive adenosine and can be blocked by therapeutic antibodies. High CD73 expression in colorectal cancer liver metastases (CRLM) resected with curative intent is associated with early recurrence and shorter patient survival. The aim of this study was therefore to evaluate whether machine learning analysis of preoperative liver CT scans could estimate high vs. low CD73 expression in CRLM and whether such a radiomic score would have prognostic significance. METHODS: We trained an Attentive Interpretable Tabular Learning (TabNet) model to predict, from preoperative CT images, stratified expression levels of CD73 (CD73High vs. CD73Low) assessed by immunofluorescence (IF) on tissue microarrays. Radiomic features were extracted from 160 segmented CRLM of 122 patients with matched IF data, preprocessed, and used to train the predictive model. We applied five-fold cross-validation and validated the performance on a hold-out test set. RESULTS: TabNet provided areas under the receiver operating characteristic curve of 0.95 (95% CI 0.87 to 1.0) and 0.79 (0.65 to 0.92) on the training and hold-out test sets, respectively, and outperformed other machine learning models. The TabNet-derived score, termed rad-CD73, was positively correlated with CD73 histological expression in matched CRLM (Spearman's ρ = 0.6004; P < 0.0001). The median time to recurrence (TTR) and disease-specific survival (DSS) after CRLM resection in rad-CD73High vs. rad-CD73Low patients were 13.0 vs. 23.6 months (P = 0.0098) and 53.4 vs. 126.0 months (P = 0.0222), respectively.
The prognostic value of rad-CD73 was independent of the standard clinical risk score, for both TTR (HR = 2.11, 95% CI 1.30 to 3.45, P < 0.005) and DSS (HR = 1.88, 95% CI 1.11 to 3.18, P = 0.020). CONCLUSIONS: Our findings reveal promising results for non-invasive CT-scan-based prediction of CD73 expression in CRLM and warrant further validation as to whether rad-CD73 could assist oncologists as a biomarker of prognosis and response to immunotherapies targeting the adenosine pathway.
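The five-fold cross-validation protocol described above can be sketched generically. The study's model is TabNet on radiomic features; as a stand-in, this sketch runs a nearest-centroid classifier on synthetic two-class features (the data and the classifier are illustrative assumptions, not the study's):

```python
import numpy as np

def kfold_indices(n, k, seed=0):
    """Shuffle once, then yield (train, validation) index arrays per fold."""
    idx = np.random.default_rng(seed).permutation(n)
    for fold in np.array_split(idx, k):
        yield np.setdiff1d(idx, fold), fold

def fit_centroids(X, y):
    """Nearest-centroid 'model': one mean vector per class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, X):
    classes = sorted(centroids)
    dists = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in classes])
    return np.array(classes)[dists.argmin(axis=0)]

# synthetic two-class "radiomic" features (stand-in for CD73-high vs. -low)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (60, 5)), rng.normal(3, 1, (60, 5))])
y = np.repeat([0, 1], 60)

accuracies = []
for train_idx, val_idx in kfold_indices(len(X), 5):
    model = fit_centroids(X[train_idx], y[train_idx])
    accuracies.append((predict(model, X[val_idx]) == y[val_idx]).mean())
```

In the study design, a hold-out test set is kept entirely outside this loop and scored only once, with the final model.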


Subjects
Colorectal Neoplasms; Liver Neoplasms; Humans; Adenosine; Liver Neoplasms/diagnostic imaging; Prognosis; Retrospective Studies; Tomography, X-Ray Computed; 5'-Nucleotidase
4.
Opt Express ; 31(1): 396-410, 2023 Jan 02.
Article in English | MEDLINE | ID: mdl-36606975

ABSTRACT

Intra-arterial catheter guidance is instrumental to the success of minimally invasive procedures such as percutaneous transluminal angioplasty. However, traditional device tracking methods, such as electromagnetic or infrared sensors, exhibit drawbacks such as magnetic interference or line-of-sight requirements. In this work, shape sensing of bends of different curvatures and lengths is demonstrated both asynchronously and in real time using optical frequency domain reflectometry (OFDR) with a polymer-extruded optical fiber triplet with enhanced backscattering properties. Simulations on digital phantoms showed that reconstruction accuracy is of the order of the interrogator's spatial resolution (millimeters) for sensing lengths of less than 1 m and a high SNR.
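Fiber shape sensing ultimately integrates local curvature along arc length to recover the device's shape. A minimal planar sketch (the real system additionally resolves the 3D bend direction from the fiber triplet, which is omitted here; the constant-curvature test case is an illustrative assumption):

```python
import numpy as np

def shape_from_curvature(kappa, ds):
    """Integrate curvature along arc length: the heading angle is the running
    sum of kappa * ds, and position the running sum of unit headings * ds."""
    theta = np.cumsum(kappa * ds)
    x = np.cumsum(np.cos(theta) * ds)
    y = np.cumsum(np.sin(theta) * ds)
    return x, y

# constant curvature of 1 /m traced over length pi/2 m: a quarter circle
# of radius 1 m, whose endpoint should land near (1, 1)
ds = 0.001
kappa = np.full(1571, 1.0)
x, y = shape_from_curvature(kappa, ds)
```

Smaller `ds` (finer spatial resolution of the interrogator) directly reduces the integration error of the reconstruction.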


Subjects
Cannula; Optical Fibers; Catheters, Indwelling; Phantoms, Imaging; Polymers
5.
J Appl Clin Med Phys ; 23(8): e13655, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35661390

ABSTRACT

PURPOSE: External radiation therapy planning is a highly complex and tedious process, as it involves treating large target volumes, prescribing several levels of doses, and avoiding irradiation of critical structures such as organs at risk close to the tumor target. This requires highly trained dosimetrists and physicists to generate a personalized plan and adapt it as treatment evolves, thus affecting the overall tumor control and patient outcomes. Our aim is to achieve accurate dose predictions for head and neck (H&N) cancer patients on a challenging in-house dataset that reflects realistic variability, and to further compare and validate the method on a public dataset. METHODS: We propose a three-dimensional (3D) deep neural network that combines a hierarchically dense architecture with an attention U-net (HDA U-net). We investigate a domain-knowledge objective, combining a weighted mean squared error (MSE) with a dose-volume histogram (DVH) loss function. The proposed HDA U-net using the MSE-DVH loss function is compared with two state-of-the-art U-net variants on two radiotherapy datasets of H&N cases. These include reference dose plans, computed tomography (CT) information, organs at risk (OARs), and planning target volume (PTV) delineations. All models were evaluated using coverage, homogeneity, and conformity metrics as well as mean dose error and DVH curves. RESULTS: Overall, the proposed architecture outperformed the comparative state-of-the-art methods, reaching 0.95 (0.98) on D95 coverage, 1.06 (1.07) on the maximum dose value, 0.10 (0.08) on homogeneity, 0.53 (0.79) on the conformity index, and attaining the lowest mean dose error on PTVs of 1.7% (1.4%) for the in-house (public) dataset. The improvements are statistically significant (p < 0.05) for homogeneity and maximum dose value compared with the closest baseline. All models offer near real-time prediction, measured between 0.43 and 0.88 s per volume.
CONCLUSION: The proposed method achieved similar performance on both realistic in-house data and public data compared to the attention U-net with a DVH loss, and outperformed other methods such as HD U-net and HDA U-net with standard MSE losses. The use of the DVH objective for training showed consistent improvements to the baselines on most metrics, supporting its added benefit in H&N cancer cases. The quick prediction time of the proposed method allows for real-time applications, providing physicians a method to generate an objective end goal for the dosimetrist to use as reference for planning. This could considerably reduce the number of iterations between the two expert physicians thus reducing the overall treatment planning time.
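The combined MSE-DVH objective can be sketched in NumPy. Note that the paper trains with a DVH term inside a deep network (which requires a differentiable relaxation); this plain version only illustrates the shape of the objective, and the dose-bin grid and weighting are assumptions of the sketch:

```python
import numpy as np

def dvh(dose, bins):
    """Cumulative dose-volume histogram: fraction of structure voxels
    receiving at least each dose level."""
    return np.array([(dose >= b).mean() for b in bins])

def mse_dvh_loss(pred, target, bins, weight=0.5):
    """Weighted sum of voxelwise MSE and MSE between the two DVH curves."""
    mse = np.mean((pred - target) ** 2)
    dvh_term = np.mean((dvh(pred, bins) - dvh(target, bins)) ** 2)
    return (1 - weight) * mse + weight * dvh_term
```

The DVH term penalizes clinically meaningful deviations (how much of a structure receives a given dose) rather than only per-voxel differences.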


Subjects
Head and Neck Neoplasms; Radiotherapy, Intensity-Modulated; Head and Neck Neoplasms/radiotherapy; Humans; Organs at Risk; Radiotherapy Dosage; Radiotherapy Planning, Computer-Assisted/methods; Radiotherapy, Intensity-Modulated/methods
6.
Radiographics ; 41(5): 1427-1445, 2021.
Article in English | MEDLINE | ID: mdl-34469211

ABSTRACT

Deep learning is a class of machine learning methods that has been successful in computer vision. Unlike traditional machine learning methods that require hand-engineered feature extraction from input images, deep learning methods learn the image features by which to classify data. Convolutional neural networks (CNNs), the core of deep learning methods for imaging, are multilayered artificial neural networks with weighted connections between neurons that are iteratively adjusted through repeated exposure to training data. These networks have numerous applications in radiology, particularly in image classification, object detection, semantic segmentation, and instance segmentation. The authors provide an update on a recent primer on deep learning for radiologists, and they review terminology, data requirements, and recent trends in the design of CNNs; illustrate building blocks and architectures adapted to computer vision tasks, including generative architectures; and discuss training and validation, performance metrics, visualization, and future directions. Familiarity with the key concepts described will help radiologists understand advances of deep learning in medical imaging and facilitate clinical adoption of these techniques. Online supplemental material is available for this article. ©RSNA, 2021.
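The convolution underlying a CNN layer is a small sliding dot product; a plain NumPy version (a real layer adds multiple learned kernels, bias terms, and a nonlinearity, and the edge-detecting kernel here is a hand-picked illustration, not a learned one):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution (cross-correlation, as in CNN layers)."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# a vertical-edge kernel responds only at the boundary of a half-bright image
img = np.zeros((5, 6))
img[:, 3:] = 1.0
edge = conv2d(img, np.array([[-1.0, 1.0]]))
```

During training, the kernel weights are exactly the "weighted connections between neurons" that are iteratively adjusted, so the network learns which features (edges, textures, and eventually organ- or lesion-like patterns) to extract.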


Subjects
Deep Learning; Diagnostic Imaging; Humans; Image Processing, Computer-Assisted; Machine Learning; Neural Networks, Computer; Radiologists
7.
PLoS Med ; 17(8): e1003281, 2020 08.
Article in English | MEDLINE | ID: mdl-32797086

ABSTRACT

BACKGROUND: Prostate cancer (PC) is the most frequently diagnosed cancer in North American men. Pathologists are in critical need of accurate biomarkers to characterize PC, particularly to confirm the presence of intraductal carcinoma of the prostate (IDC-P), an aggressive histopathological variant for which therapeutic options are now available. Our aim was to identify IDC-P with Raman micro-spectroscopy (RµS) and machine learning technology following a protocol suitable for routine clinical histopathology laboratories. METHODS AND FINDINGS: We used RµS to differentiate IDC-P from PC, as well as PC and IDC-P from benign tissue on formalin-fixed paraffin-embedded first-line radical prostatectomy specimens (embedded in tissue microarrays [TMAs]) from 483 patients treated in 3 Canadian institutions between 1993 and 2013. The main measures were the presence or absence of IDC-P and of PC, regardless of the clinical outcomes. The median age at radical prostatectomy was 62 years. Most of the specimens from the first cohort (Centre hospitalier de l'Université de Montréal) were of Gleason score 3 + 3 = 6 (51%) while most of the specimens from the 2 other cohorts (University Health Network and Centre hospitalier universitaire de Québec-Université Laval) were of Gleason score 3 + 4 = 7 (51% and 52%, respectively). Most of the 483 patients were pT2 stage (44%-69%), and pT3a (22%-49%) was more frequent than pT3b (9%-12%). To investigate the prostate tissue of each patient, 2 consecutive sections of each TMA block were cut. The first section was transferred onto a glass slide to perform immunohistochemistry with H&E counterstaining for cell identification. The second section was placed on an aluminum slide, dewaxed, and then used to acquire an average of 7 Raman spectra per specimen (between 4 and 24 Raman spectra, 4 acquisitions/TMA core). 
Raman spectra of each cell type were then analyzed to retrieve tissue-specific molecular information and to generate classification models using machine learning technology. Models were trained and cross-validated using data from 1 institution. Accuracy, sensitivity, and specificity were 87% ± 5%, 86% ± 6%, and 89% ± 8%, respectively, to differentiate PC from benign tissue, and 95% ± 2%, 96% ± 4%, and 94% ± 2%, respectively, to differentiate IDC-P from PC. The trained models were then tested on Raman spectra from 2 independent institutions, reaching accuracies, sensitivities, and specificities of 84% and 86%, 84% and 87%, and 81% and 82%, respectively, to diagnose PC, and of 85% and 91%, 85% and 88%, and 86% and 93%, respectively, for the identification of IDC-P. IDC-P could further be differentiated from high-grade prostatic intraepithelial neoplasia (HGPIN), a pre-malignant intraductal proliferation that can be mistaken for IDC-P, with accuracies, sensitivities, and specificities > 95% in both training and testing cohorts. As we used stringent criteria to diagnose IDC-P, the main limitation of our study is the exclusion of borderline, difficult-to-classify lesions from our datasets. CONCLUSIONS: In this study, we developed classification models for the analysis of RµS data to differentiate IDC-P, PC, and benign tissue, including HGPIN. RµS could be a next-generation histopathological technique used to reinforce the identification of high-risk PC patients and lead to more precise diagnosis of IDC-P.
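The sensitivity and specificity figures above come from a binary confusion matrix; a minimal sketch with toy labels (not study data):

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    y_true = np.asarray(y_true, bool)
    y_pred = np.asarray(y_pred, bool)
    tp = np.sum(y_true & y_pred)
    fn = np.sum(y_true & ~y_pred)
    tn = np.sum(~y_true & ~y_pred)
    fp = np.sum(~y_true & y_pred)
    return tp / (tp + fn), tn / (tn + fp)
```

Reporting both, rather than accuracy alone, matters when the classes (e.g. IDC-P vs. PC spectra) are imbalanced.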


Subjects
Carcinoma, Intraductal, Noninfiltrating/diagnostic imaging; Machine Learning/standards; Nonlinear Optical Microscopy/standards; Prostatic Neoplasms/diagnostic imaging; Aged; Canada/epidemiology; Carcinoma, Intraductal, Noninfiltrating/epidemiology; Carcinoma, Intraductal, Noninfiltrating/pathology; Case-Control Studies; Cohort Studies; Humans; Male; Middle Aged; Nonlinear Optical Microscopy/methods; Prostatic Neoplasms/epidemiology; Prostatic Neoplasms/pathology; Reproducibility of Results; Retrospective Studies
8.
J Digit Imaging ; 33(4): 937-945, 2020 08.
Article in English | MEDLINE | ID: mdl-32193665

ABSTRACT

In developed countries, colorectal cancer is the second leading cause of cancer-related mortality. Chemotherapy is considered a standard treatment for colorectal liver metastases (CLM). Among patients who develop CLM, assessment of the response to chemotherapy is often required to determine the need for second-line chemotherapy and eligibility for surgery. However, while FOLFOX-based regimens are typically used for CLM treatment, the identification of responsive patients remains elusive. Computer-aided diagnosis systems may provide insight into the classification of liver metastases identified on diagnostic images. In this paper, we propose a fully automated framework based on deep convolutional neural networks (DCNN) that first differentiates treated from untreated lesions to identify new lesions appearing on CT scans, followed by a fully connected neural network that predicts, from untreated lesions on pre-treatment computed tomography (CT) of patients with CLM undergoing chemotherapy, their response to a FOLFOX with bevacizumab regimen as first-line treatment. The ground truth for assessment of treatment response was the histopathology-determined tumor regression grade. Our DCNN approach, trained on 444 lesions from 202 patients, achieved accuracies of 91% for differentiating treated and untreated lesions and 78% for predicting response to the FOLFOX-based chemotherapy regimen. Experimental results showed that our method outperformed traditional machine learning algorithms and may allow for the early detection of non-responsive patients.


Subjects
Liver Neoplasms; Colorectal Neoplasms/diagnostic imaging; Colorectal Neoplasms/drug therapy; Humans; Liver Neoplasms/diagnostic imaging; Liver Neoplasms/drug therapy; Liver Neoplasms/secondary; Machine Learning; Neural Networks, Computer; Tomography, X-Ray Computed
9.
Opt Express ; 27(10): 13895-13909, 2019 May 13.
Article in English | MEDLINE | ID: mdl-31163847

ABSTRACT

We propose a novel device, the Random Optical Grating by Ultraviolet or ultrafast laser Exposure (ROGUE): a new type of fiber Bragg grating (FBG) exhibiting a weak reflection over a large bandwidth that is independent of the length of the grating. This FBG is fabricated simply by randomly dithering the phase during the writing process. The grating has enhanced backscatter, several orders of magnitude above the typical Rayleigh backscatter of standard SMF-28 optical fiber. The grating is used for distributed sensing with optical frequency domain reflectometry (OFDR), allowing a significant increase in signal-to-noise ratio for strain and temperature measurement. This enhancement results in significantly lower strain and temperature noise levels and accuracy error, without sacrificing spatial resolution. Using this method, we demonstrate a sensor with a backscatter level 50 dB higher than standard unexposed SMF-28, which can thus compensate for increased loss in the system.
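The effect of random phase dithering can be illustrated qualitatively under a weak-grating (first Born) approximation, where the reflection spectrum is taken as the power spectrum of the index modulation. All parameters below are arbitrary illustrative choices, not the ROGUE fabrication values:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4096
z = np.arange(n)

# uniform grating: fixed period -> a narrow, strong reflection peak
uniform = np.cos(0.5 * np.pi * z)
# randomly dithered grating: a phase random walk destroys long-range
# periodicity -> weak reflection spread over a broad band
dithered = np.cos(0.5 * np.pi * z + np.cumsum(rng.normal(0.0, 0.5, n)))

def reflection_spectrum(grating):
    """Weak-grating approximation: spectrum ~ |FFT of index modulation|^2."""
    return np.abs(np.fft.rfft(grating)) ** 2

s_uniform = reflection_spectrum(uniform)
s_dithered = reflection_spectrum(dithered)
```

The total scattered power is roughly conserved, but the dithered grating redistributes it over many wavelengths at a much lower peak level, which is the weak broadband reflection the abstract describes.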

10.
Analyst ; 144(22): 6517-6532, 2019 Nov 04.
Article in English | MEDLINE | ID: mdl-31647061

ABSTRACT

Raman spectroscopy is a promising tool for neurosurgical guidance and cancer research. Quantitative analysis of the Raman signal from living tissues is, however, limited: the molecular composition of tissue is convoluted and influenced by clinical factors, and access to data is limited. To ensure acceptance of this technology by clinicians and cancer scientists, we need to adapt the analytical methods to more closely model the Raman-generating process. Our objective is to use feature engineering to develop a new representation of spectral data specifically tailored for brain diagnosis that improves interpretability of the Raman signal while retaining enough information to accurately predict tissue content. The method consists of fitting Raman bands that consistently appear in the brain Raman literature, and generating new features representing the pairwise interactions between bands and the interaction between bands and patient age. Our technique was applied to a dataset of 547 in situ Raman spectra from 65 patients undergoing glioma resection. It showed superior predictive capacity compared with principal component analysis dimensionality reduction. After analysis through a Bayesian framework, we were able to identify the oncogenic processes that characterize glioma: increased nucleic acid content, overexpression of type IV collagen, and a shift in the primary metabolic engine. Our results demonstrate how this mathematical transformation of the Raman signal allows the first biological, statistically robust analysis of in vivo Raman spectra from brain tissue.
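Band fitting of this kind is least-squares fitting of line-shape functions to the spectrum. A minimal sketch with a single Lorentzian on synthetic data; the 1004 cm⁻¹ position (a phenylalanine band from the general Raman literature), the line shape, and the example age are illustrative assumptions, not the paper's band list:

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, amp, center, width):
    """Lorentzian line shape, peak value `amp` at `center`."""
    return amp * width ** 2 / ((x - center) ** 2 + width ** 2)

# synthetic band near 1004 cm^-1 with additive noise
shift = np.linspace(950, 1050, 200)
rng = np.random.default_rng(0)
spectrum = lorentzian(shift, 1.0, 1004.0, 6.0) + rng.normal(0, 0.01, shift.size)

popt, _ = curve_fit(lorentzian, shift, spectrum, p0=[0.8, 1000.0, 5.0])
amp, center, width = popt

# engineered features as in the abstract: band x band and band x age
# interactions (the age value here is hypothetical)
age = 65.0
band_age_feature = amp * age
```

Repeating this over a list of literature bands yields a small, interpretable feature vector (amplitudes, centers, widths, and their interactions) in place of raw spectra or opaque PCA components.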


Subjects
Brain Neoplasms/metabolism; Glioma/metabolism; Spectrum Analysis, Raman/methods; Bayes Theorem; Brain Neoplasms/chemistry; Collagen Type IV/metabolism; Datasets as Topic; Female; Glioma/chemistry; Humans; Intraoperative Care; Light; Male; Middle Aged; Nucleic Acids/metabolism; Principal Component Analysis; Retrospective Studies
11.
Radiographics ; 37(7): 2113-2131, 2017.
Article in English | MEDLINE | ID: mdl-29131760

ABSTRACT

Deep learning is a class of machine learning methods that are gaining success and attracting interest in many domains, including computer vision, speech recognition, natural language processing, and playing games. Deep learning methods produce a mapping from raw inputs to desired outputs (eg, image classes). Unlike traditional machine learning methods, which require hand-engineered feature extraction from inputs, deep learning methods learn these features directly from data. With the advent of large datasets and increased computing power, these methods can produce models with exceptional performance. These models are multilayer artificial neural networks, loosely inspired by biologic neural systems. Weighted connections between nodes (neurons) in the network are iteratively adjusted based on example pairs of inputs and target outputs by back-propagating a corrective error signal through the network. For computer vision tasks, convolutional neural networks (CNNs) have proven to be effective. Recently, several clinical applications of CNNs have been proposed and studied in radiology for classification, detection, and segmentation tasks. This article reviews the key concepts of deep learning for clinical radiologists, discusses technical requirements, describes emerging applications in clinical radiology, and outlines limitations and future directions in this field. Radiologists should become familiar with the principles and potential applications of deep learning in medical imaging. ©RSNA, 2017.


Subjects
Image Processing, Computer-Assisted/methods; Learning; Neural Networks, Computer; Radiology Information Systems; Radiology/education; Algorithms; Humans; Machine Learning
12.
Eur Spine J ; 25(10): 3104-3113, 2016 10.
Article in English | MEDLINE | ID: mdl-26851954

ABSTRACT

PURPOSE: The classification of three-dimensional (3D) spinal deformities remains an open question in adolescent idiopathic scoliosis. Recent studies have investigated pattern classification based on explicit clinical parameters. An emerging trend, however, seeks to simplify complex spine geometries and capture the predominant modes of variability of the deformation. The objective of this study was to perform a 3D characterization and morphology analysis of thoracic and thoracolumbar scoliotic spines (cross-sectional study). The presence of subgroups within all Lenke types was investigated by analyzing a simplified representation of the geometric 3D reconstruction of each patient's spine, in order to establish the basis for a new classification approach based on a machine learning algorithm. METHODS: Three-dimensional reconstructions of coronal and sagittal standing radiographs of 663 patients, for a total of 915 visits, covering all types of deformities in adolescent idiopathic scoliosis (single, double and triple curves) and reviewed by the 3D Classification Committee of the Scoliosis Research Society, were analyzed using a machine learning algorithm based on stacked auto-encoders. The codes produced for each 3D reconstruction were then grouped together using an unsupervised clustering method. For each identified cluster, the Cobb angle and orientation of the plane of maximum curvature in the thoracic and lumbar curves, axial rotation of the apical vertebrae, kyphosis (T4-T12), lordosis (L1-S1) and pelvic incidence were obtained. No assumptions were made regarding grouping tendencies in the data, nor was the number of clusters predefined. RESULTS: Eleven groups were revealed from the 915 visits, wherein the location of the main curve, kyphosis and lordosis were the three major discriminating factors, with slight overlap between groups. Two main groups emerged among the eleven clusters of patients: one with small thoracic deformities and large lumbar deformities, and another with large thoracic deformities and small lumbar curvature. The main factor that allowed identifying eleven distinct subgroups among the surgical patients (major curves) from Lenke type-1 to type-6 curves was the location of the apical vertebra, as identified by the planes of maximum curvature obtained in both thoracic and thoracolumbar segments. Both the hypokyphotic and hyperkyphotic clusters were primarily composed of patients with Lenke 1-4 curve types, while a hyperlordotic cluster was composed of patients with Lenke 5 and 6 curve types. CONCLUSION: The stacked auto-encoder analysis technique helped to simplify the complex nature of 3D spine models while preserving the intrinsic properties that are typically measured with explicit parameters derived from the 3D reconstruction.
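The unsupervised clustering step can be sketched with a plain k-means on low-dimensional codes. The "codes" below are synthetic stand-ins; in the study they come from the stacked auto-encoder, and the cluster count is not fixed in advance there:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Lloyd's algorithm with farthest-point initialization."""
    rng = np.random.default_rng(seed)
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):  # seed each next center far from existing ones
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[d.argmax()])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        labels = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# stand-in for auto-encoder codes: two well-separated 3-D clusters
rng = np.random.default_rng(0)
codes = np.vstack([rng.normal(0, 0.5, (50, 3)), rng.normal(5, 0.5, (50, 3))])
labels, centers = kmeans(codes, 2)
```

Each recovered cluster is then characterized clinically (Cobb angle, plane of maximum curvature, kyphosis, lordosis), which is how the eleven groups in the study acquire their interpretation.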


Subjects
Imaging, Three-Dimensional/methods; Lumbar Vertebrae/diagnostic imaging; Scoliosis/classification; Thoracic Vertebrae/diagnostic imaging; Adolescent; Algorithms; Analysis of Variance; Cross-Sectional Studies; Female; Humans; Incidence; Kyphosis/diagnostic imaging; Lordosis/diagnostic imaging; Male; Retrospective Studies; Scoliosis/diagnostic imaging
13.
Neuroimage ; 98: 528-36, 2014 Sep.
Article in English | MEDLINE | ID: mdl-24780696

ABSTRACT

Spinal cord segmentation provides measures of atrophy and facilitates group analysis via inter-subject correspondence. Automating this procedure enables studies with large throughput and minimizes user bias. Although several automatic segmentation methods exist, they are often restricted in terms of image contrast and field of view. This paper presents a new automatic segmentation method (PropSeg) optimized for robustness, accuracy and speed. The algorithm is based on the propagation of a deformable model and is divided into three parts. First, an initialization step detects the spinal cord position and orientation using a circular Hough transform on multiple axial slices rostral and caudal to the starting plane and builds an initial elliptical tubular mesh. Second, a low-resolution deformable model is propagated along the spinal cord. To deal with highly variable contrast levels between the spinal cord and the cerebrospinal fluid, the deformation is coupled with a local contrast-to-noise adaptation at each iteration. Third, a refinement process and a global deformation are applied to the propagated mesh to provide an accurate segmentation of the spinal cord. Validation was performed in 15 healthy subjects and two patients with spinal cord injury, using T1- and T2-weighted images of the entire spinal cord and multiecho T2*-weighted images. Our method was compared against manual segmentation and against an active surface method. Results show high precision for all MR sequences: Dice coefficients were 0.9 for the T1- and T2-weighted cohorts and 0.86 for the T2*-weighted images. The proposed method runs in less than 1 min on a standard computer and can be used to quantify morphological features such as cross-sectional area along the whole spinal cord.
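The circular Hough transform used for initialization can be sketched directly: each edge pixel votes for all centers lying one radius away, and the accumulator peak gives the cord center. A toy single-radius version on a synthetic slice (a real detector scans a range of radii and works on gradient-derived edge maps):

```python
import numpy as np

def hough_circle(edges, radius):
    """Accumulate votes for circle centers at a fixed radius."""
    acc = np.zeros_like(edges, dtype=float)
    ys, xs = np.nonzero(edges)
    thetas = np.linspace(0, 2 * np.pi, 100, endpoint=False)
    for y, x in zip(ys, xs):
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < acc.shape[0]) & (cx >= 0) & (cx < acc.shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return acc

# synthetic axial "slice": a circle of radius 8 centred at (32, 40)
img = np.zeros((64, 64))
t = np.linspace(0, 2 * np.pi, 200)
img[np.round(32 + 8 * np.sin(t)).astype(int),
    np.round(40 + 8 * np.cos(t)).astype(int)] = 1
acc = hough_circle(img, 8)
cy, cx = np.unravel_index(acc.argmax(), acc.shape)
```

Repeating this on a few rostral and caudal slices gives both the cord position and its approximate orientation, which seeds the tubular mesh.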


Subjects
Image Processing, Computer-Assisted; Magnetic Resonance Imaging/methods; Spinal Cord/anatomy & histology; Spinal Cord/pathology; Algorithms; Humans; Observer Variation; Spinal Cord Injuries/pathology
14.
Phys Med Biol ; 69(15)2024 Jul 19.
Article in English | MEDLINE | ID: mdl-38981593

ABSTRACT

Objective. Head and neck radiotherapy planning requires electron densities of different tissues for dose calculation. Dose calculation from imaging modalities such as MRI remains an unsolved problem, since this modality does not provide information about electron density. Approach. We propose a generative adversarial network (GAN) approach that synthesizes CT (sCT) images from T1-weighted MRI acquisitions in head and neck cancer patients. Our contribution is to exploit new features that are relevant for improving multimodal image synthesis, and thus the quality of the generated CT images. More precisely, we propose a dual-branch generator based on the U-Net architecture and on an augmented multi-planar branch. The augmented branch learns specific 3D dynamic features, which describe the dynamic image shape variations and are extracted from different viewpoints of the volumetric input MRI. The architecture of the proposed model relies on an end-to-end convolutional U-Net embedding network. Results. The proposed model achieves a mean absolute error (MAE) of 18.76 ± 5.167 in the target Hounsfield unit (HU) space on sagittal head and neck patients, with a mean structural similarity (MSSIM) of 0.95 ± 0.09 and a Fréchet inception distance (FID) of 145.60 ± 8.38. The model yields a MAE of 26.83 ± 8.27 when generating specific primary tumor regions on axial patient acquisitions, with a Dice score of 0.73 ± 0.06 and an FID of 122.58 ± 7.55. The improvement of our model over other state-of-the-art GAN approaches is 3.8% on a tumor test set. On both sagittal and axial acquisitions, the model yields the best peak signal-to-noise ratios of 27.89 ± 2.22 and 26.08 ± 2.95 to synthesize MRI from CT input. Significance. The proposed model synthesizes both sagittal and axial CT tumor images, used for radiotherapy treatment planning in head and neck cancer cases. The performance analysis across different imaging metrics and under different evaluation strategies demonstrates the effectiveness of our dual CT synthesis model in producing high-quality sCT images compared with other state-of-the-art approaches. Our model could improve clinical tumor analysis, although further clinical validation remains to be performed.
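The MAE and PSNR figures quoted above are standard image-comparison metrics with short definitions; the data range passed to PSNR is an assumption of the caller (for CT it would typically span the HU window of interest):

```python
import numpy as np

def mae(a, b):
    """Mean absolute error, e.g. in Hounsfield units for sCT vs. real CT."""
    return np.mean(np.abs(np.asarray(a) - np.asarray(b)))

def psnr(a, b, data_range):
    """Peak signal-to-noise ratio in dB over the stated intensity range."""
    mse = np.mean((np.asarray(a) - np.asarray(b)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)
```

Higher PSNR and lower MAE both indicate a closer match between the synthesized and reference volumes.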


Subjects
Head and Neck Neoplasms; Imaging, Three-Dimensional; Magnetic Resonance Imaging; Tomography, X-Ray Computed; Humans; Magnetic Resonance Imaging/methods; Head and Neck Neoplasms/diagnostic imaging; Head and Neck Neoplasms/radiotherapy; Imaging, Three-Dimensional/methods; Multimodal Imaging/methods; Neural Networks, Computer
15.
IEEE Int Conf Robot Autom ; 2024: 17594-17601, 2024 May.
Artigo em Inglês | MEDLINE | ID: mdl-39463806

RESUMO

In minimally invasive procedures such as biopsies and prostate cancer brachytherapy, accurate needle placement remains challenging due to limitations of current tracking methods, such as interference, limited reliability, low resolution, or poor image contrast. This often leads to frequent needle adjustments and reinsertions. To address these shortcomings, we introduce an optimized needle shape-sensing method using a fully distributed grating-based sensor. The proposed method uses simple trigonometric and geometric modeling of the fiber using optical frequency domain reflectometry (OFDR), without requiring prior knowledge of tissue properties or needle deflection shape and amplitude. Our optimization process includes a reproducible calibration process and a novel tip curvature compensation method. We validate our approach through experiments in artificial isotropic and inhomogeneous animal tissues, establishing ground truth using 3D stereo vision and cone beam computed tomography (CBCT) acquisitions, respectively. Our results yield an average RMSE ranging from 0.58 ± 0.21 mm to 0.66 ± 0.20 mm depending on the chosen spatial resolution, achieving the submillimeter accuracy required for interventional procedures.
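The RMSE reported above compares a reconstructed needle shape against a ground-truth shape point by point. A minimal sketch (toy 3D points, not the experimental data) of that computation:

```python
import math

def shape_rmse(reconstructed, ground_truth):
    """Root-mean-square error (in mm) between corresponding 3D points
    of a reconstructed needle shape and a ground-truth shape."""
    squared = [
        (rx - gx) ** 2 + (ry - gy) ** 2 + (rz - gz) ** 2
        for (rx, ry, rz), (gx, gy, gz) in zip(reconstructed, ground_truth)
    ]
    return math.sqrt(sum(squared) / len(squared))

# Hypothetical needle points sampled along the shaft (mm).
recon = [(0.0, 0.0, 0.0), (0.0, 0.5, 10.0), (0.0, 1.2, 20.0)]
truth = [(0.0, 0.0, 0.0), (0.0, 0.4, 10.0), (0.0, 1.0, 20.0)]
print(round(shape_rmse(recon, truth), 3))  # 0.129
```

In the study, the ground-truth points come from 3D stereo vision or CBCT rather than being known analytically, and the RMSE is averaged over insertions.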

16.
Med Image Anal ; 97: 103287, 2024 Oct.
Artigo em Inglês | MEDLINE | ID: mdl-39111265

RESUMO

Deep neural networks are commonly used for automated medical image segmentation, but models frequently struggle to generalize across different imaging modalities. This issue is particularly problematic due to the limited availability of annotated data, in both the target and the source modality, making it difficult to deploy these models on a larger scale. To overcome these challenges, we propose a new semi-supervised training strategy called MoDATTS. Our approach is designed for accurate cross-modality 3D tumor segmentation on unpaired bi-modal datasets. An image-to-image translation strategy between modalities is used to produce synthetic but annotated images and labels in the desired modality and improve generalization to the unannotated target modality. We also use powerful vision transformer architectures for both image translation (TransUNet) and segmentation (Medformer) tasks and introduce an iterative self-training procedure in the latter task to further close the domain gap between modalities, thus also training on unlabeled images in the target modality. MoDATTS additionally makes it possible to exploit image-level labels with a semi-supervised objective that encourages the model to disentangle tumors from the background. This semi-supervised methodology helps in particular to maintain downstream segmentation performance when pixel-level label scarcity is also present in the source modality dataset, or when the source dataset contains healthy controls. The proposed model achieves superior performance compared to other methods from participating teams in the CrossMoDA 2022 vestibular schwannoma (VS) segmentation challenge, as evidenced by its reported top Dice score of 0.87±0.04 for the VS segmentation.
MoDATTS also yields consistent improvements in Dice scores over baselines on a cross-modality adult brain glioma segmentation task composed of four different contrasts from the BraTS 2020 challenge dataset, where 95% of a target-supervised model's performance is reached when no target modality annotations are available. We report that 99% and 100% of this maximum performance can be attained if 20% and 50% of the target data is additionally annotated, which further demonstrates that MoDATTS can be leveraged to reduce the annotation burden.
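The Dice scores cited above measure overlap between a predicted segmentation mask and the reference mask. A minimal sketch (hypothetical binary masks, not the challenge data):

```python
def dice(pred, target):
    """Dice similarity coefficient between two binary masks,
    given as flat 0/1 sequences of equal length."""
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    # Convention: two empty masks are a perfect match.
    return 2.0 * intersection / total if total else 1.0

pred   = [1, 1, 0, 0, 1]
target = [1, 0, 0, 1, 1]
print(dice(pred, target))  # 0.6666666666666666
```

For 3D volumes the same formula is applied to the flattened voxel masks, and scores are averaged across cases to produce figures such as 0.87 ± 0.04.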


Assuntos
Neoplasias Encefálicas , Humanos , Neoplasias Encefálicas/diagnóstico por imagem , Redes Neurais de Computação , Imageamento Tridimensional/métodos , Imageamento por Ressonância Magnética/métodos , Interpretação de Imagem Assistida por Computador/métodos , Algoritmos , Aprendizado Profundo , Aprendizado de Máquina Supervisionado , Processamento de Imagem Assistida por Computador/métodos
17.
Int J Comput Assist Radiol Surg ; 19(6): 1103-1111, 2024 Jun.
Artigo em Inglês | MEDLINE | ID: mdl-38573566

RESUMO

PURPOSE: Cancer confirmation in the operating room (OR) is crucial to improve local control in cancer therapies. Histopathological analysis remains the gold standard, but there is a lack of real-time in situ cancer confirmation to support margin assessment or detection of remnant tissue. Raman spectroscopy (RS), as a label-free optical technique, has proven its power in cancer detection and, when integrated into a robotic assistance system, can positively impact the efficiency of procedures and the quality of life of patients, avoiding potential recurrence. METHODS: A workflow is proposed where a 6-DOF robotic system (optical camera + MECA500 robotic arm) assists the characterization of fresh tissue samples using RS. Three calibration methods are compared for the robot, and the temporal efficiency is compared with standard hand-held analysis. For healthy/cancerous tissue discrimination, a 1D convolutional neural network is proposed and tested on three ex vivo datasets (brain, breast, and prostate) containing processed RS and histopathology ground truth. RESULTS: The robot achieves a minimum error of 0.20 mm (0.12) on a set of 30 test landmarks and demonstrates significant time reduction in 4 of the 5 proposed tasks. The proposed classification model can identify brain, breast, and prostate cancer with an accuracy of 0.83 (0.02), 0.93 (0.01), and 0.71 (0.01), respectively. CONCLUSION: Automated RS analysis with deep learning demonstrates promising classification performance compared to commonly used support vector machines. Robotic assistance in tissue characterization can contribute to highly accurate, rapid, and robust biopsy analysis in the OR. These two elements are an important step toward real-time cancer confirmation using RS and OR integration.


Assuntos
Neoplasias da Mama , Neoplasias da Próstata , Procedimentos Cirúrgicos Robóticos , Análise Espectral Raman , Humanos , Análise Espectral Raman/métodos , Neoplasias da Próstata/patologia , Neoplasias da Próstata/diagnóstico , Procedimentos Cirúrgicos Robóticos/métodos , Neoplasias da Mama/patologia , Masculino , Feminino , Salas Cirúrgicas , Biópsia/métodos , Neoplasias Encefálicas/patologia , Neoplasias Encefálicas/diagnóstico
18.
Med Image Anal ; 99: 103346, 2024 Sep 16.
Artigo em Inglês | MEDLINE | ID: mdl-39423564

RESUMO

Colorectal liver metastases (CLM) affect almost half of all colon cancer patients, and the response to systemic chemotherapy plays a crucial role in patient survival. While oncologists typically use tumor grading scores, such as tumor regression grade (TRG), to establish an accurate prognosis on patient outcomes, including overall survival (OS) and time-to-recurrence (TTR), these traditional methods have several limitations. They are subjective, time-consuming, and require extensive expertise, which limits their scalability and reliability. Additionally, existing approaches for prognosis prediction using machine learning mostly rely on radiological imaging data, but histological images have recently been shown to be relevant for survival prediction, as they fully capture the complex microenvironmental and cellular characteristics of the tumor. To address these limitations, we propose an end-to-end approach for automated prognosis prediction using histology slides stained with Hematoxylin and Eosin (H&E) and Hematoxylin Phloxine Saffron (HPS). We first employ a Generative Adversarial Network (GAN) for slide normalization to reduce staining variations and improve the overall quality of the images that are used as input to our prediction pipeline. We propose a semi-supervised model to perform tissue classification from sparse annotations, producing segmentation and feature maps. Specifically, we use an attention-based approach that weighs the importance of different slide regions in producing the final classification results. Finally, we exploit the extracted features for the metastatic nodules and surrounding tissue to train a prognosis model. In parallel, we train a vision Transformer model in a knowledge distillation framework to replicate and enhance the performance of the prognosis prediction.
We evaluate our approach on an in-house clinical dataset of 258 CLM patients, achieving superior performance compared to competing models with a c-index of 0.804 (0.014) for OS and 0.735 (0.016) for TTR, as well as on two public datasets. The proposed approach achieves an accuracy of 86.9% to 90.3% in predicting TRG dichotomization. For the 3-class TRG classification task, the proposed approach yields an accuracy of 78.5% to 82.1%, outperforming the comparative methods. Our proposed pipeline can provide automated prognosis for pathologists and oncologists, and could greatly advance precision medicine in the management of CLM patients.
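The c-index reported for OS and TTR is the standard concordance index for survival models: among comparable patient pairs, the fraction where the higher predicted risk corresponds to the earlier event. A pure-Python sketch of Harrell's c-index (toy data, not the study cohort):

```python
def c_index(times, events, risks):
    """Harrell's concordance index. events: 1 = event observed,
    0 = censored; risks: predicted risk scores (higher = worse)."""
    concordant = comparable = 0.0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable if subject i has an observed event
            # strictly before subject j's follow-up time.
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5  # ties count as half
    return concordant / comparable

times  = [5, 10, 15, 20]   # hypothetical follow-up times (months)
events = [1, 1, 0, 1]      # third subject is censored
risks  = [0.9, 0.7, 0.5, 0.2]
print(c_index(times, events, risks))  # 1.0 (perfect ranking)
```

A c-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, so values such as 0.804 for OS indicate substantially better-than-chance discrimination.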

19.
PLoS One ; 19(9): e0307815, 2024.
Artigo em Inglês | MEDLINE | ID: mdl-39259736

RESUMO

OBJECTIVE: The purpose of this study was to determine and compare the performance of pre-treatment clinical risk score (CRS), radiomics models based on computed tomography (CT), and their combination for predicting time to recurrence (TTR) and disease-specific survival (DSS) in patients with colorectal cancer liver metastases. METHODS: We retrospectively analyzed a prospectively maintained registry of 241 patients treated with systemic chemotherapy and surgery for colorectal cancer liver metastases. Radiomics features were extracted from baseline, pre-treatment, contrast-enhanced CT images. Multiple aggregation strategies were investigated for cases with multiple metastases. Radiomics signatures were derived using feature selection methods. Random survival forests (RSF) and neural network survival models (DeepSurv) based on radiomics features, alone or combined with CRS, were developed to predict TTR and DSS. Leveraging the survival models' predictions, classification models were trained to predict TTR within 18 months and DSS within 3 years. Classification performance was assessed with area under the receiver operating characteristic curve (AUC) on the test set. RESULTS: For TTR prediction, the concordance index (95% confidence interval) was 0.57 (0.57-0.57) for CRS, 0.61 (0.60-0.61) for RSF in combination with CRS, and 0.70 (0.68-0.73) for DeepSurv in combination with CRS. For DSS prediction, the concordance index was 0.59 (0.59-0.59) for CRS, 0.57 (0.56-0.57) for RSF in combination with CRS, and 0.60 (0.58-0.61) for DeepSurv in combination with CRS. For TTR classification, the AUC was 0.33 (0.33-0.33) for CRS, 0.77 (0.75-0.78) for the radiomics signature alone, and 0.58 (0.57-0.59) for the DeepSurv score alone. For DSS classification, the AUC was 0.61 (0.61-0.61) for CRS, 0.57 (0.56-0.57) for the radiomics signature, and 0.75 (0.74-0.76) for the DeepSurv score alone. CONCLUSION: Radiomics-based survival models outperformed CRS for TTR prediction.
More accurate, noninvasive, and early prediction of patient outcome may help reduce exposure to ineffective yet toxic chemotherapy or high-risk major hepatectomies.
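The AUC used above for the TTR and DSS classification tasks can be sketched as the Mann-Whitney statistic: the probability that a randomly chosen positive case is scored higher than a randomly chosen negative case (toy labels and scores, purely illustrative):

```python
def roc_auc(labels, scores):
    """AUC as the Mann-Whitney U statistic: probability that a
    random positive is scored above a random negative."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1
            elif p == n:
                wins += 0.5  # ties count as half
    return wins / (len(pos) * len(neg))

labels = [1, 1, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.7, 0.3, 0.6, 0.2]
print(roc_auc(labels, scores))  # 8/9 ≈ 0.889
```

An AUC of 0.5 is chance-level; note that the CRS AUC of 0.33 reported for TTR classification is below chance, i.e., the score ranks cases in the wrong direction on that task.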


Assuntos
Neoplasias Colorretais , Neoplasias Hepáticas , Tomografia Computadorizada por Raios X , Humanos , Neoplasias Hepáticas/secundário , Neoplasias Hepáticas/diagnóstico por imagem , Neoplasias Hepáticas/cirurgia , Neoplasias Colorretais/patologia , Neoplasias Colorretais/diagnóstico por imagem , Neoplasias Colorretais/cirurgia , Masculino , Feminino , Pessoa de Meia-Idade , Idoso , Tomografia Computadorizada por Raios X/métodos , Estudos Retrospectivos , Prognóstico , Recidiva Local de Neoplasia/diagnóstico por imagem , Recidiva Local de Neoplasia/patologia , Resultado do Tratamento , Adulto , Radiômica
20.
Can J Cardiol ; 40(10): 1774-1787, 2024 Oct.
Artigo em Inglês | MEDLINE | ID: mdl-38825181

RESUMO

Large language models (LLMs) have emerged as powerful tools in artificial intelligence, demonstrating remarkable capabilities in natural language processing and generation. In this article, we explore the potential applications of LLMs in enhancing cardiovascular care and research. We discuss how LLMs can be used to simplify complex medical information, improve patient-physician communication, and automate tasks such as summarising medical articles and extracting key information. In addition, we highlight the role of LLMs in categorising and analysing unstructured data, such as medical notes and test results, which could revolutionise data handling and interpretation in cardiovascular research. However, we also emphasise the limitations and challenges associated with LLMs, including potential biases, reasoning opacity, and the need for rigorous validation in medical contexts. This review provides a practical guide for cardiovascular professionals to understand and harness the power of LLMs while navigating their limitations. We conclude by discussing the future directions and implications of LLMs in transforming cardiovascular care and research.


Assuntos
Doenças Cardiovasculares , Humanos , Doenças Cardiovasculares/terapia , Processamento de Linguagem Natural , Inteligência Artificial , Cardiologia