Results 1 - 20 of 24
1.
Radiol Med ; 2024 May 09.
Article in English | MEDLINE | ID: mdl-38724697

ABSTRACT

PURPOSE: To investigate the feasibility of an artificial intelligence (AI)-based semi-automated segmentation for the extraction of ultrasound (US)-derived radiomics features in the characterization of focal breast lesions (FBLs). MATERIALS AND METHODS: Two expert radiologists classified 352 FBLs detected in 352 patients (237 at Center A and 115 at Center B) according to US BI-RADS criteria. An AI-based semi-automated segmentation was used to build a machine learning (ML) model on B-mode US images of 237 patients (Center A), which was then validated on an external cohort of B-mode US images of 115 patients (Center B). RESULTS: A total of 202 of 352 (57.4%) FBLs were benign and 150 of 352 (42.6%) were malignant. The AI-based semi-automated segmentation achieved a success rate of 95.7% for one reviewer and 96% for the other, with no significant difference (p = 0.839). A total of 15 (4.3%) and 14 (4%) of the 352 semi-automated segmentations were not accepted because of posterior acoustic shadowing at B-mode US; 13 and 10 of these corresponded to malignant lesions, respectively. In the validation cohort, characterization by the expert radiologist yielded sensitivity, specificity, PPV, and NPV of 0.933, 0.9, 0.857, and 0.955, respectively. The ML model obtained sensitivity, specificity, PPV, and NPV of 0.544, 0.6, 0.416, and 0.628, respectively. The combined assessment of radiologists and ML model yielded sensitivity, specificity, PPV, and NPV of 0.756, 0.928, 0.872, and 0.855, respectively. CONCLUSION: AI-based semi-automated segmentation is feasible and allows instantaneous, reproducible extraction of US-derived radiomics features of FBLs. The combination of radiomics and US BI-RADS classification led to a potential decrease in unnecessary biopsies, but at the expense of a non-negligible increase in potentially missed cancers.
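
As a minimal illustration of how diagnostic metrics of the kind reported above (sensitivity, specificity, PPV, NPV) are derived from a confusion matrix, the sketch below uses hypothetical labels and predictions, not the study data.

```python
# Minimal sketch (not the authors' code): deriving sensitivity, specificity,
# PPV and NPV from a confusion matrix. Labels/predictions are hypothetical.
import numpy as np

def diagnostic_metrics(y_true, y_pred):
    """y_true, y_pred: 1 = malignant, 0 = benign."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
    }

# Hypothetical reads on a 115-lesion validation cohort
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=115)
y_pred = np.where(rng.random(115) < 0.85, y_true, 1 - y_true)  # ~85% agreement
print(diagnostic_metrics(y_true, y_pred))
```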

2.
Eur Radiol Exp ; 8(1): 26, 2024 Mar 05.
Article in English | MEDLINE | ID: mdl-38438821

ABSTRACT

An increasingly strong connection between artificial intelligence and medicine has enabled the development of predictive models capable of supporting physicians' decision-making. Artificial intelligence encompasses much more than machine learning, which is nevertheless its most cited and most used sub-branch of the last decade. Since most clinical problems can be modeled through machine learning classifiers, it is essential to discuss their main elements. This review aims to give primary educational insights into the most accessible and widely employed classifiers in the radiology field, distinguishing between "shallow" learning (i.e., traditional machine learning) algorithms, including support vector machines, random forests, and XGBoost, and "deep" learning architectures, including convolutional neural networks and vision transformers. In addition, the paper outlines the key steps for classifier training and highlights the differences between the most common algorithms and architectures. Although the choice of an algorithm depends on the task and the dataset at hand, general guidelines for classifier selection are proposed in relation to task analysis, dataset size, explainability requirements, and available computing resources. Considering the enormous interest in these innovative models and architectures, the problem of machine learning algorithm interpretability is finally discussed, providing a future perspective on trustworthy artificial intelligence.
Relevance statement: The growing synergy between artificial intelligence and medicine fosters predictive models aiding physicians. Machine learning classifiers, from shallow learning to deep learning, offer crucial insights for the development of clinical decision support systems in healthcare. Explainability is a key feature of models that leads systems toward integration into clinical practice.
Key points:
• Training a shallow classifier requires extracting disease-related features from regions of interest (e.g., radiomics).
• Deep classifiers implement automatic feature extraction and classification.
• Classifier selection is based on data and computational resource availability, the task, and explanation needs.
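
A minimal sketch of the shallow-learning workflow the review outlines (handcrafted features, scaling, training, evaluation), using scikit-learn on synthetic data; the feature matrix, classifier choice, and hyper-parameters are illustrative assumptions, not taken from the paper.

```python
# Sketch of the shallow-learning workflow: handcrafted (e.g., radiomic)
# features -> scaling -> training -> evaluation. All values are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

# Synthetic "radiomic" feature matrix: 300 patients x 40 features
X, y = make_classification(n_samples=300, n_features=40, n_informative=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, stratify=y, random_state=42)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X_train, y_train)
print("AUC-ROC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```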


Subjects
Artificial Intelligence, Deep Learning, Algorithms, Machine Learning, Neural Networks, Computer
3.
J Imaging Inform Med ; 37(3): 1038-1053, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38351223

ABSTRACT

Breast microcalcifications are observed in 80% of mammograms, and a notable proportion can lead to invasive tumors. However, diagnosing microcalcifications is a highly complicated and error-prone process due to their diverse sizes, shapes, and subtle variations. In this study, we propose a radiomic signature that effectively differentiates between healthy tissue, benign microcalcifications, and malignant microcalcifications. Radiomic features were extracted from a proprietary dataset composed of 380 healthy tissue, 136 benign, and 242 malignant microcalcification ROIs. Subsequently, two distinct signatures were selected to differentiate between healthy tissue and microcalcifications (detection task) and between benign and malignant microcalcifications (classification task). Machine learning models, namely Support Vector Machine, Random Forest, and XGBoost, were employed as classifiers. The shared signature selected for both tasks was then used to train a multi-class model capable of simultaneously classifying healthy, benign, and malignant ROIs. A significant overlap was discovered between the detection and classification signatures. The performance of the models was highly promising, with XGBoost exhibiting an AUC-ROC of 0.830, 0.856, and 0.876 for healthy, benign, and malignant microcalcification classification, respectively. The intrinsic interpretability of radiomic features and the use of the Mean Score Decrease method for model introspection enabled the models' clinical validation. Indeed, the most important features, namely GLCM Contrast, FO Minimum, and FO Entropy, were also found to be important in other studies on breast cancer.
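
For illustration only, the sketch below sets up a three-class XGBoost model of the kind described (healthy / benign / malignant) and uses scikit-learn's permutation importance as a stand-in for the Mean Score Decrease introspection; all data are synthetic and hyper-parameters are assumptions.

```python
# Sketch of a multi-class radiomic classification setup with XGBoost and
# permutation importance (mean decrease in score); data are synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.inspection import permutation_importance
from xgboost import XGBClassifier

X, y = make_classification(n_samples=758, n_features=30, n_informative=8,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)

model = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X_tr, y_tr)

# Mean decrease in score when each feature is shuffled (model-agnostic importance)
imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
top = np.argsort(imp.importances_mean)[::-1][:5]
print("Top features by mean score decrease:", top)
```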


Subjects
Breast Neoplasms, Calcinosis, Mammography, Humans, Calcinosis/diagnostic imaging, Calcinosis/pathology, Female, Mammography/methods, Breast Neoplasms/diagnostic imaging, Breast Neoplasms/pathology, Breast Neoplasms/diagnosis, Breast/diagnostic imaging, Breast/pathology, Machine Learning, Radiographic Image Interpretation, Computer-Assisted/methods, Support Vector Machine, Breast Diseases/diagnostic imaging, Breast Diseases/pathology, Breast Diseases/diagnosis, Breast Diseases/classification, Radiomics
4.
Brain Sci ; 14(1)2024 Jan 16.
Article in English | MEDLINE | ID: mdl-38248300

ABSTRACT

Migraine is a burdensome neurological disorder that still lacks clear and easily accessible diagnostic biomarkers. Furthermore, a straightforward pathway for managing migraineurs is hard to find, so the search for response predictors has become urgent. Nowadays, artificial intelligence (AI) has pervaded almost every aspect of our lives, and medicine is no exception. Its applications are nearly limitless, and machine learning approaches have given researchers the chance to draw new insights from huge amounts of data. When it comes to migraine, AI may play a fundamental role, helping clinicians and patients in many ways. For example, AI-based models can increase diagnostic accuracy, especially for non-headache specialists, and may help in correctly classifying the different groups of patients. Moreover, AI models analysing brain imaging studies show promising results in identifying disease biomarkers. Regarding migraine management, AI applications have shown value in identifying outcome measures, the best treatment choices, and therapy response prediction. In the present review, the authors introduce the various and most recent clinical applications of AI regarding migraine.

5.
Sensors (Basel) ; 23(12)2023 Jun 17.
Article in English | MEDLINE | ID: mdl-37420843

ABSTRACT

Melanoma is a malignant cancer that develops when DNA damage occurs, mainly due to environmental factors such as ultraviolet rays. Melanoma often involves intense and aggressive cell growth that, if not caught in time, can be fatal. Thus, early identification at the initial stage is fundamental to stopping the spread of the cancer. In this paper, a ViT-based architecture able to classify melanoma versus non-cancerous lesions is presented. The proposed predictive model is trained and tested on public skin cancer data from the ISIC challenge, and the obtained results are highly promising. Different classifier configurations are considered and analyzed in order to find the most discriminating one. The best configuration reached an accuracy of 0.948, a sensitivity of 0.928, a specificity of 0.967, and an AUROC of 0.948.
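
A hedged sketch of a ViT-based binary classifier along these lines; the timm model name, input size, and optimizer settings are assumptions, not the authors' configuration, and the dummy batch stands in for an ISIC DataLoader.

```python
# Sketch of fine-tuning a pretrained ViT for melanoma vs. non-cancerous
# classification; model name and hyper-parameters are assumptions.
import torch
import torch.nn as nn
import timm

device = "cuda" if torch.cuda.is_available() else "cpu"
model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=2).to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.05)

# One illustrative training step on a dummy batch (replace with an ISIC DataLoader)
images = torch.randn(8, 3, 224, 224, device=device)   # dermoscopic images, normalized
labels = torch.randint(0, 2, (8,), device=device)     # 1 = melanoma, 0 = non-cancerous
model.train()
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print("loss:", loss.item())
```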


Subjects
Melanoma, Skin Neoplasms, Humans, Dermoscopy/methods, Melanoma/diagnosis, Skin Neoplasms/diagnosis, Skin Neoplasms/pathology, DNA Damage
6.
Brain Sci ; 13(5)2023 May 16.
Article in English | MEDLINE | ID: mdl-37239276

ABSTRACT

BACKGROUND: Hereditary transthyretin amyloidosis with polyneuropathy (ATTRv) is an adult-onset multisystemic disease affecting the peripheral nerves, heart, gastrointestinal tract, eyes, and kidneys. Several treatment options are now available; thus, avoiding misdiagnosis is crucial to starting therapy in early disease stages. However, clinical diagnosis may be difficult, as the disease may present with unspecific symptoms and signs. We hypothesize that the diagnostic process may benefit from the use of machine learning (ML). METHODS: 397 patients with neuropathy and at least one additional red flag, referred to neuromuscular clinics at 4 centers in southern Italy and undergoing genetic testing for ATTRv, were considered. Only probands were then included in the analysis. Hence, a cohort of 184 patients, 93 with positive and 91 (age- and sex-matched) with negative genetics, was considered for the classification task. The XGBoost (XGB) algorithm was trained to classify patients with positive and negative TTR mutations. The SHAP method was used as an explainable artificial intelligence algorithm to interpret the model findings. RESULTS: Diabetes, gender, unexplained weight loss, cardiomyopathy, bilateral carpal tunnel syndrome (CTS), ocular symptoms, autonomic symptoms, ataxia, renal dysfunction, lumbar canal stenosis, and history of autoimmunity were used for model training. The XGB model showed an accuracy of 0.707 ± 0.101, a sensitivity of 0.712 ± 0.147, a specificity of 0.704 ± 0.150, and an AUC-ROC of 0.752 ± 0.107. The SHAP explanation confirmed that unexplained weight loss, gastrointestinal symptoms, and cardiomyopathy showed a significant association with the genetic diagnosis of ATTRv, whereas bilateral CTS, diabetes, autoimmunity, and ocular and renal involvement were associated with a negative genetic test. CONCLUSIONS: Our data show that ML might be a useful instrument to identify patients with neuropathy who should undergo genetic testing for ATTRv. Unexplained weight loss and cardiomyopathy are relevant red flags for ATTRv in southern Italy. Further studies are needed to confirm these findings.
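
A minimal sketch of the XGBoost + SHAP workflow described above; the binary feature matrix below merely mimics clinical red flags and is not the study data, and the feature names are illustrative.

```python
# Sketch: train an XGBoost classifier on presence/absence "red flag" features
# and compute global SHAP importances; all data are synthetic.
import numpy as np
import shap
from xgboost import XGBClassifier

rng = np.random.default_rng(1)
feature_names = ["weight_loss", "cardiomyopathy", "bilateral_CTS", "diabetes",
                 "ocular", "autonomic", "ataxia", "renal", "lumbar_stenosis",
                 "autoimmunity", "GI_symptoms"]
X = rng.integers(0, 2, size=(184, len(feature_names)))   # presence/absence of red flags
y = (X[:, 0] | X[:, 1]) ^ (rng.random(184) < 0.2)        # noisy synthetic target

model = XGBClassifier(n_estimators=200, max_depth=3)
model.fit(X, y)

explainer = shap.TreeExplainer(model)        # explainable-AI step (SHAP values)
shap_values = explainer.shap_values(X)
mean_abs = np.abs(shap_values).mean(axis=0)  # global importance per red flag
for name, value in sorted(zip(feature_names, mean_abs), key=lambda t: -t[1]):
    print(f"{name:18s} {value:.3f}")
```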

7.
Semin Ultrasound CT MR ; 44(3): 194-204, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37245884

ABSTRACT

Gastrointestinal stromal tumors (GISTs) arise from the interstitial cells of Cajal and are the most common mesenchymal tumors of the gastrointestinal tract. GISTs are usually asymptomatic, especially small tumors, and may be found incidentally on abdominal CT scans. The discovery of receptor tyrosine kinase inhibitors has changed the outcome of patients with high-risk GISTs. This paper focuses on the role of imaging in diagnosis, characterization, and follow-up. We also report our local experience with radiomics evaluation of GISTs.


Subjects
Gastrointestinal Neoplasms, Gastrointestinal Stromal Tumors, Radiomics, Humans, Gastrointestinal Stromal Tumors/diagnosis, Gastrointestinal Neoplasms/diagnosis, Tomography, X-Ray Computed/methods
8.
Heliyon ; 9(5): e15984, 2023 May.
Article in English | MEDLINE | ID: mdl-37215845

ABSTRACT

Introduction: The aim of our study was to evaluate the feasibility of texture analysis of epicardial fat (EF) and thoracic subcutaneous fat (TSF) in patients undergoing cardiac CT (CCT). Materials and methods: We compared a consecutive population of 30 patients with BMI ≤25 kg/m2 (Group A, 60.6 ± 13.7 years) with a control population of 30 patients with BMI >25 kg/m2 (Group B, 63.3 ± 11 years). A dedicated computer application for the quantification of EF and a texture analysis application for the study of EF and TSF were employed. Results: The volume of EF was higher in Group B (mean 116.1 cm3 vs. 86.3 cm3, p = 0.014), although no differences were found in terms of mean density (-69.5 ± 5 HU vs. -68 ± 5 HU, p = 0.28) or quartile distribution (Q1, p = 0.83; Q2, p = 0.22; Q3, p = 0.83; Q4, p = 0.34). The discriminating parameters of the histogram class were the mean (p = 0.02) and the 0.1st (p = 0.001), 10th (p = 0.002), and 50th percentiles (p = 0.02). DifVarnc was the discriminating parameter of the co-occurrence matrix class (p = 0.007). The TSF thickness was 15 ± 6 mm in Group A and 19.5 ± 5 mm in Group B (p = 0.003). The TSF had a mean density of -97 ± 19 HU in Group A and -95.8 ± 19 HU in Group B (p = 0.75). The discriminating parameters of texture analysis were the 10th (p = 0.03), 50th (p = 0.01), and 90th percentiles (p = 0.04), S(0,1)SumAverg (p = 0.02), S(1,-1)SumOfSqs (p = 0.02), S(3,0)Contrast (p = 0.03), S(3,0)SumAverg (p = 0.02), S(4,0)SumAverg (p = 0.04), Horzl_RLNonUni (p = 0.02), and Vertl_LngREmph (p = 0.0005). Conclusions: Texture analysis provides distinctive radiomic parameters of EF and TSF. EF and TSF show different radiomic features as BMI varies.
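
An illustrative computation of the two texture-feature classes mentioned above (histogram percentiles and grey-level co-occurrence matrix features) on a synthetic fat ROI; the ROI values, grey-level rescaling, and parameters are placeholders, not the study's software settings.

```python
# Sketch: first-order (histogram) and co-occurrence-matrix texture features
# on a synthetic epicardial-fat ROI, using scikit-image.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
roi_hu = rng.normal(-70, 10, size=(64, 64))            # pretend epicardial-fat HU values

# Histogram (first-order) features
p10, p50, p90 = np.percentile(roi_hu, [10, 50, 90])
print("mean:", roi_hu.mean(), "p10:", p10, "p50:", p50, "p90:", p90)

# Co-occurrence matrix features: rescale HU to a small number of grey levels first
levels = 32
scaled = np.clip((roi_hu - roi_hu.min()) / (np.ptp(roi_hu) + 1e-9) * (levels - 1),
                 0, levels - 1).astype(np.uint8)
glcm = graycomatrix(scaled, distances=[1, 3], angles=[0, np.pi / 2],
                    levels=levels, symmetric=True, normed=True)
print("contrast:", graycoprops(glcm, "contrast").mean())
print("homogeneity:", graycoprops(glcm, "homogeneity").mean())
```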

9.
J Imaging ; 9(2)2023 Jan 30.
Article in English | MEDLINE | ID: mdl-36826951

ABSTRACT

Radiomic analysis allows for the detection of imaging biomarkers supporting decision-making processes in clinical environments, from diagnosis to prognosis. Frequently, the original set of radiomic features is augmented by considering high-level features, such as wavelet transforms. However, several wavelet families (so-called kernels) are able to generate different multi-resolution representations of the original image, and which of them produces more salient images is not yet clear. In this study, an in-depth analysis is performed by comparing different wavelet kernels and evaluating their impact on the predictive capabilities of radiomic models. A dataset composed of 1589 chest X-ray images was used for COVID-19 prognosis prediction as a case study. Random forest, support vector machine, and XGBoost were trained (on a subset of 1103 images) after a rigorous feature selection strategy to build the predictive models. Next, to evaluate the models' generalization capability on unseen data, a test phase was performed (on a subset of 486 images). The experimental findings showed that the Bior1.5, Coif1, Haar, and Sym2 kernels guarantee better and similar performance for all three machine learning models considered. Support vector machine and random forest showed comparable performance, and both were better than XGBoost. Additionally, random forest proved to be the most stable model, ensuring an appropriate balance between sensitivity and specificity.
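
A minimal sketch of the kernel comparison described above: the same image is decomposed with the four named PyWavelets kernels before feature extraction; the input image is synthetic and the per-sub-band descriptors are deliberately simplistic.

```python
# Sketch: one-level 2D wavelet decomposition with different kernels, followed
# by simple energy descriptors per sub-band (radiomic pipelines use many more).
import numpy as np
import pywt

image = np.random.default_rng(0).random((256, 256))    # stand-in for a chest X-ray

for kernel in ["bior1.5", "coif1", "haar", "sym2"]:
    # Approximation + horizontal / vertical / diagonal detail coefficients
    cA, (cH, cV, cD) = pywt.dwt2(image, kernel)
    energies = [float(np.mean(np.square(c))) for c in (cA, cH, cV, cD)]
    print(kernel, ["%.4f" % e for e in energies])
```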

10.
J Imaging ; 7(4)2021 Mar 25.
Article in English | MEDLINE | ID: mdl-34460513

ABSTRACT

Structural and metabolic imaging are fundamental for diagnosis, treatment, and follow-up in oncology. Beyond the well-established diagnostic imaging applications, ultrasound is currently emerging in clinical practice as a noninvasive technology for therapy. Indeed, sound waves can be used to increase the temperature inside target solid tumors, leading to apoptosis or necrosis of neoplastic tissues. Magnetic resonance-guided focused ultrasound surgery (MRgFUS) technology represents a valid application of this ultrasound property, mainly used in oncology and neurology. In this paper, patient safety during MRgFUS treatments was investigated through a series of experiments in a tissue-mimicking phantom and on ex vivo skin samples, to promptly identify unwanted temperature rises. The acquired MR images, used to evaluate the temperature in the treated areas, were analyzed to compare classical proton resonance frequency (PRF) shift techniques and referenceless thermometry methods for accurately assessing temperature variations. We exploited radial basis function (RBF) neural networks for referenceless thermometry and compared the results against interferometric optical fiber measurements. The experimental measurements were obtained using a set of interferometric optical fibers aimed at quantifying temperature variations directly in the sonication areas. The temperature increases during treatment were not accurately detected by MRI-based referenceless thermometry methods, and more sensitive measurement systems, such as optical fibers, would be required. In-depth studies of these aspects are needed to monitor temperature and improve safety during MRgFUS treatments.
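
As a sketch of the classical PRF-shift thermometry mentioned above, the snippet below converts a phase difference between a dynamic and a baseline gradient-echo image into a temperature change; the constants, field strength, echo time, and example phase value are illustrative, not the experimental settings of the paper.

```python
# Sketch of PRF-shift MR thermometry: delta_T from a phase difference.
import numpy as np

GAMMA = 2 * np.pi * 42.58e6     # proton gyromagnetic ratio [rad/(s*T)]
ALPHA = -0.01e-6                # PRF thermal coefficient [per degC], ~ -0.01 ppm/degC
B0 = 1.5                        # main field strength [T] (example)
TE = 0.010                      # echo time [s] (example)

def prf_delta_t(delta_phase_rad):
    """Temperature change (degC) from the phase difference of two GRE images."""
    return delta_phase_rad / (GAMMA * ALPHA * B0 * TE)

print(prf_delta_t(-0.40))       # example phase shift of -0.40 rad -> ~10 degC rise
```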

11.
J Med Signals Sens ; 10(3): 158-173, 2020.
Article in English | MEDLINE | ID: mdl-33062608

ABSTRACT

BACKGROUND: Deep learning methods have become popular for their high performance in the classification and detection of events in computer vision tasks. The transfer learning paradigm is widely adopted to apply pretrained convolutional neural networks (CNNs) to medical domains, overcoming the scarcity of public datasets. Investigations assessing the knowledge-inference abilities of transfer learning in the context of mammogram screening, and possible combinations with unsupervised techniques, are in progress. METHODS: We propose a novel technique for the detection of suspicious regions in mammograms that consists of the combination of two approaches based on scale-invariant feature transform (SIFT) keypoints and transfer learning with pretrained CNNs, such as PyramidNet and AlexNet, fine-tuned on digital mammograms generated by different mammography devices. Preprocessing, feature extraction, and selection steps characterize the SIFT-based method, while the deep learning network validates the candidate suspicious regions detected by the SIFT method. RESULTS: The experiments conducted on both the mini-MIAS dataset and our new public dataset, Suspicious Region Detection on Mammogram from PP (SuReMaPP), of 384 digital mammograms exhibit high performance compared to several state-of-the-art methods. Our solution reaches 98% sensitivity and 90% specificity on SuReMaPP and 94% sensitivity and 91% specificity on mini-MIAS. CONCLUSIONS: The experimental sessions conducted so far prompt us to further investigate the power of transfer learning over different CNNs and possible combinations with unsupervised techniques. Transfer learning accuracy may decrease when the training and testing images come from mammography devices with different properties.
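
A hedged sketch of the SIFT stage described above (keypoint detection on a mammogram before a CNN validates candidate regions), using OpenCV on a synthetic image; the pre-processing and detector parameters are assumptions, not the authors' values.

```python
# Sketch: SIFT keypoint detection as a first stage of suspicious-region search.
import cv2
import numpy as np

# Stand-in 8-bit mammogram patch
img = (np.random.default_rng(0).random((512, 512)) * 255).astype(np.uint8)
img = cv2.GaussianBlur(img, (9, 9), 0)                 # mild pre-processing

sift = cv2.SIFT_create(contrastThreshold=0.04)         # scale-invariant feature transform
keypoints, descriptors = sift.detectAndCompute(img, None)
print(f"{len(keypoints)} keypoints, descriptor shape: "
      f"{None if descriptors is None else descriptors.shape}")

# Candidate regions could then be cropped around dense keypoint clusters and
# passed to a fine-tuned CNN (e.g., AlexNet / PyramidNet) for validation.
```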

13.
J Biomed Inform ; 108: 103479, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32561444

ABSTRACT

The ever-increasing amount of biomedical data is enabling new large-scale studies, even though ad hoc computational solutions are required. The most recent Machine Learning (ML) and Artificial Intelligence (AI) techniques have been achieving outstanding performance and having an important impact on clinical research, aiming at precision medicine as well as improving healthcare workflows. However, the inherent heterogeneity and uncertainty of healthcare information sources pose new compelling challenges for clinicians in their decision-making tasks. Only the proper combination of AI and human intelligence capabilities, explicitly taking into account effective and safe interaction paradigms, can permit the delivery of care that outperforms what either can do separately. Therefore, Human-Computer Interaction (HCI) plays a crucial role in the design of software oriented toward decision-making in medicine. In this work, we systematically review and discuss several research fields strictly linked to HCI and clinical decision-making, subdividing the articles into six themes: Interfaces, Visualization, Electronic Health Records, Devices, Usability, and Clinical Decision Support Systems. These articles typically present overlaps among the themes, revealing that HCI interconnects multiple topics. With the goal of focusing on HCI and design aspects, the articles under consideration were grouped into four clusters. Advances in AI can effectively support physicians' cognitive processes, which certainly play a central role in decision-making tasks, because human mental behavior cannot be completely emulated and captured; the human mind might solve a complex problem even without a statistically significant amount of data by relying upon domain knowledge. For this reason, technology must focus on interactive solutions that support physicians effectively in their daily activities, exploiting their unique knowledge and evidence-based reasoning, as well as improving the various aspects highlighted in this review.


Subjects
Decision Support Systems, Clinical, Precision Medicine, Artificial Intelligence, Computers, Humans, Workflow
14.
Comput Biol Med ; 114: 103424, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31521896

ABSTRACT

Many studies have shown that epicardial fat is associated with a higher risk of heart disease. Accurate epicardial adipose tissue quantification is still an open research issue. Considering that manual approaches are generally user-dependent and time-consuming, computer-assisted tools can considerably improve result repeatability as well as reduce the time required for performing an accurate segmentation. Unfortunately, fully automatic strategies might not always identify the Region of Interest (ROI) correctly. Moreover, they could require user interaction for handling unexpected events. This paper proposes a semi-automatic method for Epicardial Fat Volume (EFV) segmentation and quantification. Unlike supervised Machine Learning approaches, the method does not require any initial training or modeling phase to set up the system. As a further key novelty, the method also yields a subdivision of the adipose tissue density into quartiles. Quartile-based analysis conveys information about the distribution of fat densities, enabling an in-depth study of a possible correlation between fat amounts, fat distribution, and heart diseases. Experimental tests were performed on 50 Calcium Score (CaSc) series and 95 Coronary Computed Tomography Angiography (CorCTA) series. Area-based and distance-based metrics were used to evaluate the segmentation accuracy, obtaining a Dice Similarity Coefficient (DSC) = 93.74% and Mean Absolute Distance (MAD) = 2.18 for CaSc, as well as DSC = 92.48% and MAD = 2.87 for CorCTA. Moreover, the Pearson and Spearman coefficients were computed to quantify the correlation between the ground-truth EFV and the corresponding automated measurement, obtaining 0.9591 and 0.9490 for CaSc, and 0.9513 and 0.9319 for CorCTA, respectively. In conclusion, the proposed EFV quantification and analysis method represents a clinically usable tool that assists the cardiologist in gaining insights into a specific clinical scenario, leading toward personalized diagnosis and therapy.
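
A minimal sketch of two evaluation steps reported above: the Dice Similarity Coefficient between an automated and a manual fat mask, and the subdivision of adipose-tissue densities into quartiles. Masks and HU values are synthetic, not the study data.

```python
# Sketch: DSC between two binary masks and quartile-based fat-density analysis.
import numpy as np

def dice(mask_a, mask_b):
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

rng = np.random.default_rng(0)
manual = rng.random((128, 128)) > 0.6
auto = manual.copy()
auto[rng.random((128, 128)) < 0.03] ^= True            # perturb ~3% of voxels
print("DSC: %.2f%%" % (100 * dice(auto, manual)))

# Quartile-based analysis of fat densities (HU) inside the segmented tissue
fat_hu = rng.normal(-80, 15, size=manual.sum())
q1, q2, q3 = np.percentile(fat_hu, [25, 50, 75])
counts = [np.sum(fat_hu <= q1), np.sum((fat_hu > q1) & (fat_hu <= q2)),
          np.sum((fat_hu > q2) & (fat_hu <= q3)), np.sum(fat_hu > q3)]
print("voxels per density quartile:", counts)
```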


Subjects
Adipose Tissue/diagnostic imaging, Image Interpretation, Computer-Assisted/methods, Pericardium/diagnostic imaging, Tomography, X-Ray Computed/methods, Adult, Algorithms, Deep Learning, Female, Humans, Male, Middle Aged
15.
Comput Methods Programs Biomed ; 176: 159-172, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31200903

ABSTRACT

BACKGROUND AND OBJECTIVES: Image segmentation represents one of the most challenging issues in medical image analysis, namely distinguishing among different adjacent tissues in a body part. In this context, appropriate image pre-processing tools can improve the accuracy achieved by computer-assisted segmentation methods. For images with a bimodal intensity distribution, image binarization can be used to classify the input pictorial data into two classes, given a threshold intensity value. Unfortunately, adaptive thresholding techniques for two-class segmentation work properly only for images characterized by bimodal histograms. We aim to overcome these limitations and automatically determine a suitable optimal threshold for bimodal Magnetic Resonance (MR) images by designing an intelligent image analysis framework tailored to effectively assist physicians in their decision-making tasks. METHODS: In this work, we present a novel evolutionary framework for image enhancement, automatic global thresholding, and segmentation, applied here to different clinical scenarios involving bimodal MR image analysis: (i) uterine fibroid segmentation in MR-guided Focused Ultrasound Surgery, and (ii) brain metastatic cancer segmentation in neuro-radiosurgery therapy. Our framework exploits MedGA as a pre-processing stage. MedGA is an image enhancement method based on Genetic Algorithms that improves the threshold selection, obtained by the efficient Iterative Optimal Threshold Selection algorithm, between the underlying sub-distributions in a nearly bimodal histogram. RESULTS: The results achieved by the proposed evolutionary framework were quantitatively evaluated, showing that the use of MedGA as a pre-processing stage outperforms conventional image enhancement methods (i.e., histogram equalization, bi-histogram equalization, Gamma transformation, and sigmoid transformation), in terms of both MR image enhancement and segmentation evaluation metrics. CONCLUSIONS: Thanks to this framework, MR image segmentation accuracy is considerably increased, allowing for measurement repeatability in clinical workflows. The proposed computational solution could be well-suited to other clinical contexts requiring MR image analysis and segmentation, aiming at providing useful insights for differential diagnosis and prognosis.
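
For illustration, the sketch below shows a textbook formulation of the Iterative Optimal Threshold Selection step named above (the classical Ridler-Calvard iteration on a nearly bimodal histogram); it is not the authors' implementation and operates on synthetic intensities.

```python
# Sketch: iterative optimal (Ridler-Calvard) threshold selection.
import numpy as np

def iterative_optimal_threshold(image, tol=0.5):
    t = image.mean()                       # initial guess: global mean
    while True:
        low, high = image[image <= t], image[image > t]
        new_t = 0.5 * (low.mean() + high.mean())
        if abs(new_t - t) < tol:
            return new_t
        t = new_t

# Synthetic bimodal "MR" intensities: background around 40, lesion around 160
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(40, 10, 4000), rng.normal(160, 20, 1000)])
t = iterative_optimal_threshold(img)
print("optimal threshold:", round(t, 1))
binary = img > t                           # two-class segmentation mask
```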


Subjects
Brain Neoplasms/diagnostic imaging, Image Processing, Computer-Assisted/methods, Leiomyoma/diagnostic imaging, Magnetic Resonance Imaging, Algorithms, Computer Simulation, Decision Making, Female, Humans, Neurosurgery, Radiosurgery, Software
16.
J Biomed Inform ; 88: 37-52, 2018 Dec.
Article in English | MEDLINE | ID: mdl-30419365

ABSTRACT

Computer applications for diagnostic medical imaging generally provide a wide range of tools to support physicians in their daily diagnostic activities. Unfortunately, some functionalities are specialized for specific diseases or imaging modalities, while others are useless for the images under investigation. Nevertheless, the corresponding Graphical User Interface (GUI) widgets are still present on the screen, reducing the image visualization area. As a consequence, the physician may be affected by cognitive overload and visual stress, causing a degradation of performance mainly due to unneeded widgets. In clinical environments, a GUI must represent a sequence of steps for image investigation following a well-defined workflow. This paper proposes a software framework aimed at addressing the issues outlined above. Specifically, we designed a DICOM-based mechanism of data-driven GUI generation, referring to the examined body part and imaging modality as well as to the medical image analysis task to be performed. In this way, the self-configuring GUI is generated on-the-fly, so that only specific functionalities are active according to the current clinical scenario. Such a solution also provides tight integration with the DICOM standard, which considers various aspects of technology in medicine but does not address GUI specification issues. The proposed workflow is designed for diagnostic workstations with a local file system on interchange media, acting inside or outside the hospital ward. Accordingly, the DICOMDIR conceptual data model, defined by a hierarchical structure, is exploited and extended to include the GUI information thanks to a new Information Object Module (IOM), which reuses the DICOM information model. The proposed framework exploits the DICOM standard, representing an enabling technology for a self-consistent solution in medical diagnostic applications. In this paper we present a detailed description of the framework, its software design, and a proof-of-concept implementation as a plug-in of the OsiriX imaging software.
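
An illustrative sketch of the data-driven idea above: standard DICOM attributes of the loaded study select which GUI tools to activate. The tag names are standard DICOM; the tool mapping and file path are hypothetical and are not the paper's IOM definition.

```python
# Sketch: select active GUI tools from DICOM metadata (hypothetical mapping).
import pydicom

def select_active_tools(dicom_path):
    ds = pydicom.dcmread(dicom_path, stop_before_pixels=True)
    modality = getattr(ds, "Modality", "")              # e.g., "MR", "CT", "PT"
    body_part = getattr(ds, "BodyPartExamined", "")     # e.g., "BRAIN", "BREAST"

    tools = ["window_level", "measure_distance"]        # always-on basics
    if modality == "MR" and body_part == "BRAIN":
        tools.append("lesion_segmentation")
    if modality == "CT":
        tools.append("calcium_scoring")
    return tools

# Example usage (path is a placeholder):
# print(select_active_tools("DICOMDIR_series/IM0001.dcm"))
```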


Subjects
Computer Graphics, Medical Informatics/methods, Radiology Information Systems, User-Computer Interface, Algorithms, Brain/diagnostic imaging, Cognition, Computers, Decision Support Systems, Clinical, Diagnostic Imaging/methods, Feasibility Studies, Humans, Magnetic Resonance Imaging, Pattern Recognition, Automated, Software
17.
Comput Methods Programs Biomed ; 144: 77-96, 2017 Jun.
Article in English | MEDLINE | ID: mdl-28495008

ABSTRACT

BACKGROUND AND OBJECTIVES: Nowadays, clinical practice in Gamma Knife treatments is generally based on MRI anatomical information alone. However, the joint use of MRI and PET images can be useful for considering both anatomical and metabolic information about the lesion to be treated. In this paper we present a co-segmentation method that integrates the segmented Biological Target Volume (BTV), obtained from [11C]-Methionine PET (MET-PET) images, and the segmented Gross Target Volume (GTV), obtained on the co-registered MR images. The resulting volume provides enhanced brain tumor information to be used in stereotactic neuro-radiosurgery treatment planning. The GTV often does not entirely match the BTV, which provides metabolic information about brain lesions. For this reason, PET imaging is valuable and could be used to provide complementary information useful for treatment planning. In this way, the BTV can be used to modify the GTV, enhancing Clinical Target Volume (CTV) delineation. METHODS: A novel fully automatic multimodal PET/MRI segmentation method for Leksell Gamma Knife® treatments is proposed. This approach improves and combines two computer-assisted and operator-independent single-modality methods, previously developed and validated, to segment the BTV and GTV from PET and MR images, respectively. In addition, the GTV is utilized to combine the superior contrast of PET images with the higher spatial resolution of MRI, obtaining a new BTV, called BTVMRI. A total of 19 brain metastatic tumors treated with stereotactic neuro-radiosurgery were retrospectively analyzed. A framework for the evaluation of multimodal PET/MRI segmentation is also presented. Overlap-based and spatial distance-based metrics were considered to quantify the similarity between the PET and MRI segmentation approaches. Statistical analysis was also included to measure the correlation among the different segmentation processes. Since it is not possible to define a gold-standard CTV according to both MRI and PET images without treatment response assessment, the feasibility and clinical value of BTV integration in Gamma Knife treatment planning were considered. Therefore, a qualitative evaluation was carried out by three experienced clinicians. RESULTS: The experimental results showed that the GTV and BTV segmentations are statistically correlated (Spearman's rank correlation coefficient: 0.898) but have a low degree of similarity (average Dice Similarity Coefficient: 61.87 ± 14.64). Therefore, volume measurements as well as evaluation metric values demonstrated that MRI and PET convey different but complementary imaging information. GTV and BTV could be combined to enhance treatment planning. In more than 50% of cases the CTV was strongly or moderately conditioned by metabolic imaging. In particular, BTVMRI enhanced the CTV more accurately than BTV in 25% of cases. CONCLUSIONS: The proposed fully automatic multimodal PET/MRI segmentation method is a valid operator-independent methodology that helps clinicians define a CTV including both metabolic and morphologic information. BTVMRI and GTV should be considered for comprehensive treatment planning.
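
As a rough sketch of the co-segmentation idea, the snippet below combines two binary target masks and quantifies their overlap; restricting the BTV by the MRI-derived GTV is only one plausible reading of how BTVMRI is obtained, an assumption not confirmed by the abstract, and the masks are synthetic.

```python
# Sketch: combining a morphologic (GTV) and a biological (BTV) mask and
# quantifying overlap; the BTV_MRI operation is an assumption for illustration.
import numpy as np

rng = np.random.default_rng(0)
gtv = rng.random((64, 64, 32)) > 0.7          # morphologic target (MRI)
btv = rng.random((64, 64, 32)) > 0.7          # biological target (PET)

btv_mri = np.logical_and(btv, gtv)            # assumed: BTV restricted by MRI anatomy
ctv = np.logical_or(gtv, btv_mri)             # candidate clinical target volume

dsc = 2 * np.logical_and(gtv, btv).sum() / (gtv.sum() + btv.sum())
print("GTV-BTV Dice: %.2f" % dsc)
print("voxels  GTV: %d  BTV: %d  BTV_MRI: %d  CTV: %d"
      % (gtv.sum(), btv.sum(), btv_mri.sum(), ctv.sum()))
```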


Subjects
Brain Neoplasms/radiotherapy, Magnetic Resonance Imaging, Multimodal Imaging, Positron-Emission Tomography, Radiosurgery/methods, Radiotherapy Planning, Computer-Assisted, Humans
18.
Med Biol Eng Comput ; 55(6): 897-908, 2017 Jun.
Article in English | MEDLINE | ID: mdl-27638108

ABSTRACT

An algorithm for delineating complex head and neck cancers in positron emission tomography (PET) images is presented in this article. An enhanced random walk (RW) algorithm with automatic seed detection is proposed and used to make the segmentation process feasible in the event of inhomogeneous lesions with bifurcations. In addition, an adaptive probability threshold and a k-means-based clustering technique have been integrated into the proposed enhanced RW algorithm. The new threshold is capable of following the intensity changes between adjacent slices along the whole cancer volume, leading to an operator-independent algorithm. Validation experiments were first conducted on phantom studies: high Dice similarity coefficients, high true positive volume fractions, and low Hausdorff distances confirm the accuracy of the proposed method. Subsequently, forty head and neck lesions were segmented in order to evaluate the clinical feasibility of the proposed approach against the most common segmentation algorithms. Experimental results show that the proposed algorithm is more accurate and robust than the most common algorithms in the literature. Finally, the proposed method also shows real-time performance, addressing the physician's requirements in a radiotherapy environment.
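
A hedged sketch of the two ingredients named above, k-means based seed detection and random-walk segmentation, using scikit-learn and scikit-image; it shows only the basic building blocks on a synthetic slice, not the enhanced RW algorithm of the paper.

```python
# Sketch: k-means seed detection followed by random-walk segmentation.
import numpy as np
from sklearn.cluster import KMeans
from skimage.segmentation import random_walker

rng = np.random.default_rng(0)
pet = rng.normal(1.0, 0.3, size=(96, 96))
pet[30:60, 30:60] += 4.0                                   # hot, inhomogeneous "lesion"

# Seed detection: 2-cluster k-means on voxel intensities
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(pet.reshape(-1, 1))
hot = int(np.argmax(km.cluster_centers_))
labels = np.zeros_like(pet, dtype=int)
labels[(km.labels_.reshape(pet.shape) == hot) & (pet > np.percentile(pet, 99))] = 2  # lesion seeds
labels[pet < np.percentile(pet, 50)] = 1                                             # background seeds

# Random-walk segmentation from the detected seeds
segmentation = random_walker(pet, labels, beta=130, mode="bf")
print("lesion voxels:", int((segmentation == 2).sum()))
```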


Subjects
Head and Neck Neoplasms/diagnosis, Positron-Emission Tomography/methods, Algorithms, Humans, Image Processing, Computer-Assisted/methods, Phantoms, Imaging
19.
Br J Radiol ; 89(1062): 20150773, 2016 Jun.
Article in English | MEDLINE | ID: mdl-26987374

ABSTRACT

OBJECTIVE: The aim of the study was to compare epicardial adipose tissue (EAT) characteristics assessed with coronary calcium score (CS) and CT coronary angiography (CTCA) image data sets. METHODS: In 76 patients (mean age 59 ± 13 years) who underwent CS and CTCA owing to suspected coronary artery disease (CAD), EAT was quantified in terms of density (Hounsfield units), thickness, and volume. The EAT volume was extracted with semi-automatic software. RESULTS: A moderate correlation was found between EAT density in CS and CTCA image data sets (-100 ± 19 HU vs -70 ± 24 HU; p < 0.05, r = 0.55). The distribution of EAT was not symmetrical, with a maximal thickness at the right atrioventricular groove (14.2 ± 5.3 mm in CS, 15.7 ± 5 mm in CTCA; p > 0.05, r = 0.76). The EAT volume was 122 ± 50 cm3 in CS and 86 ± 40 cm3 in CTCA (Δ = 30%, p < 0.05, r = 0.92). After adjustment for the post-contrast EAT attenuation difference (Δ = 30 HU), the volume was 101 ± 47 cm3 (Δ = 17%, p < 0.05, r = 0.92). Based on median EAT volume values, no differences were found between the groups with smaller and larger volumes in terms of Agatston score and CAD severity. CONCLUSION: CS and CTCA image data sets may be equally employed for EAT assessment; however, an underestimation of volume is found with the latter acquisition even after post-contrast attenuation adjustment. ADVANCES IN KNOWLEDGE: EAT may be measured by processing either the CS or CTCA image data sets.
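
A minimal sketch of threshold-based EAT quantification of the kind implied above: voxels in a pericardial ROI falling within a fat attenuation window are counted and converted to a volume, and the window is shifted for the post-contrast series. The HU window, the 30 HU shift, the ROI, and the voxel size are illustrative assumptions.

```python
# Sketch: HU-window fat volume quantification with a post-contrast adjustment.
import numpy as np

def eat_volume_cm3(hu_roi, voxel_mm3, hu_range=(-190, -30)):
    fat = (hu_roi >= hu_range[0]) & (hu_roi <= hu_range[1])
    return fat.sum() * voxel_mm3 / 1000.0            # mm^3 -> cm^3

rng = np.random.default_rng(0)
roi_cs = rng.normal(-100, 25, size=200_000)          # pericardial ROI, calcium-score series
roi_ctca = roi_cs + 30                               # post-contrast attenuation shift (~30 HU)
voxel_mm3 = 0.68 * 0.68 * 2.5                        # example voxel size

print("EAT volume (CS):   %.1f cm3" % eat_volume_cm3(roi_cs, voxel_mm3))
print("EAT volume (CTCA): %.1f cm3" % eat_volume_cm3(roi_ctca, voxel_mm3))
# Shifting the fat window by +30 HU for CTCA compensates part of the difference:
print("EAT volume (CTCA, adjusted window): %.1f cm3"
      % eat_volume_cm3(roi_ctca, voxel_mm3, hu_range=(-160, 0)))
```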


Subjects
Adipose Tissue/diagnostic imaging, Computed Tomography Angiography/methods, Coronary Angiography/methods, Coronary Artery Disease/diagnostic imaging, Pericardium/diagnostic imaging, Vascular Calcification/diagnostic imaging, Adiposity, Algorithms, Female, Humans, Imaging, Three-Dimensional/methods, Male, Middle Aged, Radiographic Image Interpretation, Computer-Assisted/methods, Reproducibility of Results, Sensitivity and Specificity
20.
Med Biol Eng Comput ; 54(7): 1071-84, 2016 Jul.
Article in English | MEDLINE | ID: mdl-26530047

ABSTRACT

Uterine fibroids are benign tumors that can affect female patients during the reproductive years. Magnetic resonance-guided focused ultrasound (MRgFUS) represents a noninvasive approach that uses thermal ablation principles to treat symptomatic fibroids. During traditional treatment planning, the uterus, fibroids, and surrounding organs at risk must be manually marked on MR images by an operator. After treatment, an operator must segment, again manually, the treated areas to evaluate the non-perfused volume (NPV) inside the fibroids. Both pre- and post-treatment procedures are time-consuming and operator-dependent. This paper presents a novel method, based on an advanced direct region detection model, for fibroid segmentation in MR images, addressing MRgFUS post-treatment segmentation issues. An incremental procedure is proposed: split-and-merge algorithm results are employed as multiple seed-region selections by an adaptive region growing procedure. The proposed approach segments multiple fibroids with different pixel intensities, even in the same MR image. The method was evaluated using area-based and distance-based metrics and compared with other similar works in the literature. Segmentation results, obtained on 14 patients, demonstrated the effectiveness of the proposed approach, showing a sensitivity of 84.05%, a specificity of 92.84%, and a speedup factor of 1.56× with respect to classic region growing implementations (average values).
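
A minimal sketch of seeded region growing on a synthetic MR slice, the building block behind the split-and-merge plus adaptive-growing pipeline described above; scikit-image's flood fill with an intensity tolerance derived from local statistics stands in for the adaptive step and is not the paper's algorithm.

```python
# Sketch: seeded region growing via flood fill with an adaptive tolerance.
import numpy as np
from skimage.segmentation import flood

rng = np.random.default_rng(0)
mr = rng.normal(80, 5, size=(128, 128))
mr[40:90, 30:80] = rng.normal(140, 5, size=(50, 50))   # brighter "treated fibroid" region

seed = (60, 50)                                        # seed inside the fibroid (e.g., from split-and-merge)
local = mr[seed[0]-2:seed[0]+3, seed[1]-2:seed[1]+3]   # 5x5 neighborhood around the seed
tolerance = 3 * local.std()                            # adaptive tolerance from local statistics
mask = flood(mr, seed, tolerance=tolerance)

print("segmented pixels:", int(mask.sum()), "of", mr.size)
```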


Subjects
High-Intensity Focused Ultrasound Ablation/methods, Image Processing, Computer-Assisted, Leiomyoma/diagnostic imaging, Leiomyoma/therapy, Magnetic Resonance Imaging, Interventional/methods, Algorithms, Female, Humans, Magnetic Resonance Imaging/methods