ABSTRACT
This paper describes the development of a classification model for the effective detection of malignant melanoma, an aggressive type of skin cancer, in skin lesions. The primary focus is on fine-tuning and improving a state-of-the-art convolutional neural network (CNN) to obtain the optimal ROC-AUC score. The study investigates a variety of artificial intelligence (AI) clustering techniques to train the developed models on a combined dataset of images drawn from the 2019 and 2020 SIIM-ISIC Melanoma Classification Challenges. The models were evaluated using varying k-fold cross-validation schemes, with the highest ROC-AUC reaching 99.48%.
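The abstract does not specify how the ROC-AUC was computed; as background, the score equals the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative one. A minimal pure-Python sketch on invented scores (not the paper's data):

```python
def roc_auc(scores, labels):
    """ROC-AUC as the normalised Mann-Whitney U statistic: the fraction
    of (positive, negative) pairs where the positive case scores higher;
    ties count as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative melanoma probabilities from a hypothetical classifier
scores = [0.9, 0.8, 0.35, 0.6, 0.2, 0.1]
labels = [1,   1,   1,    0,   0,   0]
print(roc_auc(scores, labels))  # 8/9: eight of nine pairs ranked correctly
```

A perfect ranking of positives above negatives yields 1.0, matching the near-perfect 99.48% reported above.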
Subject(s)
Artificial Intelligence , Melanoma , Humans , Dermoscopy/methods , Melanoma/diagnosis , Neural Networks, Computer , Cluster Analysis , Melanoma, Cutaneous Malignant
ABSTRACT
There is a growing demand for fast, accurate computation of clinical markers to improve renal function and anatomy assessment with a single study. However, conventional techniques have limitations that lead to overestimation of kidney function or fail to provide sufficient spatial resolution to target the disease location. In contrast, computer-aided analysis of dynamic contrast-enhanced (DCE) magnetic resonance imaging (MRI) could generate significant markers, including the glomerular filtration rate (GFR) and time-intensity curves of the cortex and medulla for determining obstruction in the urinary tract. This paper presents a dual-stage fully modular framework for automatic renal compartment segmentation in 4D DCE-MRI volumes. (1) Memory-efficient 3D deep learning is integrated to localise each kidney by harnessing residual convolutional neural networks for improved convergence; segmentation is performed by efficiently learning spatial-temporal information coupled with boundary-preserving fully convolutional dense nets. (2) Renal contextual information is enhanced via non-linear transformation to segment the cortex and medulla. The proposed framework is evaluated on a paediatric dataset containing 60 4D DCE-MRI volumes exhibiting varying conditions affecting kidney function. Our technique outperforms a state-of-the-art approach based on GrabCut and a support vector machine classifier in mean Dice similarity coefficient (DSC) by 3.8%, and demonstrates higher statistical stability, with a standard deviation lower by 12.4% and 15.7% for cortex and medulla segmentation, respectively.
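The Dice similarity coefficient (DSC) used to report these results is a standard overlap measure between a predicted and a reference segmentation; a minimal sketch over binary masks (flattened to 0/1 lists here for brevity):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks:
    2 * |A intersect B| / (|A| + |B|). Masks are flat sequences of 0/1."""
    intersection = sum(a & b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * intersection / total if total else 1.0

# Two toy segmentations of the same (flattened) volume
pred  = [1, 1, 1, 0, 0, 0, 1, 0]
truth = [1, 1, 0, 0, 0, 1, 1, 0]
print(dice(pred, truth))  # 0.75: 3 overlapping voxels, 4 + 4 foreground total
```

A DSC of 1.0 indicates perfect overlap and 0.0 no overlap, so the 3.8% gain reported above is an absolute improvement on this scale.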
Subject(s)
Contrast Media , Magnetic Resonance Imaging , Biomarkers , Child , Humans , Image Processing, Computer-Assisted , Kidney/diagnostic imaging , Kidney/physiology , Neural Networks, Computer
ABSTRACT
Dementia is a syndrome characterised by the decline of different cognitive abilities. A high death rate and the high cost of detection, treatment, and patient care count amongst its consequences. Although there is no cure for dementia, a timely diagnosis helps in obtaining necessary support, appropriate medication, and maintenance, as far as possible, of engagement in intellectual, social, and physical activities. The early detection of Alzheimer's Disease (AD) is considered to be of high importance for improving the quality of life of patients and their families. In particular, Virtual Reality (VR) is an expanding tool that can be used to assess cognitive abilities while navigating through a Virtual Environment (VE). This paper summarises common AD screening and diagnosis techniques, focusing on the latest approaches based on Virtual Environments, behaviour analysis, and emotion recognition, which aim to provide more reliable and non-invasive diagnostics at home or in a clinical environment. Furthermore, different AD diagnosis evaluation methods and metrics are presented and discussed, together with an overview of the different datasets.
Subject(s)
Alzheimer Disease , Virtual Reality , Alzheimer Disease/diagnosis , Cognition , Early Diagnosis , Humans , Quality of Life
ABSTRACT
The accurate 3D reconstruction of organs from radiological scans is an essential tool in computer-aided diagnosis (CADx) and plays a critical role in clinical, biomedical and forensic science research. The structure and shape of an organ, combined with morphological measurements such as volume and curvature, can provide significant guidance towards establishing the progression or severity of a condition, and thus support improved diagnosis and therapy planning. Furthermore, the classification and stratification of organ abnormalities aim to explore and investigate organ deformations following injury, trauma and illness. This paper presents a framework for automatic morphological feature extraction in computer-aided 3D organ reconstructions following organ segmentation in 3D radiological scans. Two different magnetic resonance imaging (MRI) datasets are evaluated. Using the MRI scans of 85 adult volunteers, the overall mean pancreas volume is 69.30 ± 32.50 cm³, and the 3D global curvature is (35.23 ± 6.83) × 10⁻³. Another experiment evaluates the MRI scans of 30 volunteers, achieving a mean liver volume of 1547.48 ± 204.19 cm³ and a 3D global curvature of (19.87 ± 3.62) × 10⁻³. Both experiments highlight a statistically significant negative correlation between 3D curvature and volume (p < 0.0001). Such a tool can support the investigation of organ-related conditions such as obesity, type 2 diabetes mellitus and liver disease.
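The reported negative correlation between 3D curvature and volume can be quantified with the Pearson coefficient; a small sketch on made-up (volume, curvature) pairs, not the study's measurements:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical volumes (cm^3) and global curvatures (x 10^-3):
# larger organs tend to exhibit lower global curvature
volumes    = [45.0, 60.0, 72.0, 90.0, 110.0]
curvatures = [42.0, 38.0, 35.0, 31.0, 27.0]
print(pearson_r(volumes, curvatures))  # close to -1
```

A coefficient near -1 on real measurements, combined with a significance test, would support the negative association described above.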
Subject(s)
Imaging, Three-Dimensional/methods , Liver/diagnostic imaging , Magnetic Resonance Imaging/methods , Pancreas/diagnostic imaging , Adult , Algorithms , Female , Humans , Liver/anatomy & histology , Male , Pancreas/anatomy & histology
ABSTRACT
Accurate, quantitative segmentation of anatomical structures in radiological scans, such as Magnetic Resonance Imaging (MRI) and Computed Tomography (CT), can produce significant biomarkers and can be integrated into computer-aided diagnosis (CADx) systems to support the interpretation of medical images from multi-protocol scanners. However, there are serious challenges in developing robust automated segmentation techniques, including high variation in anatomical structure and size, the presence of edge-based artefacts, and heavy, uncontrolled breathing that can produce blurred motion-based artefacts. This paper presents a novel computing approach for automatic organ and muscle segmentation in medical images from multiple modalities by harnessing the advantages of deep learning techniques in a two-part process: (1) a 3D encoder-decoder, Rb-UNet, builds a localisation model, and a 3D Tiramisu network generates a boundary-preserving segmentation model for each target structure; (2) the fully trained Rb-UNet predicts a 3D bounding box encapsulating the target structure of interest, after which the fully trained Tiramisu model performs segmentation to reveal detailed organ or muscle boundaries. The proposed approach is evaluated on six different datasets, including MRI, Dynamic Contrast-Enhanced (DCE) MRI and CT scans targeting the pancreas, liver, kidneys and psoas muscle, and achieves quantitative measures of mean Dice similarity coefficient (DSC) that surpass or are comparable with the state of the art. A qualitative evaluation performed by two independent radiologists verified the preservation of detailed organ and muscle boundaries.
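The handoff between the two parts — a localisation model predicting a 3D bounding box, which then crops the volume for the boundary-preserving segmentation model — can be sketched as follows. This is an illustrative pure-Python version over nested lists; in practice both steps operate on image tensors, and the box comes from the trained Rb-UNet rather than a binary mask:

```python
def bounding_box_3d(mask):
    """Return the tight 3D bounding box (min and max index per axis)
    around the non-zero voxels of a nested-list binary mask."""
    coords = [(z, y, x)
              for z, plane in enumerate(mask)
              for y, row in enumerate(plane)
              for x, v in enumerate(row) if v]
    lo = tuple(min(c[i] for c in coords) for i in range(3))
    hi = tuple(max(c[i] for c in coords) for i in range(3))
    return lo, hi

def crop(volume, lo, hi):
    """Crop a nested-list volume to the (inclusive) bounding box,
    ready to be passed to the second-stage segmentation model."""
    return [[row[lo[2]:hi[2] + 1]
             for row in plane[lo[1]:hi[1] + 1]]
            for plane in volume[lo[0]:hi[0] + 1]]

# Toy 3x3x3 localisation mask with a 2x2x1 "organ" in the first slice
mask = [[[0, 0, 0], [0, 1, 1], [0, 1, 1]],
        [[0, 0, 0], [0, 0, 0], [0, 0, 0]],
        [[0, 0, 0], [0, 0, 0], [0, 0, 0]]]
lo, hi = bounding_box_3d(mask)
print(lo, hi)  # (0, 1, 1) (0, 2, 2)
```

Cropping to the predicted box lets the segmentation network spend its capacity on the structure of interest rather than on background tissue.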
ABSTRACT
Automatic pancreas segmentation in 3D radiological scans is a critical, yet challenging task. As a prerequisite for computer-aided diagnosis (CADx) systems, accurate pancreas segmentation could generate both quantitative and qualitative information towards establishing the severity of a condition, and thus provide additional guidance for therapy planning. Since the pancreas is an organ of high inter-patient anatomical variability, previous segmentation approaches report lower quantitative accuracy scores in comparison to abdominal organs such as the liver or kidneys. This paper presents a novel approach for automatic pancreas segmentation in magnetic resonance imaging (MRI) and computed tomography (CT) scans. The method exploits 3D segmentation that, when coupled with geometrical and morphological characteristics of abdominal tissue, classifies distinct contours in tight pixel-range proximity as "pancreas" or "non-pancreas". There are three main stages to this approach: (1) identify a major pancreas region and apply contrast enhancement to differentiate between pancreatic and surrounding tissue; (2) perform 3D segmentation via a continuous max-flow and min-cut approach, structured forest edge detection, and a training dataset of annotated pancreata; (3) eliminate non-pancreatic contours from the resultant segmentation via morphological operations on area, structure and connectivity between distinct contours. The proposed method is evaluated on a dataset containing 82 CT image volumes, achieving a mean Dice similarity coefficient (DSC) of 79.3 ± 4.4%. Two MRI datasets containing 216 and 132 image volumes are evaluated, achieving mean DSC of 79.6 ± 5.7% and 81.6 ± 5.1%, respectively. This approach is statistically stable, reflected by a lower standard deviation in comparison to state-of-the-art approaches.
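Stage (3) — eliminating spurious contours by area and connectivity — is not detailed in the abstract; as one illustrative sketch of the idea, small connected components of a binary slice can be discarded with a flood-fill labelling pass (the paper additionally uses structure and 3D connectivity criteria):

```python
from collections import deque

def remove_small_components(grid, min_area):
    """Zero out 4-connected components of 1s whose area is below
    min_area. grid is a list of lists of 0/1; returns a new grid."""
    h, w = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    seen = [[False] * w for _ in range(h)]
    for sy in range(h):
        for sx in range(w):
            if out[sy][sx] and not seen[sy][sx]:
                # Breadth-first flood fill to collect one component
                comp, queue = [], deque([(sy, sx)])
                seen[sy][sx] = True
                while queue:
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and out[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(comp) < min_area:
                    for y, x in comp:
                        out[y][x] = 0
    return out

# A large "pancreas" blob plus a one-pixel speck that gets removed
slice_ = [[1, 1, 0, 0],
          [1, 1, 0, 1],
          [0, 0, 0, 0]]
print(remove_small_components(slice_, min_area=2))  # speck at (1, 3) zeroed
```

Thresholding component area in this way keeps the dominant pancreatic region while discarding isolated non-pancreatic fragments.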
Subject(s)
Magnetic Resonance Imaging/methods , Pancreas/diagnostic imaging , Pancreas/physiopathology , Tomography, X-Ray Computed , Algorithms , Databases, Factual , Female , Humans , Image Processing, Computer-Assisted/methods , Imaging, Three-Dimensional , Male
ABSTRACT
BACKGROUND: Multiparametric magnetic resonance imaging (mpMRI)-targeted prostate biopsies can improve detection of clinically significant prostate cancer and decrease the overdetection of insignificant cancers. It is unknown whether visual-registration targeting is sufficient or augmentation with image-fusion software is needed. OBJECTIVE: To assess concordance between the two methods. DESIGN, SETTING, AND PARTICIPANTS: We conducted a blinded, within-person randomised, paired validating clinical trial. From 2014 to 2016, 141 men who had undergone a prior (positive or negative) transrectal ultrasound biopsy and had a discrete lesion on mpMRI (score 3-5) requiring targeted transperineal biopsy were enrolled at a UK academic hospital; 129 underwent both biopsy strategies and completed the study. INTERVENTION: The order of performing biopsies using visual registration and a computer-assisted MRI/ultrasound image-fusion system (SmartTarget) on each patient was randomised. The equipment was reset between biopsy strategies to mitigate incorporation bias. OUTCOME MEASUREMENTS AND STATISTICAL ANALYSIS: The proportion of clinically significant prostate cancer (primary outcome: Gleason pattern ≥3+4=7, maximum cancer core length ≥4mm; secondary outcome: Gleason pattern ≥4+3=7, maximum cancer core length ≥6mm) detected by each method was compared using McNemar's test of paired proportions. RESULTS AND LIMITATIONS: The two strategies combined detected 93 clinically significant prostate cancers (72% of the cohort). Each strategy detected 80/93 (86%) of these cancers; each strategy identified 13 cases missed by the other. Three patients experienced adverse events related to biopsy (urinary retention, urinary tract infection, nausea, and vomiting). No difference in urinary symptoms, erectile function, or quality of life between baseline and follow-up (median 10.5 wk) was observed. 
The key limitations were the lack of parallel-group randomisation and the limit on the number of targeted cores. CONCLUSIONS: The visual-registration and image-fusion targeting strategies combined had the highest detection rate for clinically significant cancers. Targeted prostate biopsy should be performed using both strategies together. PATIENT SUMMARY: We compared two prostate cancer biopsy strategies: visual registration and image fusion. A combination of the two strategies found the most clinically important cancers, and the two should therefore be used together whenever targeted biopsy is performed.
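McNemar's test of paired proportions, used for the primary comparison, depends only on the discordant pairs — cancers found by one strategy but not the other. A stdlib-only sketch applied to the 13/13 split reported above:

```python
import math

def mcnemar(b, c):
    """McNemar chi-square statistic and p-value (1 degree of freedom,
    no continuity correction) from the two discordant-pair counts:
    b = detected only by strategy A, c = detected only by strategy B."""
    if b + c == 0:
        return 0.0, 1.0
    stat = (b - c) ** 2 / (b + c)
    # Survival function of chi-square with 1 df: P(Z^2 > x) = erfc(sqrt(x/2))
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p

# Each biopsy strategy detected 13 cancers the other missed
stat, p = mcnemar(13, 13)
print(stat, p)  # 0.0 1.0 -> no evidence of a difference between strategies
```

With perfectly balanced discordant counts the statistic is zero, consistent with the trial's finding that neither strategy outperformed the other alone.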
Subject(s)
Image-Guided Biopsy/methods , Magnetic Resonance Imaging , Multimodal Imaging , Prostatic Neoplasms/pathology , Ultrasonography , Aged , False Negative Reactions , Humans , Male , Middle Aged , Neoplasm Grading , Prospective Studies , Risk Assessment , Single-Blind Method
ABSTRACT
PURPOSE: Image-guided systems that fuse magnetic resonance imaging (MRI) with three-dimensional (3D) ultrasound (US) images for performing targeted prostate needle biopsy and minimally invasive treatments for prostate cancer are of increasing clinical interest. To date, a wide range of different accuracy estimation procedures and error metrics have been reported, which makes comparing the performance of different systems difficult. METHODS: A set of nine measures is presented to assess the accuracy of MRI-US image registration, needle positioning, needle guidance, and overall system error, with the aim of providing a methodology for estimating the accuracy of instrument placement using an MR/US-guided transperineal approach. RESULTS: Using the SmartTarget fusion system, the MRI-US image alignment error was determined to be 2.0 ± 1.0 mm (mean ± SD), and the overall system instrument targeting error to be 3.0 ± 1.2 mm. Three needle deployments for each target phantom lesion were found to result in a 100% lesion hit rate and a median predicted cancer core length of 5.2 mm. CONCLUSIONS: The application of a comprehensive, unbiased validation assessment for MR/US-guided systems can provide useful information on system performance for quality assurance and system comparison. Furthermore, such an analysis can be helpful in identifying relationships between these errors, providing insight into the technical behaviour of these systems.
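A targeting error reported as mean ± SD is derived from per-target Euclidean distances between intended and achieved instrument positions; a sketch on invented coordinates in millimetres, not the study's phantom data:

```python
import math

def targeting_errors(planned, achieved):
    """Euclidean distance (mm) between each planned and achieved
    3D target position."""
    return [math.dist(p, a) for p, a in zip(planned, achieved)]

def mean_sd(values):
    """Mean and sample standard deviation of a list of measurements."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    return mean, math.sqrt(var)

# Hypothetical planned vs. achieved needle-tip positions (mm)
planned  = [(0.0, 0.0, 0.0), (10.0, 5.0, 2.0), (3.0, 8.0, 1.0)]
achieved = [(1.0, 2.0, 2.0), (11.0, 5.0, 4.0), (3.0, 5.0, 1.0)]
mean, sd = mean_sd(targeting_errors(planned, achieved))
print(f"{mean:.1f} ± {sd:.1f} mm")
```

Reporting both the mean and its spread, as the paper does, lets systems be compared on accuracy and on consistency.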