ABSTRACT
BACKGROUND: Magnetic resonance acquisition is a time-consuming process, making it susceptible to patient motion during scanning. Even motion on the order of a millimeter can introduce severe blurring and ghosting artifacts, potentially necessitating re-acquisition. Magnetic Resonance Imaging (MRI) can be accelerated by acquiring only a fraction of k-space, combined with advanced reconstruction techniques leveraging coil sensitivity profiles and prior knowledge. Artificial intelligence (AI)-based reconstruction techniques have recently been popularized, but generally assume an ideal setting without intra-scan motion. PURPOSE: To retrospectively detect and quantify the severity of motion artifacts in undersampled MRI data. This may prove valuable as a safety mechanism for AI-based approaches, provide useful information to the reconstruction method, or prompt re-acquisition while the patient is still in the scanner. METHODS: We developed a deep learning approach that detects and quantifies motion artifacts in undersampled brain MRI. We demonstrate that synthetically motion-corrupted data can be leveraged to train the convolutional neural network (CNN)-based motion artifact estimator, generalizing well to real-world data. Additionally, we leverage the motion artifact estimator by using it as a selector for a motion-robust reconstruction model when a considerable amount of motion is detected, and a high data-consistency model otherwise. RESULTS: Training and validation were performed on 4387 and 1304 synthetically motion-corrupted images and their uncorrupted counterparts, respectively. Testing was performed on undersampled in vivo motion-corrupted data from 28 volunteers, where our model distinguished head motion from motion-free scans with 91% and 96% accuracy when trained on synthetic and on real data, respectively. It predicted a manually defined quality label ('Good', 'Medium' or 'Bad' quality) correctly 76% and 85% of the time when trained on synthetic and real data, respectively. When used as a selector, it selected the appropriate reconstruction network 93% of the time, achieving near-optimal SSIM values. CONCLUSIONS: The proposed method quantified motion artifact severity in undersampled MRI data with high accuracy, enabling real-time motion artifact detection that can help improve the safety and quality of AI-based reconstructions.
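Conceptually, the selector described in METHODS can be sketched as below. This is a minimal illustration in Python/PyTorch, assuming a small CNN regressor and a hypothetical severity threshold; the paper's actual architecture, threshold value, and reconstruction models are not specified here.

import torch
import torch.nn as nn

class MotionArtifactEstimator(nn.Module):
    # Small CNN that maps an undersampled, motion-corrupted image to a scalar
    # motion-severity score (architecture is illustrative only).
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def select_reconstruction(image, estimator, robust_net, dc_net, threshold=0.5):
    # Route the scan to the motion-robust model when the estimated severity exceeds
    # a threshold, and to the high data-consistency model otherwise
    # (the threshold value here is hypothetical).
    with torch.no_grad():
        severity = estimator(image).item()
    model = robust_net if severity > threshold else dc_net
    return model(image), severity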
Subjects
Artifacts; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Motion; Magnetic Resonance Imaging/methods; Image Processing, Computer-Assisted/methods; Humans; Artificial Intelligence; Brain/diagnostic imaging; Deep Learning
ABSTRACT
Research exploring CycleGAN-based synthetic image generation has recently accelerated in the medical community due to its ability to leverage unpaired images effectively. However, a well-established drawback of the CycleGAN, the introduction of artifacts in generated images, makes it unreliable for medical imaging use cases. In an attempt to address this, we explore the effect of structure losses on the CycleGAN and propose a generalized frequency-based loss that aims at preserving the content in the frequency domain. We apply this loss to the task of translating cone-beam computed tomography (CBCT) images to computed tomography (CT)-like quality. Synthetic CT (sCT) images generated by our methods are compared against the baseline CycleGAN along with other existing structure losses proposed in the literature. Our methods (MAE: 85.5, MSE: 20433, NMSE: 0.026, PSNR: 30.02, SSIM: 0.935) quantitatively and qualitatively improve over the baseline CycleGAN (MAE: 88.8, MSE: 24244, NMSE: 0.03, PSNR: 29.37, SSIM: 0.935) across all investigated metrics and are more robust than existing methods. Furthermore, no artifacts or loss in image quality were observed. Finally, we demonstrated that sCTs generated using our methods have superior performance compared to the original CBCT images on selected downstream tasks.
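As an illustration of the idea behind a frequency-domain content loss, the sketch below penalizes differences between the Fourier magnitude spectra of the source CBCT and the generated sCT. This is a simplified stand-in written in PyTorch; the paper's generalized frequency-based loss and its weighting are not reproduced here.

import torch

def frequency_content_loss(generated, source):
    # Compare the 2-D FFT magnitude spectra of the translated image and its source,
    # encouraging the generator to preserve structural content in the frequency domain.
    gen_mag = torch.abs(torch.fft.fft2(generated))
    src_mag = torch.abs(torch.fft.fft2(source))
    return torch.mean(torch.abs(gen_mag - src_mag))

# Hypothetical use inside CycleGAN generator training:
# total_loss = adversarial_loss + cycle_loss + lambda_freq * frequency_content_loss(sct, cbct)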
Subjects
Cone-Beam Computed Tomography; Image Processing, Computer-Assisted; Image Processing, Computer-Assisted/methods; Tomography, X-Ray Computed; Artifacts; Benchmarking
ABSTRACT
This paper presents the post-analysis of the first edition of the HEad and neCK TumOR (HECKTOR) challenge. The challenge was held as a satellite event of the 23rd International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2020, and was the first of its kind focusing on lesion segmentation in combined FDG-PET and CT image modalities. The challenge's task was the automatic segmentation of the Gross Tumor Volume (GTV) of Head and Neck (H&N) oropharyngeal primary tumors in FDG-PET/CT images. To this end, the participants were given a training set of 201 cases from four different centers, and their methods were tested on a held-out set of 53 cases from a fifth center. The methods were ranked according to the Dice Score Coefficient (DSC) averaged across all test cases. An additional inter-observer agreement study was organized to assess the difficulty of the task from a human perspective. 64 teams registered for the challenge, of which 10 provided a paper detailing their approach. The best method obtained an average DSC of 0.7591, showing a large improvement over our proposed baseline method and the inter-observer agreement, associated with DSCs of 0.6610 and 0.61, respectively. The automatic methods successfully leveraged the wealth of metabolic and structural properties of combined PET and CT modalities, significantly outperforming the human inter-observer agreement level, semi-automatic thresholding based on PET images, and other single-modality-based methods. This promising performance is a step toward large-scale radiomics studies in H&N cancer, obviating the need for error-prone and time-consuming manual delineation of GTVs.
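For reference, the ranking metric can be computed as in the following minimal sketch, assuming binary NumPy segmentation masks; the challenge's official evaluation code may handle empty masks and resampling differently.

import numpy as np

def dice_score(pred, gt):
    # Dice Score Coefficient: DSC = 2 * |A intersect B| / (|A| + |B|).
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom > 0 else 1.0

# Teams were ranked by the DSC averaged across all test cases:
# mean_dsc = np.mean([dice_score(p, g) for p, g in zip(predictions, references)])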
Subjects
Head and Neck Neoplasms; Positron Emission Tomography Computed Tomography; Fluorodeoxyglucose F18; Head and Neck Neoplasms/diagnostic imaging; Humans; Positron Emission Tomography Computed Tomography/methods; Positron-Emission Tomography/methods; Tumor Burden
ABSTRACT
INTRODUCTION: Arch bars are regularly used by maxillofacial surgeons in the management of dentoalveolar and minimally displaced fractures of the maxilla or mandible, as well as luxation or avulsion of teeth. The procedure for arch bar placement has remained unchanged over the years, and this, coupled with the difficulty of maintaining oral hygiene, is a problem that demands attention. TECHNIQUE: We have devised a technique to overcome these hurdles and achieve adequate intramaxillary splinting. The technique uses an arch wire and ligature wire assembly instead of the conventional arch bar. CONCLUSION: This technique is easy to learn and can thus be used as a rapid yet robust alternative to the conventional arch bar for dentoalveolar and minimally displaced fractures.
ABSTRACT
Intracortical microelectrode arrays record multi-unit extracellular activity for neurophysiology studies and for brain-machine interface applications. The common first step is neural spike detection, a process complicated by common-noise signals from motion artifacts, electromyographic activity, and electric field pickup, especially in awake, behaving subjects. Common-noise spikes are often very similar to neural spikes in their magnitude and their spectral and temporal features. Provided sufficient spacing exists between electrodes of the array, a local neural spike is rarely recorded on multiple electrodes simultaneously; this is not true for distant common-noise sources. Two new techniques compatible with standard spike-detection schemes are introduced and evaluated. The first method, virtual referencing (VR), takes the average recording from all functional electrodes in the array (representing the signal from a virtual electrode at the array's center) and subtracts it from the test electrode signal. The second method, inter-electrode correlation (IEC), computes a correlation coefficient between threshold-exceeding candidate spike segments on the test electrode and concurrent segments from the remaining electrodes. When sufficient correlation is detected, the candidate spike is rejected as originating from a distant common-noise source. The performance of these algorithms was compared with standard thresholding and differential referencing approaches using neural recordings from unanaesthetized rats. By evaluating characteristics of mean spike waveforms generated by each method under different levels of common noise, it was found that IEC consistently offered the most robust means of neural spike detection. Furthermore, IEC's rejection of supra-threshold events unlikely to originate from local neurons significantly reduces data handling for downstream spike sorting and processing operations.
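To make the VR and IEC steps concrete, the following is a minimal sketch in Python/NumPy, assuming recordings are stored as a (channels x samples) array; the segment length and the 0.6 correlation cutoff are hypothetical illustration choices, not the paper's reported parameters.

import numpy as np

def virtual_reference(signals, test_idx):
    # Virtual referencing (VR): subtract the array-wide mean signal
    # (the signal of a 'virtual electrode' at the array's center)
    # from the test electrode's recording.
    return signals[test_idx] - signals.mean(axis=0)

def iec_reject(signals, test_idx, start, length, corr_cutoff=0.6):
    # Inter-electrode correlation (IEC): reject a threshold-exceeding candidate
    # segment if it correlates strongly with concurrent segments on the other
    # electrodes, indicating a distant common-noise source rather than a local neuron.
    segment = signals[test_idx, start:start + length]
    for ch in range(signals.shape[0]):
        if ch == test_idx:
            continue
        r = np.corrcoef(segment, signals[ch, start:start + length])[0, 1]
        if r > corr_cutoff:
            return True  # candidate rejected as common noise
    return False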
Subjects
Action Potentials/physiology; Artifacts; Cerebral Cortex/cytology; Microelectrodes; Neurons/physiology; User-Computer Interface; Animals; Rats; Reference Values; Reproducibility of Results; Signal Processing, Computer-Assisted; Statistics as Topic
ABSTRACT
Implanted intracortical microelectrode arrays record multi-unit extracellular spike activity that is used to decipher the neural basis of adaptation, learning, and plasticity, and as a command signal for brain-machine interfaces (BMIs). Detection of spike activity is the first step in the successful implementation of all the aforementioned applications. However, with awake and behaving animals, microelectrode arrays typically also record non-neuronal signals induced by the animal's movement, feeding, and grooming actions. The spectral and temporal nature of these artifacts is similar to that of neural spikes, which complicates accurate detection. The distal source and higher strength of non-neuronal signals result in their near-simultaneous registration on most electrodes, while a neural spiking event is rarely recorded on more than one electrode of an array. This difference is exploited to identify non-neuronal content in the acquired data by performing a correlation analysis. The efficacy of the method is evaluated by comparing outcomes from algorithms that use an absolute threshold and Principal Component Analysis (PCA) to identify neural spikes against the same algorithms incorporating correlation analysis.
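As a complement to the IEC sketch above, the absolute-threshold detection stage that produces candidate segments can be illustrated as follows; the 4.5-sigma threshold, the median-based noise estimate, and the refractory window are common conventions assumed here for illustration, not values taken from the paper.

import numpy as np

def detect_candidates(signal, k=4.5, window=32):
    # Estimate the noise level with the robust median estimator and flag samples
    # whose absolute amplitude exceeds k times that estimate as candidate spikes.
    sigma = np.median(np.abs(signal)) / 0.6745
    crossings = np.flatnonzero(np.abs(signal) > k * sigma)
    candidates, last = [], -window
    for idx in crossings:
        if idx - last >= window:  # keep only one candidate per window-length gap
            candidates.append(idx)
            last = idx
    return candidates

# Each candidate window can then be screened by a correlation analysis (for example
# the IEC check sketched earlier) to discard events registered on most electrodes at once.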